ndsampler.coco_sampler¶
The CocoSampler is the ndsampler interface for efficiently sampling windowed data from a kwcoco.CocoDataset.
- CommandLine:
xdoctest -m ndsampler.coco_sampler __doc__ --show
Example
>>> # Imagine you have some images
>>> import kwimage
>>> image_paths = [
>>> kwimage.grab_test_image_fpath('astro'),
>>> kwimage.grab_test_image_fpath('carl'),
>>> kwimage.grab_test_image_fpath('airport'),
>>> ] # xdoctest: +IGNORE_WANT
['~/.cache/kwimage/demodata/KXhKM72.png',
'~/.cache/kwimage/demodata/flTHWFD.png',
'~/.cache/kwimage/demodata/Airport.jpg']
>>> # And you want to randomly load subregions of them in O(1) time
>>> import ndsampler
>>> import kwcoco
>>> # First make a COCO dataset that refers to your images
>>> dataset = {
>>> 'images': [{'id': i, 'file_name': fpath} for i, fpath in enumerate(image_paths)],
>>> 'annotations': [],
>>> 'categories': [],
>>> }
>>> coco_dset = kwcoco.CocoDataset(dataset)
>>> # (and possibly annotations)
>>> category_id = coco_dset.ensure_category('face')
>>> image_id = 0
>>> coco_dset.add_annotation(image_id=image_id, category_id=category_id, bbox=kwimage.Boxes([[140, 10, 180, 180]], 'xywh'))
>>> print(coco_dset)
<CocoDataset(tag=None, n_anns=1, n_imgs=3, ... n_cats=1)>
>>> # Now pass the dataset to a sampler and tell it where it can store temporary files
>>> import ubelt as ub
>>> workdir = ub.ensure_app_cache_dir('ndsampler/demo')
>>> sampler = ndsampler.CocoSampler(coco_dset, workdir=workdir)
>>> # Now you can load arbitrary samples by specifying a target dictionary
>>> # with an image_id (gid), center location (cx, cy), and width, height.
>>> target = {'gid': 0, 'cx': 220, 'cy': 100, 'width': 300, 'height': 300}
>>> sample = sampler.load_sample(target)
>>> # The sample contains the image data, any visible annotations, a reference
>>> # to the original target, and params of the transform used to sample this
>>> # patch
...
>>> print(sorted(sample.keys()))
['annots', 'classes', 'im', 'kp_classes', 'params', 'target', 'tr']
>>> im = sample['im']
>>> print(f'im.shape={im.shape}')
im.shape=(300, 300, 3)
>>> dets = sample['annots']['frame_dets'][0]
>>> print(f'dets={dets}')
>>> print('dets.data = {}'.format(ub.repr2(dets.data, nl=1, sv=1)))
dets=<Detections(1)>
dets.data = {
'aids': [1],
'boxes': <Boxes(xywh, array([[ 70., 60., 180., 180.]]))>,
'cids': [1],
'keypoints': <PointsList(n=1)>,
'segmentations': <SegmentationList(n=1)>,
}
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(im)
>>> dets.draw(labels=False)
>>> kwplot.show_if_requested()
>>> # The load sample function is at the core of what ndsampler does
>>> # There are other helper functions like load_positive / load_negative
>>> # which deal with annotations. See those for more details.
>>> # For random negative sampling see coco_regions.
Module Contents¶
Classes¶
CocoSampler – Samples patches of positive and negative detection windows from a COCO dataset.
Functions¶
_center_extent_to_slice – Transforms a center and window dimensions into a start/stop slice.
_ensure_iterablen
_coerce_pad
Attributes¶
profile
- ndsampler.coco_sampler.profile¶
- class ndsampler.coco_sampler.CocoSampler(dset, workdir=None, autoinit=True, backend=None, verbose=0)¶
Bases: ndsampler.abstract_sampler.AbstractSampler, ndsampler.utils.util_misc.HashIdentifiable, ubelt.NiceRepr
Samples patches of positive and negative detection windows from a COCO dataset. Can be used for training FCN- or RPN-based classifiers / detectors.
Handles data loading, padding, etc.
- Parameters
dset (kwcoco.CocoDataset) – a coco-formatted dataset
backend (str | Dict) – either ‘cog’ or ‘npy’, or a dict with {‘type’: str, ‘config’: Dict}. See AbstractFrames for more details. Defaults to None, which does not do anything fancy.
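For example, a backend can be selected by name at construction time. The following is a minimal sketch (not from the original docs); the demo dataset key and cache-directory name are only illustrative, and 'npy' is one of the backend names listed above:
>>> import ndsampler
>>> import kwcoco
>>> import ubelt as ub
>>> # Build a small demo dataset and give the sampler a place for cache files
>>> coco_dset = kwcoco.CocoDataset.demo('shapes8')
>>> workdir = ub.ensure_app_cache_dir('ndsampler/demo-backend')
>>> sampler = ndsampler.CocoSampler(coco_dset, workdir=workdir, backend='npy')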
Example
>>> from ndsampler.coco_sampler import *
>>> self = CocoSampler.demo('photos')
...
>>> print(sorted(self.class_ids))
[0, 1, 2, 3, 4, 5, 6, 7, 8]
>>> print(self.n_positives)
4
Example
>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo('photos')
>>> p_sample = self.load_positive()
>>> n_sample = self.load_negative()
>>> self = ndsampler.CocoSampler.demo('shapes')
>>> p_sample2 = self.load_positive()
>>> n_sample2 = self.load_negative()
>>> for sample in [p_sample, n_sample, p_sample2, n_sample2]:
>>>     assert 'annots' in sample
>>>     assert 'im' in sample
>>>     assert 'rel_boxes' in sample['annots']
>>>     assert 'rel_ssegs' in sample['annots']
>>>     assert 'rel_kpts' in sample['annots']
>>>     assert 'cids' in sample['annots']
>>>     assert 'aids' in sample['annots']
- property classes¶
- property catgraph¶
DEPRECATED, use self.classes instead
- property n_positives¶
- property n_annots¶
- property n_samples¶
- property n_images¶
- property n_categories¶
- property class_ids¶
- property image_ids¶
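These properties expose basic dataset statistics. A minimal sketch (not from the original docs), using the 'photos' demo that appears elsewhere on this page:
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo('photos')
>>> # simple counts and id listings
>>> print(sampler.n_images, sampler.n_annots, sampler.n_categories)
>>> print(sorted(sampler.image_ids))
>>> print(sorted(sampler.class_ids))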
- classmethod demo(key='shapes', workdir=None, backend=None, **kw)¶
Create a toy coco sampler for testing and demo purposes
- SeeAlso:
kwcoco.CocoDataset.demo
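A minimal sketch (not from the original docs); the extra keyword arguments are forwarded to the kwcoco demo constructor, as in the load_sample examples below:
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo('vidshapes1-multispectral', num_frames=5)
>>> print(sampler.n_images, sampler.n_categories)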
- classmethod coerce(data, **kwargs)¶
Attempt to coerce the input data into a sampler. Generally this can be anything that is already a sampler, or something that can be coerced into a kwcoco dataset.
- Parameters
data (str | PathLike | CocoDataset | CocoSampler) – something that can be coerced into a CocoSampler.
- Returns
CocoSampler
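A minimal sketch of coercing an in-memory kwcoco dataset (not from the original docs; the demo key is only illustrative):
>>> import ndsampler
>>> import kwcoco
>>> dset = kwcoco.CocoDataset.demo('shapes8')
>>> sampler = ndsampler.CocoSampler.coerce(dset)
>>> assert isinstance(sampler, ndsampler.CocoSampler)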
- _init()¶
- _depends()¶
- lookup_class_name(class_id)¶
- lookup_class_id(class_name)¶
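Neither lookup method has an example here; the following sketch is an assumption that they map between the class ids and class names known to the sampler:
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo('photos')
>>> # pick an existing class id and look up its name
>>> cid = sorted(sampler.class_ids)[-1]
>>> name = sampler.lookup_class_name(cid)
>>> print(cid, name)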
- __len__()¶
- preselect(**kwargs)¶
Setup a pool of training examples before the epoch begins
- new_sample_grid(task, window_dims, window_overlap=0)¶
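No docstring is given here, but the load_sample examples further down show the intended call pattern; a minimal sketch adapted from those examples:
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo('vidshapes1-multispectral', num_frames=5)
>>> # build a grid of (time, height, width) windows for video detection
>>> sample_grid = sampler.new_sample_grid('video_detection', (3, 128, 128))
>>> target = sample_grid['positives'][0]
>>> sample = sampler.load_sample(target)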
- load_image_with_annots(image_id, cache=True)¶
- Parameters
image_id (int) – the coco image id
cache (bool, default=True) – if True returns the fast subregion-indexable file reference. Otherwise, eagerly loads the entire image.
- Returns
img: the coco image dict augmented with imdata
anns: the coco annotations in this image
- Return type
Tuple[Dict, List[Dict]]
Example
>>> import ndsampler
>>> import kwimage
>>> self = ndsampler.CocoSampler.demo()
>>> img, anns = self.load_image_with_annots(1)
>>> dets = kwimage.Detections.from_coco_annots(anns, dset=self.dset)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(img['imdata'][:], doclf=1)
>>> dets.draw()
>>> kwplot.show_if_requested()
- load_annotations(image_id)¶
Loads the annotations within an image
- Parameters
image_id (int) – the coco image id
- Returns
list of coco annotation dictionaries
- Return type
List[Dict]
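A minimal sketch (not from the original docs), assuming the default demo dataset used by load_image_with_annots below:
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo()
>>> anns = sampler.load_annotations(1)
>>> # each entry is a plain COCO annotation dictionary
>>> print(len(anns))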
- load_image(image_id, cache=True)¶
Loads the image data for the given image id
- Parameters
image_id (int) – the coco image id
cache (bool, default=True) – if True returns the fast subregion-indexable file reference. Otherwise, eagerly loads the entire image.
- Returns
either ndarray data or an indexable reference
- Return type
ArrayLike
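A minimal sketch (not from the original docs), assuming the default demo dataset; with cache=True the return value may be a lazy, sub-region-indexable reference rather than an ndarray, so it is sliced before use:
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo()
>>> imdata = sampler.load_image(1)
>>> # slicing the reference loads only the requested sub-region
>>> patch = imdata[0:8, 0:8]
>>> print(patch.shape)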
- load_item(index, with_annots=True, target=None, rng=None, **kw)¶
Loads item from either positive or negative regions pool.
Lower indexes will return positive regions and higher indexes will return negative regions.
The main paradigm of the sampler is that sampler.regions maintains a pool of target regions. You can influence that pool at any point by calling sampler.regions.preselect (usually at the start of training, or perhaps after every epoch), and you then use load_item to load the index-th item from the preselected pool. Depending on how you preselected the pool, the returned item might correspond to a positive or negative region.
- Parameters
index (int) – index of target region
with_annots (bool | str, default=True) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation.
target (Dict) – Extra target arguments that update the positive target, like window_dims, pad, etc. See load_sample() for details on allowed keywords.
rng (None | int | RandomState) – a seed or seeded random number generator.
**kw – other arguments that can be passed to CocoSampler.load_sample()
- Returns
- sample: dict containing keys
im (ndarray): image data
target (dict): contains the same input items as the input target but additionally specifies inferred information like rel_cx and rel_cy, which give the center of the target w.r.t. the returned padded sample.
annots (dict): Dict of aids, cids, and rel/abs boxes
- Return type
Dict
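A minimal sketch (not from the original docs); this assumes the demo sampler's default region pool can be indexed without an explicit preselect, mirroring the load_positive example below:
>>> import ndsampler
>>> sampler = ndsampler.CocoSampler.demo()
>>> sample = sampler.load_item(0)  # a low index, so likely a positive region
>>> assert 'im' in sample and 'annots' in sample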
- load_positive(index=None, with_annots=True, target=None, rng=None, **kw)¶
Load an item from the positive pool of regions.
- Parameters
index (int) – index of positive target
with_annots (bool | str, default=True) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation.
target (Dict) – Extra target arguments that update the positive target, like window_dims, pad, etc. See load_sample() for details on allowed keywords.
rng (None | int | RandomState) – a seed or seeded random number generator.
**kw – other arguments that can be passed to CocoSampler.load_sample()
- Returns
- sample: dict containing keys
im (ndarray): image data
tr (dict): contains the same input items as tr but additionally specifies rel_cx and rel_cy, which give the center of the target w.r.t. the returned padded sample.
annots (dict): Dict of aids, cids, and rel/abs boxes
- Return type
Dict
Example
>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> sample = self.load_positive(pad=(10, 10), tr=dict(window_dims=(3, 3)))
>>> assert sample['im'].shape[0] == 23
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(sample['im'], doclf=1)
>>> kwplot.show_if_requested()
- load_negative(index=None, with_annots=True, target=None, rng=None, **kw)¶
Load an item from the negative pool of regions.
- Parameters
index (int) – if specified loads a specific negative from the presampled pool, otherwise the next negative in the pool is returned.
with_annots (bool | str, default=True) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation.
target (Dict) – Extra target arguments that update the positive target, like window_dims, pad, etc. See load_sample() for details on allowed keywords.
rng (None | int | RandomState) – a seed or seeded random number generator.
- Returns
- sample: dict containing keys
im (ndarray): image data
tr (dict): contains the same input items as tr but additionally specifies rel_cx and rel_cy, which give the center of the target w.r.t. the returned padded sample.
annots (dict): Dict of aids, cids, and rel/abs boxes
- Return type
Dict
Example
>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> rng = None
>>> sample = self.load_negative(rng=rng, pad=(0, 0))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> import kwimage
>>> kwplot.autompl()
>>> abs_sample_box = sample['params']['sample_tlbr']
>>> tf_rel_from_abs = kwimage.Affine.coerce(sample['params']['tf_rel_to_abs']).inv()
>>> wh, ww = sample['target']['window_dims']
>>> abs_window_box = kwimage.Boxes([[sample['target']['cx'], sample['target']['cy'], ww, wh]], 'cxywh')
>>> rel_window_box = abs_window_box.warp(tf_rel_from_abs)
>>> rel_sample_box = abs_sample_box.warp(tf_rel_from_abs)
>>> kwplot.imshow(sample['im'], fnum=1, doclf=True)
>>> rel_sample_box.draw(color='kw_green', lw=10)
>>> rel_window_box.draw(color='kw_blue', lw=8)
>>> kwplot.show_if_requested()
Example
>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> rng = None
>>> sample = self.load_negative(rng=rng, pad=(10, 20), target=dict(window_dims=(64, 64)))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> import kwimage
>>> kwplot.autompl()
>>> abs_sample_box = sample['params']['sample_tlbr']
>>> tf_rel_from_abs = kwimage.Affine.coerce(sample['params']['tf_rel_to_abs']).inv()
>>> wh, ww = sample['target']['window_dims']
>>> abs_window_box = kwimage.Boxes([[sample['target']['cx'], sample['target']['cy'], ww, wh]], 'cxywh')
>>> rel_window_box = abs_window_box.warp(tf_rel_from_abs)
>>> rel_sample_box = abs_sample_box.warp(tf_rel_from_abs)
>>> kwplot.imshow(sample['im'], fnum=1, doclf=True)
>>> rel_sample_box.draw(color='kw_green', lw=10)
>>> rel_window_box.draw(color='kw_blue', lw=8)
>>> kwplot.show_if_requested()
- load_sample(target=None, with_annots=True, visible_thresh=0.0, **kwargs)¶
Loads the volume data associated with the bbox and frame of a target
- Parameters
target (dict) – target dictionary (often abbreviated as tr) indicating an nd source object (e.g. image or video) and the coordinate region to sample from. Unspecified coordinate regions default to the extent of the source object.
For 2D image source objects, target must contain or be able to infer the key gid (int), to specify an image id.
For 3D video source objects, target must contain the key vidid (int), to specify a video id (NEW in 0.6.1) or gids List[int], as a list of images in a video (NEW in 0.6.2)
In general, coordinate regions can be specified by the key slices, a numpy-like “fancy index” over each of the n dimensions. Usually this is a tuple of slices, e.g. (y1:y2, x1:x2) for images and (t1:t2, y1:y2, x1:x2) for videos.
You may also specify: space_slice as (y1:y2, x1:x2) for both 2D images and 3D videos and time_slice as t1:t2 for 3D videos.
- Spatial regions can be specified with keys:
‘cx’ and ‘cy’ as the center of the region in pixels.
‘width’ and ‘height’ are in pixels.
‘window_dims’ is a (height, width) tuple, or the special string key ‘square’, which overrides width and height to both be the maximum of the two.
Temporal regions are specifiable by slices, time_slice or an explicit list of gids.
The aid key can be specified to indicate a specific annotation to load. This uses the annotation information to infer ‘gid’, ‘cx’, ‘cy’, ‘width’, and ‘height’ if they are not present. (NEW in 0.5.10)
- The channels key can be specified as a channel code or kwcoco.ChannelSpec object. (NEW in 0.6.1)
- as_xarray (bool, default=False):
if True, return the image data as an xarray object
- interpolation (str, default=’auto’):
type of resample interpolation
- antialias (str, default=’auto’):
antialias sample or not
- nodata: override function-level nodata
- use_native_scale (bool): If True, the “im” field is returned
as a jagged list of data that are as close to native resolution as possible while still maintaining alignment up to a scale factor. Currently only available for video sampling.
- scale (float | Tuple[float, float]):
if specified, the same window is sampled, but the data is returned warped by the extra scale factor. This augments the existing image or video scale factor. Any annotations are also warped according to this factor such that they align with the returned data.
- pad (tuple): (height, width) extra context to add to window dims.
This helps prevent augmentation from producing boundary effects
- padkw (dict): kwargs for numpy.pad.
Defaults to {‘mode’: ‘constant’}.
- dtype (type | None):
Cast the loaded data to this type. If unspecified returns the data as-is.
- nodata (int | None, default=None):
If specified, for integer data with nodata values, this is passed to kwcoco delayed image finalize. The data is converted to float32 and nodata values are replaced with nan. These nan values are handled correctly in subsequent warping operations.
with_annots (bool | str, default=True) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation.
visible_thresh (float) – does not return annotations with visibility less than this threshold.
**kwargs – handles deprecated arguments which are now specified in the target dictionary itself.
- Returns
- sample: dict containing keys
im (ndarray | DataArray): image / video data
target (dict): contains the same input items as the input target but additionally specifies inferred information like rel_cx and rel_cy, which give the center of the target w.r.t. the returned padded sample.
- annots (dict): containing items:
- frame_dets (List[kwimage.Detections]): a list of detection objects containing the requested annotation info for each frame.
aids (list): annotation ids DEPRECATED
cids (list): category ids DEPRECATED
rel_ssegs (ndarray): segmentations relative to the sample DEPRECATED
rel_kpts (ndarray): keypoints relative to the sample DEPRECATED
- Return type
Dict
- CommandLine:
xdoctest -m ndsampler.coco_sampler CocoSampler.load_sample:1 --show
xdoctest -m ndsampler.coco_sampler CocoSampler.load_sample:2 --show
xdoctest -m ndsampler.coco_sampler CocoSampler.load_sample:3 --show
Example
>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> # The target lets you specify an arbitrary window
>>> target = {'gid': 1, 'cx': 5, 'cy': 2, 'width': 6, 'height': 6}
>>> sample = self.load_sample(target)
...
>>> print('sample.shape = {!r}'.format(sample['im'].shape))
sample.shape = (6, 6, 3)
Example
>>> # Access direct annotation information
>>> import ndsampler
>>> import ubelt as ub
>>> sampler = ndsampler.CocoSampler.demo()
>>> # Sample a region that contains at least one annotation
>>> target = {'gid': 1, 'cx': 5, 'cy': 2, 'width': 600, 'height': 600}
>>> sample = sampler.load_sample(target)
>>> annotation_ids = sample['annots']['aids']
>>> aid = annotation_ids[0]
>>> # Method1: Access ann dict directly via the coco index
>>> ann = sampler.dset.anns[aid]
>>> # Method2: Access ann objects via annots method
>>> dets = sampler.dset.annots(annotation_ids).detections
>>> print('dets.data = {}'.format(ub.repr2(dets.data, nl=1)))
Ignore:
import rtree
tree = rtree.Index()
tree.insert(0, [10, 10, 20, 20])
tree.insert(0, [20, 20, 30, 30])
tree.insert(0, [20, 50, 80, 80])
qtree = sampler.regions.isect_index.qtrees[1]
Example
>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> target = self.regions.get_positive(0)
>>> target['window_dims'] = 'square'
>>> target['pad'] = (25, 25)
>>> sample = self.load_sample(target)
>>> print('im.shape = {!r}'.format(sample['im'].shape))
im.shape = (135, 135, 3)
>>> target['window_dims'] = None
>>> target['pad'] = (0, 0)
>>> sample = self.load_sample(target)
>>> print('im.shape = {!r}'.format(sample['im'].shape))
im.shape = (52, 85, 3)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(sample['im'])
>>> kwplot.show_if_requested()
Example
>>> # sample an out of bounds target
>>> from ndsampler.coco_sampler import *
>>> self = CocoSampler.demo('vidshapes8')
>>> test_vidspace = 1
>>> target = self.regions.get_positive(0)
>>> # Toggle to see if this test works in both cases
>>> space = 'image'
>>> if test_vidspace:
>>>     space = 'video'
>>>     target = target.copy()
>>>     target['gids'] = [target.pop('gid')]
>>>     target['scale'] = 1.3
>>>     #target['scale'] = 0.8
>>>     #target['use_native_scale'] = True
>>>     #target['realign_native'] = 'largest'
>>> target['window_dims'] = (364, 364)
>>> sample = self.load_sample(target)
>>> annots = sample['annots']
>>> assert len(annots['aids']) > 0
>>> #assert len(annots['rel_cxywh']) == len(annots['aids'])
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> tf_rel_to_abs = sample['params']['tf_rel_to_abs']
>>> rel_dets = annots['frame_dets'][0]
>>> abs_dets = rel_dets.warp(tf_rel_to_abs)
>>> # Draw box in original image context
>>> #abs_frame = self.frames.load_image(sample['target']['gid'], space=space)[:]
>>> abs_frame = self.dset.coco_image(sample['target']['gid']).delay(space=space).finalize()
>>> kwplot.imshow(abs_frame, pnum=(1, 2, 1), fnum=1)
>>> abs_dets.data['boxes'].translate([-.5, -.5]).draw()
>>> abs_dets.data['keypoints'].draw(color='green', radius=10)
>>> abs_dets.data['segmentations'].draw(color='red', alpha=.5)
>>> # Draw box in relative sample context
>>> if test_vidspace:
>>>     kwplot.imshow(sample['im'][0], pnum=(1, 2, 2), fnum=1)
>>> else:
>>>     kwplot.imshow(sample['im'], pnum=(1, 2, 2), fnum=1)
>>> rel_dets.data['boxes'].translate([-.5, -.5]).draw()
>>> rel_dets.data['segmentations'].draw(color='red', alpha=.6)
>>> rel_dets.data['keypoints'].draw(color='green', alpha=.4, radius=10)
>>> kwplot.show_if_requested()
Example
>>> from ndsampler.coco_sampler import *
>>> self = CocoSampler.demo('photos')
>>> target = self.regions.get_positive(1)
>>> target['window_dims'] = (300, 150)
>>> target['pad'] = None
>>> sample = self.load_sample(target)
>>> assert sample['im'].shape[0:2] == target['window_dims']
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(sample['im'], colorspace='rgb')
>>> kwplot.show_if_requested()
Example
>>> # Multispectral video sample example
>>> from ndsampler.coco_sampler import *
>>> import ubelt as ub
>>> self = CocoSampler.demo('vidshapes1-multispectral', num_frames=5)
>>> sample_grid = self.new_sample_grid('video_detection', (3, 128, 128))
>>> target = sample_grid['positives'][0]
>>> target['channels'] = 'B1|B8'
>>> target['as_xarray'] = False
>>> sample = self.load_sample(target)
>>> print(ub.repr2(sample['target'], nl=1))
>>> print(sample['im'].shape)
>>> assert sample['im'].shape == (3, 128, 128, 2)
>>> target['channels'] = '<all>'
>>> sample = self.load_sample(target)
>>> assert sample['im'].shape == (3, 128, 128, 5)
Example
>>> # Multispectral-multisensor jagged video sample example
>>> from ndsampler.coco_sampler import *
>>> import numpy as np
>>> self = CocoSampler.demo('vidshapes1-msi-multisensor', num_frames=5)
>>> sample_grid = self.new_sample_grid('video_detection', (3, 128, 128))
>>> target = sample_grid['positives'][0]
>>> target['channels'] = 'B1|B8'
>>> target['as_xarray'] = False
>>> sample1 = self.load_sample(target)
>>> target['scale'] = 2
>>> sample2 = self.load_sample(target)
>>> target['use_native_scale'] = True
>>> sample3 = self.load_sample(target)
>>> ####
>>> assert sample1['im'].shape == (3, 128, 128, 2)
>>> assert sample2['im'].shape == (3, 256, 256, 2)
>>> box1 = sample1['annots']['frame_dets'][0].boxes
>>> box2 = sample2['annots']['frame_dets'][0].boxes
>>> box3 = sample3['annots']['frame_dets'][0].boxes
>>> assert np.allclose((box2.width / box1.width), 2)
>>> # Jagged annotations are still in video space
>>> assert np.allclose((box3.width / box1.width), 2)
>>> jagged_shape = [[p.shape for p in f] for f in sample3['im']]
>>> jagged_align = [[a for a in m['align']] for m in sample3['params']['jagged_meta']]
- _infer_target_attributes(target, **kwargs)¶
Infer unpopulated target attributes
Example
>>> # sample using only an annotation id
>>> from ndsampler.coco_sampler import *
>>> import ubelt as ub
>>> self = CocoSampler.demo()
>>> target = {'aid': 1, 'as_xarray': True}
>>> target_ = self._infer_target_attributes(target)
>>> print('target_ = {}'.format(ub.repr2(target_, nl=1)))
>>> assert target_['gid'] == 1
>>> assert all(k in target_ for k in ['cx', 'cy', 'width', 'height'])
>>> self = CocoSampler.demo('vidshapes8-multispectral')
>>> target = {'aid': 1, 'as_xarray': True}
>>> target_ = self._infer_target_attributes(target)
>>> assert target_['gid'] == 1
>>> assert all(k in target_ for k in ['cx', 'cy', 'width', 'height'])
>>> target = {'vidid': 1, 'as_xarray': True}
>>> target_ = self._infer_target_attributes(target)
>>> print('target_ = {}'.format(ub.repr2(target_, nl=1)))
>>> assert 'gids' in target_
>>> target = {'gids': [1, 2], 'as_xarray': True}
>>> target_ = self._infer_target_attributes(target)
>>> print('target_ = {}'.format(ub.repr2(target_, nl=1)))
- _load_slice(target)¶
Example
>>> # sample an out of bounds target
>>> from ndsampler.coco_sampler import *
>>> import ubelt as ub
>>> self = CocoSampler.demo()
>>> target = self.regions.get_positive(0)
>>> target = self._infer_target_attributes(target)
>>> target['as_xarray'] = True
>>> sample = self._load_slice(target)
>>> print('sample = {!r}'.format(ub.map_vals(type, sample)))
>>> # sample an out of bounds target
>>> self = CocoSampler.demo('vidshapes2')
>>> target = self._infer_target_attributes({'vidid': 1})
>>> target = self._infer_target_attributes(target)
>>> target['as_xarray'] = True
>>> sample = self._load_slice(target)
>>> print('sample = {!r}'.format(ub.map_vals(type, sample)))
>>> target = self._infer_target_attributes({'gids': [1, 2]})
>>> target['as_xarray'] = True
>>> sample = self._load_slice(target)
>>> print('sample = {!r}'.format(ub.map_vals(type, sample)))
- CommandLine:
xdoctest -m ndsampler.coco_sampler CocoSampler._load_slice --profile
- Ignore:
from ndsampler.coco_sampler import *  # NOQA
from ndsampler.coco_sampler import _center_extent_to_slice, _ensure_iterablen
import ndsampler
import xdev
globals().update(xdev.get_func_kwargs(ndsampler.CocoSampler._load_slice))
Example
>>> # Multispectral video sample example
>>> from ndsampler.coco_sampler import *
>>> import ubelt as ub
>>> self = CocoSampler.demo('vidshapes1-multispectral', num_frames=5)
>>> sample_grid = self.new_sample_grid('video_detection', (3, 128, 128))
>>> target = sample_grid['positives'][0]
>>> target = self._infer_target_attributes(target)
>>> target['channels'] = 'B1|B8'
>>> target['as_xarray'] = False
>>> sample = self.load_sample(target)
>>> print(ub.repr2(sample['target'], nl=1))
>>> print(sample['im'].shape)
>>> assert sample['im'].shape == (3, 128, 128, 2)
>>> target['channels'] = '<all>'
>>> sample = self.load_sample(target)
>>> assert sample['im'].shape == (3, 128, 128, 5)
Example
>>> # Multispectral video sample example
>>> from ndsampler.coco_sampler import *
>>> import ubelt as ub
>>> self = CocoSampler.demo('vidshapes1-multisensor-msi', num_frames=5)
>>> sample_grid = self.new_sample_grid('video_detection', (3, 128, 128))
>>> target = sample_grid['positives'][0]
>>> target = self._infer_target_attributes(target)
>>> target['channels'] = 'B1|B8'
>>> target['as_xarray'] = False
>>> target['space_slice'] = (slice(-64, 64), slice(-64, 64))
>>> sample = self.load_sample(target)
>>> print(ub.repr2(sample['target'], nl=1))
>>> print(sample['im'].shape)
>>> assert sample['im'].shape == (3, 128, 128, 2)
>>> target['channels'] = '<all>'
>>> sample = self.load_sample(target)
>>> assert sample['im'].shape[2] > 5  # probably 16
>>> # Test jagged native scale sampling
>>> target['use_native_scale'] = True
>>> target['as_xarray'] = True
>>> target['channels'] = 'B1|B8|r|g|b|disparity|gauss'
>>> sample = self.load_sample(target)
>>> jagged_meta = sample['params']['jagged_meta']
>>> frames = sample['im']
>>> jagged_shape = [[p.shape for p in f] for f in frames]
>>> jagged_chans = [[p.coords['c'].values.tolist() for p in f] for f in frames]
>>> jagged_chans2 = [m['chans'] for m in jagged_meta]
>>> jagged_align = [[a.concise() for a in m['align']] for m in jagged_meta]
>>> # all frames should have the same number of channels
>>> assert len(frames) == 3
>>> assert all(sum(p.shape[2] for p in f) == 7 for f in frames)
>>> frames[0] == 3
>>> print('jagged_chans = {}'.format(ub.repr2(jagged_chans, nl=1)))
>>> print('jagged_shape = {}'.format(ub.repr2(jagged_shape, nl=1)))
>>> print('jagged_chans2 = {}'.format(ub.repr2(jagged_chans2, nl=1)))
>>> print('jagged_align = {}'.format(ub.repr2(jagged_align, nl=1)))
>>> # Test realigned native scale sampling
>>> target['use_native_scale'] = True
>>> target['realign_native'] = 'largest'
>>> target['as_xarray'] = True
>>> target = self._infer_target_attributes(target)
>>> gid = None
>>> for coco_img in self.dset.images().coco_images:
>>>     if coco_img.channels & 'r|g|b':
>>>         gid = coco_img.img['id']
>>>         break
>>> assert gid is not None, 'need specific image'
>>> target['gids'] = [gid]
>>> # Test channels that are good early fused groups
>>> target['channels'] = 'r|g|b'
>>> sample1 = self.load_sample(target)
>>> target['channels'] = 'B8|B11'
>>> sample2 = self.load_sample(target)
>>> target['channels'] = 'r|g|b|B11'
>>> sample3 = self.load_sample(target)
>>> shape1 = sample1['im'].shape[1:3]
>>> shape2 = sample2['im'].shape[1:3]
>>> shape3 = sample3['im'].shape[1:3]
>>> print(f'shape1={shape1}')
>>> print(f'shape2={shape2}')
>>> print(f'shape3={shape3}')
>>> assert shape1 != shape2
>>> assert shape2 == shape3
- _load_slice_3d(target)¶
Break out the 2D vs 3D logic so they can evolve somewhat independently.
TODO: the 2D logic needs to be updated to be more consistent with the 3D logic, or at least the differences between them should be made clearer.
Example
>>> # Test time padding case
>>> # xdoctest: +SKIP('not implemented')
>>> from ndsampler.coco_sampler import *
>>> self = CocoSampler.demo('vidshapes-multisensor-msi', num_frames=1, num_videos=1, image_size=(32, 32))
>>> sample_grid = self.new_sample_grid('video_detection', (2, 32, 32))
>>> target = sample_grid['positives'][0]
>>> target = self._infer_target_attributes(target)
>>> sample = self.load_sample(target)
- _load_slice_2d(target)¶
Break out the 2D vs 3D logic so they can evolve somewhat independently.
TODO: the 2D logic needs to be updated to be more consistent with the 3D logic, or at least the differences between them should be made clearer.
- _populate_overlap(sample, visible_thresh=0.1, with_annots=True)¶
Add information about annotations overlapping the sample.
- with_annots can be a ‘+’-separated string or a list of the special keys: ‘segmentation’ and ‘keypoints’.
Example
>>> # sample an out of bounds target
>>> import ndsampler
>>> import ubelt as ub
>>> self = ndsampler.CocoSampler.demo()
>>> target = self.regions.get_item(0)
>>> target = self._infer_target_attributes(target)
>>> sample = self._load_slice(target)
>>> sample = self._populate_overlap(sample)
>>> print('sample = {}'.format(ub.repr2(ub.util_dict.dict_diff(sample, ['im']), nl=-1)))
- ndsampler.coco_sampler._center_extent_to_slice(center, window_dims)¶
Transforms a center and window dimensions into a start/stop slice
- Parameters
center (Tuple[float]) – center location (cy, cx)
window_dims (Tuple[int]) – window size (height, width)
- Returns
the slice corresponding to the centered window
- Return type
Tuple[slice, …]
Example
>>> center = (2, 5)
>>> window_dims = (6, 6)
>>> slices = _center_extent_to_slice(center, window_dims)
>>> assert slices == (slice(-1, 5), slice(2, 8))
Example
>>> center = (2, 5)
>>> window_dims = (64, 64)
>>> slices = _center_extent_to_slice(center, window_dims)
>>> assert slices == (slice(-30, 34, None), slice(-27, 37, None))
Example
>>> # Test floating point error case
>>> center = (500.5, 974.9999999999999)
>>> window_dims = (100, 100)
>>> slices = _center_extent_to_slice(center, window_dims)
>>> assert slices == (slice(450, 550, None), slice(924, 1024, None))
- ndsampler.coco_sampler._ensure_iterablen(scalar, n)¶
- ndsampler.coco_sampler._coerce_pad(pad, ndims)¶