ndsampler.coco_sampler module

The CocoSampler is the ndsampler interface for efficiently sampling windowed data from a kwcoco.CocoDataset.

CommandLine

xdoctest -m ndsampler.coco_sampler __doc__ --show

Example

>>> # Imagine you have some images
>>> import kwimage
>>> image_paths = [
>>>     kwimage.grab_test_image_fpath('astro'),
>>>     kwimage.grab_test_image_fpath('carl'),
>>>     kwimage.grab_test_image_fpath('airport'),
>>> ]  # xdoctest: +IGNORE_WANT
['~/.cache/kwimage/demodata/KXhKM72.png',
 '~/.cache/kwimage/demodata/flTHWFD.png',
 '~/.cache/kwimage/demodata/Airport.jpg']
>>> # And you want to randomly load subregions of them in O(1) time
>>> import ndsampler
>>> import kwcoco
>>> # First make a COCO dataset that refers to your images
>>> dataset = {
>>>     'images': [{'id': i, 'file_name': fpath} for i, fpath in enumerate(image_paths)],
>>>     'annotations': [],
>>>     'categories': [],
>>> }
>>> coco_dset = kwcoco.CocoDataset(dataset)
>>> # (and possibly annotations)
>>> category_id = coco_dset.ensure_category('face')
>>> image_id = 0
>>> coco_dset.add_annotation(image_id=image_id, category_id=category_id, bbox=kwimage.Boxes([[140, 10, 180, 180]], 'xywh'))
>>> print(coco_dset)
<CocoDataset(tag=None, n_anns=1, n_imgs=3, ... n_cats=1...)>
>>> # Now pass the dataset to a sampler and tell it where it can store temporary files
>>> import ubelt as ub
>>> workdir = ub.Path.appdir('ndsampler/demo').ensuredir()
>>> sampler = ndsampler.CocoSampler(coco_dset, workdir=workdir)
>>> # Now you can load arbitrary samples by specifying a target dictionary
>>> # with an image_id (gid), center location (cx, cy), and width / height.
>>> target = {'gid': 0, 'cx': 220, 'cy': 100, 'width': 300, 'height': 300}
>>> sample = sampler.load_sample(target)
>>> # The sample contains the image data, any visible annotations, a reference
>>> # to the original target, and params of the transform used to sample this
>>> # patch
...
>>> print(sorted(sample.keys()))
['annots', 'classes', 'im', 'kp_classes', 'params', 'target', 'tr']
>>> im = sample['im']
>>> print(f'im.shape={im.shape}')
im.shape=(300, 300, 3)
>>> dets = sample['annots']['frame_dets'][0]
>>> print(f'dets={dets}')
>>> print('dets.data = {}'.format(ub.urepr(dets.data, nl=1, sv=1, sort=1)))
dets=<Detections(1)>
dets.data = {
    'aids': [1],
    'boxes': <Boxes(xywh, array([[ 70.,  60., 180., 180.]]))>,
    'cids': [1],
    'keypoints': <PointsList(n=1)>,
    'segmentations': <SegmentationList(n=1)>,
}
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(im)
>>> dets.draw(labels=False)
>>> kwplot.show_if_requested()
>>> # The load sample function is at the core of what ndsampler does
>>> # There are other helper functions like load_positive / load_negative
>>> # which deal with annotations. See those for more details.
>>> # For random negative sampling see coco_regions.
class ndsampler.coco_sampler.CocoSampler(dset, workdir=None, autoinit=True, backend=None, verbose=0)[source]

Bases: AbstractSampler, HashIdentifiable, NiceRepr

Samples patches of positives and negative detection windows from a COCO dataset. Can be used for training FCN or RPN based classifiers / detectors.

Does data loading, padding, etc…

Parameters:
  • dset (kwcoco.CocoDataset) – a coco-formatted dataset

  • backend (str | Dict) – Can be None, ‘cog’ or ‘npy’, or a dict. In the case of a dict, it takes the format: {‘type’: str, ‘config’: Dict}. See AbstractFrames for more details. Defaults to None, which does not do anything fancy.
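
For example, a minimal sketch of picking the 'npy' caching backend (this only mirrors the constructor signature documented above; 'cog', None, or a config dict could be used the same way):

>>> # Hedged sketch: construct a sampler with an explicit caching backend
>>> import ndsampler
>>> import kwcoco
>>> import ubelt as ub
>>> coco_dset = kwcoco.CocoDataset.demo('shapes8')
>>> workdir = ub.Path.appdir('ndsampler/demo').ensuredir()
>>> sampler = ndsampler.CocoSampler(coco_dset, workdir=workdir, backend='npy')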

Example

>>> from ndsampler.coco_sampler import *
>>> self = CocoSampler.demo('photos')
...
>>> print(sorted(self.class_ids))
[0, 1, 2, 3, 4, 5, 6, 7, 8]
>>> print(self.n_positives)
4

Example

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo('photos')
>>> p_sample = self.load_positive()
>>> n_sample = self.load_negative()
>>> self = ndsampler.CocoSampler.demo('shapes')
>>> p_sample2 = self.load_positive()
>>> n_sample2 = self.load_negative()
>>> for sample in [p_sample, n_sample, p_sample2, n_sample2]:
>>>     assert 'annots' in sample
>>>     assert 'im' in sample
>>>     assert 'rel_boxes' in sample['annots']
>>>     assert 'rel_ssegs' in sample['annots']
>>>     assert 'rel_kpts' in sample['annots']
>>>     assert 'cids' in sample['annots']
>>>     assert 'aids' in sample['annots']
classmethod demo(key='shapes', workdir=None, backend=None, **kw)[source]

Create a toy coco sampler for testing and demo purposes

SeeAlso:
  • kwcoco.CocoDataset.demo
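
Example

A minimal sketch; the 'shapes' and 'photos' keys used elsewhere on this page are both valid demo keys:

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo('shapes')
>>> assert self.n_images > 0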

classmethod coerce(data, **kwargs)[source]

Attempt to coerce the input data into a sampler. Generally this can be anything that is already a sampler, or something that can be coerced into a kwcoco dataset.

Parameters:

data (str | PathLike | CocoDataset | CocoSampler) – something that can be coerced into a CocoSampler.

Returns:

CocoSampler
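
Example

A minimal sketch of coercing an already-loaded kwcoco dataset (a path to a kwcoco file or an existing sampler could be passed the same way):

>>> import kwcoco
>>> import ndsampler
>>> dset = kwcoco.CocoDataset.demo('shapes8')
>>> sampler = ndsampler.CocoSampler.coerce(dset)
>>> assert isinstance(sampler, ndsampler.CocoSampler)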

property classes
property catgraph

DEPRECATED, use self.classes instead

lookup_class_name(class_id)[source]
lookup_class_id(class_name)[source]
property n_positives
property n_annots
property n_samples
property n_images
property n_categories
property class_ids
property image_ids
preselect(**kwargs)[source]
new_sample_grid(task, window_dims, window_overlap=0)[source]
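
Example

A hedged sketch exercising the count properties, the class-lookup helpers, and new_sample_grid (reusing the 'video_detection' task and the (time, height, width) window shape shown later on this page):

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo('vidshapes1-multispectral', num_frames=5)
>>> assert self.n_images == 5 and self.n_categories > 0
>>> cid = self.class_ids[0]
>>> assert self.lookup_class_id(self.lookup_class_name(cid)) == cid
>>> sample_grid = self.new_sample_grid('video_detection', (3, 128, 128))
>>> assert len(sample_grid['positives']) > 0
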
load_image_with_annots(image_id, cache=True)[source]

Loads an image along with the annotations within it.

Parameters:
  • image_id (int) – the coco image id

  • cache (bool) – if True returns the fast subregion-indexable file reference. Otherwise, eagerly loads the entire image. Defaults to True.

Returns:

img: the coco image dict augmented with imdata

anns: the coco annotations in this image

Return type:

Tuple[Dict, List[Dict]]

Example

>>> import ndsampler
>>> import kwimage
>>> self = ndsampler.CocoSampler.demo()
>>> img, anns = self.load_image_with_annots(1)
>>> dets = kwimage.Detections.from_coco_annots(anns, dset=self.dset)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(img['imdata'][:], doclf=1)
>>> dets.draw()
>>> kwplot.show_if_requested()
load_annotations(image_id)[source]

Loads the annotations within an image

Parameters:

image_id (int) – the coco image id

Returns:

list of coco annotation dictionaries

Return type:

List[Dict]
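
Example

A minimal sketch, assuming the default demo dataset used elsewhere on this page:

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> anns = self.load_annotations(1)
>>> assert isinstance(anns, list)
>>> assert all(ann['image_id'] == 1 for ann in anns)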

load_image(image_id, cache=True)[source]

Loads the image data for the given image id

Parameters:
  • image_id (int) – the coco image id

  • cache (bool) – if True returns the fast subregion-indexable file reference. Otherwise, eagerly loads the entire image. Defaults to True.

Returns:

either ndarray data or an indexable reference

Return type:

ArrayLike
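
Example

A minimal sketch contrasting the cached, subregion-indexable reference with eager loading (assuming the default demo dataset):

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> ref = self.load_image(1)                 # indexable reference
>>> imdata = ref[:]                          # materialize the full array
>>> eager = self.load_image(1, cache=False)  # eagerly loads the whole image
>>> assert imdata.shape == eager.shape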

load_item(index, with_annots=True, target=None, rng=None, **kw)[source]

Loads item from either positive or negative regions pool.

Lower indexes will return positive regions and higher indexes will return negative regions.

The main paradigm of the sampler is that sampler.regions maintains a pool of target regions. You can influence what that pool is at any point by calling sampler.regions.preselect (usually either at the start of learning, or perhaps after every epoch), and you then use load_item to load the index-th item from that preselected pool. Depending on how you preselected the pool, the returned item might correspond to a positive or negative region (see the example below).

Parameters:
  • index (int) – index of target region

  • with_annots (bool | str) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation. Defaults to True.

  • target (Dict) – Extra target arguments that update the positive target, like window_dims, pad, etc…. See load_sample() for details on allowed keywords.

  • rng (None | int | RandomState) – a seed or seeded random number generator.

  • **kw – other arguments that can be passed to CocoSampler.load_sample()

Returns:

sample: dict containing keys

im (ndarray): image data

target (dict): contains the same input items as the input target but additionally specifies inferred information like rel_cx and rel_cy, which gives the center of the target w.r.t the returned padded sample.

annots (dict): Dict of aids, cids, and rel/abs boxes

Return type:

Dict
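
Example

A minimal sketch of the preselected-pool workflow described above, assuming the default demo data (low indexes draw from the positive pool):

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> sample = self.load_item(0, target=dict(window_dims=(32, 32), pad=(0, 0)))
>>> assert sample['im'].shape[0:2] == (32, 32)
>>> assert 'annots' in sample and 'target' in sample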

load_positive(index=None, with_annots=True, target=None, rng=None, **kw)[source]

Load an item from the the positive pool of regions.

Parameters:
  • index (int) – index of positive target

  • with_annots (bool | str) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation. Defaults to True.

  • target (Dict) – Extra target arguments that update the positive target, like window_dims, pad, etc…. See load_sample() for details on allowed keywords.

  • rng (None | int | RandomState) – a seed or seeded random number generator.

  • **kw – other arguments that can be passed to CocoSampler.load_sample()

Returns:

sample: dict containing keys

im (ndarray): image data

tr (dict): contains the same input items as tr but additionally specifies rel_cx and rel_cy, which gives the center of the target w.r.t the returned padded sample.

annots (dict): Dict of aids, cids, and rel/abs boxes

Return type:

Dict

Example

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> sample = self.load_positive(pad=(10, 10), tr=dict(window_dims=(3, 3)))
>>> assert sample['im'].shape[0] == 23
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(sample['im'], doclf=1)
>>> kwplot.show_if_requested()
load_negative(index=None, with_annots=True, target=None, rng=None, **kw)[source]

Load an item from the the negative pool of regions.

Parameters:
  • index (int) – if specified, loads a specific negative from the presampled pool; otherwise the next negative in the pool is returned.

  • with_annots (bool | str) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation. Defaults to True.

  • target (Dict) – Extra target arguments that update the negative target, like window_dims, pad, etc. See load_sample() for details on allowed keywords.

  • rng (None | int | RandomState) – a seed or seeded random number generator.

Returns:

sample: dict containing keys

im (ndarray): image data

tr (dict): contains the same input items as tr but additionally specifies rel_cx and rel_cy, which gives the center of the target w.r.t the returned padded sample.

annots (dict): Dict of aids, cids, and rel/abs boxes

Return type:

Dict

Example

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> rng = None
>>> sample = self.load_negative(rng=rng, pad=(0, 0))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> import kwimage
>>> kwplot.autompl()
>>> abs_sample_box = sample['params']['sample_tlbr']
>>> tf_rel_from_abs = kwimage.Affine.coerce(sample['params']['tf_rel_to_abs']).inv()
>>> wh, ww = sample['target']['window_dims']
>>> abs_window_box = kwimage.Boxes([[sample['target']['cx'], sample['target']['cy'], ww, wh]], 'cxywh')
>>> rel_window_box = abs_window_box.warp(tf_rel_from_abs)
>>> rel_sample_box = abs_sample_box.warp(tf_rel_from_abs)
>>> kwplot.imshow(sample['im'], fnum=1, doclf=True)
>>> rel_sample_box.draw(color='kw_green', lw=10)
>>> rel_window_box.draw(color='kw_blue', lw=8)
>>> kwplot.show_if_requested()

Example

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> rng = None
>>> sample = self.load_negative(rng=rng, pad=(10, 20), target=dict(window_dims=(64, 64)))
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> import kwimage
>>> kwplot.autompl()
>>> abs_sample_box = sample['params']['sample_tlbr']
>>> tf_rel_from_abs = kwimage.Affine.coerce(sample['params']['tf_rel_to_abs']).inv()
>>> wh, ww = sample['target']['window_dims']
>>> abs_window_box = kwimage.Boxes([[sample['target']['cx'], sample['target']['cy'], ww, wh]], 'cxywh')
>>> rel_window_box = abs_window_box.warp(tf_rel_from_abs)
>>> rel_sample_box = abs_sample_box.warp(tf_rel_from_abs)
>>> kwplot.imshow(sample['im'], fnum=1, doclf=True)
>>> rel_sample_box.draw(color='kw_green', lw=10)
>>> rel_window_box.draw(color='kw_blue', lw=8)
>>> kwplot.show_if_requested()
load_sample(target=None, with_annots=True, annot_ids=None, visible_thresh=0.0, **kwargs)[source]

Loads the volume data associated with the bbox and frame of a target

Parameters:
  • target (dict) – target dictionary (often abbreviated as tr) indicating an nd source object (e.g. image or video) and the coordinate region to sample from. Unspecified coordinate regions default to the extent of the source object.

    For 2D image source objects, target must contain or be able to infer the key gid (int), to specify an image id.

    For 3D video source objects, target must contain the key vidid (int) to specify a video id (NEW in 0.6.1), or gids (List[int]), a list of image ids in that video (NEW in 0.6.2).

    In general, coordinate regions can be specified by the key slices, a numpy-like “fancy index” over each of the n dimensions. Usually this is a tuple of slices, e.g. (y1:y2, x1:x2) for images and (t1:t2, y1:y2, x1:x2) for videos.

    You may also specify space_slice as (y1:y2, x1:x2) for both 2D images and 3D videos, and time_slice as t1:t2 for 3D videos.

    Spatial regions can be specified with the keys:
    • ‘cx’ and ‘cy’: the center of the region in pixels.

    • ‘width’ and ‘height’: the size of the region in pixels.

    • ‘window_dims’: a (height, width) tuple, or the special string ‘square’, which overrides width and height to both be the maximum of the two.

    Temporal regions are specifiable by slices, time_slice, or an explicit list of gids.

    The aid key can be specified to indicate a specific annotation to load. This uses the annotation information to infer ‘gid’, ‘cx’, ‘cy’, ‘width’, and ‘height’ if they are not present. (NEW in 0.5.10)

    The channels key can be specified as a channel code or kwcoco.ChannelSpec object. (NEW in 0.6.1)

    as_xarray (bool): if True, return the image data as an xarray object. Defaults to False.

    interpolation (str): type of resample interpolation. Defaults to ‘auto’.

    antialias (str): antialias sample or not. Defaults to ‘auto’.

    nodata: override function level nodata.

    use_native_scale (bool): If True, the “im” field is returned as a jagged list of data that are as close to native resolution as possible while still maintaining alignment up to a scale factor. Currently only available for video sampling.

    scale (float | Tuple[float, float]): if specified, the same window is sampled, but the data is returned warped by the extra scale factor. This augments the existing image or video scale factor. Any annotations are also warped according to this factor such that they align with the returned data. By default this scale is applied to video space, unless use_native_scale is given, in which case it is applied to the native resolution (generally you don't want to combine these).

    pad (tuple): (height, width) extra context to add to window dims. This helps prevent augmentation from producing boundary effects.

    padkw (dict): kwargs for numpy.pad. Defaults to {‘mode’: ‘constant’}.

    dtype (type | None): Cast the loaded data to this type. If unspecified, returns the data as-is.

    nodata (int | None): If specified, for integer data with nodata values, this is passed to kwcoco delayed image finalize. The data is converted to float32 and nodata values are replaced with nan. These nan values are handled correctly in subsequent warping operations. Defaults to None.

  • with_annots (bool | str) – if True, also extracts information about any annotation that overlaps the region of interest (subject to visibility_thresh). Can also be a List[str] that specifies which specific subinfo should be extracted. Valid strings in this list are: boxes, keypoints, and segmentation. Defaults to True.

  • annot_ids (List[int]) – if specified, assume the user has precomputed which annotations should be loaded for the target region. Skip the spatial lookup step and just load the data for these annotations instead.

  • visible_thresh (float) – does not return annotations with visibility less than this threshold.

  • **kwargs – handles deprecated arguments which are now specified in the target dictionary itself.

Returns:

sample: dict containing keys

im (ndarray | DataArray): image / video data

target (dict): contains the same input items as the input target but additionally specifies inferred information like rel_cx and rel_cy, which gives the center of the target w.r.t the returned padded sample.

annots (dict): containing items:

    frame_dets (List[kwimage.Detections]): a list of detection objects containing the requested annotation info for each frame.

    aids (list): annotation ids DEPRECATED

    cids (list): category ids DEPRECATED

    rel_ssegs (ndarray): segmentations relative to the sample DEPRECATED

    rel_kpts (ndarray): keypoints relative to the sample DEPRECATED

Return type:

Dict

CommandLine

xdoctest -m ndsampler.coco_sampler CocoSampler.load_sample:2 --show

xdoctest -m ndsampler.coco_sampler CocoSampler.load_sample:1 --show
xdoctest -m ndsampler.coco_sampler CocoSampler.load_sample:3 --show

Example

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> # The target (target) lets you specify an arbitrary window
>>> target = {'gid': 1, 'cx': 5, 'cy': 2, 'width': 6, 'height': 6}
>>> sample = self.load_sample(target)
...
>>> print('sample.shape = {!r}'.format(sample['im'].shape))
sample.shape = (6, 6, 3)

Example

>>> # Access direct annotation information
>>> import ndsampler
>>> import ubelt as ub
>>> sampler = ndsampler.CocoSampler.demo()
>>> # Sample a region that contains at least one annotation
>>> target = {'gid': 1, 'cx': 5, 'cy': 2, 'width': 600, 'height': 600}
>>> sample = sampler.load_sample(target)
>>> annotation_ids = sample['annots']['aids']
>>> aid = annotation_ids[0]
>>> # Method1: Access ann dict directly via the coco index
>>> ann = sampler.dset.anns[aid]
>>> # Method2: Access ann objects via annots method
>>> dets = sampler.dset.annots(annotation_ids).detections
>>> print('dets.data = {}'.format(ub.urepr(dets.data, nl=1)))

Example

>>> import ndsampler
>>> self = ndsampler.CocoSampler.demo()
>>> target = self.regions.get_positive(0)
>>> target['window_dims'] = 'square'
>>> target['pad'] = (25, 25)
>>> sample = self.load_sample(target)
>>> print('im.shape = {!r}'.format(sample['im'].shape))
im.shape = (135, 135, 3)
>>> target['window_dims'] = None
>>> target['pad'] = (0, 0)
>>> sample = self.load_sample(target)
>>> print('im.shape = {!r}'.format(sample['im'].shape))
im.shape = (52, 85, 3)
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(sample['im'])
>>> kwplot.show_if_requested()

Example

>>> # sample an out of bounds target
>>> from ndsampler.coco_sampler import *
>>> self = CocoSampler.demo('vidshapes8')
>>> test_vidspace = 1
>>> target = self.regions.get_positive(0)
>>> # Toggle to see if this test works in both cases
>>> space = 'image'
>>> if test_vidspace:
>>>     space = 'video'
>>>     target = target.copy()
>>>     target['gids'] = [target.pop('gid')]
>>>     target['scale'] = 1.3
>>>     #target['scale'] = 0.8
>>>     #target['use_native_scale'] = True
>>>     #target['realign_native'] = 'largest'
>>> target['window_dims'] = (364, 364)
>>> sample = self.load_sample(target)
>>> annots = sample['annots']
>>> assert len(annots['aids']) > 0
>>> #assert len(annots['rel_cxywh']) == len(annots['aids'])
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> tf_rel_to_abs = sample['params']['tf_rel_to_abs']
>>> rel_dets = annots['frame_dets'][0]
>>> abs_dets = rel_dets.warp(tf_rel_to_abs)
>>> # Draw box in original image context
>>> #abs_frame = self.frames.load_image(sample['target']['gid'], space=space)[:]
>>> abs_frame = self.dset.coco_image(sample['target']['gid']).delay(space=space).finalize()
>>> kwplot.imshow(abs_frame, pnum=(1, 2, 1), fnum=1)
>>> abs_dets.data['boxes'].translate([-.5, -.5]).draw()
>>> abs_dets.data['keypoints'].draw(color='green', radius=10)
>>> abs_dets.data['segmentations'].draw(color='red', alpha=.5)
>>> # Draw box in relative sample context
>>> if test_vidspace:
>>>     kwplot.imshow(sample['im'][0], pnum=(1, 2, 2), fnum=1)
>>> else:
>>>     kwplot.imshow(sample['im'], pnum=(1, 2, 2), fnum=1)
>>> rel_dets.data['boxes'].translate([-.5, -.5]).draw()
>>> rel_dets.data['segmentations'].draw(color='red', alpha=.6)
>>> rel_dets.data['keypoints'].draw(color='green', alpha=.4, radius=10)
>>> kwplot.show_if_requested()

Example

>>> from ndsampler.coco_sampler import *
>>> self = CocoSampler.demo('photos')
>>> target = self.regions.get_positive(1)
>>> target['window_dims'] = (300, 150)
>>> target['pad'] = None
>>> sample = self.load_sample(target)
>>> assert sample['im'].shape[0:2] == target['window_dims']
>>> # xdoctest: +REQUIRES(--show)
>>> import kwplot
>>> kwplot.autompl()
>>> kwplot.imshow(sample['im'], colorspace='rgb')
>>> kwplot.show_if_requested()

Example

>>> # Multispectral video sample example
>>> from ndsampler.coco_sampler import *
>>> import ubelt as ub
>>> self = CocoSampler.demo('vidshapes1-multispectral', num_frames=5)
>>> sample_grid = self.new_sample_grid('video_detection', (3, 128, 128))
>>> target = sample_grid['positives'][0]
>>> target['channels'] = 'B1|B8'
>>> target['as_xarray'] = False
>>> sample = self.load_sample(target)
>>> print(ub.urepr(sample['target'], nl=1))
>>> print(sample['im'].shape)
>>> assert sample['im'].shape == (3, 128, 128, 2)
>>> target['channels'] = '<all>'
>>> sample = self.load_sample(target)
>>> assert sample['im'].shape == (3, 128, 128, 5)

Example

>>> # Multispectral-multisensor jagged video sample example
>>> from ndsampler.coco_sampler import *
>>> import numpy as np
>>> self = CocoSampler.demo('vidshapes1-msi-multisensor', num_frames=5)
>>> sample_grid = self.new_sample_grid('video_detection', (3, 128, 128))
>>> target = sample_grid['positives'][0]
>>> target['channels'] = 'B1|B8'
>>> target['as_xarray'] = False
>>> sample1 = self.load_sample(target)
>>> target['scale'] = 2
>>> sample2 = self.load_sample(target)
>>> target['use_native_scale'] = True
>>> sample3 = self.load_sample(target)
>>> ####
>>> assert sample1['im'].shape == (3, 128, 128, 2)
>>> assert sample2['im'].shape == (3, 256, 256, 2)
>>> box1 = sample1['annots']['frame_dets'][0].boxes
>>> box2 = sample2['annots']['frame_dets'][0].boxes
>>> box3 = sample3['annots']['frame_dets'][0].boxes
>>> assert np.allclose((box2.width / box1.width), 2)
>>> # Jagged annotations are still in video space
>>> assert np.allclose((box3.width / box1.width), 2)
>>> jagged_shape = [[p.shape for p in f] for f in sample3['im']]
>>> jagged_align = [[a for a in m['align']] for m in sample3['params']['jagged_meta']]