Visualization Classes - API

SlicePicker API

class mrivis.SlicePicker(image_in, view_set=(0, 1, 2), num_slices=(10, ), sampler='linear', min_density=0.01)[source]

Bases: object

Class to pick non-empty slices along the various dimensions for a given image.

The term slice here refers to one cross-section through a 3D image,
which is what this class is designed for. However, no explicit restrictions are placed on dicing an N>=4 dimensional array and receiving an (N-1)-dimensional array.

Parameters:

image_in : ndarray

3D array to be sliced. No explicit restrictions are placed on the number of dimensions of image_in

in order to obtain an (n-1)-dim array, but appropriate reshaping may need to be performed first.

view_set : iterable

List of integers selecting the dimensions to be sliced.

num_slices : int or iterable of the same size as view_set

Number of slices to be selected in each view.

sampler : str or list or callable

Selection strategy that determines which slices are returned. All sampling is done between the first and last non-empty slices in that view/dimension.

  • if ‘linear’ : linearly spaced slices

  • if list, it is treated as a set of percentages at which the slices are to be sampled

    (these must be in the range [1, 100], not [0, 1]). This can be used to select more/all slices in the middle, e.g. range(40, 60, 5),

    or at the ends, e.g. [5, 10, 15, 85, 90, 95]

  • if callable, it must take a 2D image of arbitrary size and return True/False

    to indicate whether to select that slice or not. Only non-empty slices (at least one non-zero voxel) are provided as input. Simple examples for the callable could be based on 1) the percentage of non-zero voxels exceeding a threshold, 2) the presence of a desired texture, or 3) certain properties of the intensity distribution (skew towards dark/bright, energy, etc.)

    If the sampler returns more than the requested num_slices,

    only the first num_slices will be selected.

min_density : float or None

Minimum density of non-zero voxels within a given slice for it to be considered non-empty. Default: 0.01 (1%). If None, all slices are included.
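The min_density check and a custom callable sampler can be sketched in plain NumPy. The names below (dense_enough, bright_enough) and the 0.5 intensity threshold are illustrative assumptions, not part of the mrivis API:

```python
import numpy as np

def dense_enough(slice_2d, min_density=0.01):
    """Mimics the min_density check: a slice counts as non-empty only if
    the fraction of non-zero voxels meets the threshold."""
    return np.count_nonzero(slice_2d) / slice_2d.size >= min_density

def bright_enough(slice_2d):
    """Example callable sampler: keep slices whose mean intensity over
    non-zero voxels exceeds an arbitrary threshold (0.5 here)."""
    nonzero = slice_2d[slice_2d != 0]
    return nonzero.size > 0 and nonzero.mean() > 0.5

empty = np.zeros((10, 10))
full = np.ones((10, 10))
print(dense_enough(empty), dense_enough(full))    # False True
print(bright_enough(empty), bright_enough(full))  # False True
```

A callable like bright_enough could then be passed as sampler= when constructing a SlicePicker.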

Methods

get_slice_indices()[source]

Returns the indices of the selected slices (each a tuple: (dim, slice_num)).

get_slices(extended=False)[source]

Generator over all the slices selected, each time returning a cross-section.

Parameters:

extended : bool

Flag to return just slice data (default, extended=False), or

return a tuple of axis, slice_num, slice_data (extended=True)

Returns:

slice_data : the slice data alone (default, with extended=False), or

a tuple of (axis, slice_num, slice_data) when extended=True
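The effect of the extended flag can be sketched with a plain generator over np.take; unlike get_slices(), this hypothetical iter_slices yields every slice rather than a sampled subset:

```python
import numpy as np

def iter_slices(image, view_set=(0, 1, 2), extended=False):
    """Illustrative sketch of the extended flag: yield either the bare
    cross-section, or an (axis, slice_num, slice_data) tuple."""
    for axis in view_set:
        for slice_num in range(image.shape[axis]):
            data = np.take(image, slice_num, axis=axis)
            yield (axis, slice_num, data) if extended else data

img = np.arange(24.0).reshape(2, 3, 4)
first = next(iter_slices(img, extended=True))
print(first[0], first[1], first[2].shape)  # 0 0 (3, 4)
```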

get_slices_multi(image_list, extended=False)[source]

Returns the same cross-section from the multiple images supplied.

All images must be of the same shape as the original image defining this object.

Parameters:

image_list : Iterable

containing at least two images

extended : bool

Flag to return just slice data (default, extended=False), or

return a tuple of axis, slice_num, slice_data (extended=True)

Returns:

tuple_slice_data : tuple of one slice from each image in the input image list

Let’s denote it TSL.

If extended=True, returns a tuple (axis, slice_num, TSL).
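Extracting the same cross-section from several same-shaped images can be sketched with np.take; same_cross_section is an illustrative stand-in, not the mrivis implementation:

```python
import numpy as np

def same_cross_section(image_list, axis, slice_num):
    """Return the same (axis, slice_num) cross-section from each image,
    as get_slices_multi does for each selected slice."""
    shapes = {im.shape for im in image_list}
    if len(shapes) != 1:
        raise ValueError('all images must have the same shape')
    return tuple(np.take(im, slice_num, axis=axis) for im in image_list)

a = np.zeros((4, 5, 6))
b = np.ones((4, 5, 6))
tsl = same_cross_section([a, b], axis=2, slice_num=3)
print(len(tsl), tsl[0].shape)  # 2 (4, 5)
```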

save_as_gif(gif_path, duration=0.25)[source]

Package the selected slices into a single GIF for easy sharing and display (on the web etc.).

You must install the imageio module separately to use this feature.

Parameters:

gif_path : str

Output path for the GIF image

duration : float

Duration each frame is displayed in the GIF, in seconds.

MiddleSlicePicker API

class mrivis.MiddleSlicePicker(image)[source]

Bases: mrivis.base.SlicePicker

Convenience class to select the classic middle slice from each view.

Returns the middle slice from all views in the image.

Parameters:

image : ndarray

The image from which the middle slices are picked. Must be at least 3D.

Collage API

class mrivis.Collage(view_set=(0, 1, 2), num_rows=2, num_slices=(10, ), sampler='linear', attach_image=None, bounding_rect=(0.02, 0.02, 0.98, 0.98), fig=None, figsize=(14, 10), display_params=None)[source]

Bases: object

Class exhibiting multiple slices from a 3D image,
with convenience routines handling all the cross-sections as a single set.

Once created with certain display_params (containing vmin and vmax),
this class does NOT automatically rescale the data as you attach different images. Ensure the input images are rescaled to [0, 1] BEFORE attaching.
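Such a rescaling can be done with a simple min-max normalization; rescale01 below is an illustrative helper, not part of mrivis:

```python
import numpy as np

def rescale01(image):
    """Min-max rescale an image to [0, 1]; constant images map to zeros."""
    image = image.astype(float)
    span = image.max() - image.min()
    if span == 0:
        return np.zeros_like(image)
    return (image - image.min()) / span

img = np.array([[2.0, 4.0], [6.0, 10.0]])
print(rescale01(img).min(), rescale01(img).max())  # 0.0 1.0
```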
Parameters:

view_set : iterable

List of integers selecting the dimensions to be sliced.

num_slices : int or iterable of the same size as view_set

Number of slices to be selected in each view.

num_rows : int

Number of rows per view.

sampler : str or list or callable

Selection strategy that determines which slices are returned. All sampling is done between the first and last non-empty slices in that view/dimension.

  • if ‘linear’ : linearly spaced slices

  • if list, it is treated as a set of percentages at which the slices are to be sampled

    (these must be in the range [1, 100], not [0, 1]). This can be used to select more/all slices in the middle, e.g. range(40, 60, 5),

    or at the ends, e.g. [5, 10, 15, 85, 90, 95]

  • if callable, it must take a 2D image of arbitrary size and return True/False

    to indicate whether to select that slice or not. Only non-empty slices (at least one non-zero voxel) are provided as input. Simple examples for the callable could be based on 1) the percentage of non-zero voxels exceeding a threshold, 2) the presence of a desired texture, or 3) certain properties of the intensity distribution (skew towards dark/bright, energy, etc.)

    If the sampler returns more than the requested num_slices,

    only the first num_slices will be selected.

attach_image : ndarray

The image to be attached to the collage, once it is created. Must be at least 3D.

display_params : dict

dict of keyword parameters that can be passed to matplotlib’s Axes.imshow()

fig : matplotlib.Figure

figure handle to create the collage in. If not specified, creates a new figure.

figsize : tuple of 2

Figure size (width, height) in inches.

bounding_rect : tuple of 4

The rectangular area to bind the collage to (in normalized figure coordinates)
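How a bounding_rect is divided into cells can be illustrated without mrivis. The sketch below assumes the rect is given as (x0, y0, x1, y1) corners in normalized figure coordinates and splits it into a uniform rows x cols grid; this is an assumption about the layout, not mrivis’s exact algorithm:

```python
def grid_cells(bounding_rect=(0.02, 0.02, 0.98, 0.98), rows=2, cols=5):
    """Split an (x0, y0, x1, y1) rect into rows*cols cells, each returned
    as (left, bottom, width, height) in normalized figure coordinates."""
    x0, y0, x1, y1 = bounding_rect
    w, h = (x1 - x0) / cols, (y1 - y0) / rows
    return [(x0 + c * w, y0 + r * h, w, h)
            for r in range(rows) for c in range(cols)]

cells = grid_cells()
print(len(cells))  # 10
```

Each such cell rectangle could be handed to matplotlib’s fig.add_axes() to place one cross-section.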

Methods

attach(image_in, sampler=None, show=True)[source]

Attaches the relevant cross-sections to each axis.

Parameters:

image_in : ndarray

The image to be attached to the collage. Must be at least 3D.

sampler : str or list or callable

Selection strategy that determines which slices are returned. All sampling is done between the first and last non-empty slices in that view/dimension.

  • if ‘linear’ : linearly spaced slices

  • if list, it is treated as a set of percentages at which the slices are to be sampled

    (these must be in the range [1, 100], not [0, 1]). This can be used to select more/all slices in the middle, e.g. range(40, 60, 5),

    or at the ends, e.g. [5, 10, 15, 85, 90, 95]

  • if callable, it must take a 2D image of arbitrary size and return True/False

    to indicate whether to select that slice or not. Only non-empty slices (at least one non-zero voxel) are provided as input. Simple examples for the callable could be based on 1) the percentage of non-zero voxels exceeding a threshold, 2) the presence of a desired texture, or 3) certain properties of the intensity distribution (skew towards dark/bright, energy, etc.)

    If the sampler returns more than the requested num_slices,

    only the first num_slices will be selected.

show : bool

Flag to request immediate display of collage

clear()[source]

Clears all the axes to start fresh.

hide(grid=None)[source]

Removes the collage from view.

Parameters:

grid : int or None

index (into the original view_set) of the grid/view to be hidden

save(annot=None, output_path=None)[source]

Saves the collage to disk as an image.

Parameters:

annot : str

text to annotate the figure with a super title

output_path : str

path to save the figure to. Note: any spaces in the filename will be replaced with _

show(grid=None)[source]

Makes the collage visible.

Parameters:

grid : int or None

index (into the original view_set) of the grid/view to be made visible

transform_and_attach(image_list, func, show=True)[source]

Displays the transformed (combined) version of the cross-sections from each image

(same slice and dimension). So if you input n>=1 images, n slices are obtained from each image; these are passed to the func (callable) provided, and the result is displayed in the corresponding cell of the collage.

Useful applications:

  • input two images and a function to overlay the edges of one image on the other

  • input two images and a function to mix them in a checkerboard pattern

  • input one image and a function to saturate the upper half of intensities (to increase contrast and reveal any subtle ghosting in slices)

func must be able to receive as many arguments as there are elements in image_list. If your func needs additional parameters, make them keyword arguments and use functools.partial to obtain a new callable that takes in just the slices.
Parameters:

image_list : list or ndarray

list of images or a single ndarray

func : callable

function to be applied on the input images (their slices)

to produce a single slice to be displayed.

show : bool

Flag to indicate whether to make the collage visible.
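A minimal func for the checkerboard use case, together with the functools.partial pattern for extra parameters, might look like this (checkerboard_mix and its patch parameter are illustrative, not part of mrivis):

```python
import functools
import numpy as np

def checkerboard_mix(slice_a, slice_b, patch=2):
    """Mix two same-shaped slices in a checkerboard pattern of
    patch x patch blocks, alternating between the two inputs."""
    rows, cols = np.indices(slice_a.shape)
    mask = ((rows // patch) + (cols // patch)) % 2 == 0
    return np.where(mask, slice_a, slice_b)

# Extra parameters go in via functools.partial, so the resulting
# callable takes exactly one slice per input image:
coarse_mix = functools.partial(checkerboard_mix, patch=4)

a, b = np.zeros((8, 8)), np.ones((8, 8))
mixed = coarse_mix(a, b)
print(mixed.shape, mixed.min(), mixed.max())  # (8, 8) 0.0 1.0
```

A callable like coarse_mix could then be passed as func= to transform_and_attach along with the two images.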

MidCollage API

class mrivis.MidCollage(image, bounding_rect=(0.02, 0.02, 0.98, 0.98), fig=None, display_params=None)[source]

Bases: mrivis.base.Collage

Convenience class to display the mid-slices from all the views.

Display mid-slices from all the views.

Parameters:

image : ndarray

The image to be attached to the collage, once it is created. Must be at least 3D.

fig : matplotlib.Figure

figure handle to create the collage in. If not specified, creates a new figure.

bounding_rect : tuple of 4

The rectangular area to bind the collage to (in normalized figure coordinates)

display_params : dict

dict of keyword parameters that can be passed to matplotlib’s Axes.imshow()

Carpet API

class mrivis.Carpet(image_nD, fixed_dim=-1, roi_mask='auto', rescale_data=True, num_frames_to_skip=2)[source]

Bases: object

Class to unroll 4D or higher-dimensional data into 2D images.

Typical examples include functional or diffusion MRI data.

Constructor.

This class can optionally:

  • cluster data in the fixed dimension (typically the time/gradient dimension in functional/diffusion MR imaging data)
  • label the resulting clusters.
Parameters:

image_nD : ndarray or str

input image, or a path to an image, from which the carpet needs to be made.

fixed_dim : int

the dimension to be fixed while unrolling the rest. Default: the last dimension, indicated by -1.

roi_mask : ndarray or str or None

If an image of the same size as the (N-1)-dim array, it is interpreted as a mask to be applied to each 3D/(N-1)D volume. If ‘auto’, an automatic background (zeros) mask is computed and removed. If None, all the voxels in the original image are retained.

rescale_data : bool

Whether to rescale the input image over the chosen fixed_dim. Default is to rescale to maximize the contrast.

num_frames_to_skip : int

Number of frames to skip when displaying, to avoid pixel decimation on screen. Choose a number that results in roughly 600-1000 vertical lines in total.
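The core unrolling described above can be sketched in NumPy; make_carpet is an illustrative stand-in for the constructor’s reshaping (without the masking and rescaling steps), not the mrivis implementation:

```python
import numpy as np

def make_carpet(image_nd, fixed_dim=-1):
    """Unroll an N-D image into a 2D carpet: one row per voxel,
    one column per point along the fixed dimension."""
    moved = np.moveaxis(image_nd, fixed_dim, -1)  # fixed dim goes last
    return moved.reshape(-1, moved.shape[-1])     # (num_voxels, num_frames)

fmri = np.random.rand(4, 5, 6, 100)  # toy 4D image, time as last dim
carpet = make_carpet(fmri)
print(carpet.shape)  # (120, 100)
```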

Methods

cluster_rows_in_roi(roi_mask=None, num_clusters_per_roi=5, metric='minkowski')[source]

Clusters the data within all the ROIs specified in a mask.

Parameters:

roi_mask : ndarray or None

volumetric mask defining the list of ROIs, with a label for each voxel. This must be the same size in all dimensions except the fixed_dim, i.e. if you were making a Carpet from an fMRI image of size 125x125x90x400, fixing the 4th dimension (of size 400), then roi_mask must be of size 125x125x90.

num_clusters_per_roi : int

number of clusters (n) to form for each ROI specified in the roi_mask. If n (say 20) is less than the number of voxels in a given ROI (say 2000), then data from approximately 2000/20=100 voxels is summarized (averaged by default) into a single cluster. So if the ROI mask has m ROIs (say 10), the final clustered carpet has m*n rows (200), regardless of the number of voxels in the 3D image.

metric : str

distance metric for the hierarchical clustering algorithm. Default: ‘minkowski’. Options: anything accepted by scipy.spatial.distance.pdist, which can be: ‘braycurtis’, ‘canberra’, ‘chebyshev’, ‘cityblock’, ‘correlation’, ‘cosine’, ‘dice’, ‘euclidean’, ‘hamming’, ‘jaccard’, ‘kulsinski’, ‘mahalanobis’, ‘matching’, ‘minkowski’, ‘rogerstanimoto’, ‘russellrao’, ‘seuclidean’, ‘sokalmichener’, ‘sokalsneath’, ‘sqeuclidean’, ‘yule’.
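The m-ROIs-times-n-clusters bookkeeping described above can be sketched with NumPy. The real method performs hierarchical clustering with the chosen metric; the chunk-and-average stand-in below only demonstrates the resulting shape (summarize_roi_rows is hypothetical, not the mrivis implementation):

```python
import numpy as np

def summarize_roi_rows(carpet, roi_labels, num_clusters_per_roi=5):
    """Stand-in for cluster_rows_in_roi: split each ROI's rows into
    num_clusters_per_roi chunks and average each chunk, so m ROIs
    always yield m * n rows regardless of per-ROI voxel counts."""
    out = []
    for roi in np.unique(roi_labels):
        rows = carpet[roi_labels == roi]
        for chunk in np.array_split(rows, num_clusters_per_roi):
            out.append(chunk.mean(axis=0))
    return np.vstack(out)

carpet = np.random.rand(200, 50)              # 200 voxels, 50 time points
labels = np.repeat([1, 2, 3, 4], 50)          # m = 4 ROIs of 50 voxels each
reduced = summarize_roi_rows(carpet, labels)  # m * n = 20 rows
print(reduced.shape)  # (20, 50)
```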

save(output_path=None, title=None)[source]

Saves the current figure with carpet visualization to disk.

Parameters:

output_path : str

Path to save the figure to.

title : str

text to overlay and annotate the visualization (done via plt.suptitle())

show(clustered=False, ax_carpet=None, label_x_axis='time point', label_y_axis='voxels/ROI')[source]

Displays the carpet in the given axis.

Parameters:

clustered : bool, optional

Flag to indicate whether to show the clustered/reduced carpet or the original. You must run .cluster_rows_in_roi() before trying to show the clustered carpet.

ax_carpet : Axis, optional

handle to a valid matplotlib Axis

label_x_axis : str

String label for the x-axis of the carpet

label_y_axis : str

String label for the y-axis of the carpet

Returns:

ax_carpet : Axis

handle to axis where carpet is shown