VisualQC


VisualQC: an assistive tool to improve the quality control workflow of neuroimaging data.


Assessing and assuring the quality of imaging data, be it a raw acquisition (an fMRI run or a T1w MRI) or an automatic segmentation (gray/white surfaces for cortical thickness, or a subcortical segmentation), requires manual visual inspection. Not just one slice. Or one view. But many slices in all the views, to ensure the 3D segmentation is accurate at the voxel level. Often, looking at the raw data alone is not sufficient to spot subtle errors; statistical measures (across space or time) assist greatly in rating image quality and the severity of any artefacts spotted.

This manual process, even in its simplest form, is cumbersome and time-consuming. Without any assistive tool, it requires opening both the MRI and the segmentation for one subject in an editor that can overlay and color them properly, reviewing one slice at a time, navigating through many slices, and recording your rating in a spreadsheet; then repeating the whole process for every subject. Even more demanding tasks (such as assessing the accuracy of cortical thickness, e.g. as generated by Freesurfer, or reviewing an EPI sequence) may require multiple types of visualization (surface renderings of the pial surface, or carpet plots with specific temporal statistics for fMRI) in addition to the voxel-wise data. Without an automated tool, this logistical juggling invites human error, especially as you flip through hundreds of subjects over many weeks, jumping between multiple visualization programs and spreadsheets. Moreover, careful use of outlier-detection techniques on dataset-wide statistics (computed across all the subjects in a dataset) can help identify subtle errors (such as a small ROI with an unrealistic thickness value) that would otherwise go undetected.

VisualQC, purpose-built for rigorous quality control, reduces this laborious process to a single command: it seamlessly presents the relevant composite visualizations, alerts you to any outliers, offers an easy way to record your ratings, and lets you navigate through hundreds of subjects quickly. All you need to do is sit back and focus your expert eye on the data; VisualQC takes care of the flow and the bookkeeping.
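For example, a typical session boils down to installing the package and launching the command for your use case. The invocation below is a minimal sketch: the `pip install` line is the standard PyPI install, while the `--fs_dir` and `--id_list` flag names are assumptions based on the project docs, so verify them with `visualqc_freesurfer --help`:

    # install from PyPI
    pip install visualqc

    # launch a QC session over Freesurfer outputs;
    # flag names are assumptions -- confirm with: visualqc_freesurfer --help
    visualqc_freesurfer --fs_dir /path/to/subjects_dir --id_list subject_ids.txt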

Use-cases supported

VisualQC supports the following use cases, each exposed as its own command (see the sketch after this list):

  • Functional MRI scans
  • Freesurfer cortical parcellations (accuracy of pial/white surfaces on T1w MRI)
  • Structural T1w MRI scans (artefact identification and rating)
  • Volumetric/anatomical segmentation accuracy (general as well as Freesurfer)
  • Defacing algorithm accuracy
  • Diffusion MRI scans
  • Registration quality (spatial alignment) within a single modality or across different modalities
  • NEW: Batch generation of screenshots of the visualizations generated by the above modules
  • Have another important use case? Feel free to contact me
  • A few more use cases are under discussion and might be coming soon
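Each of the modules above is exposed as its own console command. The names below are a sketch based on the entry points documented for recent releases; confirm the exact names after installation or in the project docs:

    visualqc_t1_mri       # structural T1w MRI: artefact identification and rating
    visualqc_func_mri     # functional MRI scans
    visualqc_freesurfer   # Freesurfer cortical parcellations (pial/white surfaces)
    visualqc_defacing     # defacing algorithm accuracy
    visualqc_diffusion    # diffusion MRI scans
    visualqc_alignment    # registration (spatial alignment) quality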

Features

Each use case aims to offer the following features:

  • Zoom in on the displayed slices (down to the voxel level) to ensure you won’t miss any detail, so you can rate quality with confidence.
  • Automatically detect and flag outliers during review, using multivariate high-dimensional outlier detection (currently available for a few modules only; see the example after this list)
  • Display multiple slices in multiple views, and easily navigate all subjects in a dataset
  • Keyboard shortcuts to speed up the process, no need to lift your fingers!
  • Make free-form notes on the current review session
  • Customize the visualizations to your expert preference (e.g., remove certain overlays, control transparency, or change how two images are blended).
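As an illustration of the outlier-detection and customization features above, a review session could be tuned as in the sketch below. The flag names (`--outlier_method`, `--outlier_fraction`, `--alpha_set`) and the `isolation_forest` method name are assumptions drawn from the project docs; verify them with `visualqc_freesurfer --help`:

    # flag potential outliers using dataset-wide features, and
    # adjust how the MRI and the segmentation overlay are blended
    # (flag names are assumptions -- verify with --help)
    visualqc_freesurfer --fs_dir /path/to/subjects_dir --id_list subject_ids.txt \
                        --outlier_method isolation_forest --outlier_fraction 0.3 \
                        --alpha_set 0.7 0.7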

Manual

A detailed manual for Freesurfer QC is available in the VisualQC repo.

If you have any questions, please open an issue; we would appreciate your feedback.