# Issue #1: Open MRI Image
## Issue 1: Start visualizing some data!

Let's jump into visualizing some MRI! We'll start with some sample data to get you going. First, please follow the instructions outlined in CONTRIBUTING.md to get set up with the OpenDIVE repository. Once you have the repository cloned and the virtual environment set up, you can create a Python script to start visualizing MRI data. We will take advantage of Dipy, a Python library for diffusion MRI. You can work in a directory called …

### 1. Downloading the data

Dipy has several example datasets available for download. Let's download a few example images. This might take a minute!

```python
from dipy.data import fetch_bundles_2_subjects, get_fnames

# Download data from bundles_2_subjects and small_25
fetch_bundles_2_subjects()
small_fname, small_bval_fname, small_bvec_fname = get_fnames(name="small_25")
```

This should save the data to your home directory, in a folder called `.dipy`.
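If you want to confirm where the files landed, here is a minimal sketch (assuming Dipy's default data directory, `~/.dipy`):

```python
from pathlib import Path

# Dipy's fetchers save example datasets under ~/.dipy by default.
dipy_home = Path.home() / ".dipy"
for entry in sorted(dipy_home.iterdir()):
    print(entry.name)
```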
### 2. Visualizing a T1-weighted image

First, let's look at a T1-weighted image. This image is a 3D volume that shows the structure of the brain. OpenDIVE is aimed at visualizing diffusion MRI models like tractograms, diffusion tensors, and fiber orientation distribution functions. These models require 3D visualization software; here we will use the `dipy.viz` module. A 3D visualization is built around a scene, a 3D world containing actors: objects that can be rendered in the scene. The `dipy.viz` package provides the `window` and `actor` modules for building scenes and actors.

First, let's load our data:

```python
from dipy.data import read_bundles_2_subjects
from dipy.viz import actor, window

# Load subject's data
subj1_data = read_bundles_2_subjects(subj_id="subj_1", metrics=["t1"], bundles=["af.left"])
t1_data = subj1_data['t1']
t1_affine = subj1_data['affine']
```

Now, let's create our scene and visualize the T1-weighted image:

```python
# Create a scene
scene = window.Scene()
# Add an actor for the 2D slice of the T1-weighted image
slice_actor = actor.slicer(t1_data, affine=t1_affine, value_range=(0, 500), interpolation='nearest')
scene.add(slice_actor)
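
# Optional, illustrative extra (not part of the original tutorial): the slicer
# shows a single 2D slice of the volume; display() lets you pick which one.
# Here we explicitly select the middle axial slice.
slice_actor.display(z=t1_data.shape[2] // 2)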
# Set up camera
scene.reset_camera()
scene.zoom(1.5)
# Show the scene
window.show(scene)
```

*(Screenshot: the rendered T1-weighted slice.)*

### 3. Visualizing diffusion MRI tensors

The real power of these tools lies in visualizing diffusion MRI models.

First, what is diffusion MRI? Diffusion MRI is a type of MRI that is sensitive to the random thermal motion of water molecules in the brain. This motion is restricted by structures like cell membranes, so water diffuses more freely along some directions than others. By measuring the direction of diffusion, we can infer the structure of the brain. A diffusion MRI acquisition is a 4D image: the first three dimensions are spatial, and the fourth dimension indexes the direction of diffusion.

In particular, diffusion MRI is useful for studying the white matter of the brain, the interior structures that connect different parts of the brain. The white matter is composed of long, thin fibers that form these connections, and we can reconstruct information about them using diffusion MRI models. [If you want an in-depth overview of diffusion MRI models, see Issue #4, "What is Diffusion MRI?"]

Let's visualize one of these models: the diffusion tensor model. This model represents the diffusion of water molecules as an ellipsoid. The ellipsoid has three axes representing the largest orthogonal directions of diffusion.

First, let's fit the diffusion tensor model. The fitting process is out of scope for this tutorial, but you can find more information in the Dipy documentation.
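Before fitting, it can help to see what a diffusion dataset actually looks like. Here is a minimal sketch using the `small_25` files obtained in section 1 (the exact shapes depend on the dataset):

```python
from dipy.io.image import load_nifti
from dipy.io.gradients import read_bvals_bvecs

# Load the small example dataset: a 4D array whose first three axes are
# spatial and whose last axis indexes the diffusion-weighted volumes.
dwi_data, dwi_affine = load_nifti(small_fname)
bvals, bvecs = read_bvals_bvecs(small_bval_fname, small_bvec_fname)

print("Diffusion data shape:", dwi_data.shape)      # (x, y, z, n_volumes)
print("Number of gradient directions:", len(bvals))
```

With that picture in mind, let's fit the tensor model to a richer example dataset: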
```python
from dipy.data import fetch_sherbrooke_3shell, read_sherbrooke_3shell
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel
# Step 1: Fetch and load the example dataset
fetch_sherbrooke_3shell() # Downloads the dataset
img, gtab = read_sherbrooke_3shell() # Reads the NIfTI image and gradient table
# Step 2: Extract data and affine
data = img.get_fdata() # Diffusion data as a numpy array
affine = img.affine # Affine transformation matrix
# Step 3: Create and fit a tensor model
tensor_model = TensorModel(gtab)
tensor_fit = tensor_model.fit(data)
# Step 4: Extract eigenvalues and eigenvectors
tensor_eigvals = tensor_fit.evals
tensor_eigvecs = tensor_fit.evecs
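
# Illustrative aside (not part of the original tutorial): at any single voxel,
# the three eigenvalues are the lengths of the ellipsoid's axes, and their mean
# is the mean diffusivity at that voxel. The voxel index here is arbitrary.
cx, cy, cz = data.shape[0] // 2, data.shape[1] // 2, data.shape[2] // 2
print("Eigenvalues at a central voxel:", tensor_eigvals[cx, cy, cz])
print("Mean diffusivity there:", tensor_eigvals[cx, cy, cz].mean())
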
# Print some info about the data
print("Data shape:", data.shape)
print("Eigenvalues shape:", tensor_eigvals.shape)
print("Eigenvectors shape:", tensor_eigvecs.shape Now, let's visualize the tensor model. Note that for this model, we need to define a sphere on which to plot the tensor values: from dipy.data import get_sphere
# Create a scene
scene = window.Scene()
# Add an actor for the tensors
sphere = get_sphere('symmetric724')
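
# Illustrative check (not part of the original tutorial): the sphere is a set
# of unit direction vectors on which each tensor ellipsoid is evaluated.
print("Sphere vertices:", sphere.vertices.shape)  # expected (724, 3)
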
tensor_actor = actor.tensor_slicer(tensor_eigvals, tensor_eigvecs, sphere=sphere, scale=0.3)
scene.add(tensor_actor)
# Set up camera
scene.reset_camera()
scene.zoom(1.5)
window.show(scene)
```

*(Screenshot: the rendered tensor glyphs.)*

### 4. Visualizing diffusion MRI tractography

Another model from diffusion MRI is tractography. Tractography is the process of reconstructing streamlines that model the white matter fibers in the brain. These streamlines are reconstructed by following the direction of diffusion in the brain.

Let's visualize some tractography data that has already been fitted for us:

```python
from dipy.io.streamline import load_tractogram
from dipy.io.stateful_tractogram import Space
from dipy.viz import actor, window
# Path to the streamline file (update if necessary)
file_path = "</path/to/.dipy>/exp_bundles_and_maps/bundles_2_subjects/subj_1/bundles/bundles_af.left.trk"
# Load the tractogram
tractogram = load_tractogram(file_path, reference="same", to_space=Space.VOX)
streamlines = tractogram.streamlines
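
# Illustrative peek (not part of the original tutorial): each streamline is an
# (N, 3) array of points, so we can check how many streamlines were loaded and
# how many points the first one contains.
print("Number of streamlines:", len(streamlines))
print("Points in the first streamline:", len(streamlines[0]))
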
# Create a scene
scene = window.Scene()
# Add an actor for the streamlines
streamlines_actor = actor.line(streamlines)
# Add the actor to the scene
scene.add(streamlines_actor)
# Set up camera and show
scene.reset_camera()
scene.zoom(1.5)
window.show(scene)
```

*(Screenshot: the rendered streamlines.)*

### Attribution

This tutorial was based partially on the tutorials provided by Dipy, licensed under the BSD License.
For color blind simulations, install either Colorblindly for Google Chrome (https://chromewebstore.google.com/detail/colorblindly/floniaahmccleoclneebhhmnjgdfijgg?hl=en) or Color Oracle for desktops (https://colororacle.org/).
Download the data and DIPY (instructions in the OpenDIVE GitHub README). Send a screenshot of a brain to a friend and upload it to an issue in GitHub.
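If you prefer to save the screenshot from your script rather than capturing the window by hand, here is a minimal sketch using `window.record` from `dipy.viz` (assuming `scene` is any of the scenes built above; the output filename is arbitrary):

```python
from dipy.viz import window

# Render the scene and write it to a PNG file instead of opening a window.
window.record(scene, out_path="brain_screenshot.png", size=(800, 800))
```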