This project page contains instructions to reproduce the results for our initial paper, as well as a tutorial on how to use this approach for your own registration tasks.
Paper
Fast Predictive Image Registration
X. Yang (UNC Chapel Hill, USA)
R. Kwitt (University of Salzburg, Austria)
M. Niethammer (UNC Chapel Hill, USA)
MICCAI DLMI workshop (2016)
@inproceedings{YKM16a,
author = {X. Yang and R. Kwitt and M. Niethammer},
title = {Fast Predictive Image Registration},
booktitle = {DLMI},
year = {2016}}
Requirements
Our approach for momenta prediction is implemented in Torch. The actual mapping/warping of images (using the predicted momenta) is realized using PyCA and VectorMomentum LDDMM, both available from the SCI Computational Anatomy repository on Bitbucket. Regarding the minimum system requirements, we tested our code on Ubuntu 16.04 with 24GB of main memory and an NVIDIA Titan X (sponsored by NVIDIA, running CUDA 8.0). If the code runs on your system, please let us know your configuration and we will be happy to compile a list of supported platforms/configurations.
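Before starting the compilation, you may want to quickly verify that the CUDA toolkit and Torch (with cutorch) are visible on your system. The following optional sanity check assumes both are already installed:
nvcc --version  # should report the CUDA toolkit version (we used CUDA 8.0)
th -e "require 'cutorch'; print('GPUs found: ' .. cutorch.getDeviceCount())"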
Compilation
Let's assume that all of our code will reside under /code. First, we are going to install PyCA. Due to compatibility issues between the latest VectorMomentum LDDMM package (which we will use later on) and PyCA, we suggest checking out a slightly older version of PyCA. Note that we also need the Insight Toolkit (ITK) to be installed.
sudo apt-get install insighttoolkit4-python libinsighttoolkit4-dev # install ITK v4
cd /code
git clone https://bitbucket.org/scicompanat/pyca.git
cd /code/pyca
git checkout 9954dd5319efaa0ac5f58977e57acf004ad73ed7
mkdir Build
cd Build
ccmake .. # configure PyCA via the cmake ncurses interface
On our system, we had to bump the CUDA_ARCH_VERSION to 20 for PyCA to compile. This can be done via the ccmake interface in the Advanced Mode. Also, the Python bindings did not compile with SWIG 3.0.3 (which was the version installed on our Ubuntu 16.04 system); we downgraded to swig-2.0.12 instead, e.g., with
sudo apt-get install swig2.0
# Note that you might have to uninstall swig3.0 if errors persist
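You can verify which SWIG version will be picked up during configuration with
swig -version  # should report SWIG Version 2.0.x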
Then, running
cd /code/pyca/Build
make -j4 # -j4 will run 4 parallel jobs
should compile PyCA. Finally, we set the PYTHONPATH as follows (we recommend putting the export statement into your .bashrc file, assuming that you are running a bash shell):
export PYTHONPATH=${PYTHONPATH}:PATH_TO_PYCA/Build/python_module
where you have to replace PATH_TO_PYCA with the directory where you checked out PyCA (we assume that PyCA was built under PATH_TO_PYCA/Build; in our case the full path is /code/pyca/Build/python_module). Next, we are going to clone the VectorMomentum LDDMM code and set the PYTHONPATH accordingly:
cd /code
git clone https://bitbucket.org/scicompanat/vectormomentum.git
export PYTHONPATH=${PYTHONPATH}:/code/vectormomentum/Code/Python
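For convenience, both export statements can go into your .bashrc. A quick import check then verifies that the paths are set correctly (the module name PyCA.Core below follows PyCA's Python examples; adjust it if your build uses a different layout):
# in ~/.bashrc (adjust the paths to your checkout locations)
export PYTHONPATH=${PYTHONPATH}:/code/pyca/Build/python_module
export PYTHONPATH=${PYTHONPATH}:/code/vectormomentum/Code/Python
# sanity check: this should run without an ImportError
python -c "import PyCA.Core"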
Installation
To install our encoder-decoder code, just clone our GitHub repository with
cd /code
git clone https://github.com/rkwitt/FastPredictiveImageRegistration.git
In the following examples, we describe in detail how to actually run the code.
EXAMPLE 1: 2D atlas-to-image registration
We provide example 2D data (i.e., 2D MR slices from the OASIS brain database) as well as a pre-trained model in the repository under the data directory. This directory contains
- 100 images (transversal slices) for training
- 50 images (transversal slices) for testing
- 1 atlas image (in data/images/2D_atlas/atlas.mhd)
Additionally, the data directory contains computed target momenta for atlas-to-image registration, which will be used for training our network (these are the momenta that we would get by running LDDMM registration). In case you don't want to use the pre-trained model for experimenting, you can train your own network for momentum prediction with the provided data as follows:
cd /code/FastPredictiveImageRegistration
th main_nodiff.lua
We will give another example later for the case when no LDDMM registrations (i.e., the momenta) are available for training.
The network definition for the particular model in this example can be found in the file create_model.lua (in the function VAE_deformation_parallel_small_noDropout). By default, we train (using rmsprop) for 10 epochs with a batch size of 300 and 15x15 patches.
Momenta prediction by a trained model
To run the (pre)-trained model for momenta prediction, we execute
cd /code/FastPredictiveImageRegistration
th test_recon_nodiff.lua
Note that this code might fail if you do not have the Torch matio module installed. In that case, you can use luarocks to install it with
luarocks install matio
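The Torch matio rock only binds to the system-wide libmatio library; if the install fails because that library is missing, installing it first and re-running the luarocks command above should help (on Ubuntu 16.04 the package should be libmatio2 or libmatio-dev; the exact name may differ on other systems):
sudo apt-get install libmatio2  # shared library the Torch matio rock binds against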
Once test_recon_nodiff.lua completes, you should see a file named 2D_output.mat in the /code/FastPredictiveImageRegistration directory.
Warping images based on predicted momenta
We implement the warping step as follows: first, we convert all momenta predictions to .mhd files; second, we prepare the (YAML) configuration files for VectorMomentum LDDMM and eventually run geodesic shooting using the predicted momenta. Let's first start MATLAB and change to the utils directory:
cd /code/FastPredictiveImageRegistration/utils
Then, we edit the file change_m0_to_mhd_2D.m and set the corresponding paths. By default, the output directory will be /tmp/ and the predicted momenta files will be /tmp/m_1.mhd to /tmp/m_50.mhd, since we have N=50 test cases. Next, let us write the YAML configuration files for the VectorMomentum LDDMM code. This is done by editing updateyaml_2D.m. Running updateyaml_2D will then create N=50 configuration files in the output directory (/tmp in our case).
cd /code/FastPredictiveImageRegistration/utils
change_m0_to_mhd_2D
updateyaml_2D
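At this point, the output directory should contain the converted momenta as well as one configuration file per test case (the deep_network_<i>.yaml naming matches the file used in the shooting command below); a quick check from a regular shell could look like
ls /tmp/m_*.mhd  # m_1.mhd ... m_50.mhd
ls /tmp/deep_network_*.yaml  # one YAML configuration file per test case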
We can now run the VectorMomentum LDDMM code (for our first test case of this example) as follows:
cd /code/vectormomentum/Code/Python/Applications/
python CAvmGeodesicShooting.py /tmp/deep_network_1.yaml
This will create the atlas image warped onto the source image in the directory /tmp/case_1. In case running the geodesic shooting code produces errors (related to plotting), just comment out the call GeodesicShootingPlots(g, ginv, I0, It, cf) on line 151 of CAvmGeodesicShooting.py.
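To process all N=50 test cases instead of just the first one, a simple shell loop over the generated configuration files is sufficient. This is just a convenience sketch, assuming the deep_network_<i>.yaml naming from above; the same loop also applies to the 3D example further below:
cd /code/vectormomentum/Code/Python/Applications/
for i in $(seq 1 50); do
    python CAvmGeodesicShooting.py /tmp/deep_network_${i}.yaml
done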
EXAMPLE 2: 3D atlas-to-image registration
We provide example 3D training and testing data from the OASIS brain database (including the atlas), i.e.,
- 100 images for training
- 50 images for testing
- 1 atlas image (in data/images/3D_atlas/atlas.mhd)
These images have been (affinely) pre-aligned to the atlas (thanks to Nikhil Singh). We additionally provide momenta (obtained by PyCA) for all training and testing images. This allows us to train and test our encoder-decoder network.
Downloading the data
The training/testing data can be downloaded here. Note: the total data size is approximately 10GB. We recommend putting the data into the data subdirectory, i.e., /code/FastPredictiveImageRegistration/data in our example, so that you do not have to edit the training/testing code. For now, the paths are hardcoded.
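Once the download has finished (and the data has been extracted), you can quickly verify the size and location with
du -sh /code/FastPredictiveImageRegistration/data  # should be roughly 10GB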
Training
The code for training the 3D encoder-decoder network can be found in main_nodiff_3D_patchprune.lua; it assumes that the data has been downloaded to /code/FastPredictiveImageRegistration/data. Executing
cd /code/FastPredictiveImageRegistration/
th main_nodiff_3D_patchprune.lua
will train the encoder-decoder network (on our system with one NVIDIA Titan X, this takes about two days).
Momenta prediction by a trained model
Next, we will run the model in prediction mode to map our N=50 input images to momenta that can later be used by the VectorMomentum LDDMM code. This can simply be done by executing
cd /code/FastPredictiveImageRegistration/
th test_recon_nodiff_3D.lua
mv 3D_output.mat data/
which will create a MATLAB file 3D_output.mat in the directory from which you executed the test_recon_nodiff_3D.lua Torch code (in our case /code/FastPredictiveImageRegistration/) and then move it into the data subdirectory.
Warping images based on predicted momenta
Once the previous step is finished, we need to convert the predicted momenta (in the 3D_output.mat file) to .mhd files. To do this, we edit the change_m0_to_mhd_3D.m MATLAB file (in the utils subdirectory) and adjust the variables within the CONFIG::START and CONFIG::END block. In our example, we set:
momenta_mat_file = '../data/3D_output.mat';
output_dir = '/tmp/';
output_prefix = 'm';
Then, we run
cd /code/FastPredictiveImageRegistration/utils
change_m0_to_mhd_3D
which will create N=50 output files (each as an .mhd/.raw pair) in the /tmp folder, named m_1.mhd, m_1.raw, ..., m_50.mhd, m_50.raw.
Before we can run the VectorMomentum LDDMM code on these files, we need to create the required configuration files. As in the 2D example, we edit the updateyaml_3D.m MATLAB file and adjust the variables within the CONFIG::START and CONFIG::END block. Then, running
cd /code/FastPredictiveImageRegistration/utils
updateyaml_3D
generates a collection of N=50 .yaml files (in the /tmp directory) that can be used to run the VectorMomentum LDDMM code.
As in the 2D example, we run the VectorMomentum LDDMM code with
cd /code/vectormomentum/Code/Python/Applications/
python CAvmGeodesicShooting.py /tmp/deep_network_1.yaml
Steps for training/testing our encoder-decoder network on your own data
Will be available shortly.
Acknowledgments
We would like to thank the creators of the OASIS brain database for freely providing the data. The provided (preprocessed) OASIS data is subject to the same Data Usage Agreement as listed on the OASIS webpage.
Contact
Please contact Xiao Yang for comments, suggestions or bug reports :)