This page collects all documentation related to using BRAINSTools.

  1. Get the software
    Here's the GitHub repository for BRAINSTools, which you'll need for running AutoWorkup:
    You'll need Python 2.7 and CMake for this.

    [deprecated: Here's the github for BRAINSStandAlone, which you'll need for running AutoWorkup:]

  2. Build the software
    In addition to the instructions at the bottom of the GitHub page, make sure BRAINSMultiSTAPLE is turned on.
    Also make sure Build_Testing is turned on. This builds more than is minimally needed (testing for ITK, VTK, etc.), but it also ensures that all the necessary data is fetched into BRAINSTools-build/ExternalData/TestData, which things like BRAINSCut need in order to work.
    If you are running into the problem described here: Build BRAINSTools in Helium Cluster, then your Python was not built with some options that are necessary for BSA.
       Try the solution described at that link if you can. If you are not on Helium and/or can't access /nfsscratch/PREDICT, then download (which is free for educational use).
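    The two build options above can be set when configuring with CMake. A minimal sketch, assuming a command-line configure - the paths are examples and the exact CMake variable names are my best guess, so confirm them in the CMake cache (ccmake or cmake-gui):

    ```shell
    # Configure and build BRAINSTools with the options from step 2.
    # SRC/BLD are example paths; USE_BRAINSMultiSTAPLE and BUILD_TESTING are
    # assumed variable names -- verify them in your CMake cache.
    SRC=~/BRAINSTools          # source tree
    BLD=~/BRAINSTools-build    # build tree
    mkdir -p "$BLD" && cd "$BLD"
    cmake \
      -DUSE_BRAINSMultiSTAPLE:BOOL=ON \
      -DBUILD_TESTING:BOOL=ON \
      "$SRC"
    make -j4
    ```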

  3. Also install FreeSurfer if you want to use the BRAINS wrapper for it.
    Hans recommends this version: freesurfer-x86_64-redhat-linux-gnu-stable5-20130126
  4. Set up your paths. You can do this either by modifying your global path (in .bashrc) or through the config file (the latter is much more strongly recommended). Explanation for this coming soon...

  5. [Here's what I eventually ended up with when I was doing this through my global path - not the cleanest setup, as it was mostly a trial-and-error process.

    export PATH=/Users/ioguz/epd-7.3-2-rh5-x86_64:/Users/ioguz/epd-7.3-2-rh5-x86_64/bin:/Users/ioguz/epd-7.3-2-rh5-x86_64/lib:${PATH}:/bin:/usr/bin:/usr/local/bin:$HOME/bin:$HOME/.lcmodel:$HOME/scripts:/opt/dcmtk/bin:/opt/afni:/opt/BRAINS3/bin:/opt/BRAINS3/lib:/Users/ioguz/BRAINSStandAlone-build/bin:/Users/ioguz/BRAINSStandAlone-build/NIPYPE:/Users/ioguz/BRAINSStandAlone-build/lib:/Users/ioguz/BRAINSStandAlone-build/NIPYPE/nipype:/Users/ioguz/BRAINSStandAlone-build/NIPYPE/bin:/Users/ioguz/BRAINSStandAlone-build/SimpleITK-build:/Users/ioguz/BRAINSStandAlone-build/SimpleITK-build/lib/:/opt/cmake-2.8.8-Linux-i386/bin/:/Users/ioguz/BRAINSStandAlone/AutoWorkup
    export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/Slicer3/lib/Teem-1.11.0:/opt/Slicer3/lib/vtk-5.6:/opt/Slicer3/lib/ModuleDescriptionParser:/opt/Slicer3/lib/InsightToolkit:/opt/Slicer3/lib/Python/lib
    export PYTHONPATH=${PYTHONPATH}:/Users/ioguz/BRAINSStandAlone/AutoWorkup:/Users/ioguz/BRAINSStandAlone-build/NIPYPE/
    export SUBJECTS_DIR=/Users/ioguz/bipolar/Analysis/BipolarExperimentOutputs_Results/FREESURFER_SUBJ
    with /Users/ioguz/bipolar/Analysis being my output directory, /Users/ioguz/BRAINSStandAlone-build the build tree and /Users/ioguz/BRAINSStandAlone the source tree.]

  6. Set up your configuration files - the AutoWorkup link below explains this pretty well. In summary, you have to create two files: a config file describing your software setup (an example is in BRAINSStandAlone/AutoWorkup/Template.config) and a CSV file that is essentially a list of your subjects and the paths to their images.
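    As a rough sketch of this step - the CSV column layout below is an assumption on my part (inferred only from the experiment-string description in step 10), so treat it as a placeholder and follow Template.config and the AutoWorkup link for the real format:

    ```shell
    # Start from the shipped template config, if the source tree is checked out.
    # (Path is relative to wherever you cloned the repository.)
    if [ -f BRAINSStandAlone/AutoWorkup/Template.config ]; then
        cp BRAINSStandAlone/AutoWorkup/Template.config my_experiment.config
    fi

    # Hypothetical session CSV: experiment string first, then subject/session
    # identifiers and image paths. The column layout is a guess -- check the
    # AutoWorkup documentation for the actual format.
    cat > my_sessions.csv <<'EOF'
    "MyExperiment","subj001","session01","{'T1': ['/paths/to/subj001_t1.nii.gz']}"
    EOF
    ```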
  7. Run AutoWorkup. This will take a while: TissueClassify and BRAINSCut together take a couple of hours, and FreeSurfer about 24 hours. Per subject. Yeah.
  8. You are more than likely to end up with some crash files. These are crash-*.npz files that get dumped into the folder you ran AutoWorkup from. Some of these indicate actual problems; others, not so much. You can open them with BRAINSStandAlone-build/NIPYPE/bin/nipype_display_crash (which should be on your path if you've done the above correctly, so you can just type "nipype_display_crash crash-blablabla.npz").
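    To triage a batch of crashes in one pass, a loop like the following works (it just feeds each crash dump to the viewer mentioned above; nipype_display_crash is assumed to already be on your path):

    ```shell
    # Display every crash dump in the directory AutoWorkup was run from.
    for crashfile in crash-*.npz; do
        [ -e "$crashfile" ] || continue   # no crash files: nothing to do
        echo "=== $crashfile ==="
        nipype_display_crash "$crashfile"
    done
    ```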
  9. If you're on Helium and running a ton of processes together, you're likely to run into I/O errors. The crash files for these will contain a line that says IOError: (bla bla bla) and will complain that an output file wasn't created even though the process completed. This essentially means you're running too many processes at once - wait until at least some of your processes have finished successfully, then rerun. AutoWorkup is pretty smart about keeping track of where it left off, so you don't have to change anything on your command line, and it should recover nicely. Emphasis on 'should'.
    • According to the nice admins of Helium, this problem can be alleviated by moving your data to /localscratch (which is unique to each compute node), /nfsscratch, or /glusterscratch, with /nfsscratch being the recommended one.
    • Note that you can run either n instances of AutoWorkup with 1 subject each (faster, because they run in parallel, but a bit of a headache to manage if they fail and you need to keep track of who got how far) or 1 instance of AutoWorkup with n subjects (slower, because it runs the subjects in sequence).
  10. So if things go well and you process a subject successfully, a set of files is created in your output directory. Two main subfolders are created, CACHE and Results (these will be named something like /Users/ioguz/bipolar/Analysis/BipolarExperimentOutputs_CACHE and /Users/ioguz/bipolar/Analysis/BipolarExperimentOutputs_Results). The CACHE folder is much, much larger and contains all the scripts generated by the pipeline as well as the results of all the intermediate steps. Results is where most of what you care about goes (unless things aren't working and you have to find out which intermediate step is failing). The exception is FreeSurfer results - those stay in the CACHE (at least for me) for some reason. Here's a breakdown of what should be created and what it is.
      1. FreeSurfer results are in: CACHE/${subject}/BAWFS_Subjects/${subject}. The files here are the typical FS tree for that subject - documented thoroughly on the FS wiki. You can tell if this has completed properly by looking at CACHE/${subject}/BAWFS_Subjects/${subject}/scripts.
          1. If you have the recon-all.done file, you're good to go.
          2. If you have IsRunning.lh+rh, it's still running; you have to be patient.
          3. If you have recon-all.error, something's not right.
      2. Results/${experiment}/${subject}, where the experiment is the string in the first entry of your session file (.csv) for the line associated with the subject. This folder contains:
        1. ACPCAlign - the Talairach transform and a few associated files
        2. TissueClassify and ACCUMULATED_POSTERIORS - outputs from BABC, which is a more advanced version of ABC (Atlas Based Classification), or the old itkEMS. The main things to look for here are t1_average_BRAINSABC.nii.gz and t2_average_BRAINSABC.nii.gz, the bias-corrected, averaged (if there is more than one scan in each category) versions of the input scans, as well as fixed_headlabels_seg.nii.gz and fixed_brainlabels_seg.nii.gz, which contain the final tissue classification results (with and without the non-brain tissue labels included, respectively).
        3. AntsLabelWarpedToSubject - the atlas warped to the subject, with the atlas labels carried along by that deformation.
        4. DenoisedRFSegmentations - the subcortical segmentation created by BRAINSCut; not sure what kind of denoising is applied.
        5. STAPLERFSegmentations - as a wild guess, some of these segmentations must be STAPLE'd together; not sure which ones, though. Also, my version only does this for one hemisphere, which doesn't seem quite right.
  11. A similar list of AutoWorkup outputs also exists at: BRAINS AutoWorkup Output Descriptions
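    The three recon-all status checks from step 10 can be rolled into a quick shell snippet - CACHE and subject below are examples, following the directory layout described above:

    ```shell
    # Report FreeSurfer recon-all status for one subject, based on the marker
    # files in the scripts directory (see step 10). Paths are examples only.
    CACHE=/Users/ioguz/bipolar/Analysis/BipolarExperimentOutputs_CACHE
    subject=subj001
    scripts="$CACHE/$subject/BAWFS_Subjects/$subject/scripts"

    if [ -f "$scripts/recon-all.done" ]; then
        status="done"
    elif [ -f "$scripts/IsRunning.lh+rh" ]; then
        status="running"
    elif [ -f "$scripts/recon-all.error" ]; then
        status="error"
    else
        status="unknown"   # no marker files: probably not started yet
    fi
    echo "$subject: $status"
    ```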