
20130125 - Different Noise Level Testing with Simulated MR Images

This page should fill in any important notes about the project. Think of it as a page in a global lab notebook.

  1. Description of data to be processed
  2. Scripts used to process the data
  3. Failed processing attempts
  4. Successful processing attempts


Table of Contents

Data Description

BrainWeb wget script

These include multi-modal MR brain images with different levels of noise, which serve as a good test case for evaluating the algorithm across noise levels.

Anatomical Model (Ground Truth) downloaded to




  • Evaluation of Performance Metrics for Bias Field Correction in MR Brain Images by Chua et al., 2009, Journal of MRI

: Uses the Montreal simulated brain data to compare two bias correction methods in the field at the time.

: Describes three different validation measures applied to the simulated data.


Description of attempted tasks

Experiment Directory


[eunyokim@wundt BRAINSABC_brainweb]$ ls
ExtendedAtlasDefinition.xml  Image_1mm_pn0_rf40/          Image_1mm_pn1_rf40/          
Image_1mm_pn3_rf40/          Image_1mm_pn5_rf40/          Image_1mm_pn7_rf40/          
Image_1mm_pn9_rf40/	     Image_1mm_pn0_rf0/           Image_1mm_pn1_rf0/           
Image_1mm_pn3_rf0/           Image_1mm_pn5_rf0/           Image_1mm_pn7_rf0/           
Image_1mm_pn0_rf20/          Image_1mm_pn1_rf20/          Image_1mm_pn3_rf20/          
Image_1mm_pn5_rf20/          Image_1mm_pn7_rf20/          Image_1mm_pn9_rf20/
Bash Script For Experiment
# NOTE: $outputDir is inferred from the directory listing above;
# $t1 and $t2 point to the raw simulated T1/T2 volumes (definitions omitted here)
baseDir=/hjohnson/HDNI/20130225_BRAINSABC_Publication/BRAINSABC_brainweb
for pn in pn0 pn1 pn3 pn5 pn7 pn9; do
    for rf in rf0 rf20 rf40; do
        outputDir=$baseDir/Image_1mm_${pn}_${rf}
        mkdir -p $outputDir
        cd $outputDir
        cmd="BRAINSABC  \
            --atlasDefinition $baseDir/ExtendedAtlasDefinition.xml \
            --atlasToSubjectTransform $outputDir/atlas_to_subject.h5 \
            --atlasToSubjectTransformType SyN \
            --debuglevel 10 \
            --filterIteration 3 \
            --filterMethod GradientAnisotropicDiffusion \
            --gridSize 10,10,10 \
            --inputVolumeTypes T1,T2 \
            --inputVolumes $t1  \
            --inputVolumes $t2 \
            --interpolationMode Linear \
            --maxBiasDegree 4 \
            --maxIterations 3 \
            --outputDir ./ \
            --outputDirtyLabels $outputDir/volume_label_seg.nii.gz \
            --outputFormat NIFTI \
            --outputLabels brain_label_seg.nii.gz \
            --outputVolumes $outputDir/T1_ai_msles2_1mm_${pn}_${rf}_corrected.nii.gz \
            --outputVolumes $outputDir/T2_ai_msles2_1mm_${pn}_${rf}_corrected.nii.gz \
            --posteriorTemplate POSTERIOR_%s.nii.gz"
        echo "---------------------------------------------------------------------------"
        echo $cmd
        echo "---------------------------------------------------------------------------"
        $cmd | tee $outputDir/command.out
        cd $baseDir
    done
done

How do we want to use this experiment in the paper?
  • Is the bias computed correctly? --> Looked at the computed bias; hard to prove.
  • Is the label map consistent across different rf and pn levels? --> It is relatively consistent, BUT it did worse when NO bias was introduced. (If the image is perfect, it does not do well.)
  • Then, how?

Attempt 1 

  • White Matter

Add description and script here

The following image shows comparison plots of the standard deviation of the intensity mean ratio, relative to the image with no bias, for

a) the image with bias,

b) the image after BABC with SyN, and

c) the image after BABC with Affine.

  • Upper row is computed from T1 and bottom row from T2.
  • x-axis = pn = percentage of noise

Computation of std(mean ratio): explanation

Possible interpretation

Interpretation 1:

The graph shows the variation of the mean intensity of WM regions along a) right to left, b) posterior to anterior, or c) inferior to superior.

    • Each mean intensity is normalized by its maximum, since we only care about fluctuation.
    • Since each image involves both Gaussian noise (pn) and bias (rf), the mean intensity is normalized by the mean intensity of the image with only Gaussian noise (no bias).
    • Therefore, I am fairly sure that a lower standard deviation means less bias.
    • In that sense, BRAINSABC with SyN registration performs best, and Affine registration did a reasonable job.
    • Explanation for POOR performance on certain planes (sagittal in both T1 and T2, and coronal in T2):

Given that we used only a polynomial approximation of the bias field, with a maximum degree of 4, the simulated bias, which possibly has a much higher degree of freedom, cannot be captured entirely. The maximal bias was introduced along the axial direction in both T1 and T2, and those planes are corrected well.
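The std(mean ratio) measure described above can be sketched as follows. This is a minimal sketch with toy numbers; the function name is mine, and it assumes the slice-wise WM mean intensities have already been extracted for both the biased and the no-bias image.

```python
import numpy as np

def std_of_mean_ratio(slice_means_biased, slice_means_nobias):
    # ratio of each slice's WM mean to the matching slice in the no-bias image
    ratio = np.asarray(slice_means_biased, float) / np.asarray(slice_means_nobias, float)
    # normalize by the maximum, since only the fluctuation matters
    ratio = ratio / ratio.max()
    # a lower std of the ratio means a flatter profile, i.e. less residual bias
    return float(ratio.std())

# toy example: a perfectly flat ratio profile gives std 0
print(std_of_mean_ratio([100, 100, 100], [100, 100, 100]))  # 0.0
```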

To support the idea that the bias field is sufficiently corrected even with those poor-looking graphs, we might need to compare:

    1. the brain label map from BABC, showing it is very robust (little or no change across noise/bias levels), and/or
    2. something else???

Well, need to think more carefully.

Any comments/advice will be appreciated.

Attempt 2: With Ground Truth

I found ground truth masks for the simulated data. (The web site did not make it obvious that ground truth is available for each tissue type.)

Ground Truth for Montreal Simulated Brain Data

From the left to right, white matter FUZZY model, label map, and grey matter FUZZY model.

Note that the fuzzy model is really fuzzy: "The voxel values in these volumes reflects the proportion of tissue present in that voxel, in the range [0,1]."

This attempt computes statistics (mean and standard deviation) over the white and grey matter ROIs defined by the ground truth provided by the simulated brain database.
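Since the ground-truth models are fuzzy (voxel values are tissue proportions in [0,1]), they must be binarized before use as ROI masks. A minimal sketch, where the function name and the 0.5 threshold are my assumptions:

```python
import numpy as np

def binarize_fuzzy(fuzzy, threshold=0.5):
    # voxels whose tissue proportion meets the threshold become mask voxels
    return np.asarray(fuzzy, float) >= threshold

fuzzy_wm = [0.0, 0.3, 0.6, 1.0]
print(binarize_fuzzy(fuzzy_wm))  # [False False  True  True]
```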

Three experiments are designed based on the paper by Chua et al, 2009:

  1. original mask 
  2. eroded mask by radius 1
  3. dilated mask by radius 1

What the paper by Chua et al. suggested is that indirect validation can be very unreliable, but the validity of the indirect method can be increased by using a very conservative (eroded) mask.

  1. Python code for processing:

    def ErodedMask( maskFilename, radius, outputFilename ):
        import SimpleITK as sitk
        mask = sitk.ReadImage( maskFilename ) > 0
        eroded = sitk.BinaryErode( mask, radius )
        sitk.WriteImage( eroded, outputFilename )
    def DilatedMask( maskFilename, radius, outputFilename ):
        import SimpleITK as sitk
        mask = sitk.ReadImage( maskFilename ) > 0
        dilated = sitk.BinaryDilate( mask, radius )
        sitk.WriteImage( dilated, outputFilename )
    def generateStdCSVFileFromImageAndMask( imageFilename, maskFilename, outputCSVFilename ):
        import SimpleITK as sitk
        image = sitk.RescaleIntensity( sitk.ReadImage( imageFilename ), 0, 255 )
        mask  = sitk.ReadImage( maskFilename ) > 0
        #mask = sitk.BinaryThreshold( sitk.ReadImage( maskFilename ), 0.51, 1.0 )
        #vDisplayCenterSlice([image,mask],["t1","mask"], 0.5)
        dim = image.GetDimension()
        size = image.GetSize()
        origin = image.GetOrigin()
        direction = image.GetDirection()
        print( "dim:::" + str( dim ))
        print( "size:::" + str( size ))
        print( "origin:::" + str( origin ))
        print( "direction:::" + str( direction ) )
        DimName = {0: 'x',
                   1: 'y',
                   2: 'z' }
        summaryData = []
        for iDim in range(0, dim):
            dimSliceSize = list( size )
            dimSliceSize[ iDim ] = 1
            for sliceNumber in range(0, size[ iDim ] ):
                startIndex = [0,0,0]
                startIndex[ iDim ] = sliceNumber
                myImageSlice = sitk.RegionOfInterest( image, tuple( dimSliceSize ), tuple( startIndex ) )
                myMaskSlice = sitk.RegionOfInterest( mask, tuple( dimSliceSize ), tuple( startIndex ) )
                stat = sitk.LabelStatisticsImageFilter()
                stat.Execute( myImageSlice, myMaskSlice )
                labelNo = max( stat.GetValidLabels() )
                # skip slices where the mask is empty (only the background label is present)
                if labelNo > 0 and stat.GetCount( labelNo ) > 0:
                    mean = stat.GetMean( labelNo )
                    std = stat.GetSigma( labelNo )
                    summaryData.append( ( DimName[ iDim ], sliceNumber, mean, std ) )
        import csv
        with open( outputCSVFilename, "w") as the_file:
            csv.register_dialect("custom", delimiter=",", skipinitialspace=True)
            writer = csv.writer(the_file, dialect="custom")
            writer.writerow( ("dimension", "sliceNo", "mean", "std") )
            for tup in summaryData:
                writer.writerow( tup )
    ## main (argument-parsing) part is omitted..
  2. Usage (help)

    [eunyokim@wundt BRAINSABC_brainweb]$ python --help
    usage: [-h]
                                                [--inputErodeFilename INPUTERODEFILENAME]
                                                [--inputErodeRadius INPUTERODERADIUS]
                                                [--outputErodeFilename OUTPUTERODEDFILENAME]
                                                [--inputDilateFilename INPUTDILATEFILENAME]
                                                [--inputDilateRadius INPUTDILATERADIUS]
                                                [--outputDilateFilename OUTPUTDILATEDFILENAME]
                                                [--inputImageFilename INPUTIMAGEFILENAME]
                                                [--inputMaskFilename INPUTMASKFILENAME]
                                                [--outputCSVFilename OUTPUTCSVFILENAME]
                                                [--listOfPosteriors LISTOFPOSTERIORS]
                                                [--outputCombinedPosteriorFilename OUTPUTCOMBINEDPOSTERIORFILENAME]
    **************************** 10-cross validation analysis
    optional arguments:
      -h, --help            show this help message and exit
      --inputErodeFilename INPUTERODEFILENAME
                            inputFilename input mask file name to be eroded
      --inputErodeRadius INPUTERODERADIUS
                            inputRadius input radius to be eroded
      --outputErodeFilename OUTPUTERODEDFILENAME
      --inputDilateFilename INPUTDILATEFILENAME
                            inputFilename input mask file name to be dilated
      --inputDilateRadius INPUTDILATERADIUS
                            inputRadius input radius to be dilated
      --outputDilateFilename OUTPUTDILATEDFILENAME
      --inputImageFilename INPUTIMAGEFILENAME
      --inputMaskFilename INPUTMASKFILENAME
      --outputCSVFilename OUTPUTCSVFILENAME
      --listOfPosteriors LISTOFPOSTERIORS
                            List of posteriors to be combined
      --outputCombinedPosteriorFilename OUTPUTCOMBINEDPOSTERIORFILENAME
                            output combined posterior filename from given in list
                            of posteriors
  3. Shell script to compute mean and standard deviation from masks:

    # $inputDir, $outputDir, $refMask, $ErodeMask, $DilateMask, $mask,
    # $currOutputDir, $t1, and $t2 are set elsewhere (definitions omitted here)
    for roi in phantom_1.0mm_normal_wht phantom_1.0mm_normal_gry.nii; do
        echo "Process $refMask"
        echo "====== ====== ====== ====== ====== ====== ====== ====== ====== "
        ## erode
        mkdir -p "$outputDir/Erode/"
        python --inputErodeFilename $refMask --inputErodeRadius 1 --outputErodeFilename $ErodeMask
        ## dilate
        mkdir -p "$outputDir/Dilate/"
        python --inputDilateFilename $refMask --inputDilateRadius 1 --outputDilateFilename $DilateMask
        # for the eroded, dilated, and original masks
        for option in Erode Dilate ""; do
            if [ "$option" == "" ]; then
               echo "set mask to original"
            elif [ "$option" == "Erode" ]; then
               echo "set mask to Erode"
            elif [ "$option" == "Dilate" ]; then
               echo "set mask to Dilate"
            fi
            # processing BABC outputs
            for currDir in $inputDir/Image*; do
                echo "Processing $currDir..."
                currDirName=`basename $currDir`
                python --inputImageFilename ${currDir}/t1_average_BRAINSABC.nii.gz --inputMaskFilename $mask --outputCSVFilename ${currOutputDir}/t1_${currDirName}.csv
                python --inputImageFilename ${currDir}/t2_average_BRAINSABC.nii.gz --inputMaskFilename $mask --outputCSVFilename ${currOutputDir}/t2_${currDirName}.csv
            done
            # processing raw outputs
            for pn in pn0 pn1 pn3 pn5 pn7 pn9; do
                for rf in rf0 rf20 rf40; do
                    echo "processing raw data of ${pn} ${rf}..."
                    python --inputImageFilename $t1 --inputMaskFilename $mask --outputCSVFilename $currOutputDir/t1_ai_msles2_1mm_${pn}_${rf}_${roi}_raw.csv
                    python --inputImageFilename $t2 --inputMaskFilename $mask --outputCSVFilename $currOutputDir/t2_ai_msles2_1mm_${pn}_${rf}_${roi}_raw.csv
                done
            done
        done
    done
  4. The above trial looks really ugly; we should think more carefully about the validity of the computation.

Attempt 3: With Ground Truth, Computing CV over the Whole Image with GM and WM Masks

I don't know what is happening with T1 at bias (rf) = 40% for white matter... I might just include the experiments with bias = 20%.

I am done for now with this validation. 

Bias = 20% (gry = grey matter, wht = white matter)

Bias = 40%
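The coefficient of variation (CV = standard deviation divided by mean inside a mask) used in this attempt can be sketched as follows; the function name and the toy data are mine:

```python
import numpy as np

def coefficient_of_variation(image, mask):
    # CV = std / mean of the image intensities inside the binary mask
    vals = np.asarray(image, float)[np.asarray(mask, bool)]
    return float(vals.std() / vals.mean())

img = np.array([[10.0, 12.0], [11.0, 100.0]])
wm  = np.array([[1, 1], [1, 0]], bool)   # toy WM mask excluding the bright voxel
print(round(coefficient_of_variation(img, wm), 4))  # 0.0742
```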


Attempt 4 (TODO): Segmentation Consistency Comparison Across Different Noise Levels

Hopefully, this shows something.

Quick Validation by Sight: Red: Ours, Blue: Ground Truth from the Montreal Group

Our definition includes the globus pallidus and thalamus. As we can see, the two match very well other than in those globus pallidus and thalamus areas. As it is, the Dice index metric is comparable to Atropos based on what they published.
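The Dice index used for that comparison is 2|A∩B| / (|A| + |B|); a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    # Dice similarity between two binary label masks
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

ours  = [1, 1, 1, 0]   # toy segmentation
truth = [1, 1, 0, 0]   # toy ground truth
print(dice(ours, truth))  # 0.8
```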





