2013.07.09
Problem in the shape-analysis pipeline when calling the program that condenses qsub jobs for population studies. When trying to condense 945 lddmm qsub jobs, the python "call" statement fails. Changed the program to make a separate condense call for every 96 jobs (96 is divisible by 24, so the grouping makes sense on the CIS cluster as well).
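A minimal sketch of the chunking workaround in python, assuming a hypothetical condense executable name and helper function (the real program and its arguments differ):

# Hypothetical names for illustration; the real condense program differs.
from subprocess import call

CHUNK = 96  # divisible by 24, so the grouping also fits the cis cluster

def condense_in_chunks(job_scripts, condense_exe="condenseQsubJobs"):
    # Call the condense program once per CHUNK of job scripts instead of
    # passing all 945 at once, which made the single call() fail.
    for start in range(0, len(job_scripts), CHUNK):
        chunk = job_scripts[start:start + CHUNK]
        if call([condense_exe] + chunk) != 0:
            raise RuntimeError("condense call failed on chunk starting at %d" % start)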
2013.05.28
Need to write a testing routine for VNLSparseSymmetricEigensystemTraits class, a wrapper I wrote for the vnl sparse symmetric eigensystem. The code is in
/cis/project/ncbc/src/sse
2013.05.20
KWScenes saved from files opened as a group (with "*" in the file selector) fail when opened in Slicer3D; Slicer only seems to see one scene component.
2013.05.15
Created a script of commands to automatically perform Laplace-Beltrami and PCA on a directory structure containing shape-analysis pipeline outputs. The script searches the tree for the usual output directories, TemplateGeneration and PopulationToTemplateLddmm, then runs our installed analysis programs on them. Used on Tilak's NCBC renewal data; the command file is at:
/cis/project/ncbc/data/Renewal/WashU
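A rough sketch of the tree search the script performs, with placeholder names standing in for the installed Laplace-Beltrami and PCA programs:

import os
from subprocess import call

def run_analysis(root):
    # walk the output tree and run the analysis in each matching directory
    for dirpath, dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath) in ("TemplateGeneration", "PopulationToTemplateLddmm"):
            call(["laplace-beltrami", dirpath])   # placeholder program name
            call(["surface-pca", dirpath])        # placeholder program name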
2013.05.10
To get the version of VTK imported in Python:
from vtk import *
vtkVersion.GetVTKSourceVersion()
2013.02.19
Used wink to create some animations for http://www.cis.jhu.edu/software/laplace-beltrami/validation.html. Wink wouldn't run for me on Mt. Vernon, so I tried to install it, but that failed. Finally I just copied over a 32-bit libstdc++ and put it in the path:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/cis/home/mbowers/ThirdParty/wink/lib
That worked for me.
2013.01.18
The STEs for all the Botteron Female6-9 populations completed. Moved the completed template estimations to:
/cis/project/ncbc/data/template/lddmm-hamiltonian-ste
Worked with Tilak to do some volume and area stats for the new surfaces and they checked out to his satisfaction.

Went to a meeting with Dr. Miller, Dr. Younes, and Siamak. Discussed the CVRG All Hands meeting 1/30. I will need to produce several slides (10?) about the software products we've created that are publicly available. A real-world example of a researcher using the software would be valuable.
2013.01.16
Cleaned up the rest of Aggy's leave-one-out run, which I had to restart after the weekend's power outage; it finally all completed. Aggy repaired CSF labels where required. I created and installed a new atlas set for the new parcellations:
/cis/project/software/remote-processing/auto-segmentation/atlases/PPA_29_pwm_206ROI_iteration_2
Reran leaveOneOut for another iteration. Added memory-challenged compute-13 to lddmm.q for the matches, but will need to change it back before the arbitration or it will fill up and crash.

Started the STEs for the rest of the Botteron Female6-9 populations using the python script described below.
2013.01.15
Wrote a python program to set up a qsub script to run a surface template estimation on a subject population. Set it up to read the config files created by my config-file-creation GUI:
/cis/project/software/remote-processing/shape-analysis/scripts/config-gui/shape_analysis_pipeline.py or
/cis/project/software/remote-processing/shape-analysis/scripts/config-gui/pythonconfigparser.py
The program reads a config file of STE parameters. A sample one is in
/cis/project/software/remote-processing/shape-analysis/scripts/config-gui/DefaultSettings.conf
I also created an Environment Config file, which the STE python script reads for path and cluster config info. The default path for this env file will be
~lddmmproc/config/ShapeAnalysisEnvironment.conf
The STE python script requires only a config file as input. Started a run on the Shape Analysis test data:
/projects/miller/ThirdParty/bin/python /projects/miller/shape-analysis/scripts/stat-pipeline/mgmt/templateSurfaceGeneration.py /home/mbowers/data/NCBCTemplates.Work/Female.6-9yr11m.3T/control/original/Hippo-L/recoarsened/F_6_9_control_L.conf
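A minimal sketch of how a script like this can layer the environment file and the per-run STE config with the standard library ConfigParser; the section and option names below are made up, since the real keys are defined by DefaultSettings.conf:

import os
import ConfigParser  # the module is named "configparser" in Python 3

def load_settings(ste_conf, env_conf="~lddmmproc/config/ShapeAnalysisEnvironment.conf"):
    cfg = ConfigParser.SafeConfigParser()
    # read the environment file first, then the per-run STE config on top of it
    cfg.read([os.path.expanduser(env_conf), ste_conf])
    return cfg

# usage with hypothetical keys:
#   cfg = load_settings("F_6_9_control_L.conf")
#   sigma = cfg.getfloat("lddmm", "sigma")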
2013.01.14
Fixed the coarsen issue. Glib thought the compiler could handle inline functions, but apparently not so well. Rebuilt gts with
#undef G_CAN_INLINE
and undefined references went away. Created an area for exes, scripts, config files, and data under /projects/miller/shape-analysis. Installed:
/projects/miller/shape-analysis/bin/utils/coarsen-cis
/projects/miller/shape-analysis/scripts/utils/recoarsen
Modified Laurent's routine rigidRegistrationDirectory.py to perform coarsen on an entire directory of byus. Script installed in:
/projects/miller/shape-analysis/scripts/utils/recoarsenDirectory.py
Reduced the surfaces in the shape-analysis test_data directory with:
/projects/miller/ThirdParty/bin/python /projects/miller/shape-analysis/scripts/utils/recoarsenDirectory.py /projects/miller/shape-analysis/test_data/recoarsen/original --dirOut /projects/miller/shape-analysis/test_data/recoarsen/reduced
Note the python exe required: /projects/miller/ThirdParty/bin/python is 2.7; the current default installation on the cluster is 2.4.
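A sketch of the per-directory loop in the spirit of recoarsenDirectory.py (the argument handling here is an assumption, not the script's real interface; the coarsen flags match the ones used in the valgrind run below):

import os
import sys
from subprocess import call

def recoarsen_directory(dir_in, dir_out, coarsen_exe="coarsen-cis"):
    # run coarsen-cis on every byu in dir_in and write the results to dir_out
    if not os.path.isdir(dir_out):
        os.makedirs(dir_out)
    for name in sorted(os.listdir(dir_in)):
        if name.endswith(".byu"):
            call([coarsen_exe, "-i", os.path.join(dir_in, name),
                  "-o", os.path.join(dir_out, name), "-l", "1.0", "-c", "0"])

if __name__ == "__main__":
    recoarsen_directory(sys.argv[1], sys.argv[2])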
2013.01.11
Tried to build coarsen over on the cluster but had problems with the compilation. Had to rearrange source and build directories under
/home/mbowers/CISLibs
for development and
/home/mbowers/software
for installed code. Fixed the previously noted problems with SurfaceReadWrite code and rebuilt everything. Cleared up a variety of issues but still have undefined references.
2013.01.10
Used valgrind to track down the memory corruption in the new version of coarsen (which I've started calling coarsen-cis).
valgrind --show-reachable=yes --tool=memcheck --leak-check=full ./coarsen_new -i LU151sum_L_NCBC20100628.byu -o LU151sum_L_NCBC20100628_coarsen-cis-test.byu -l 1.0 -c 0  >& memtest.txt
memcheck noted that assigning the index to a vertex_index object was writing outside the block allocated for the object. This is because the code allocated a GtsVertex object instead of a GtsVertexIndex, which is basically Vaillant's extension of gts. Anyway, I figured out the object-oriented features designed into this C library (gts), so I was able to allocate a vertex object of the correct size, and the code runs. memcheck also found that essentially nothing is deallocated at the end of any gts code we use, because no programmer attempts to delete anything, possibly as a consequence of the above allocation issues.

Did extensive testing and showed that the distances from the large surface to the reduced one are identical for coarsen and coarsen-cis. Here's the relevant R code:
> new_coarsen = read.table('orig_to_coarsen-cis_distances.out')
> new_coarsen = new_coarsen$V1
> new_coarsen.ecdf=ecdf(new_coarsen)
> old_coarsen = read.table('../coarsen/orig_to_coarsen_distances.out');
> old_coarsen = old_coarsen$V1
> old_coarsen.ecdf = ecdf(old_coarsen)
> plot(old_coarsen.ecdf,col='red',main='ECDF')
> lines(new_coarsen.ecdf,col='yellow')
> output = ks.test(old_coarsen,new_coarsen,alternative='two')
> output
Checked into svn the changes to lddmm-common and lddmm-surface-lib (including several pending ones), and added coarsen-cis to the CIS svn repository.
svn import -m "CIS originated coarsen based on gts" coarsen svn+ssh://svn.cis.jhu.edu/cis/local/svn/ca-dev/coarsen-cis
2013.01.09
Power outage interrupted LeaveOneOut. When the c cluster was restarted, qstat showed the same state that existed when the processors stopped, but of course nothing was running. Used qdel to clear off the no-longer-running lddmmproc processes, and the jobs that were still queued then started running. Will need to figure out by hand which matches still need to be made, and restart them after the others complete.

With the cluster cleared of processing for the most part, I grabbed three more nodes (compute 7-9) for all the mappings that need to be completed (about 500). To create the queue instances, used:
qconf -mq lddmm.q
to add lddmm.q on compute 7-9, then
qmod -d all.q@compute-0-7.local
to disable other users on the node. Will release the processors when the mappings have completed.

Continuing to get errors with the new version of coarsen. Made sure everything was properly configured for compilation, recompiled in debug, ran, and checked the error. The error occurs on the malloc of the third vertex read in. An internet search of the error indicates a corrupt stack/heap. Will use valgrind to try to pinpoint the corruption.
2013.01.07
Continued to work on the coarsen vs. decimate question. Duplicated Anthony's work by creating a vtk mesh file with the orig->reduced distances. Posted results on
https://wiki.cis.jhu.edu/project/ncbc/coarsen_v_decimate
Tilak asked for a number of R commands to be run comparing the two sets of point distances. Got two R books from Elizabeth and started learning R.

Aggy e-mailed that the results of the LeaveOneOut for the 5 trial subjects look good. Started the rest of the subjects (technically, started them all and killed the trial ones already completed). When they're all complete, I'll make a new atlas set of the new parcellations, then rerun another iteration of LeaveOneOut. all.q is active on nodes I thought I had reserved for lddmm.q, so I need to figure out what the issue is and possibly lock out those nodes again, at least until this study completes.
2013.01.03
Put the Shape Analysis remote processing client in /cis/project/software/remote-processing/shape-analysis/scripts/config-gui/shape_analysis_pipeline.py. The program allows the user to set parameters and start the front end of the shape analysis pipeline remotely (once the back end scripts are complete). To populate the fields, open the .conf file at /cis/project/software/remote-processing/shape-analysis/scripts/config-gui/DefaultSettings.conf. You can make mods if you like (the e-mail address of course needs to be changed). That file is not writable, but you can save your modified version via File->Save As. There is a button labeled "Send" at the bottom, and it is live. It will, like all remote processing, zip up your data directory and config file, rename the file to its md5 sum string, and ftp the zip file to the shape-analysis ftp folder, where the usual remote processing script will pick it up, send it to the icm cluster, and invoke a script over there for processing. It just won't do anything yet.
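A sketch of the rename-to-md5-and-ftp step; the host name and remote directory here are placeholders, and the client zips the data directory and config file before this runs:

import hashlib
import os
from ftplib import FTP

def send_zip(zip_path, host="ftp.example.edu", remote_dir="shape-analysis"):
    # rename the zip file to its md5 sum string
    md5 = hashlib.md5()
    with open(zip_path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            md5.update(block)
    renamed = os.path.join(os.path.dirname(zip_path), md5.hexdigest() + ".zip")
    os.rename(zip_path, renamed)
    # ftp it to the shape-analysis folder
    ftp = FTP(host)
    ftp.login()  # anonymous here; the real client may authenticate
    ftp.cwd(remote_dir)
    with open(renamed, "rb") as f:
        ftp.storbinary("STOR " + os.path.basename(renamed), f)
    ftp.quit()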

Restarted the arbitration on the last of Aggy's LeaveOneOut study subjects. qstat showed the process was still qsubbed, but it didn't seem to be running. Possibly a conflict with another user locally logged in using matlabpool. Copied the Result.zip to /cis/home/mbowers/projects/auto-seg/leaveOneOut/subcortical_atlas_segs_12_20_2012, and the output segmentations to /cis/home/adjaman1/For_Mike/subcortical_atlas_segs_12_20_2012/. Sent messages to Aggy to let her know where to find the results.

Continued to work on the coarsen vs. decimate question. Looked at Anthony's work and followed the same basic procedure, but started the coarsening of the surface from the original image rather than from Tommy's results, to eliminate unknowns related to any registration Tommy may have done. The work was done in /cis/project/ncbc/data/coarsen_v_decimate/. Wrote a script in /cis/project/ncbc/data/coarsen_v_decimate/coarsen/recoarsen to loop and repeatedly call coarsen, up to 20 times, until the surface is reduced to the desired number of vertices. Results of the survey are in:
https://wiki.cis.jhu.edu/project/ncbc/coarsen_v_decimate
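The repeat-until-small-enough idea behind the recoarsen script, sketched in python; the byu-header vertex count and the coarsen flags are assumptions for illustration, since the actual script is a shell loop:

import os
from subprocess import call

def count_byu_vertices(path):
    # the second field of a BYU header line is the vertex count
    with open(path) as f:
        return int(f.readline().split()[1])

def recoarsen(surf_in, surf_out, target_vertices, max_passes=20):
    tmp = surf_out + ".tmp.byu"
    current = surf_in
    for _ in range(max_passes):
        call(["coarsen", "-i", current, "-o", tmp, "-l", "1.0", "-c", "0"])
        os.rename(tmp, surf_out)
        current = surf_out
        if count_byu_vertices(surf_out) <= target_vertices:
            break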
Jianqiao wrote a version of coarsen based on gts that produces results somewhat close to those of the original coarsen. Tilak is looking for that code. In:
https://wiki.cis.jhu.edu/jfeng/log?action=AttachFile&do=view&target=11222010_coarsen_report_Jianqiao.pdf
Jianqiao says the code is in
/cis/home/jfeng/coarsen/1118_archive
But that directory doesn't exist.
2013.01.02
Debugged an issue with DTIStudio for Tilak. The code could never allocate more than 200 fibres in total because the capacity count never changed.

Spoke with Saurabh about remote processing for the shape analysis pipeline. He is going to write the postprocessing in c++. I will write some glue-ware for all the preprocessing. We will touch base next week to check our status.

Located and checked into cvs all the byus that go into creating atlases for NCBC. Verified that they are in register.

Discussed what needs to be done for NCBC, and the priorities:
  1. Create via lddmm-hamiltonian surface template estimates on the cluster.
  2. Follow up on git repository status of itk submissions.
  3. Create a flow chart of auto-segmentation process.
2012.12.12
Got BrainWorks running on Ubuntu 12.10 by setting the following environment variables:
export XKEYSYMDB=/usr/local/centos4/12.10/XKeysymDB
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/centos4/12.10:/usr/local/centos4/lib
Added logic at the top of the bw and bwtest scripts to query the version of the operating system. Ubuntu machines 12.10 or greater will have these variables set going forward.
2012.12.11
Finished debugging the program for creating a python ConfigParser text file for ShapeAnalysisPipeline input. Started modifying the program to add a button for zipping up the needed input and ftping the file to the remote processing ftp site.
2012.12.06
Searched around for a decent Python IDE. Settled on PyDev plugin for Eclipse, which we already have here. Started debugging a few of the issues with the Shape Analysis config GUI program.
2012.12.05
Experimented with a tool I found that provides a GUI interface to the python ConfigParser class. It is at

/cis/project/software/remote-processing/shape-analysis/scripts/config-gui/python-configparser.py

Used this application to produce a python config file based on the parameters described at:

https://wiki.cis.jhu.edu/software/lddmm-roi

A user can open the config file that contains all the defaults at:

/cis/project/software/remote-processing/shape-analysis/scripts/config-gui/DefaultSettings.conf

The application has a few rough edges but works pretty well.
2012.08.05
Restarted "nohup /cis/home/lddmmproc/check_icm_daemon" as lddmmproc from io19 after Friday afternoon shutdown. Cleaned up some directories that I had temporarily left in the process_queue because they were subcoritcal auto-segs on atlases that were only 6 structures. The matlab scripts were having problems with them, so I created a temporary matlab script directory on the icm /projects/miller/auto-segmentation/scripts/matlab.test to handle the processing.

Worked on the current full brain auto-segmentation code to try to parallelize the matlab scripts. Tested whether the code could work on mtvernon as is but it failed with the same error. Xiaoying created matlab code that puts the loop on structures outermost, then moves the read of each atlas inside. It's possible this might permit some parallelism, but it's going to be slow to read all the same files at the same time, and it's also going to make each worker program huge (15G), so not many copies will be runnable.
2012.07.30
Submitted an autosegmentation job to the c cluster to see how it distributes its processing. What I found on the 16 compute nodes was:
  1. 1 dwtard job - 8 MATLAB threads.
  2. nothing
  3. nothing
  4. 1 dwtard job - 8 MATLAB threads.
  5. 1 dwtard job - 8 MATLAB threads.
  6. nothing
  7. 2 dwtard jobs - 16 MATLAB threads.
  8. 1 dwtard job - 8 MATLAB threads.
  9. nothing
  10. 8 lddmmproc jobs - 64 mmlddmm threads.
  11. 1 dwtard job - 8 MATLAB threads.
  12. nothing
  13. 1 saurabh job - 24 estimateSurface threads
  14. nothing
  15. 8 lddmmproc jobs - 64 mmlddmm threads.
Then there are 14 lddmmproc jobs waiting on the queue.

There doesn't seem to be any rhyme or reason to the distribution of processing on the c cluster. I need to figure out how to make this work for auto-seg.
2012.07.25
Sent an e-mail to the MRI Studio users that the ICM is going to be powered down, so they should be judicious about submitting jobs close to the shutdown time. Forwarded the message to CAWorks users here.

Neil's auto-seg jobs started arbitrating, and failing to complete. It's likely that the ROI count = 16 being hardcoded throughout is causing a problem. Created a parallel set of scripts that hardcode ROI count = 6 as a test. Will try to determine which ROIs go with which labels, then run the test scripts.

Made progress on the wiki page for the remote processing for auto-seg.
2012.07.22
Fixed the Fibre creation code. The code was supposed to allocate new Fibres in sets of 100, but then it never incremented the proper "capacity" count, so it capped out at 200. Not sure why track 102 was always bad, but the increment seemed to fix the problem.

Both of Aggy's segmentation targets completed on the cis cluster and I sent the results to Can, Aggy, Xiaoying, Kwame and Neil. After a quick visual check, Aggy and Can approved of the results. There is some periodic behavior where some structures are being repeated outside the skull. Can's fix is to mask the segmentation result with the original image (IMG_mask segmented_result.img target.img segmented_result.img).

The arbitration on the cis cluster for full brain took 23 hours. Started trying to use MATLAB parallelization:

matlabpool open 2

I used a "parfor" MATLAB parallel for loop around the ROI counter, but was unable to get it to run. Got the following error:
Warning: Error caught during construction of remote parfor code. The parfor construct will now be run locally rather than on the remote matlabpool. The most likely cause of this is an inability to send over input arguments because of a serialization error. The error report from the caught error is: Error using distcompserialize Error during serialization of (null)
Appears to be a problem running the parfor loops. Most info on the internet points to a memory limit of 2GB of data into the pool. The total memory footprint of this program is greater than 17G, and most variables are referenced in this loop, so that may be the problem. Either that or the total footprint of the program exceeds total memory. Will try with a pool of two, so should be under the 128G available on c.cis.jhu.edu nodes.

It's unusably slow as it is now, so I'll continue to try to find speedups. The two approaches I'm considering now are:
2012.07.22
Set breakpoints inside the fibre creation code where the index corresponding to fibre coordinates is set, but got no out-of-bounds hits, only on the write...

Had problems with auto-seg on our cluster. First, I was using the PBS directive with qsub, but that doesn't work on "c"; need to just use "#$". Then, qsub on "c" dumps all the processes on the nodes at once, so submitting multiple jobs overloads the system and nothing completes. Got some actual data that Aggy wants to run and kicked off the processing on c. Had a problem with applying the results of the match to the atlas labels, but Can found the problem early Sunday morning, so I built the fixed code, installed it in our binary area, and ran the apply-deformation step, which worked. Started the job at the matlab arbitration step. It shows no sign of completing, so I started looking into parallelization techniques in MATLAB. The problem could likely be solved with a "parfor" loop, but I'm not sure parallelization is available on "c". Further testing is needed.
2012.07.17
Finished updating fiber data file command line output so it's like the current DTI Studio. The file still won't load correctly because the coords for fiber 102 are out of bounds. We looked around and this seems to be the only one that's a problem. Trying fewer threads, but tomorrow going to use the debugger to figure out where and why it goes wrong.

Made some mods to scripting to add some error checking and reporting. Going to take the step of putting the unzipped data on the ftp outgoing queue before moving it over to the local cis queue, and copying the Result back there. Added a call to /etc/profile to the ssh command, so now qsub and other things can be found.
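The /etc/profile change matters because a non-interactive ssh command doesn't get the login environment; a sketch of the invocation from python (host and script names are placeholders):

from subprocess import call

def remote_qsub(host, qsub_script):
    # source /etc/profile in the remote shell so qsub is on the PATH
    return call(["ssh", host, ". /etc/profile; qsub %s" % qsub_script])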

Issue with Brainworks creating some repeated points during curve generation.
2012.07.16
Tested scripting. Auto-seg dying on atlas #3 of new full brain atlas set. Used atlas 35 from Mori Elder 35 set and replicated (except for timestep parameter) our validation test data from the icm. Can says the results are what he expected to see.

Worked on updating fiber data file output so it's like the current DTI Studio. Tilak wants the formats to be the same because we're having problems loading a fiber data file generated by the command line program.
2012.07.12
Need to use old binaries from icm to run the remote processing solution on the c cluster. Changed script paths to point to cis local scripts.

There is a 64 bit command line version of dynamic programming for fibre tracking on Linux machines. It is in /cis/project/software/DP/MikeDP and should be run on large data sets.

Checked Joe's vte submission. It's been running for a week on the icm, but is only a few iterations in. It will likely time out (10 days). The image sizes are 10 times the size of the test set that Geoff and I have been using. The program needs to be parallelized.
2012.07.10
lddmm-utils checked into svn. Wrote a build script to export the code from the repository and automatically build it on the supported targets. The executables, build scripts, and exported source are all installed under /cis/project/software/lddmm-utils.

Created a script to put in /usr/local/bin. Any user can type:
lddmm-utils <utility_program> <parameters>
Discussed validation data with Can. He's going to write a script to produce outputs from all the utils, and a MATLAB script to compare those outputs between two directories.

Got a valid vte through via caworks, but Joe Hennessey is waiting on a return from a job submitted days ago, so I'm keeping an eye on that.
2012.07.02
Took all the code from /cis/home/can/smallprj4/mapping_scripts, added in new files from Geoff and Siamak (no changes to existing code), added in mods to Array3D to fix bugs and speed things up, added a "common" directory for files that were replicated across the img, byu, and lmk subdirectories, updated the makefiles to add all the new files plus "all", "clean", and "install" targets, and built everything on "c" and installed it in /cis/project/software/lddmm-utils/x86_64. Took the old lddmm-utils cvs module and ran cvs2svn on it, so now there's an svn module in our repository.

Found the issue with vte that Geoff was having trouble with. CAWorks was not putting the data in the "data" directory. The fix is for caworks to put the data directory name in the config.txt file; then the scripts can handle it.

Had a major backlog of data in the icm "miller" queue, with some 350 or so jobs queued up on Friday. Sent an e-mail to the heaviest users to ask them to be patient, let the queue clear, and be sure not to resubmit jobs they've been waiting for. That list of users (once for each job they submitted) is in lddmmproc@user1.cor.icm.jhu.edu:/home/lddmmproc/emails.txt. The backlog cleared over the weekend, so I sent another e-mail to the same users that the cluster was available for new jobs.
2012.06.25
Finished the resizing changes for lddmm-cardiac and vte. Adding some error processing to the cardiac pipelines: I generate a file called FAILED, then do a "find" to search for it when processing completes. I then write out the contents of all the files called FAILED into a mail message to the user.
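The same FAILED-file reporting idea, sketched in python (the pipeline itself does this in shell with find and mail; addresses and subject here are placeholders):

import os
import smtplib
from email.mime.text import MIMEText

def report_failures(result_dir, to_addr, from_addr="lddmmproc@localhost"):
    # collect the contents of every FAILED file under the result directory
    pieces = []
    for dirpath, dirnames, filenames in os.walk(result_dir):
        if "FAILED" in filenames:
            with open(os.path.join(dirpath, "FAILED")) as f:
                pieces.append("%s:\n%s" % (dirpath, f.read()))
    if pieces:
        msg = MIMEText("\n\n".join(pieces))
        msg["Subject"] = "lddmm-cardiac processing failures"
        msg["From"], msg["To"] = from_addr, to_addr
        smtplib.SMTP("localhost").sendmail(from_addr, [to_addr], msg.as_string())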

Started looking into using the "c" cluster for processing auto-segmentation.
2012.06.19
Started working on upgrades to lddmm-cardiac for resizing. Got a code drop from Geoff/Siamak which had some issues but I think those are worked through. Gave them a tar drop of Can's cleaned up lmk utility code with a makefile with "make all", "make clean", and "make install" targets to help with some of the coding/building issues.
2012.06.15
Wrote a test program for building multi-block data from the auto-seg output. Built against a local Paraview build I had, but the vtk is pretty old. Downloaded vtk from the git repository and built that, but there's no analyze reader lib in that - I need paraview. Tried building Paraview from git, but it wants a newer QT than the one available on my machine. Will need to either build against one of Joe's later Paraview builds, or figure out how to get Paraview to point to a later version of QT.
2012.06.11
Worked with Can and Kwame on trying to debug auto-seg. Modified scripts so data is not removed from cluster servers automatically. Discovered verbose flag doesn't work in CAWorks and reported that to Joe. Set up another run for Can.

Met with Siamak to discuss CVRG. He is working on his pipeline and will keep me apprised of software development requirements for CVRG. The only current one is the validation of remote vte and lddmm-cardiac processing.

Saurabh showed me how his code built the surface template estimator, so I copied that program to /cis/project/software/lddmm-hamiltonian.
2012.06.07
Met with Dr. Miller, Anthony, and Bill to discuss Multiblock data formats and how to integrate XNat, CAWorks, and remote processing. Decided that I would attempt to write out auto-seg processing results in multiblock format.
2012.06.06
Worked on verification of auto-seg results with Kwame, Can, and Xiaoying. Kwame retrieved the results from caworks, Can said they didn't look good. Can now looking over possible problems. Commented out the "rm" of the results in the process queue so that we can look at partial results.

Restarted check_icm_daemon script on io19 to expose the vte functionality.

Checked out and compiled lddmm-hamiltonian.
2012.05.30
Sent the results of the new itk skull stripping filter to the writers of the software.

Added ITK Shape Analysis subproject to our project management database. Updated statuses, which will probably become a Wednesday morning custom.

Started work on the remote processing vte scripts. Looked at the zip file created by CAWorks and asked Joe for some changes.

Checked the remote processing scripts into our CVS repository under the module name Remote Processing.
2012.05.23
Added the subproject Remote Processing to the CIS project and began listing some of the tasks necessary to complete it, and the dependencies between some tasks with differing assignees.
2012.05.22
Completed building the new ITK skull stripping filter. Tommy created a template mask from a previously skull stripped image Xiaoying sent me, so I tried to use this mask and the original T1 as atlases for this filter. The results were poor (confirmed by Elizabeth) so I'm looking into possible problems, then will contact the authors of the filter with my results.
2012.05.18
Can got me his final scripts for auto-segmentation, so I merged his changes into my current scripts and started testing. He and I got together to hammer out remaining issues and completed testing. Informed interested CIS personnel that the auto-seg pipeline is now on production status via ftp or caworks.

Added "smoothSurface" to list of scriptable commands in BrainWorks.
2012.05.15
Got the final Jacobian calculation script from Geoff and Siamak. Merged the changes with the changes I had to make on the cluster. It won't run on the cluster because it needs at least version 2.6 of python. Installed 2.7 under /projects/miller/ThirdParty. Looks like this will be where we'll build the supporting software we need for CIS projects on the icm. Added the Numpy (numeric python: http://numpy.scipy.org/) and PIL (Python Imaging Library: http://www.pythonware.com/products/pil/) libraries, which were straightforward. Python VTK was also required. Pointed to the installed versions on the user1 machine, but they are under /usr/local, so the processing nodes can't see those libs and "import vtk" fails in the script. Had to copy the vtk libs to /projects/miller/ThirdParty/libs, and add that path to the LD_LIBRARY_PATH in the qsub script that calls the python program. Finally got some answers, and Geoff tested and verified the results.
2012.04.27
Added call to python script to calculate Jacobian of deformation maps. Doesn't seem to work on this version of python (2.4). Asked Kyle to update python to 2.6 at least. Started messing with Can's new auto-seg scripts.
2012.04.26
Wrapped up the lddmm-cardiac and auto-seg interface with caworks.

Started comparing lddmm-cardiac scripts so I can add the line to calculate the Jacobian of the HMap and KiMap.
2012.04.25
Finished debugging the old scripts for auto-seg and integrated with caworks. Working out issues with Joe.

Got new scripts from Can. Need to rework entire architecture of how auto-seg works.
2012.04.16
Met with Can and he wants to rewrite my automatically generated script for parallelizing the matching required for auto-segmentation. He is adding functionality to resize atlas images to the size of the target images. While he's working on that, I'm creating the rest of the script that does the arbitration part of the process with MATLAB. This will work only on images that are the same size as the atlas T1s. I am debugging the script for doing that but am getting problems connecting to the license server for MATLAB. This could continue to be a problem for our remote processing if we intend to use MATLAB.
2012.04.11
Tried debugging BrainWorks over and over to attempt to figure out the font problem, but after running the program on mtvernon and modifying it to look for a non-existent font, I reproduced the problem. So I just needed to find the missing fonts, which are in the packages:

xfonts-75dpi
xfonts-100dpi

Anthony installed them on Kubuntu 11 machine dwalin and the problem was solved.

Started working on debugging a SubSD scripting issue brought up by Tom Nishino at Wash U. It appears this BW function writes distance values of -0.123456789 in all non-valid voxels of the distance volume. These don't get stripped out and appear in the histogram of distances determined from SubSD, but not from SD. Asked the Wash U guys to try some tests for me and am waiting for their feedback.

Started looking at the Xiaoying scripts for segmentation arbitration. Her main MATLAB script is hardcoded for Can's test case, including atlas count, atlas names, and structure count. Started looking at how to modify these and discussed with Can. He wants to take my script and start modifying it because he wants to do image resizing and other things. Going to let him do so, but will probably get a version up and running without all the flexibility built in.

Ear shape template estimation completed but the results are poor. The ear is paper thin and very small. Daniel thinks the problem could be non-registered surfaces. Though Tilak thinks the surfaces are already in register, a visual check suggests they may not be. Tommy gave me a copy of the surface registration scripts he uses to call Laurent's MATLAB byu registration code. I am going to at some point build a C++ version of Laurent's and add it to /cis/project/software.
2012.04.02
Was finally able to get BW compiled by copying things down into /tmp and building there. The Makefile handles all this automatically now.
2012.03.30
The qsub scripts seem to work fine at this point. Got some MATLAB scripts from Xiaoying for the arbitration of the segmentation.

Been working on getting BW to compile to try to fix the font issue on kubuntu 11. Having a lot of problems that all produce the same error:

Value too large for defined data type

It occurs for any file in a directory I've created recently. Files in older directories seem ok.
2012.03.26
Continued working on auto-seg scripting. The main bash script reads atlas_list.txt correctly, creates the desired subdirectory for each individual atlas, and creates a qsub script in each of the directories. Working on verifying that the qsub scripts work correctly.
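A python sketch of the same per-atlas setup, for reference; the qsub template and program invocation are illustrative only (the real bash script carries the actual lddmm parameters):

import os

QSUB_TEMPLATE = """#!/bin/bash
#$ -cwd
#$ -j y
mm_lddmm {atlas} {target}
"""

def make_atlas_jobs(atlas_list, target, work_root):
    with open(atlas_list) as f:
        atlases = [line.strip() for line in f if line.strip()]
    for atlas in atlases:
        # one working subdirectory per atlas, each with its own qsub script
        job_dir = os.path.join(work_root, os.path.splitext(os.path.basename(atlas))[0])
        if not os.path.isdir(job_dir):
            os.makedirs(job_dir)
        with open(os.path.join(job_dir, "match.qsub"), "w") as out:
            out.write(QSUB_TEMPLATE.format(atlas=atlas, target=target))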
2012.03.22
Spoke with Yajing yesterday about BrainWorks, which she apparently needs to use. So today I tried to build it on a kubuntu machine, which I've tried before. I was able to build but not to run; there are issues with loading dynamic libraries, so the program cannot start. I then powered on hplab8 to try to build the program, but was unable to. The archive-building program ar is not able to create an archive from the object files g++ is building, with the error:
Value too large for defined data type.
From which I gather that the compiler is building objects containing data too large for a 32-bit program. Made sure all old object files were cleaned out and rebuilt, but the same problem continued to occur.
2012.03.21
Worked on auto-seg scripting.
2012.03.20
Worked on auto-seg scripting. Had to modify the default remote processing script run_q to run the auto-seg script directly on user1, because it kicks off processing via qsub.
2012.03.16
Fixed problems getting executables for lddmm-utils to work properly. It turned out to be the pointer aliasing issue we had years ago with the Array3D code. Can updated the Array3D code he used for lddmm-volume, but not for his utils.
2012.03.15
Finished debugging the interface with caworks for cardiac pipeline. Pipeline looks to be up and running.

Started copying over data, scripts, and executables from Can to icm cluster for auto-segmentation pipeline. Had problems getting executables for lddmm-utils to work properly. It turned out to be the pointer aliasing issue we had years ago with the Array3D code.
2012.03.13
Finished lddmm-cardiac scripts again. Needed to rebuild some of Can's code, but the rebuild didn't work; Geoff had a copy that did, so I replaced it. Changed the names of the input data files to match remote lddmm-roi. Geoff checked the results and they were correct. Restarted the check_icm_daemon on io19 as lddmmproc. Joe is checking his interface against the current input.
2012.03.12
Finished lddmm-cardiac scripts. Rebuilt some of Can's scripts for the cluster. Copied over mm_lddmm and put all the exes under
/projects/miller/lddmm-cardiac/scripts
2012.03.08
Worked on lddmm-cardiac scripts. Finished mods to the script in /cis/home/lddmmproc and began modifying Geoff's script for cardiac.
2012.03.07
Checked on surface template estimation runs. The process completed on io20, but failed in the same way it has previously, i.e., the hypertemplate not deforming. The process continues to run on the icm cluster, at iteration 29 of 100.

Worked more on the cardiac pipeline on the icm cluster.
2012.03.06
Anthony showed me the automated remote processing scripts that needed to be modified to add lddmm-cardiac to the list of remote processing pipelines. Created check_lddmm-cardiac (from check_lddmm-roi) and made the updates for cardiac processing.

Debugged my mods to DTI studio for Tilak to write out fiber files with cost data appended. There was an issue with resetting the fiber length incorrectly when it had already been set correctly.
2012.03.05
Anthony set me up with permissions to modify lddmmproc scripts. I will write cardiac pipeline scripts for remote processing, to be completed by the end of the week.

Fixed bw and bwtest scripts so Haiyan could run on elkhorn.

Worked on some mods to DTI studio so Tilak can write out fiber files with cost data appended.

Can finished creating atlas files we want for auto-seg.

Continued working on and understanding Daniel's surface template estimation scripts for the cluster. The right-side ear surfaces are running on there and apparently making progress.
2012.02.28
Andinet submitted the KWScene Tech Paper to the VTK Journal.

Met with Anthony to discuss remote processing architecture. We went over the scripts in /cis/home/lddmmproc/scripts that do the processing. Input is sent to an incoming ftp directory, a cron job picks it up. Data transfer is effected via nfs mount to the cluster file system. The processing is kicked off by ssh-ing to the cluster with a flag to kick off a command.
2012.02.27
Reworked some of the KWScene example session for the technical paper, sent to Andinet.

Installed AIR on the cluster. Started looking through scripts for lddmm-remote and Daniel's Surface Template estimation code.
2012.02.23
In preparation for remote auto segmentation, did some testing to verify that I can qsub matlab scripts.

Built a new version of lddmm-surface-exec for the cluster with Saurabh's changes.
2012.02.22
Kyle installed matlab on the cluster, and the full pathname to the executable is:

/apps/MATLAB/R2011b/bin/matlab
2012.02.21
Worked with Daniel to fix the ste 3.0 script and restarted on icm cluster.

Finished building static lddmm-surface-exec with Saurabh's changes on mtvernon and copied it to /cis/project/software.
2012.02.17
Jianqiao asked for an account on the cluster for doing some surface matching. Sent him Kyle's e-mail address so he could make the request. Started updating my version of surface matching (lddmm-surface-exec) with Saurabh's modifications so Jianqiao can submit jobs to qsub with it. But it looks like he won't be using it. Going to finish the update here and on the cluster.
2012.02.16
Sent a Use Case to Andinet at Kitware for his technical paper on KWScene.
2011.12.28
Completed the HIPAA refresher course.
2011.12.27
Got a response from Andinet at Kitware that he'd like to take up the idea of producing an Insight Journal submission for the KWScene Atlas code.
2011.12.19
Worked on the itk::PrincipalComponentsAnalysis class for submission to the Insight Journal. Cleaned up the testing routines, Cmake files, and documentation, and sent a draft of the article to Laurent. Still having some troubles with config and build on the test routines. The VTKPolyData to QuadEdgeMesh reader is having a problem compiling when instantiated with a Mesh that contains a vector data type as PixelType.
2011.12.09
Met with Laurent and Randy about what might be a good stat pipeline routine for him to write in the intersession. We selected surfaceAnalysis.m.
2011.12.08
Finished adding all the changes to lddmm-surface identified by Saurabh and Dr. Younes. Finished building a static version of the code for CentOS and Ubuntu here. Added a check in the lddmm-surface script to determine which OS the user is running, so the appropriate executable is invoked. Built a version for the cluster. Used cvs2svn to add lddmm-surface to our svn repository, and added the new changes in. Modified the manual webpage and the changelog webpage. Sent an e-mail to all CIS to describe the changes.
2011.12.06
Laurent sent the subject age data file I requested.

Sent an e-mail to Anthony and Dr. Miller to discuss whether it might be wiser for me to not go to the NAMIC meeting in SLC. Dr. Miller agrees.
2011.11.30
Surface Template Estimation for the ear shapes failed again on the cluster. I can't decipher the issue on the cluster, so I will have to go back to using a machine at CIS.

Was able to build a static version of lddmm-surface. Among the caveats:
2011.11.22
Started running Laurent's latest version of the stat pipeline in preparation for coding an itk version of the regPerm function, which is essentially statPerm with new factors added in to enhance the statistical significance of the results. The code in svn is missing two files: one that sets paths (which I can do myself) and one with subject ages, which I need. Asked Laurent to check in or send these files to me.

statPerm has essentially been discarded by Laurent - it is a legitimate statistical method but wasn't giving powerful enough results.
2011.11.18
Started working on lddmm-surface issues with Saurabh. He found some issues related to the regularization weight contribution to the functional. He also found a problem with the cblas calls: some matrices were transposed. The transposition had no effect on the outcome, since all the matrices were symmetric.
2011.11.16
Wrapped up debugging StatPerm. Function performs same as Laurent's MATLAB version when I import his random variables, but slightly different when random vars are from ITK. Will check further if there seems to be a pattern, or it's just random.

Started Tilak's earshape template estimation again.

Started BW code to cut across an open surface, then run the cut around the surface boundary back to the original start vertex.
2011.11.11
Finished work on gtca code.
2011.11.09
Started working some on GTCA. Agreed to do the following:
  1. incorporate Jianqiao's changes into the current code.
  2. check them into cvs
  3. create an svn repository from the current cvs repository using cvs2svn
  4. create a CMakeLists.txt file to build the executable and install it in /cis/project/software/gtca
  5. create script files for running the new exes in /usr/local/bin
For cvs2svn, I copied the GTCA conversion configuration file under projects/configResearch/svn/gtcaC2SConfig.
2011.11.07
Continued debugging StatPerm.
2011.11.04
Completed code and started debugging for StatPerm.
2011.11.03
Met with Laurent to discuss the CA pipeline and what I should be implementing. He agreed to join the group that can write to/read from the ca svn repository and check in his latest MATLAB code.
2011.11.02
Presented a plan for a NAMIC project at the software status meeting. Hope to put together the core modules of the Statistical Pipeline in itk-based C++ for the meeting in January. The presentation is here.
2011.11.01
Finished the command line version of Dynamic Programming for Tilak. Created a CMakeLists.txt for cross-platform building, and added the ability to read in x, y, z eigenvalue and eigenvector files (instead of one file for xyz), which makes it easier for Tilak to run the large data sets he has. Tested on large and small data sets and "installed" the executable in /cis/project/software/DP/MikeDP.
2011.10.31
Surface template estimation on the ICM cluster for the right side earshape data set seems to have failed with the following error:
pbs_iff: cannot read reply from pbs_server
Worked on building a command line version of the DP algorithm Tilak added to DTI Studio. Was able to build for Windows 32 and 64 bit, and Linux.
2011.10.29
Started surface template estimation on the ICM cluster for the right side earshape data set.
2011.10.27
Built BrainWorks on a 32 bit kubuntu 11 machine (phi). Essentially the same as kubuntu 10 in terms of package dependencies.
2011.10.26
Submitted Laplace-Beltrami filter code and article to Insight Journal.
2011.10.25
Built BrainWorks on a 32 bit kubuntu 10 machine (epsilon). Checked in to the repository all the recent changes to BrainWorks. Checked out the repository under my home directory. Started to build. Required updating many system header includes to remove .h extension from their names (iomanip, fstream, iostream). Specified "using namespace std" where needed. Anthony installed the following: makedepend, libmotif3, libmotif-dev, x11proto-print-dev, libxp-dev. Swapped out link to libg2c for libgfortran. To run on mtvernon, copied over the following libs from epsilon: libblas.so.3gf, libgfortran.so.3, liblapack.so.3gf, libXm.so.3, libXp.so.6. Set LD_LIBRARY_PATH to point to these libs, then ran. Program ran properly in general, but did only cursory testing on ImageViewer and SurfTrack. Fixed one bug with the Zoom that was causing a crash when zoom factor exceeded 10.0.
2011.10.18
Looked further into the curvature.m issue on surfaces. In the sample surface byucheck_20e640f_rstg.byu, removing the duplicate faces and then any orphaned vertices produced a surface that the script could calculate curvature on. Shannon agreed to try these steps on the remaining surfaces to see if they could be run through curvature.m.
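The cleanup that let curvature.m run, sketched as a stand-alone python routine (BYU reading and writing omitted; this is not the BrainWorks or MATLAB code itself):

def clean_mesh(vertices, faces):
    # drop duplicate faces regardless of vertex order
    seen, unique_faces = set(), []
    for face in faces:
        key = tuple(sorted(face))
        if key not in seen:
            seen.add(key)
            unique_faces.append(face)
    # drop vertices that no remaining face references, and renumber
    used = sorted({v for face in unique_faces for v in face})
    remap = {old: new for new, old in enumerate(used)}
    new_vertices = [vertices[i] for i in used]
    new_faces = [[remap[v] for v in face] for face in unique_faces]
    return new_vertices, new_faces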

Revisited Xiaoying's script that performs lddmm-landmark and applies the deformation to an image. Had a discussion with her and she said that the script failed for any image voxel size other than 1, 1, 1, including other isometric images.
2011.10.17
Finished testing BW changes noted below and sent an e-mail to Dr. Miller and Tilak to inform them of added functionality.

Started working again on Xiaoying's landmark matching issues, which include problems applying a transformation to an image. Working on a script of hers that she's used before, but this is not an isometric image and that might be causing her problems.

Tilak asked Shannon to ask for help with calculating surface curvature on a byu file that has some topological issues. The curv calc fails because of some issue with determining the edges in a 1-ring of one of the vertices. I don't understand the failure, so I'm going to look at some of the edits made by Shannon and see if I can come up with a surface that can get through the MATLAB program for curvature.
2011.10.14
Finished the BW changes requested by Tilak and Jacqui for creating curves using waypoints on surfaces.
2011.10.13
Sent an e-mail to Luis at Kitware about contributing code and how it's done. Set up a dashboard build of itk via git. Built itk but could not find results on itk dashboard.

Worked on Xiaoying's lddmm-landmark problem.

Trying to modify Surface Tracking to be able to create waypoints in curves, vs. every click being an endpoint. Made that work, so now have to add the ability to create multiple sets of waypoint-generated lines.
2011.10.11
Registered for ITK Gerrit Code Review access. Used Yahoo ID magck123. User name MichaelBowers.
2011.10.07
Checked the Laplace-Beltrami code into the NAMIC Sandbox repository.
2011.10.06
Started building the Differentiating function and realized I need an index sort like in MATLAB. Looked through what was available in vnl and couldn't find anything useful. So I built a solution that used a multimap of pairs comprised of the vnl_vector values and the index. The multimap sorts on insert by the first member of the pair (the value), then after inserting all the pairs, I pull out the indices and sorted values and hand them back. I wrote additional functions for matrix sorts by row or column. Anyway it seemed like the repeated constructions and sorts might be slow, so I tried using a std::vector of pairs, which is similar but there's an explicit sort call. In my test on large random data sets this performed in 34 seconds vs. 54 for the multimap. I inquired at the vxl-users list if something similar to this existed in vnl, and Ian Scott told me about some functions under contrib/mul/mbl which do something similar on vcl_vectors. They don't operate on vnl classes, don't return sorted values (not a big deal), and aren't available in itk. Anyway I thought his implementation was neat and might be faster than mine, so I incorporated it into my class. Turns out it finishes in 33 secs vs 34 sec for mine, so no big improvement, but it's more elegant. So I cleaned up the code, removed the "static" from all the functions, and proposed it (about 12 times, accidentally) to the vxl people. They accepted the code and apparently are adding it to their repository.
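For reference, the index-sort idea itself is small in python (the actual work described above is a C++ implementation against vnl_vector/vnl_matrix):

def index_sort(values):
    # return (sorted_values, indices), like MATLAB's [s, i] = sort(v)
    indices = sorted(range(len(values)), key=values.__getitem__)
    return [values[i] for i in indices], indices

# index_sort([0.3, 0.1, 0.2]) -> ([0.1, 0.2, 0.3], [1, 2, 0])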

Checked the latest itk git repository main branch, and found out the vnl changes I had made for the vnl sparse symmetric matrix solution are incorporated into the main trunk of the code, which means I can make a submission to the Insight Journal for the Laplace-Beltrami filter. So I cloned the itk git repository, built itk, and built LaplaceBeltrami against it. Started merging that code back into NAMIC Sandbox LB code base.
2011.09.27
Finished compilation of StatPerm and a test program for it. Need to build a Differentiation Function for the code then it should be ready to test.
2011.09.21
Worked briefly on StatPerm.

Performed a surface match on a mouse striatum given to me by Manisha. Modified the display of default variables in the Usage print statement to be more type friendly. Sent an e-mail to Manisha about how to determine good sigma values, then run lddmm-surface-exec to create vtk formatted surface files that have momentum data attached at each surface. Sent her a picture of the momentums displayed in caworks.
2011.09.20
Continued working on StatPerm. Found a permutation generator in ITK and did some testing to see if it did what I expected and I could use it. It passed the tests so it's in the code.
2011.09.16
Continued working on StatPerm.

Added functionality to BrainWorks to allow the operator to select a vertex in any surface box. Right-click and select "Go To Vertex". The program will behave as though the user middle-clicked on the specified vertex. Elizabeth and Shannon requested the functionality to help them deal with problematic points on some of their surfaces.
2011.09.13
Continued working on StatPerm.
2011.09.12
Finished class definitions for StatPerm.

Continued working on vertex pick code in BrainWorks.
2011.09.09
Continued work on statPerm in ITK. Decided to use generic PointSets as input types. RankSumPerm will operate on those types.

The ability to "translate" in the surface view was obscured by an option for SurfaceExtractKeep (Shift-MiddleBtn) added by Lei Wang. Made the translate vailable again. Started working on code to add the ability to go to a specified vertex or point in any SurfaceBox. Going to add the function to the end of the context sensitive menu you get when you right click.

Sent ear shape template to Joan Glaunes.
2011.09.08
Continued work on the kdevelop project for statPerm under projects/NAMICSandbox/JHU-CIS-ComputationalAnatomy/StatisticalPipeline.

Added the vertex pick style (EndPoint or WayPoint) selection button to SurfTrack.

Configured new laptop to connect to old CIS network. Added Office.
2011.09.07
Added the surface vertex output to SurfTrack and copied the executable to BrainWorksTest.
2011.09.06
Continued work on the kdevelop project for statPerm under projects/NAMICSandbox/JHU-CIS-ComputationalAnatomy/StatisticalPipeline.

Ran a surface match of some biocard data for Joe using lddmm-surface-exec with vtk formatted output. Put results in /cis/project/biocard/data/single-lddmm-vtk.

Template estimation for Tilak's ear shape data finished. Put the results (left side) in /cis/projects/earshapes/templateEstimation.

Tilak asked for an indicator of vertex number and position in the SurfaceTracker module of BrainWorks. Currently the tracker always treats selected points as the start and end of a complete segment. Tilak wants a button that gives the option of continuing a segment.
2011.08.26
Created a kdevelop project for statPerm under projects/NAMICSandbox/JHU-CIS-ComputationalAnatomy/StatisticalPipeline.
2011.08.25
Worked on the vertex thing for Jacqui/Tilak. Worked endlessly trying to get the text value out of an Xm Label, to no avail. Gave up, and now I save the text string myself and write it to stdout when the user requests it. Synched up release and debug code and created a new bwtest for Jacqui to try.
2011.08.24
Template estimation for Tilak's ear data still running on io21.

Looked into parallelization of iterators in lddmm-surface. Gts iterators use tree traversal code from glib. Looked around for a parallel implementation of glib but found nothing. The best bet we have for using surface matching in parallel is match based parallel processing on target populations, which we already have in template estimation and lddmm-surface-exec.

Worked more with Tommy on surface registration. When operating on left-right registration, the MATLAB code seemed to converge to a flipped solution. I looked through the code to see if a flip in rotation was suppressed intentionally (like in Vaillant's runSim) but found nothing like that. Tommy now flips his population around the z axis in preprocessing and gets good looking results, but this obviously requires some a priori knowledge of the data.

Modified ivcon to put out a lot less information depending on whether the DEBUG preprocessor flag is set. Let Daniel know how to link in this new library so he could build new template estimation exes that don't make a stdout mess when parallelized.

Discussed with Jacqui the generation of sulcal curves on surfaces, and how brainworks might be able to support that. It's not an operation that can be completely scripted, since the operator has to visually select the point on the surface for the start and end of the curve. The solution we settled on was to have a button that prints the currently selected vertex to stdout. Jacqui can send that output to a text file that could be parsed into a bw script for curve generation. Got started writing and testing this code. Turned hplab8 back on to compile the changes.
2011.08.22
Template estimation script still running on io21.

Looked into parallelization of iterators in lddmm-surface. Looks like it would be necessary to get into the gts library to add openmp pragmas.

Found a surface registration program in Laurent's shape analysis MATLAB code. Tommy mentioned that he needed one. Showed Tommy around Can's utility programs so he could apply the output of the RTS data.

Spoke with Casey about how CMake works. Sent him an example. He's putting together a library of Laurent's c++ code (based on ImageMagick and GSL).

Discussed with Tilak and Jacqui the generation of sulcal curves on surfaces, and what brainworks can and can't do for them. It's possible to perform this operation manually in BrainWorks, or via script. It may be the case that she needs to specify landmarks or pick the begin and end vertex of the curve by hand.
2011.08.11
Shannon and Elizabeth needed a script function to extract the largest connected component from a surface. The existing script required that the user supply a vertex index but users don't have a priori knowledge about which component is largest. So I added the option of specifying vertex=-1, which makes BrainWorks find and extract the largest component. I also added the option of writing out the Euler Number in the SurfStats command.
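The vertex=-1 behavior, sketched as a stand-alone python routine for reference (a flood fill over faces that share vertices; not the BrainWorks implementation):

from collections import defaultdict

def largest_component(faces):
    # map each vertex to the faces that use it
    vertex_to_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for v in face:
            vertex_to_faces[v].append(fi)
    # grow components face by face and keep the biggest one
    unvisited, best = set(range(len(faces))), []
    while unvisited:
        stack, component = [unvisited.pop()], []
        while stack:
            fi = stack.pop()
            component.append(fi)
            for v in faces[fi]:
                for nb in vertex_to_faces[v]:
                    if nb in unvisited:
                        unvisited.remove(nb)
                        stack.append(nb)
        if len(component) > len(best):
            best = component
    return [faces[fi] for fi in best]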

Spoke with Dan, Tim, and Anthony about what might be needed to get Dan's template estimation program running on the icm cluster using qsub instead of invoking parallel runs on the same server. Anthony plans to use a scheme involving writing of files into a separate directory when jobs complete, and searching for/counting those files.

Template estimation continues to run on io21.
2011.08.08
Tried to recover mtvernon from the template estimation run, but it was too locked up. Script still running on io21.

Spoke with Dan about what might be needed to get his template estimation program running on the icm cluster using qsub instead of invoking parallel runs on the same server.

Dr. Younes provided a test program for statPerm.m, which instead of using real data produces arrays of randomly generated numbers.
2011.08.05
Tim set me up with permission to use io21. I ran Yajing's scripts on io21 and they ran through to completion without any of the errors for "Stale file handle" that she had seen previously. The situation was similar on mtvernon.

Reconverted Tilak's ply files to byu. Started up a left side template estimation run on io21 using Daniel's templateSurfaceDriver.sh script. Started one for the right side on mtvernon, which seemed to freeze the machine.
2011.08.01
Sent an e-mail to Andinet at Kitware about collaborating on a paper describing the KWScene effort, and the code written for a vtk-itk interface to make it possible to create itk based plugins for paraview.

Worked more on debugging Yajing's script problem with lddmm-vte. Ran an instance on mtvernon and on io20 with --verbose added to the bash invocation, so I can get a better idea of where the program experiences problems.
2011.07.29
Downloaded a bunch of head surfaces from Tilak's collaborators in Sydney. They were in ply format, so I used MATLAB code they had to read them into MATLAB, then I saved them using savebyu.m, which I got from Laurent. I wrote a script to convert them all, then Tilak asked me to delete all the byu and ply files because they were not anonymized.
2011.07.28
Updated the BrainWorks script support web page to add new script functions (smoothSurfaceAvg, FlipPolyNormals) and update the script function wishlist to remove functions that have been implemented.
2011.07.27
Worked on reading in some .ply data Tilak had, and converting it to .byu. The resulting file was bad looking because some of the binary data in the .ply file looked to be corrupt.

Started looking into an AKM segmentation library for Kwame.

Started looking into travel to some conference which might pertain to Lei's schizophrenia grant.
2011.07.26
Finished the surface voxel/real space coding and testing.
2011.07.21
Figured out a way to add voxel/real space parameters to SurfaceReadWrite by adding a string space_identifier and using the transformation matrix of the ivconv class. Started writing the code.
2011.07.20
Looked at the eigenvalue sum program output with Tilak. He wants to try level sets instead of contours for visualization.
2011.07.19
Wrote a program for Tilak that sums up the eigenvectors generated by the Laplace Beltrami filter and writes them to a vtk surface file at each vertex.
2011.07.15
Built a class in lddmm-common to contain origin and resolution information to describe voxel space.
2011.07.13
Added the lddmm-landmark-lib and lddmm-landmark-exec modules to the ca-dev repository.

Added code to Landmark class to store and write out extra caworks data on the end of .lmk files.
2011.07.12
Sent an e-mail to Laurent asking for a calling program and some data for his statPerm.m MATLAB program so I can verify some things about how it works.

Checked in some lddmm-surface changes that weren't in the svn repository yet.
2011.07.11
Worked on writing itk based code from Laurent's statPerm.m function.
2011.07.08
Tested more of the Kitware Atlas functionality (KWScene). Problems saving filter output consisting of multiple files. Wrote an e-mail to Kitware asking them to look into these problems.

Started writing itk based code from Laurent's statPerm.m function.
2011.06.28
Made vtk IO seamless with other ivcon supported file types and rebuilt ConvertSurface (tested) and lddmm-surface-exec (not tested yet).
2011.06.27
Added the capability to create qsub scripts and invoke them by running qsub from lddmm-surface-exec on the cluster. Added some file read/write capability to the vtk Surface Read/Write class so vtk I/O should be as seamless for lddmm-surface-exec as all the other ivcon supported surface types.
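A minimal sketch of the qsub-generation step, with hypothetical script contents and paths (the actual directives and commands written by lddmm-surface-exec differ):

#include <cstdlib>
#include <fstream>
#include <string>

// Write a one-job SGE script and hand it to qsub.
void submitJob(const std::string &scriptPath, const std::string &command)
{
    std::ofstream script(scriptPath.c_str());
    script << "#!/bin/bash\n"
           << "#$ -cwd\n"                    // run from the submit directory
           << "#$ -j y\n"                    // merge stdout and stderr
           << command << "\n";
    script.close();
    std::system(("qsub " + scriptPath).c_str());
}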

Built the installed CIS dev libraries from svn checkouts. Am going to start using that repository in general.
2011.06.07
Did some experiments with cvs2svn, then exported some of our modules to the svn repository: svn+ssh://svn.cis.jhu.edu/cis/local/svn/ca

  • CISLib
  • lddmm-volume
  • lddmm-common
  • lddmm-surface-lib
  • lddmm-surface-exec
2011.06.07
My version of lddmm-surface didn't run - it got to 50 iterations and appeared to loop infinitely. Checked the parameters and noticed some difference between what I used and what Aastha used. Built the Aastha version and ran both with identical parameters. It ran successfully and the output was close to identical (less than 1.0e-12 difference in momentums, identical vertex positions). Checked in the code modifications to the CIS cvs repository.

Started reading about svn. Very similar to cvs. Current plan is to bring in the cvs repository to the new svn repository at:

svn+ssh://svn.cis.jhu.edu/cis/local/svn/ca-dev

The files will be owned by the "ca" group.
2011.06.06
Finished integrating all Aastha's changes into lddmm-surface. Testing the program at this time to see if it produces results similar (exact matches are probably not possible) to the original code.
2011.05.31
Started working on data set for NCBC Atlas. Picked up the average template for the Female.6-9yr11m.3T.control+MDD set. Performed a Laplacian decomposition into surface harmonics on the template.

Had another long discussion with Casey about revision control. Will meet with Casey's reading group tomorrow and show some diagrams of lddmm library architecture. Saurabh said Dr. Miller has asked him to merge surface and volume matching so the underlying classes should be of interest.
2011.05.27
Worked more on KWScene. Put the plug-in code into kdevelop for debugging. Was able to set and catch break points within plugin shared libraries. Tried to understand some of the problems with the Laplace Beltrami but those problems aren't really important currently in terms of assessing the Atlas. Decided to build a set of test inputs for building scenes which I could then save, recall, and visualize in Slicer. The input data I think we need are: Most of these exist in test data I have, but I think I'd like to build a set from actual data that we have, and get the data from the processing pipeline we're working on for the NCBC grant.
2011.05.25
Worked more on KWScene. It does the following things which I don't think are right:
2011.05.24
Worked a little bit more with the KWScene Atlas code updates from the Kitware guys. The LaplaceBeltrami code was commented out, so that filter wasn't available. Rebuilt with LB and tried again. Need to show that KWScene can save those outputs inside a scene, and possibly some PCA data as well. So far it has not been successful. The program crashes when trying to save LB filter output.
2011.05.23
Finished build of lddmmSurface on ICM cluster and ran it on a surface that Tim said was giving him problems. It looked ok, so I moved the previous lddmmSurface exe under x86_64 to lddmmSurface_02_05_2007 and copied the new one to the directory so that it's used globally. Tim said he'd check out the new exe to verify it's matching correctly.
2011.05.18
Finished building the KWScene plugin. Used current ITK 4 distribution and an older paraview. Ran it but just did a little testing.

Continued to try to build lddmm-surface on cluster but still having glib problems. Kyle Reynolds has offered to look around for a glib distribution that I could compile against.
2011.05.16
Added a bwtest in which the problem with the float data histogram is fixed. There is also code to display the selected surface vertex number and its coordinates in the Image Viewer.

lddmm-surface is producing a number of assertions in gts (or glib) while running on the cluster. The program seems to run ok on our local Centos machines. Started the process of trying to build over there. Had to start with building cmake, then all our underlying libs (blitz, vxl).

Made some progress trying to build KWScene atlas.
2011.05.11
Met with Dr. Younes to discuss the CIS stat pipeline. Discussed what we have (pcaDec, pcaDecMomentum, LB) and what we need (JacobianForLandmarks, statPerm, staticStatAnalysis).

Did an svn update for Kitware's KWScene Scene Graph plug-in. Had some problems building because they built against an old ITK and the current Paraview trunk, while I built against the current ITK and an older Paraview.
2011.05.10
Fixed BW to read any lmk file with the header "Landmark-".

Looked more at the NCBC pipeline.
2011.05.09
Added a script command called byu2Img to bwtest:
Usage: byu2Img nx, ny, nz, [fillVal]
Fill Value: (1-255, def. = 1)
This replaces the byu2Img utility we have.
2011.05.06
Added functionality to the ImageViewer of BrainWorks. When the user right clicks on one of the three ImagePanes in the triplanar view, the code checks to see if a surface is loaded into the viewer. If so, the code determines the closest point on the surface to the click, and centers the view of the three slices on this point, then adds a red dot on the surface at that point.
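A minimal sketch of the closest-vertex search (illustrative types only; BrainWorks' own surface classes differ). A brute-force scan over the vertices is adequate at these surface sizes:

#include <cstddef>
#include <limits>
#include <vector>

struct Point3 { double x, y, z; };

std::size_t closestVertex(const std::vector<Point3> &verts, const Point3 &click)
{
    std::size_t best = 0;
    double bestD2 = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < verts.size(); ++i)
    {
        double dx = verts[i].x - click.x;
        double dy = verts[i].y - click.y;
        double dz = verts[i].z - click.z;
        double d2 = dx*dx + dy*dy + dz*dz;   // compare squared distances
        if (d2 < bestD2) { bestD2 = d2; best = i; }
    }
    return best;   // the three slice views are then recentered on verts[best]
}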
2011.05.04
Read design proposal about the QuadEdgeMesh to see if more features of the class could be used in LaplaceBeltrami. Would like a less clunky implementation before Insight submission for the filter.
2011.05.03
Received msg from Kitware about an update to their KWScene library code. In essence the update means the user can write out scenes from paraview. Wrote an email about current NCBC status:
The Kitware guys have made updates to the KWScene atlas code they wrote for the NCBC Shape Analysis grant last year. This was the library that we integrated with our Laplace-Beltrami code at the NA-MIC programmers' week last June. They will continue to make improvements in the code but hopefully we can use what they have here for storing our template atlases and lddmm results, and it provides a file-based interface between ParaView/CAWorks and Slicer3D.

In other news, the Kitware guys (Brad King mostly) pulled into ITK the enhancements I contributed to vnl_sparse_symmetric_eigensystem in vxl. The new code uses ARPACK FORTRAN routines converted to c++ to solve the generalized case of the eigenvalue equation A * x = lambda * B * x. We needed it to perform Laplace-Beltrami on open surfaces, so in theory after rebuilding and testing I can pull the trigger on the Insight submission for LB. And some of our people (Siamak, maybe Casey) need the sparse eigensystem code here, so the c++ interface will help.

Finally, Laurent and I have agreed to meet this week to look at the rest of the pipeline proposed in the grant and see what pieces correspond to what MATLAB modules he's written. I'll produce ITK based code of the MATLAB functions.
There's a problem in the deformation equation in the lddmm-landmark webpage that I'm trying to fix. I want to change one-over-sigma-squared to lambda to avoid confusion about the parameters. Need to figure out a way to rebuild the image file. Rebuilt the Laplace-Beltrami filter code with ITK4. Used ctest to test it and looked at the results with caworks and paraview.
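For reference, a hedged sketch of the intended substitution, assuming the usual inexact landmark-matching energy (the exact form on the webpage may differ):

E(v) = \int_0^1 \|v_t\|_V^2 \, dt + \frac{1}{\sigma^2} \sum_i \|\varphi_1(x_i) - y_i\|^2
\;\longrightarrow\;
E(v) = \int_0^1 \|v_t\|_V^2 \, dt + \lambda \sum_i \|\varphi_1(x_i) - y_i\|^2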
2011.05.02
Got the vxl-update branch checked out of the Kitware git repository(s) via:
  1. git clone --recursive git://itk.org/ITK.git
  2. git fetch https://github.com/bradking/ITK.git update-vxl:update-vxl
  3. git checkout update-vxl
Configured and built ITK-4 with new eigenvalue solver. Tried to build Laplace-Beltrami against new ITK but got some linker errors.

Helped Siamak link FORTRAN routine into lddmm-surface. I think the FORTRAN routine looks for eigenvalues in a sparse symmetric matrix. Need to show him how to use vnl_sparse_symmetric_eigensystem to do this.

Modified ivcon to be able to read/write "proper byu" files. The extensions of these files will be ".byu_g" or ".g". Produced the executable /cis/project/software/lddmm-surface-exec/x86_64/ConvertSurface which takes an input and output file name and converts the input surface file to the output file's format.
2011.04.28
Received e-mail from Brad King about vnl_sparse_symmetric_eigensystem additions.
> I don't see new generalized eigenvalue code in vnl/algo in the ITK
> repository. Do you have any plans to add it any time soon? Is there
> any way I can help expedite its inclusion?

The easiest way to bring that in is to update ITK's snapshot of vxl.
I pushed out a branch to here:

https://github.com/bradking/ITK/tree/update-vxl

that does that. Please fetch the 'update-vxl' branch and try it.

So the new code is in the ITK git repository, which I need to update and rebuild.

Helped Daniel Tward include lddmm-rigid into his surface matching program so that he can do rigid registration of landmarks before lddmm.

Started looking into what all needs to be produced for NCBC (data, atlas, and pipeline) with the intention of seeing what pieces we have and what still needs to be created.
2011.04.27
Sent a follow up e-mail to Luis Ibanez and Brad King asking about the status of adding my sparse symmetric eigensystem code to ITK. When it's added I can build and link in my Laplace Beltrami code for open surfaces required for the NCBC grant. If it doesn't get incorporated we can build a version of ITK here with the code included. I saved the code today in the following directory: /cis/project/software/src/itk/git/MyTests/saveCode

Started looking through the NCBC grant proposal again to find out what parts still need to be done and how to put together the required pieces.
2011.04.26
Worked more on lddmm-volume code.
2011.04.25
Asked Anthony to add Saurabh to "software" group. Pointed him to the lddmm-surface code and wrote a brief history of the code for him, Anthony, and Aastha as follows:

  1. Code was originally written by Marc Vaillant. That original program, plus whatever mods I made, is available via the script lddmm-surface, and the code is in the CIS CVS module lddmm-surface.
  2. I re-architected the code to layer it on top of lddmm-common and CISLib, to give us Apply Deformation and other transform code, licensing, and a common API with lddmm-landmark. This version is in the CVS modules lddmm-surface-lib and lddmm-surface-exec. It can be run by typing lddmm-surface-exec.
  3. lddmm-surface-exec has input and output that are similar to lddmm-landmark, but I added an option to produce the same output files as lddmm-surface exec.
  4. Aastha has been making mods to the old lddmm-surface code and I have been merging them into the new lddmm-surface-lib code. Her code as it stood when I last merged it and had a successful test with it can be executed by running lddmm-surface-exec with "-o c", for the Conjugate Gradient optimization type.
  5. As you can see from the below e-mail, Aastha and I are currently working out some of the issues with her "grid" size code. When we successfully address those, all development in lddmm-surface should cease and researchers should use the lddmm-surface-lib code. So my plan is hopefully to get the rest of Aastha’s code mods integrated and working this week. Saurabh can check out the code and start looking at it now...
Finished building lddmm-volume-lib library. Started working on the code for the main executable routine.
2011.04.22
Had to do a bit of a redesign on the LddmmVolume class because the forward declaration required for the facade pattern could not work with a templated class. Added the "FLT" type everywhere that was previously templated, and the type will be decided at compile time via a compiler setting, as is done now with lddmm-volume. Continued building the library.
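The mechanism is just a compile-time define; the existing builds already pass -DFLT=double on the compile line (see the icpc command later in this log). A minimal sketch with a hypothetical fallback:

// FLT is supplied by the build, e.g.  icpc -DFLT=double ...  or -DFLT=float.
// A fallback (hypothetical) keeps the code building if the flag is omitted.
#ifndef FLT
#define FLT float
#endif

// Interfaces that were previously templated now use FLT directly, for example:
FLT interpolate(const FLT *volume, FLT x, FLT y, FLT z);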
2011.04.21
Helped Karam with a question she had about a MATLAB program that Laurent had given her that wasn't producing results she expected. Introduced her to the MATLAB debugger to show her that there was a registration step in the program that was causing her misunderstanding.

Helped Jung get started building Slicer4, including building QT.

Corresponded with Yajing to reassure her that changing her script file while it was running would have no effect on the original script's behavior.

Continued working on the lddmm-volume libraries but determined that there might be a problem with a forward declaration of a class in my facade pattern for the libs.
2010.11.30
Fixed the issue with the regular mode of vnl_sparse_symmetric_eigensystem for the general case. Needed to compute x = A*x before solving B*y = x for y. Verified eigenvalues and eigenvectors against arpack++ example program examples/superlu/sym/lsymgreg. Wrote out text files of the vxl vs. ARPACK results in ~mbowers/projects/validation/sparse_sym_eigensystem/regular.
2010.11.29
Completed build of BW on a CentOS 4 32-bit virtual machine running on mtvernon. Made a version that includes the new tissue types requested by Lei, which is still being tested. Made another exe with the standard 5 tissue types by:
cvs checkout -r BeforeNewTissueTypesBranch brainworks
Built both versions and license makers and secure copied them back to mtvernon and verified all.

Built surface segmentation code for Daniel Tward. He was having trouble linking to the CISLib code from io19, getting unresolved references on stack checking calls that I gather aren't in g++ v 3 or less (io19 is CentOS 4). The code he's trying to compile is based on an old version of lddmm-surface, which uses CISLib only for licensing and Provenance, which I gather they don't actually need. I modified the source and makefiles that he sent me to remove the references to CISLib and was able to build. I zipped the code and sent it back to him.

Added another test program to the LaplaceBeltramiFilter code to check the performance of vnl_sparse_symmetric_eigensystem against an example program in arpack++. The purpose is to verify that the new vnl routine reproduces the results of the ARPACK code for the generalized eigenvalue solver in the regular solution mode (i.e., dsaupd mode = 2), versus the Shift-and-Invert mode (mode = 3). Mode 3 is already verified by test program itkLaplaceBeltramiFilterTest2, so I added itkLaplaceBeltramiFilterTest3 and compared against arpack++ example program superlu/sym/lsymgreg.cc. The eigenvectors checked out but the eigenvalues were considerably different.
2010.09.21
Working on producing a gtca executable from various source code versions we've received. The gtca executable installed on our system creates two volumes of topologically correct surfaces, one binary valued, the other gray scale. The one that we want to use produces the surface.
2010.09.17
Produced a version of lddmm-surface-exec which reproduces the output of Aastha's IMlddmmsurface2 program.
2010.09.14
Integrated Aastha's surface matching code into lddmm-surface-lib. It can be accessed by requesting the ConjugateGradient optimization type in the matching parameters. Created and began testing a new lddmm-surface-exec program that uses the new lib.
2010.09.09
Finished building caworks 3.8 in Debug.
2010.09.07
Added four new tissue categories to segmentation in BW. Changed the GUI, the segmentation code, etc. Added an event handler in the histogram GUI to tell the user what bin his mouse is in when he's hovering. Zipped up the executable and sent it to Lei Wang.
2010.08.24
Updated the instructions for building lddmm-volume from the repository. They are at https://wiki.cis.jhu.edu/software/install_config. Had to make code mods for the newer g++ and kubuntu.
2010.08.13
The installed version of BrainWorks is failing on kubuntu machines. A right click on an Image icon in the icon list causes the machine to hang and requires a reboot/logout to "recover". Took the usual debug steps:
2010.06.29
Gave Shane the MRML files that Michel from Kitware incorporated into his SceneGraph test files. Hoping that these contain some of our subcortical structure atlas files in MRML format that can be visualized by Slicer3D.
2010.06.28
Last week I attended the NA-MIC programmers’ week at MIT. The week is dedicated to working with other NA-MIC contributors on medical imaging software projects and reporting on the results achieved.

I collaborated with Michel Audette of Kitware on a project to build a Paraview plug-in to call our Laplace-Beltrami operator for closed surface data. In preparation for this, Michel wrote some code for the conversion of surfaces represented as vtkPolyData into surfaces represented as itk::Mesh. I added some code for writing LB output into vtk surfaces, and for creating the plug-in output. I showed Michel how to use the kdevelop debugger and we got it all working. I presented our results to the full group Friday morning. A link to the project page is here. Near the bottom of the page, there’s a link to a quicktime video of an amygdala with surface harmonics on it.

I met many of the people I’ve corresponded with via the Insight Toolkit Developers mailing list, including Alexandre Gouaillard and Arnaud Gelas, who are Quad Edge Mesh experts, and Julien Finet and Andinet Enquobahrie, Kitware personnel who were helpful during the meeting.

Steve Pieper stopped by and he, Michel, and I discussed using MRML for the Atlas capability that Michel is building for our NCBC grant. He also described yet another toolkit in development called CTK, the Common Toolkit, intended to provide a unified set of basic features for use in medical imaging. It seems to be in its infancy but hopefully could provide some building blocks from which to start new project development.

Going forward, the work that we did should provide the basis for the inclusion into Paraview (and by extension CAWorks) of modules developed in itk for NCBC (and any existing itk modules). With the inclusion of the lddmm modules in CAWorks it should be possible to provide most or all of the NCBC processing pipeline in CAWorks.

So I think the week was a success. I met with key people, collaborated on a project to further our NCBC aims, and learned a lot about some building blocks we’ll be using for future development here.
2010.06.09
Decided on a Ubuntu configuration for laptop I’ll be taking to the NAMIC programmers meeting. Worked with Joe to get the configuration up and running.

Completed testing Amygdala filter, checking in the changes, checking out and compiling, building and testing on Windows. Sent out e-mail to indicate to Joe that the mods are done, and to build a version for a student landmarker.

Registered for NAMIC programmers’ meeting. Coordinated with Michel on a project that will be valuable to NCBC, Paraview, and CAWorks. The project will involve building vtk style plug-ins to call itk functionality from Paraview, including our Laplace-Beltrami filters and PCA. A description of the project is here.
2010.06.01
Finished modifications to the Amygdala landmarking filters. Each slice will have 6 landmarks, their labels will be SU (for superior), LS (lateral superior), LI (lateral inferior), IN (inferior), MI (medial inferior), and MS (medial superior). The user will have to set the head before setting the tail. When the head is set, the x and z from that landmark will be propagated to the tail. Selecting the tail will only specify the slice number. The x and z spinner controls are disabled for the tail.
2010.05.25
Finished working on some performance improvements in lddmm code. Added an iterator definition to Array3D, removed some slow library calls (floor), took out some repeated function calls. Got about a 33% speedup of Can’s test code. Made similar mods to classInterpolation code and got about a 40% speedup of total interpolation time. Total interpolation time is now roughly 25% of total lddmm time. Fixed some known bugs and sent code to Can and Dr. McCullough.
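A hedged sketch of the floor() removal (illustrative only; it assumes the non-negative voxel coordinates used in the interpolation, where truncation and floor agree):

#include <cmath>

inline int voxelIndex(double x)
{
    // before: return static_cast<int>(std::floor(x));
    return static_cast<int>(x);   // plain truncation, no libm call; valid for x >= 0
}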

Worked with Joe to set up and build caworks. Configured smart-git, downloaded code. Configured code with CMake, built caworks. Added an Amygdala landmarking class which is an exact replica of Hippocampus Landmarking.
2010.04.23
Gave example atlas data files to Michel of Kitware via ftp.

Filled out performance evaluation form for 2010.
2010.04.22
Added file writing of the final output of PCA. Wrote a function comparePCA.m in MATLAB that autochecks the output of the itk PCA against the MATLAB code. The errors I got were down at the level of rounding error in a double precision float number. Wrote an e-mail to Laurent, Anthony, Dr. Miller and asked for input describing the PCA function and how it works so I can add it to a CIS webpage or to an Insight Journal submission.

Asked Siamak for some of his heart data. He creates volumes with tensor data at each voxel which quantifies some aspect of heart motion. He took time out to create for me a vtk file with this information in it so that I could send it down to Kitware.

Started working on performance eval for this year.
2010.04.21
Finished debugging PCA. Visually verified results against MATLAB version of the code and everything checks out. Going to write some output files to import into MATLAB to automatically check the numbers.
2010.04.19
Produced vector field data as ascii txt files, one per each field. Started debugging PCA.
2010.04.16
Started creating test data for PCA debug. Want to recreate inputs of Laurent's MATLAB programs so I can check results side-by-side. Created the input mesh, started working on how to input a collection of vector data at each vertex of the mesh.
2010.04.15
Finished building PCA test program.
2010.04.13
Tried some fixes for in-place Remap on an Array3D but they didn't work. Can completed his work testing Remap and sent his findings to Intel.

Wrote an e-mail response to Michel about some atlas questions he had.

Continued working on building PCA calculator for itk meshes for NCBC.
2010.04.12
Built and tested ipp test for remap. Could not reproduce results from Array3D by an inplace method of addressing data. Downloaded latest IPP example code and searched for ipprRemap but found nothing. Checked a histogram of output data vs. Array3D output but concluded nothing from it.

Wrote a test program for itk PCA and wrote some of the CMakeLists.txt files to configure PCA build.
2010.04.08
Wrote code for using ipp for interpolation. Compiled it and am trying to link in the ipp lib.
2010.04.07
Started looking at compiling Can's program for testing 3d interpolation. Tried to compile with our current icpc and got some errors.

Continued to code PCA functionality with itk. Finished the PCA calculator class and started writing a test program. Will have to figure out how to form a set of vector fields as input to the program.
2010.04.06
Attended a meeting to discuss lddmm-volume performance enhancements. First approach is to attempt to discover algorithmic improvements. Second is to work with Intel for parallelization/optimization improvements. Look into the Intel IPP for canned interpolation routine for 3D.

Continued to code PCA functionality with itk.
2010.02.19
Heard back from Francisco Gomes re: arpack++. He said the package has not been supported for some time.

Spoke with Dr. Younes about Laplace Beltrami status. Discussed current situation with arpack++ and ARPACK, the effort from the itk developers to select a new math lib, and the algorithms required for solving eigenvalue problems. Decided to:
  1. Debug the arpack++ code for solving the general case of the eigenvalue function.
  2. Produce a version of Laplace Beltrami for closed surfaces and open surfaces with the Dirichlet boundary condition.
Used kdevelop to try to figure out where arpack++ blows up on eigenvalue calculation. Looked at some of the calls into dependent library SuperLU and the signatures didn't match up. Sent an e-mail to Dr. Gomes to ask the version of SuperLU he used. Started downloading older versions of the lib to find a code match.
2010.02.12
itk developers have started a search for numerical library code to use in itk. One of the libraries they're considering using is ARPACK. Added some detail to the Insight development wiki about what I know about ARPACK and ARPACK++. Sent an e-mail to the ARPACK++ developers in Sao Paulo to get clarification on licensing and start a dialogue about how/if the current ARPACK source is being supported.

Was able to build and use ARPACK and arpack++ for the closed surface code I had written for Laplace Beltrami. I used the standard eigenvalue function from arpack++ to solve A * V = V * D. For open surfaces vnl failed to solve the problem, but ARPACK did. Laurent added an update to the algorithm, which made solving the standard eigenvalue problem invalid for open surfaces. I have been trying to implement the generalized eigenvalue problem (A*V = B*V*D) with arpack++ but have been unable to get the code to build. I was able to find a patch for the C++ templates which will compile, but the program segfaults during execution. I hope to ask the arpack developers a few questions about the code.

Started running the MATLAB PCA code and started thinking about an appropriate itk architecture to model PCA after.
2010.01.20
Finished the MATLAB comparison code - intermediate results are within 10**-10 of each other.

Added a command parameter for gradientStepSize (-g) to lddmm-surface. Rebuilt lddmm-surface and its dependent libraries on foo. The executable built on mtvernon produces a seg fault when run on older operating systems.

Started looking into using arpack for calculating eigenvalues (as does MATLAB). Want to check if it can work for our open surface data. Built arpack and arpack++.
2010.01.12
Still working the sparse eigensystem issue. Visually the two programs produce similar data for input into the eigenvalue calc. Started building .m programs to bring c++ output into MATLAB for analysis and comparison of the data. Looked at netlib code for eigensystem calculation to see where the error code is generated.

Went to a meeting with Dr. Miller, Anthony, and Tim, to discuss NAMIC and NCBC. Dr. Miller described a shape analysis pipeline that is essentially what I'm trying to build in itk.
2010.01.08
Compared intermediate results of my program versus Laurent's, and they are either equal or very similar. The vnl sparse eigensystem I'm using cannot handle the data, but Laurent's can. Trying to figure out a way around the issue at this time.
2010.01.08
Looked for code in Laplace-Beltrami to replace with QuadEdgeMesh code. LB code sets up edge-vertex-face relationship matrix, which may be replaceable with QE functionality or attributes.

Laurent tried the open surface I sent him and discovered unattached vertices. Ran his program to remove those points and was able to calculate eigenvalues on the surface.
2010.01.07
Made some bug fixes to Laplace-Beltrami to reproduce the intermediate results of the MATLAB version of the program. The program however fails while calculating eigenvalues for the c++ and the MATLAB versions. Wrote an e-mail to Laurent about it and included an open surface in MATLAB format for him to use as test data. Made some modifications for performance improvement and started looking through the code to get a better understanding of the algorithm and the Mesh functions used. Looked through a PPT file sent to me by an itk developer for some ideas.
2010.01.05
Created an open surface in .mat and .vtk format, to compare the output of the itk LB code vs. Laurent's MATLAB code. The calculations are not equal, so I've begun to try to figure out where the code diverges.
2010.01.04
Debugged and verified that the Laplace-Beltrami open surface code still works for closed surfaces. Started looking for open surfaces to test all the changes.
2009.12.28
Continuing work on Laplace-Beltrami for open surfaces. Made code additions, began testing.
2009.12.21
Continuing work on itk contribution with Kitware/ITK developer help. Completed draft of Insight Journal submission, with comments/suggestions from Dr. Miller, Anthony, Dr. Younes, and Tilak. It can be seen here. Began work on PCA, reading MATLAB code and comparing with existing itk/vxl classes. Began working on adding code for open surfaces.
2009.12.01
Checked in code modifications and new test output. Checked out files that automatically create an Insight Journal submission document skeleton. Began working on the submission.
2009.11.29
Made mods to LB filter: Rebuilt locally, then did a dashboard rebuild - configure, compile, and test all completed without errors.
2009.11.25
Set up nightly build with Luis for LB.
2009.11.23
Discussed Laplace Beltrami filter architecture with Luis and Michel from Kitware. Michel forwarded some of our e-mails to the Insight developers group, and members have provided responses and suggestions. One of the issues mentioned is that the itk MeshToMesh Filter is actually broken and will not work when input and output types differ, the behavior I was seeing when I tried to compile. I fixed most of the issues but am going to revisit the itk LB code based on their suggestions and offers to work together. Specifically, every respondent suggested using QuadEdgeMesh instead of Mesh.
2009.11.20
Removed the template parameter which specifies the type of the data used for internal calculations in Laplace Beltrami.
Made some changes to the LB test routines to make them a more interesting test and had lots of problems when Input/Output mesh types were different. Had to replicate all the typedefs. The CopySurface functionality became an issue also. I am looking for a CopyInputToOutput Mesh function that Luis has told me about but I can't find it in the code.
2009.11.18
Looked at removing the template parameter which specifies the type of the data used for internal calculations in Laplace Beltrami. The eigenvalue calculation has to be in "double" so there's not much reason to give the user a choice.

Looked more at the process for itk submission, sent an e-mail to Michel. Signed up for the Kitware itk proposal wiki.
2009.07.24
Contacted Will Schroeder of Kitware to discuss the NCBC grant status and the MRML requirements. Agreed to try to set up a teleconference after my return from vacation.

Added the keep/extract flag to the SSD BW script command.
2009.07.22
Updated NCBC requirements for MRML Lib. Joe added his comments to our wiki entry.

Rebuilt standalone BW library with the files necessary to run the Segment command. Looked at the Linux lib dependencies and didn't see any that should be a problem for Windows.
2009.07.14
Started writing down the current requirements of NCBC MRML work as I understand them. Spoke to Laurent about NCBC requirements for GRF processing. Started to compile a table of lddmm output-to-vtk-to-itk conversions we'll require.
2009.07.13
Spoke to Anthony, Joe, and Youngser about NCBC MRML requirements.
2009.07.10
Checked output of Apply Deformation in lddmm-surface, but the output looks the same as the input. Did a few sanity checks but found no obvious issues.

Started looking into NCBC MRML requirements.
2009.07.09
Created surfaces from my ellipse-to-sphere test from lddmm-landmark validation. Used caworks, loaded the landmark sets, then used Delaunay3D to triangulate, then Extract Surface to get an actual surface, then saved as a byu. Used these surfaces as input to lddmm-surface. Couldn't get kdevelop to correctly debug the program, so ended up using gdb to fix bugs. Program ran properly to completion.
2009.07.08
Finished checking in lddmm mods into the caworks svn repository. Sat down with Joe and worked through merging the code into the trunk of the repository, then worked out the compiler issues.
2009.07.07
Finished rebuilding lddmm-surface-exec. Ran some data through but it failed after a couple of hours. Ran the data through my FixSurfaceFiles program and restarted lddmm-surface.

Began checking in lddmm mods into the caworks svn repository. My smartsvn demo license expired so I need to investigate buying a license.
2009.07.06
The hippocampus data has some vertices associated with incomplete faces or no faces. I made mods to lddmm-surface to check for this condition and shut down if detected. That didn't work as expected, so now I'm trying to rebuild everything in the /cis/project/software/dev with icpc, but gts and ivconv have yet to be built with icpc here. Started rebuilding those libraries, but having some trouble with CMake 2.6 vs 2.4 doing some extra checks and not allowing some things in our CMakeLists.txt file.
2009.07.02
Added Apply Deformation to lddmm-surface. Began testing with hippocampus data...
2009.06.30
Finished updating lddmm-landmark webpages with new validation data for the sphere to ellipse case. Started adding apply deformation to lddmm-surface-exec.
2009.06.22
Talked to Joe about creating a filter for extracting the blocks. He is going to write one.
2009.06.19
Got Apply Deformation working and tried to figure out how to get images at each step of the deformation from the output. Tried ExtractBlock but couldn't get to the individual blocks.
2009.06.18
Debugged Apply Deformation for Volumes. Had to include a vbl template instantiation file because instantiation was not being compiled into the executable.
2009.06.17
Worked on debugging Apply deformation for volumes in caworks.
2009.06.16
Removed stream based code from VolumeTransformation code so that it can work in CAWorks. Rebuilt caworks.
2009.06.15
Continued trying to get the original point based apply deformation working in CAWorks. Code that has worked since 2007 doesn't work in CAWorks. For some reason using streams is broken in CAWorks. The original plan for interfacing with CAWorks was to use streams of vtk formatted data, but while it seems to work stand-alone, it seg faults in CAWorks. Rewrote the interface to use vnl structures instead of streams.
2009.05.08
Built and validated i686 version of lddmm-landmark-exec.
2009.05.07
Checked out and compiled lddmm-volume dependent libraries in i686.
2009.05.06
Checked lddmm-volume (and dependent library) changes into CIS cvs repository.
2009.05.05
Found problem with lddmm-volume on win32. It seems like the operating system is writing extra line feed characters into the binary data written for the vtk files. If the file is opened specifically as ios::binary, the line feeds don't occur. Checked the lddmm-volume output and it matches the original lddmm-volume output from linux.
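A minimal sketch of the fix (illustrative names): opening the stream with std::ios::binary suppresses the text-mode newline translation that was corrupting the binary payload.

#include <fstream>

void writeField(const char *path, const char *bytes, std::streamsize n)
{
    std::ofstream out(path, std::ios::out | std::ios::binary);  // no \n -> \r\n expansion
    out.write(bytes, n);
}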
2009.05.01
Added changes to BW code to produce an additional stats file when a segmentation is saved. Discovered the source of the strange behavior in the histogram code where the histogram value at 128 is dropped out. Sent the following to Tilak:
After looking through the code with the debugger, I discovered a simple math issue that accounts for the strange behavior of the Histogram at 128. Essentially there is a calculation of the bin space that is less than 1.0 ((dataMax-dataMin)/numBins = (255-0)/256). The bin assignment is determined by (val-dataMin)/binSize + 0.5, which is then truncated to integer. So values of 127 go into bin 127, values of 128 go into bin 129.

When I change the equation so the bin space is 1.0, the gap at 128 goes away.
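A small sketch of the arithmetic described above (hypothetical names; 8-bit data with dataMin = 0, dataMax = 255, numBins = 256):

int binFor(double val, double dataMin, double dataMax, int numBins)
{
    double binSize = (dataMax - dataMin) / numBins;        // 255/256, just under 1.0
    return int((val - dataMin) / binSize + 0.5);           // 127 -> 127, but 128 -> 129
}

// One way to force a bin width of 1.0 for this data (the exact BW change may differ):
int binForFixed(double val, double dataMin, double dataMax, int numBins)
{
    double binSize = (dataMax - dataMin + 1.0) / numBins;  // 256/256 == 1.0
    return int((val - dataMin) / binSize + 0.5);           // identity mapping; no gap at 128
}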
2009.04.28
Recompiled and rebuilt everything in lddmm-volume to make sure no optimization was occurring that might produce write problems. Didn't help vector field issue.
2009.04.27
Validating win32 versions of FluidMatchBirn. Image data in vtk format looks acceptable. None of the vector field files appears to be correctly written. Each vector field file has a different, incorrect size; each should match the size of the corresponding linux vector field file.
2009.04.23
Built win32 versions of FluidMatchBirn. Started validating program output.
2009.04.22
Built win32 versions of lddmm-landmark-exec.
2009.04.21
Built win32 versions of lddmm-volume-lib, lddmm-landmark-lib.
2009.04.20
Built win32 versions of CISLib, lddmm-math-lib, lddmm-common.
2009.04.14
Added some README files to the dependent library modules for lddmm-volume and lddmm-landmark. Created and checked in new configuration files (buildSetp and devScriptCxx). Rewrote the wiki page at https://wiki.cis.jhu.edu/software/install_config to update the instructions for configuring and installing the modules and building lddmm-volume and lddmm-surface.
2009.04.13
Started working on new configuration files for building lddmm-landmark with Apply Deformation functionality. Current script for setting up environment points into the configured area, so I am making a new script with the flexibility to write to directories specified by the user.
2009.04.05
Finished checking in Apply Deformation changes to CISLib, lddmm-common, and lddmm-landmark-exec. Sent an e-mail announcement.
2009.04.04
Created a wiki page of Apply Deformation results which includes links to snapshots of CAWorks display of deformation of sphere into an ellipse.
2009.04.03
Created snapshots of CAWorks display of deformation of sphere into an ellipse. Created an animated gif to show the deformation of the image over the time steps.
2009.04.02
Ran the ellipse-to-sphere test on the new lddmm-landmark and compared with results from the old code. The results matched.
2009.04.01
Created ellipse and sphere phantom landmark data, and ellipse image data. Began testing lddmm-landmark with apply deformation.
2009.03.31
Wrote phantom data creation programs. Modified program that places corresponding landmarks on concentric ellipses so that the user can enter an arbitrary center point. Created a program that creates an elliptical shell of arbitrary center and size inside a volume space of arbitrary size and resolution (in each dimension). Purpose is to create a set of landmarks on a small ellipse and a large ellipse, then match the landmark sets. Using that matching, deform an image of the small ellipse and see how closely it matches the landmarks of the larger ellipse.
2009.03.30
Debugging lddmm-landmark-exec with apply deformation.
2009.03.29
Worked on compiling with icpc. Found a version of lddmm-landmark-lib previously compiled with gcc that was giving undefined reference errors on all the vxl and vtk calls.
2009.03.27
Worked on building lddmm-landmark-exec with icpc.
2009.02.27
Can asked that I take out the changes and re-run lddmm-volume to get an idea of the total speed up. Without the code changes:
Total fftw count was: 1220
Total fftw time (sec) was: 2124.4223
Total Save VTK Time (sec) was: 4594.4891
Interpolation Time (sec) was: 3960.3832
Total lddmm-volume time (sec) was: 12495.457

So the speedup is almost 4 times...
2009.02.26
As Joe and Tim recommended yesterday, setting LDDMM_TMP to /tmp improved the performance of the file output part of lddmm. The performance numbers I got for the 256x256x256 run:

Total fftw count was: 1220
Total fftw time (sec) was: 482.63509
Total lddmm-volume write time (sec) was: 303.11958
Total Save VTK Time (sec) was: 385.58809
Interpolation Time (sec) was: 558.63565
Total lddmm-volume time (sec) was: 3266.6895

Checked lddmm-volume changes into the cvs repository.
2009.02.25
Presented CIS library slides and plans to software group. Power Point is here.
2009.02.24
Cleaned up and tested lddmm-volume memory leaks. Cleaned up fftw plans, changed "delete" to "delete []" for array allocations, cleaned up FluidMatch class members. Re-ran valgrind to verify nearly all leaks are cleaned up.
2009.02.23
Worked on lddmm-volume lib dependency Power Point slides. Continued trying to use Bouml and was able to get it to "reverse engineer" the library code, but could not get it to produce diagrams. Tried using ArgoUML but didn't do much more.

Started working on cleaning up memory issues in lddmm-volume, using valgrind outputs as a guide. Noted that FluidMatch, the main lddmm-volume class, cleans up very little of its allocated memory.
2009.02.20
Worked on lddmm-volume lib dependency Power Point slides. Did some refactoring of libraries - moved some generic math classes from lddmm-volume-lib to lddmm-math-lib. Started working with the Bouml UML modeling tool to try to produce some dependency diagrams of the libraries.
2009.02.19
Worked on lddmm-volume lib dependency Power Point slides.
2009.02.18
Started lddmm-volume lib dependency Power Point slides. Generated doxygen documentation of libraries.
2009.02.17
Sent an e-mail to Anthony to discuss these issues. Looked further into memory problem. Created a script to test lddmm-volume with valgrind, the linux memory leak checker utility.
2009.02.16
Looking further into the performance, I saw that after lddmm performs the match, it takes more than an hour to write out all the output files the program generates. The original code for writing vtk files rearranges the order of the bytes for every double, then writes the 8 bytes to the output stream, then flushes the stream to the disk. With a volume that large, the output file is about half a Gig of data, and there are a lot of files, so that's a lot of disk operations. I changed the code to buffer the write, and that version of FluidDriverBirn runs in 5007.6381 sec.
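A rough sketch of the buffered write (illustrative names): swap each double into one large in-memory buffer and flush it with a single write, instead of an 8-byte write-and-flush per value.

#include <algorithm>
#include <cstddef>
#include <cstring>
#include <fstream>
#include <vector>

void writeVolumeSwapped(std::ofstream &out, const double *data, std::size_t n)
{
    std::vector<char> buf(n * sizeof(double));
    for (std::size_t i = 0; i < n; ++i)
    {
        char tmp[sizeof(double)];
        std::memcpy(tmp, &data[i], sizeof(double));
        std::reverse(tmp, tmp + sizeof(double));            // byte-order swap per double
        std::memcpy(&buf[i * sizeof(double)], tmp, sizeof(double));
    }
    out.write(&buf[0], static_cast<std::streamsize>(buf.size()));  // one disk operation
}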

Noticed serious machine memory issues with writing out lddmm-results. Everything seems fine until the program starts to write its output files, then it begins to eat up memory gradually. It will begin the writing with about 15G of memory left on mtvernon, but before it completes, all that memory will be used up. "top" says that the program is using about 17.5G of memory the entire time, but available memory on the machine goes toward zero, and there's nothing else running.
2009.02.15
Achieved speedups with parallelization of interpolation, verified identical processing results.

Without openmp, the lddmm executable FluidDriverBirn runs in 10661.889 seconds on a 256x256x256 volume. With openmp, the program runs in 7474.8974, almost an hour faster. The results are identical.
2009.02.14
Started applying parallelization efforts to lddmm-volume, specifically in the interpolation functions.
2009.02.13
Got parallelized lddmm-landmark transformation matrix generation to work. Classified variables as follows:
  1. Use "#pragma omp parallel for"
  2. Allow dynamic scheduling of threads.
  3. Specify that there is no default scope classifier, then classify each variable in the loop as follows:
    • Local variables that are read-only: shared
    • Local variables that are written to: private
    • Accumulated variables: reduction(+:)
On mtvernon, with 8 processors and nothing else running, the transformation matrix is created in 31.0034 seconds. With no openmp switch during compile, the matrix is completed in 243.279 seconds. The results were identical.
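A minimal sketch of that classification on an illustrative loop (the real code is the transformation-matrix loop in VolumeTransformation.cpp):

#include <omp.h>
#include <cstddef>

double flowAllGridPoints(std::size_t n, const double *in, double *out)
{
    double p = 0.0;        // scratch value written each iteration -> private
    double total = 0.0;    // accumulated across iterations        -> reduction(+:)
    #pragma omp parallel for schedule(dynamic) default(none) \
            shared(n, in, out) private(p) reduction(+:total)
    for (long i = 0; i < (long)n; ++i)   // loop index is private automatically
    {
        p = 2.0 * in[i];                 // n and the pointers are read-only -> shared
        out[i] = p;
        total += p;
    }
    return total;
}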
2009.02.12
Continued working on parallelizing lddmm-landmark transformation matrix generation. Started figuring out how variables inside parallel sections of code should be classified in order to avoid conflicts between threads.
2009.02.11
Continued working on parallelizing lddmm-landmark transformation matrix generation.
2009.02.10
The creation of the .xfm file by lddmm-landmark is parallelizable. The flow of each grid point is independent of the flow of every other grid point. Added open mp pragmas to VolumeTransformation.cpp in the lddmm-common library to try to exploit that. When run on a tiny data set (n == 3), the program works properly. When it is large (n == 64) it aborts. Spent some time trying to figure out where and why the program aborts.
2009.02.09
Was able to build a dynamic version of the lddmm-landmark program in icpc. Ran the program on a small test data set and compared to g++. The trajectories and deformed templates were identical. Running od -f on the transformation files and comparing the outputs showed they were identical. There were differences in the momentum files when numbers were very close to zero (10**-15). The outputs are numerically the same.
2009.02.06
Worked on trying to get icpc to build a statically linked version of lddmm-landmark. Was unable to build one that worked, or to track the problem via the debugger.
2009.02.05
Was unable to build a working, statically linked, lddmm-landmark with the icpc compiler. Incurred a seg fault at startup, before the first line of code was reached (according to the Frame Stack in the kdevelop debugger), during the initialization of globals. Rebuilt to make sure all dependent libraries (blitz, vtk, vxl) were up to date with the latest icpc, but the program still failed.

Produced an lddmm-landmark executable with g++ that ran properly. Gave that to Nayoung for testing. The parameters are:
-W --writeBackXForm : write out a backwards volume transformation file based on the volume space file specification.
-D --writeBackDXForm : write out a backwards difference transformation file based on the volume space file specification.
2009.02.04
Started working on Nayoung's request for a "reverse" transform created from lddmm-landmark. Added new options for the new direction transform, compiled and started testing.
2009.02.03
Checked in BW modifications. Incremented the bwtest version number and built a release version of the code. Copied executable into bwtest area. Added Rodd Pribik to BW user mailing list and sent a notification about the BW mods to the list. Updated change log on the CIS software webpage.

Spoke with Can about lddmm-volume, the cvs module I created with the separate lddmm-math-lib and lddmm-volume-lib libraries, and separate test and utility code directories. Discussed CMake configuration and how MKL is integrated into the program. Discussed vtune and the kind of analysis it does. Checked in my latest changes to lddmm-volume in preparation for him to check out the code and begin his performance enhancement modifications.
2009.02.02
After not getting desired results from my BW code modifications, tried simply setting min, max from the GUI and seeing how that changed the data in the HistBase class. Forced the script version to make the same data changes, and the script version performed as required.
2009.01.30
Continued working on BW modifications. Had to add a new flag to the HistBase class to indicate a calculate min, max request.
2009.01.29
Sat down with Felipe and created a cvs repository under his home directory, helped him gather his code into a development tree, then showed him how to use tkcvs to import a new module and commit changes to code. Showed him how to roll back to previous versions of code.

Checked out BrainWorks code modified by Lei Wang and started working on building a new version of the program with a functional modification requested by Tilak. Tilak wants the Segment script command to provide a flag that forces BW to determine the min, max of the volume and to set the min, max of the histogram to those values.
2009.01.28
Finished the output comparison program.
2009.01.27
Worked on lddmm output comparison program for vector fields (in addition to volumes). Added the calculation of the root mean squared of all the data in each file, then divided the max error of the file by that number to provide scaling to the error number. Largest errors around 10**-10 with scaling. Informed Can about the problems with Array3D and sent him a copy of the code with the errors fixed.
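A hedged sketch of that scaled error measure (illustrative names, not the actual comparison program): the maximum absolute difference, divided by the RMS of the reference data.

#include <algorithm>
#include <cmath>
#include <cstddef>

double scaledMaxError(const double *ref, const double *test, std::size_t n)
{
    double maxErr = 0.0, sumSq = 0.0;
    for (std::size_t i = 0; i < n; ++i)
    {
        maxErr = std::max(maxErr, std::fabs(ref[i] - test[i]));
        sumSq += ref[i] * ref[i];
    }
    double rms = std::sqrt(sumSq / double(n));
    return maxErr / rms;   // around 1e-10 for the comparisons described above
}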

Started working with Felipe to get him acquainted with cvs for some development he's been working on.
2009.01.23
Finished lddmm output comparison code. Found errors in the calculation of mean and standard deviation in Array3D. Found that the code produced by icpc and g++ produced outputs that differed by less than 10**-11.
2009.01.22
Worked on lddmm output comparison program. Added it to the "test" directory under lddmm volume.
2009.01.21
Given that different compilers and different libraries might produce slightly different outputs for lddmm, started to write a program to determine how close two given output sets of 3D volumes in vtk format (the lddmm output format) are. This program will be useful to give to Mazdak to test his semi-Lagrangian code.
2009.01.20
Continued working on why differences exist between lddmm vtk output between version of code in lddmm cvs module versus version of code in lddmm-volume. Discovered icpc version created different output than g++ version.
2009.01.16
Continued working on why differences exist between lddmm vtk output between version of code in lddmm cvs module versus version of code in lddmm-volume.
2009.01.15
Discovered that something called memory aliasing might be causing the problem in the code. What happens is that when a new pointer is created to existing memory, the optimizer might optimize away operations performed on that new pointer. In Array3D::writeDouble, there is a reinterpret_cast which creates a pointer to the double value as a long long int, then the order of the 8 bytes is flipped, then the double is written to the vtk file. The optimizer in g++ v4 optimizes away writeDouble, and reorders instructions so that an uninitialized value gets written at the front of the file. This behavior can be prevented through the g++ compiler flag -fno-strict-aliasing. This however prevents certain optimizations, so I fixed the code by doing a memcpy from the double to a buffer, then reversing the bytes, then writing them out.
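A sketch of the aliasing-safe byte swap described above (illustrative names): the double is copied into a char buffer with memcpy, the bytes are reversed, and the buffer is written, so the value is never accessed through a pointer of another type.

#include <algorithm>
#include <cstring>
#include <ostream>

void writeDoubleSwapped(std::ostream &out, double value)
{
    char buf[sizeof(double)];
    std::memcpy(buf, &value, sizeof(double));  // defined behavior, unlike the reinterpret_cast
    std::reverse(buf, buf + sizeof(double));   // flip the byte order for the vtk file
    out.write(buf, sizeof(double));
}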
2009.01.14
Tried building on v3 (tet) and v4 (mtvernon) in debug mode, then stepping through with the debugger and comparing behavior (focusing on Array3D::writeDouble), but I noticed no difference. Then checked the output of the debugger version and the output was correct. So version 4 of g++ creates executable code that behaves differently depending on the level of optimization requested.
2009.01.13
Noticed that the v0-6 was built on tet, and the current version was built on mtvernon. Built v0-6 on mtvernon and it failed. g++ on tet is version 3, on mtvernon version 4.
2009.01.12
Built version v0-6 of lddmm. That code behaved like the originally installed version. Checked the code differences between the current version and v0-6 and found no significant differences.
2009.01.09
Created a makefile for Tim to compile a vtk conversion utility file using the installed lddmm-math-lib.

Went back to the original version of lddmm-volume in the cvs repository (module lddmm) to see how that behaved. Found that the latest code had the same vtk binary file problem. Decided to try to build the code tagged v0-6 and see how that behaved.
2009.01.08
Since the code from Array3D had been modified for NIfTI support, looked through those changes to see if anything significant was changed that could have introduced that error but found nothing. Began looking at other changes in the code but found no obvious suspects.
2009.01.07
Discussed with Tilak ways to speed up the semi-lagrangian code, including new interpolation procedures.

Started tracking down the reason for the differences in the vtk output files from lddmm-volume. The difference cannot be attributed to differences in vtk libraries because lddmm-volume doesn't use them. It has its own code for vtk output.
2009.01.06
Set the number of threads to one and reran tests. The problem recurred, so did some more testing and it appears that MKL fft produces significantly different results from fftw. All the text output is the same, including the energy in each step, but the binary files contain significantly different numbers.
2009.01.05
It turns out that the vtk field data only needs to have nine text lines stripped from the top of the file, so unfortunately, without any simple way to differentiate between the two formats, I have to leave a hard-coded diff command file for the comparisons instead of building the command file after each test run.

Took a rudimentary stab at parallelizing the classInterpolation class of lddmm-volume. Added an "omp for" pragma to the outermost loop of classInterpolation::StanCoteTrajectoryDisplacement. The comparison in the validation area showed significant differences in the results.
2009.01.02
To compare vtk files, I strip the text lines (10 lines) off the top, run od -fD (for long float (i.e., double) formatting) to get a text rendering of the data, then diff the od output. There appears to be a 1 byte offset into the data portion of each file, and they only actually match when the first 8 byte value in the new vtk format is discarded.
2008.12.31
Ran some test data in validation area and found that .txt data file matched properly, but that .vtk format data is in general different in the new lddmm-volume vs. the installed version, possibly due to different versions of VTK being used, and some issues with vtk files with their mixed text and binary data.
2008.12.30
Set up a test environment so that a new version of lddmm-volume can be copied into a work area, run, and its results compared to the standard run of the configured version of lddmm-volume using the same data and parameters. Invoked the validation script to set up the standard results and to test the current version (w/ mkl and icpc). Set up this validation process to quickly test future parallel implementations of lddmm-volume.
2008.12.29
Read the Wikipedia entry on OpenMP to prepare for some experiments on parallelizing code with OpenMP.
2008.12.23
Worked on trying to implement the Optimization Report feature of vtune. Posted to the Intel forum again about the problem. Built the Intel code example from the vtune distribution, using the instructions from the vtune manual, and still got no response to the Optimization Report button.

Looked at some code examples from Tilak, and ran to the library to pick up some materials related to Semi-Lagrangian solvers that he asked for.
2008.12.22
Worked on trying to implement the Optimization Report feature of vtune. Posted to the Intel forum about the problem.
2008.12.19
Worked on trying to implement the Optimization Report feature of vtune. The block of code I am interested in optimizing is in a module called classInterpolation, so I compiled that outside the command line using the optimization report generation flags of the icpc compiler:
icpc -DUSE_MKL -DNO_IBM -DUSE_FFTW -DNO_MPI -DFLT=double -DWRITEVTK -DANALYZE_IMAGE -O3 -g -xP -Ob0 -opt-report -opt-report-level max -opt-report-file stancote.opt -vec-report3 -I/cis/project/software/src/CISLib/include -I/cis/project/software/src/lddmm-math-lib/include -I/usr/local/intel/mkl/10.0.4.023/include/fftw -c ~/projects/configResearch/installTest/lddmm/x86_64/lddmm-volume/lddmm-volume-lib/classInterpolation.C > stancote.vec
Built the lddmm-volume-lib with this object file, then re-linked the lddmm-volume executable, then ran the program in vtune as per the instructions in the vtune manual for getting optimization reports, but nothing happened when I pressed on the report button. Rebuilt, created a separate collection activity, re-ran, but still nothing. Went hunting into the Intel vtune forums for answers and found an environment variable should be set:
export VT_ENABLE_OPTIMIZATION_REPORT=ON
Set this variable and recollected sampling data, but still the Optimization Report feature did nothing. Rebuilt the application with this env variable set - still nothing.
2008.12.18
Tom Nishino sent an e-mail to tell me about getting BW to compile on OS X. He had to comment out licensing, but he thinks he can get it to work properly by using the Mac address.

Sent the lddmm performance e-mail to Tilak for review before sending it to Dr. Miller. He asked me to hold off on sending it. Here it is:
Dr. Miller,

First, I wanted to report that I followed up on lddmm-volume performance (128 voxel cube) with 4 processors, and the fftw took up about 5.6% of the total processing time, compared with about 4.0% for 8 processors, and about 18% with single processor fftw 2.1.5.

To speed up lddmm-volume processing, I think there are a number of approaches we need to look at:
  1. Software engineering approaches
    1. Use vtune outputs to identify wasted cycles due to unnecessary data conversion, memory accesses, recomputing intermediate values
    2. VTune/ICPC provide performance improvement tips at compile time.
  2. Parallelization approaches
    1. Identify code that can be replaced by mkl library calls that provide parallelization for free.
    2. Implement an open mp version of the parallelization Faisal provided with MPI. See if single machine performance is improvement.
    3. Add Open MP to parallelizable sections of code.
  3. Algorithmic approaches:
    1. Try to find less computationally intense ways to perform volume matching:
      1. Incremental TimeSteps.
      2. Incremental Resolution.
Using vtune, I identified that a major part of the processing time was spent in a function called classInterpolation::interpolateMapStaniforthCote (about 32%). From the comments in the code, this function is a semi-Lagrangian scheme for integrating an advection equation. I have been unable to find an mkl function that performs this specific function. I have an e-mail into Intel tech support asking them to check into whether they have or know of any library code to perform this function.

Tilak and I have been discussing this function and he feels that the accuracy achieved by this algorithm is required for the lddmm-volume solution. He could think of no substitute algorithm that would provide the same accuracy.

Faisal did not parallelize this piece of code.

If we accept the above assertions as true our options are as follows:

  1. Use compiler tips from icpc/vtune for code modifications.
  2. Use our code writing experience to make improvements to the existing code. Joe can help out with this effort as needed, and has already provided some ideas in this area.
  3. Parallelize this function via OpenMP pragmas.


Option #1 is pretty much a freebie, assuming the code is not too garbled based on the compiler hints. Option #2 is something we should do anyway. These could be completed within about three weeks, including the creation of a test program to measure the performance and validate the accuracy of new code.

Parallelization of this function I think would provide the greatest improvement in speed, especially given the new hardware available to us. I am as yet unfamiliar with how to do this, but Intel is providing a lot of resources for learning parallelization techniques, e.g.:

http://portal.intel-dispatch.com/ViewLetter.aspx?view=GA4WXF9qDJc=_505_482_511

http://portal.intel-dispatch.com/ViewLetter.aspx?view=GA4WXF9qDJc=_501_477_506

Please let me know what you think of this plan.

Mike B.
2008.12.17
Learned about the function in lddmm that is the most time-consuming part of the processing, a semi-Lagrangian advection equation integrator. Spent time with Tilak looking at the code. Talked with Joe about trilinear interpolation and some faster coding schemes. He sent code that is faster for large interpolation problems, which this is not. It doesn't look like it would speed up our operations.

Started working on an e-mail to send Dr. Miller to describe the strategy I think we should use for improving lddmm-performance.
2008.12.16
Call Graph run failed for unknown reasons. Looked over the interpolation function that was taking up most of the processing time in lddmm-volume. Met with Dr. Miller, Tilak, and Anthony to discuss lddmm-volume performance issues. Decided we needed a way to speed up the interpolation function. Dr. Miller recommended finding a library function in a package such as mkl to perform the function.

Discussed the interpolation function with Tilak.
2008.12.15
Created a bar graph of lddmm-volume run times of 128, 256 voxel cube sets, with fftw as a percentage of total. Recorded the following:
Points    Total Time lddmm-volume    Time FFTW MKL    Percent Time FFTW
128       2485.15                    100.13606        4.0
256       10640.74                   366.46628        3.4


Produced the following agenda for lddmm-volume performance enhancement strategy meeting:
  1. Software engineering approaches
    1. Use vtune outputs to identify wasted cycles due to unnecessary data conversion, memory accesses.
    2. VTune/ICPC provide performance improvement tips at compile time.
  2. Parallelization approaches
    1. Identify code that can be replaced by mkl library calls that provide parallelization for free.
    2. Implement an OpenMP version of the parallelization Faisal provided with MPI. See if single machine performance is improved.
    3. Add OpenMP to parallelizable sections of code.
  3. Algorithmic approaches:
    1. Try to find less computationally intense ways to perform volume matching:
      1. Incremental TimeSteps.
      2. Incremental Resolution.
Did not discuss the above at the meeting. Decided to adjourn until tomorrow and look at individual lines of lddmm-volume code.

Looked at previous vtune performance data collections. Noted that there were different, unexpected functions high up the activity list, including library loading functions and wait functions. The activity lists from the two runs looked dissimilar. Seemed possible that since mkl would use all eight processors, it would have to do some swapping. Set the number of processors to 4 and ran the First Use collector twice. Got similar results, with a function called interpolateMapStaniforthCote taking up about 1/3 of total processing time. According to comments in the code, this is an ODE function. The largest part of the time spent is in a call to StanCoteTrajectoryDisplacement. Set up a "Callgraph" vtune data collection before leaving.
2008.12.14
Saved the results of my vtune performance data collection from a Friday run with the gcc compiler on a 256 cubed data set. Tried running a 64 point cube in the function summary view, and saved the results.

Ran the 128, 256 cubed sets through a gcc compiled lddmm-volume.
2008.12.13
Built FluidDriverBirn with icpc. Created a batch file to run Can's data (128 and 256 sized cubes) through icpc compiled FluidDriverBirn.
2008.12.12
Built vxl and vtk with icpc. Rebuilt the CIS libs that lddmm-volume depends on.

Started using vtune to run data for performance analysis on gcc compiled FluidDriverBirn.
2008.12.11
Helped Valentina compile some initialMomentumMatching code modifications she was making for Laurent. Introduced her to kdevelop. Answered some questions for Sheeraz Daudi, who is working for Lei. He is trying to build BW and was experiencing problems related to trying to compile on a 64 bit machine with g++ 4.

Started reworking the libraries to try to build with icpc compilers.

Can provided data sets for volume matching.
2008.12.10
Got the results of lddmm-volume processing. Again, the program did not appear to run very successfully, based on the error messages I saw. However, it did run to completion, so I have sample fftw numbers for lddmm-volume for 64x64x64, 128x128x128, and 256x256x256 sized arrays. Put those numbers on the wiki page for lddmm fftw performance. My numbers showed that FFTW operations took a small percentage of lddmm processing time. Since the images did not match well, I sent an e-mail to Can to request that he create more reasonable image pairs of the sizes we tested with, to see what the percentage of fftw time is for images that lddmm operates on more successfully.

Met with Anthony and Dr. Miller to discuss the results of my testing so far. Based on what we've seen from the testing so far, the percentage of time spent in fftw is too small for the speedup we get from mkl fftw to make a significant difference in processing time. Dr. Miller asked for a study using VTune and the good data sets from Can to determine where time is spent in lddmm-volume processing, so those sections can be optimized.

Got code from Aastha Jain for her version of Surface Matching which uses fftw. Created a makefile for her that links in mkl fftw3.
2008.12.09
Looked at the results of lddmm-volume on images. The 128 size runs didn't converge properly, and the 256 point runs used up all the memory on the system, then some. Restarted 256 point run with mkl link, and 10 timesteps instead of 20. Program finished properly.
2008.12.08
Changed the instrumentation calls in lddmm-volume-library to use gettimeofday instead of clock, so multiprocessor times are more accurate. Built for both mkl and nonmkl. Looked for 128x128x128 and 256x256x256 data. Found some adni data that was 256x256x256. Used IMG_subcubecut to create a 128x128x128 volume from mouse brain data given to me by Xin Li from the DTI Studio group. Started an overnight process to run lddmm-volume on the 128 and 256 volume with and without mkl.

Checked in testfftw code and a CMake file for building the program.

Rebuilt licenses for io19 for lddmm-dti-tensor, which had expired.
2008.12.05
Tried some experiments to see if they have an effect on lddmm performance. Added more instrumentation to the code because the numbers didn't make sense. Discovered that the call to "clock()" adds up all the CPU time for all the processors being used, so the numbers for the multiprocessor calls were higher even though the program was finishing faster. Created a program that just performs an fftw call (taken from Faisal's code) on progressively larger NxNxN constant value volumes, N = 32 to 1024. Plotted the results. The mkl multiprocessor fftw performed 5 - 6 times faster on the larger volumes than the single processor standard fftw. The results are here.
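For reference, the wall-clock instrumentation pattern I switched to is roughly this sketch (the helper name is made up; unlike clock(), gettimeofday does not accumulate CPU time across processors):

#include <sys/time.h>

// Wall-clock seconds; multithreaded runs report elapsed time correctly
// because this does not sum CPU time over all processors the way clock() does.
static double wallSeconds()
{
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + tv.tv_usec * 1.0e-6;
}

// Typical use around an fftw call:
//   double t0 = wallSeconds();
//   /* ... fftw call ... */
//   fftwTotal += wallSeconds() - t0;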
2008.12.04
Helped Tom Nishino build BrainWorks from the cvs directory. Created a CMakeLists.txt file to build a library that contains and installs only the files necessary for CIS Licensing. Provided step-by-step instructions for building this library and linking it into BW.

Started mkl performance testing/verification. Searched through all code files of lddmm-volume for calls to fftw, added instrumentation points to accumulate the time spent in the calls. Linked the code with mkl fftw, and saved an executable to my projects/validation area. Linked with normal fftw (2.1.5) and saved that executable as well. Tested both programs against each other using hippocampus volumes (80 x 96 x 80). The mkl version spent 644.74 seconds in the fftw, the non-mkl version spent 355.18 seconds in fftw. The output files are the same otherwise. I will try the following experiments to see if they have an effect on performance:
2008.12.03
Ran some more NIfTI and Analyze data through original and modified lddmm-volume for comparison.

Finished building lddmm-landmark v2 libs for Xin. Sent them to her but she experienced problems because the MSVC projects were not putting the x64 libraries into separate directories when they were built. Got her the correct libraries and she was able to link without problems. She does not intend to distribute new Landmarker versions immediately.

Helped Tom Nishino build BrainWorks from the cvs directory. Since we now link in CISLib for the licensing code, BrainWorks has to be built with whatever dependencies CISLib requires. I created a separate CMakeLists.txt that will build a version of CISLib that just includes the licensing code.

At the software status meeting, Anthony stressed the importance of completing the testing of the performance of mkl fftw versus regular fftw wrt data used in lddmm-volume. I promised the results by the end of the week.
2008.12.02
Tested code for lddmm-volume support of NIfTI data.

Built lddmm-landmark v2.0 libraries for Xin. Started to collect necessary file into zip folder for her.
2008.12.01
Finished code to support NIfTI data and started testing with hippocampus data.
2008.11.26
Worked on NIfTIImage class to write code for conversion of NIfTI Image to Analyze data object.
2008.11.25
Helped Yun-Ching Chen with running lddmm-dti-tensor on his computer.
2008.11.24
Worked on NIfTIImage class to write code for conversion of NIfTI Image to Analyze data object.
2008.11.21
Continued working on implementing NIfTIImage class in lddmm-math-lib. Was able to build lddmm-volume driver.
2008.11.20
Met with Dr. Miller, Anthony, Kyle Reynolds, and Rai Winslow to discuss a hardware procurement for Dr. Winslow's lab. Discussed some performance numbers observed with MKL in lddmm-volume and lddmm-landmark. Anthony thinks our numbers should be much better. Dr. Miller asked me to perform further performance tests on lddmm-volume.
2008.11.19
Searched for "coarsen" code for Tilak. Looked at gts_surface_coarsen, a call from the gts library, to see if it could be used. Looked at the parameters currently supported by coarsen, and compared them to gts example code, and found significant differences. To completely replicate the "coarsen" we use here we will need the source code.

Worked on building lddmm-volume driver.
2008.11.18
Continued working on implementing NIfTIImage class in lddmm-math-lib. Worked on building lddmm-volume driver.
2008.11.17
Continued working on implementing nifti file readers in lddmm-volume driver.

Discussed BW modifications with Lei Wang and Tilak wrt Segmentation scripting. They want to:
2008.11.14
Continued working on implementing NIfTIImage class in lddmm-math-lib. Worked on adding calls to the class in lddmm-volume driver.
2008.11.13
Continued working on implementing NIfTIImage class in lddmm-math-lib. Worked on adding calls to the class in lddmm-volume driver.
2008.11.12
Continued working on NIfTIImage class.

In the CIS Software Development Group meeting Joe Hennessey said that the latest NIfTI libraries could be found in his caworks directory under ITK. Built the libraries in /cis/projects/software/opt/x86_64/nifti directly from the CMake file in the source directory.
2008.11.11
Continued working on implementing nifti file readers in lddmm-volume driver. Adding a NIfTIImage class to lddmm-math-lib. The class will use the nifti libraries for IO, and provide an interface that allows the user to do some image comparisons (e.g., on orientation) to see if two images are compatible for lddmm-volume matching. It will also provide a conversion to an Analyze class in lddmm-math-lib which is used in lddmm-volume.
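A rough sketch of the interface I have in mind (the method names here are illustrative only, not the final API):

#include "nifti1_io.h"

class AnalyzeImage;   // the existing Analyze class in lddmm-math-lib

// Interface sketch only.
class NIfTIImage
{
public:
    // Read header and voxel data through the nifti reference library.
    bool read(const char *fileName);

    // Compare dimensions/resolution/orientation with another image to decide
    // whether the pair is suitable for lddmm-volume matching.
    bool isCompatibleWith(const NIfTIImage &other) const;

    // Convert to the Analyze representation lddmm-volume already understands.
    AnalyzeImage *toAnalyze() const;

private:
    nifti_image *m_image;   // handle returned by nifti_image_read()
};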
2008.11.10
Continued working on implementing nifti file readers in lddmm-volume driver.

Received an e-mail back from Xin Li about landmark orientation issues. She said after discussion with her group and some testing, they believe the Landmarker transformation code works as is and should produce data with the correct orientations.
2008.11.06
Continued working on implementing nifti file readers in lddmm-volume driver.
2008.11.05
Continued working on implementing nifti file readers in lddmm-volume driver.
2008.11.04
Started implementing nifti file readers in lddmm-volume driver. In addition to current image consistency checks, will check nifti orientation information before performing lddmm-volume.
2008.11.03
Continued reading orientation webpages for Analyze/NIfTI files. Used caworks, bw, and Landmarker to look at various volume types and their orientations. Downloaded MRIcron, a simple program for reading volume data. Ran the program and looked at a couple of volumes. Sent an e-mail to Xin Li to start a discussion about these issues in Landmarker.
2008.10.31
Found the following website and read its discussion of Analyze and NIfTI orientations:

http://www.grahamwideman.com/gw/brain/orientation/orientterms.htm

Tried to figure out how the various types of orientation correspond to the XYZ coordinates of landmark files, and the transformation volume that I create for Landmarker.
2008.10.30
Read NIfTI and FSL code and documents.
2008.10.29
Read NIfTI code and documents. Downloaded FSL and looked through some of the source code.
2008.10.28
Looked at getting NIfTI support for lddmm. Worked with Joe to configure caworks version of VTK. Started build. Downloaded the NIfTI code and began reading it over.
2008.10.27
Met with Joe to try to build the CAWorks version of VTK. I had the wrong version (caworks32). Checked out the caworks version.

Looked over Josh Hewitt's lddmm configuration file. Parameters looked legal so I ran lddmm on his data.
2008.10.24
Put BrainWorksTest in a public html area for transfer to a collaborator. Updated the changelog in the BW webpages.

Figured out how to check out caworks code from the Subversion repository. Used:
svn checkout file://localhost/cis/local/svn/caworks32
Tried to build the CIS VTK configuration with CMake in linux and windows.
2008.10.23
Rebuilt and reinstalled lddmmLandmark for x86_64, ia64, i686, and win32. Checked outputs of the architectures for consistency with validation data on lddmm-landmark webpage.
2008.10.22
Finished modifying the "changelog" page of the lddmm-landmark website.

Went through source code for all architectures to make sure all the latest changes are in the repository and the directory for each architecture is the same. Began building each module. Currently there is a directory for each module for each architecture. With CMake that should no longer be necessary. After these builds I will create a single source tree for each architecture, and a build script that checks out the current source and builds for each of our supported architectures.
2008.10.21
Revised the faq, manual, and input file format pages of the lddmm-landmark web pages.

Met with Bruno, Camille, and Josh Hewitt to discuss lddmm, DTIStudio, etc.
2008.10.20
Put the 2.0 validation data on the lddmm-landmark website and started modifying the webpages.
2008.10.17
Worked on re-running the Sphere-To-Ellipse validation set. Modified code to put out the flowed point set in lmk format. Re-tarred the results.
2008.10.16
Worked on re-running the X/H validation set. Re-tarred the results.
2008.10.15
Worked on trying to get CMake to properly search for win32 libraries under the "debug" and "release" profiles, unsuccessfully. The user will have to modify the CMake file to specify the build config. Rebuilt Release version of all libraries and lddmm-landmark-exec.
2008.10.14
Worked on building lddmm-landmark 2.0 for i686, ia64, and win32. Finished building executables for ia64, i686, win32. Continued working on refining CMake configuration files for more seamless cross platform builds.
2008.10.13
Worked on building lddmm-landmark 2.0 for i686, ia64, and win32.
2008.10.10
Worked on building lddmm-landmark 2.0 for i686, ia64, and win32.
2008.10.09
Worked on updating lddmm-landmark to 2.0.

Built fftw libs in /usr/local/intel/mkl for x86_64, i686, and ia64 (with icc). Cleaned up Makefile for Laurent's code and sent it to him with an e-mail explaining linking in mkl.
2008.10.08
Changed the version number of lddmm-landmark to 2.0 and began rerunning validation data sets to update the webpage with the latest changes.
2008.10.07
To learn more about MKL performance settings, read the MKL user's guide at:

/usr/local/intel/mkl/10.0.4.023/doc/userguide.pdf

Chapter 6 is relevant. It looks like multithreading is already programmed into the libraries. One can set the number of processors available through a couple of different env variables (one OMP specific, and one MKL specific), but the libraries (as of version 10) assume a default processor number equal to the number of procs in the computer.
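For the record, the same thing can be pinned from code instead of the environment; a minimal sketch, assuming the usual OpenMP call and MKL's mkl_set_num_threads service function are available in this release:

#include <omp.h>
#include <mkl_service.h>

int main()
{
    // Equivalent in effect to exporting the OMP-specific and MKL-specific
    // env variables before the run; by default both fall back to the number
    // of processors in the box.
    omp_set_num_threads(4);
    mkl_set_num_threads(4);   // assumption: present in this MKL version
    return 0;
}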

Fixed lddmm-landmark to work on ubuntu. It's safer to allocate large arrays on the heap vs. the stack.
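A minimal illustration of the change (the buffer name and size are made up):

#include <vector>

void allocateField()
{
    // A large automatic array can blow the (ulimit-sized) stack:
    //   double field[256 * 256 * 256];          // risky on some systems
    // The heap has no such limit for buffers of this size:
    std::vector<double> field(256 * 256 * 256);  // ~128 MB, zero-initialized
    field[0] = 1.0;
}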
2008.10.06
Determined the issue with lddmm-landmark is that it is not automatically being compiled with CXXFLAGS=-DUSE_MKL, which is what is specified in the CMakeLists.txt. Modified the devScript file that I use to set the development environment so that CXXFLAGS is set when CMake starts up.

I was able to link in Intel's Math Kernel Library, and the initialMomentumMatching program that runs in about 9 minutes without MKL completes in about 7 minutes with MKL. Laurent wants the Makefile I used. Need to set up the mkl fftw libraries in /usr/local/intel for him to link to.
2008.10.03
Continued working on integrating mkl into Laurent's initialMomentumMatching code.

Provided a recent lddmm-landmark for Anthony to use in testing Intel Nehalem. It was performing slowly, so I began checking why.
2008.10.02
Continued working on integrating mkl into Laurent's initialMomentumMatching code.
2008.10.01
Started working on integrating mkl into Laurent's initialMomentumMatching code.
2008.09.30
Worked on building lddmm-volume executable on win32.
2008.09.26
Installed mkl on mtvernon.

Built CISLib, lddmm-math-lib, lddmm-volume-lib on win32.

Finished running the Botteron problem surfaces, put the lddmm-surface outputs in /cis/home/mbowers/public/forTim/botteron_surfaces, and notified Tim by e-mail that the matches were complete.
2008.09.25
Worked on building lddmm-volume on windows.

Ran lddmm-surface on Tim's "problem" Botteron surface. Program seemed to run correctly. Ran each problem pair, then switched the matching direction and ran them again.
2008.09.24
Began working on building lddmm-volume on windows.
2008.09.23
Got code from Laurent that he wants me to link mkl into.
2008.09.22
Got my MATLAB lddmm-volume/lddmm-landmark kimap reader to correctly read in the lddmm outputs in the two different formats. I checked the data after subtracting out the absolute distance from the lddmm-volume output. The values didn't look correct, so I checked the raw data. The first row doesn't look right and I'm not sure why.

vtkdfm(:, 1, 1, 1)

ans =
181.9646
217.7180
0.0624

So the Kimap at the origin is these values, but I think it should be close to 0.0. Not sure why this is, and it looks like it's only the first entries in the array:

vtkdfm(:, 1, 2, 1)

ans =
181.9657
0.7423
0.0550

vtkdfm(:, 2, 2, 1)

ans =
0.9907
0.7399
0.0524

Sent an e-mail to Can, and he told me that the Kimap is replicated in all directions periodically, so it is in effect wrapping around at the boundary. I did some experiments to sub-index the vector field to eliminate the borders, and got better results. I sent the MATLAB program to Nayoung.
2008.09.19
Determined the lddmm-landmark dfm file and the lddmm-volume Kimap.vtk output given to me by Nayoung are of different sizes. Modified Nayoung's LDDMMMatrix.hdr file and re-ran matching using the parameters she used originally.
2008.09.18
Got Nayoung's lddmm-volume and lddmm-landmark test data and began trying to read them in with my MATLAB program.
2008.09.17
After discussion with Nayoung, she said that she wanted to compare the two dfm files from landmark LDDMM and volume LDDMM. I agreed to give her a MATLAB function
function [vtkdfm, rawdfm] = loaddfm(vtkfilename, rawfilename);
The function will read in the Kimap.vtk formatted file output from lddmm-volume and subtract absolute position from it, then read in the comparable raw dfm file put out by lddmm-landmark and apply deformation.
2008.09.16
Tried using caworks to read in Kimap.vtk and dfm file.
2008.09.15
Started working on code for Nayoung to compare .dfm output of lddmm-landmark with Kimap.vtk output from lddmm-volume.
2008.09.12
Provided a copy of surface matching code to Steve Cheng.
2008.09.11
Sent the large afni surface files to Aastha to run her new surface matching code on.
2008.09.10
Avinash reported a speedup in his svd calculations of about 15X after initial testing.
2008.09.09
Did some more testing of lddmm-volume with mkl fftw linked in. Used a hippocampus volume and discovered that the mkl fftw version of the code performed the match in about 24 minutes, while the non-mkl version finished in about 28 minutes.

Sent a snippet of lddmm-landmark code to Avinash so he could use it to call an mkl singular value decomposition from MATLAB. Helped him link in the necessary mkl libraries.
2008.09.08
While stepping through the code I found where lddmm-volume allocates some intermediate processing buffers. For the velocity buffer, the program allocates two iterations x 10 time steps x 3 dimensions x the image size x the size of a double. 2 x 10 x 3 x 64 x 64 x 64 x 8 = 125829120 = 125 MBytes. A similar allocation for trajectories means roughly 250 MBytes just for those two structures. A size of less than 300 MBytes is not unreasonable given those requirements.
2008.09.05
Built a version of lddmm-volume in debug mode in order to step through the code to study the memory allocation of the program. Anthony stated that the program takes up almost 300 Mbytes when processing the 64x64x64 sphere data, so I am looking to see how memory is allocated.

The compiler defines for a non-MPI MKL make are:
-DNO_IBM -DUSE_FFTW -DNO_MPI -DFLT=double -DWRITEVTK -DANALYZE_IMAGE
2008.09.04
Added the DataMin DataMax parameters to the Segment bw script command.
2008.09.03
Ran some initial testing of lddmm-volume; for the sphere validation set on the CIS lddmm-volume website, the mkl version was a few seconds slower than the non-mkl version.

Tilak decided that they wanted some changes to L1Distance scripting. Made the changes and built a new bwtest.
2008.09.02
Completed linking mkl fftw into lddmm-volume.
2008.08.29
Built fftw for mkl. Had to copy mkl directories over to my home directory, because mkl's fftw build files install to a relative path off the main mkl directory. Changed directories to: mkl/10.0.4.023/interfaces/fftw3xc, then typed:
make libem64t compiler=gnu precision=MKL_DOUBLE
to build into the em64t directory.
2008.08.28
Started trying to link mkl into lddmm-volume.
2008.08.27
Attended MRI Stdio meeting with Susumu. Discussed remote processing prioritization. Meeting minutes are here.
2008.08.26
Finished testing L1Distance. Checked code in and created a new bwtest executable.
2008.08.25
Continued testing L1Distance.
2008.08.22
Finished creating L1Distance class. Started testing L1Distance scripting.
2008.08.21
Continued working on L1 Distance scripting.
2008.08.20
Continued testing the Segmentation memory cleanup code. While stepping through the debugger it appeared that the destructor was not being called, but adding a debug output statement showed that it is. Checked in code and built a new bwtest for Yoshi to test.
2008.08.19
Worked on a scriptable version of BrainWorks L1 distance calculation. Finished working on command syntax parsing code. Code for L1Distance calculation is in a module called MainUtils, which is not a class, and which contains GUI functionality. The L1Dist calc functions that are not GUI are separate and can probably be collected into a new module and implemented as static methods in a new class.

Yoshi found that the segmentation scripting works for about the first 200 or so times, then the program starts to fail, which is a pretty good indicator of a memory leak. After perusing the code, noted that the segmentation that is calculated is never cleaned up in memory after the GUI is closed. The script functionality also fails to delete it. Put some code in the destructor to do so, but for some reason the destructor of the derived class isn't getting called when the GUI is closed.
2008.08.18
Tilak asked for a scriptable version of BrainWorks L1 distance calculation. Began working on command syntax parsing code.
2008.08.08
Finished scriptable version of BrainWorks segmentation. Completed testing, checked code into cvs, built a new bwtest, updated the modules documentation web page, sent an e-mail to Tilak to inform him that code is ready for further testing.
2008.08.07
Worked on a scriptable version of BrainWorks segmentation. Finished working on command syntax parsing code. Started testing scripting and comparing results to GUI based and previous bwtest segmentations.
2008.08.06
Worked on a scriptable version of BrainWorks segmentation. Began working on command syntax parsing code.
2008.08.05
Worked on a scriptable version of BrainWorks segmentation. Finished MultiCompartment redesign. Tested that GUI still works.
2008.08.04
Worked on a scriptable version of BrainWorks segmentation.
2008.08.01
Worked on a scriptable version of BrainWorks segmentation. Started working on breaking down MultiCompartment into a base class and a derived GUI class.
2008.07.31
Worked on a scriptable version of BrainWorks segmentation. Separated HistLUT into a base class and a derived GUI class.
2008.07.30
Worked on a scriptable version of BrainWorks segmentation.
2008.07.29
Worked on a scriptable version of BrainWorks segmentation. Segmentation as performed by BrainWorks now consists of two GUI classes (HistLUT and MultiCompartment) working together to collect the necessary input from the user, then segmenting. It is necessary to break apart each of these classes into a base class that contains the methods required for segmentation, and the data members those methods access, and a derived class that contains GUI functionality. The methods in MultiCompartment that reference the Histogram will be modified to operate on an input reference to a Histogram base class.
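Schematically the split looks like this (a sketch only; the Base/GUI class names are placeholders layered on the existing MultiCompartment and HistLUT classes):

class HistLUTBase;   // histogram base class, GUI-free

// Base class holds the data members and segmentation methods, no GUI.
class MultiCompartmentBase
{
public:
    virtual ~MultiCompartmentBase() {}
    // Segmentation operates on the histogram base class so the scripted
    // path and the GUI path share the same code.
    void segment(HistLUTBase &histogram);
protected:
    // ... data members the segmentation methods need ...
};

// Derived class adds only the GUI functionality on top of the base.
class MultiCompartmentGUI : public MultiCompartmentBase
{
public:
    void buildDialog();   // GUI-only behavior stays in the derived class
};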
2008.07.28
Worked on a scriptable version of BrainWorks segmentation. Trying to figure out how to separate actual segmentation code from GUI functionality.
2008.07.24
Started working on a scriptable version of BrainWorks segmentation.
2008.07.23
Ran some tests on lddmm-landmark comparing the MKL compiled and linked version with a straight vxl type implementation. E-mailed the following to Anthony:
With 237 landmarks the MKL version ran in 10 minutes, 28 seconds. The non-MKL version ran in 2 hours, 45 minutes, and 9 seconds.
2008.07.22
Experienced problems with running lddmm-landmark on large data sets. The program had an issue with the size of the work buffers I was providing for it. There is no explicit rule for specifying this size, and the documentation states that bigger buffers are better for performance, but apparently the size I specified was too large. The documentation stated that if the call to dgesdd is made with a buffer size of -1, the program will return the optimal buffer size. I added this additional call to the code, and the program ran successfully.
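For reference, the two-call workspace-query idiom looks roughly like this (a sketch; in the real build the prototype comes from MKL's lapack header rather than being declared by hand):

#include <algorithm>
#include <cstddef>
#include <vector>

// Fortran-style LAPACK prototype, written out here only so the sketch is
// self-contained.
extern "C" void dgesdd_(const char *jobz, const int *m, const int *n,
                        double *a, const int *lda, double *s,
                        double *u, const int *ldu, double *vt, const int *ldvt,
                        double *work, const int *lwork, int *iwork, int *info);

void svdWithWorkspaceQuery(std::vector<double> &a, int m, int n)
{
    std::vector<double> s(std::min(m, n));
    std::vector<double> u(static_cast<std::size_t>(m) * m);
    std::vector<double> vt(static_cast<std::size_t>(n) * n);
    std::vector<int> iwork(8 * std::min(m, n));
    int info = 0;

    // First call with lwork = -1: dgesdd only reports the optimal work size.
    double workQuery = 0.0;
    int lwork = -1;
    dgesdd_("A", &m, &n, &a[0], &m, &s[0], &u[0], &m, &vt[0], &n,
            &workQuery, &lwork, &iwork[0], &info);

    // Second call with the recommended buffer actually computes the SVD.
    lwork = static_cast<int>(workQuery);
    std::vector<double> work(lwork);
    dgesdd_("A", &m, &n, &a[0], &m, &s[0], &u[0], &m, &vt[0], &n,
            &work[0], &lwork, &iwork[0], &info);
}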

Started performance comparison runs for lddmm-landmark with and without MKL on mtvernon.
2008.07.21
As far as I can tell the only way to get CMake to create a properly statically linked build of lddmm-landmark is to specify -static in the link flags, then go into the automatically generated make file and take out the -rdynamic flags that CMake adds to the link line. Built a version of lddmm-landmark with all the mkl calls taken out.

Completed Anqi's travel request, faxed receipts, and had her sign request.
2008.07.18
Built a dynamically linked version of the code that successfully calls mkl lapack. Started working on building a static version. Added both the desired link lib specifications (static and dynamic) to the CMake file, to be commented out by the user.
2008.07.17
Having troubles with the mkl call. It's the first one I've implemented from the lapack interface of mkl (I've used CBLAS in the past). It seg faults on the call itself, but in code that's trying to load dynamic libraries. Spent the rest of the day reading through MKL documentation to debug the build configuration.
2008.07.16
Added a lapack svd call to LmkNewtonsMethod.

Got an e-mail from Geshri Gunasekera at Northwestern about getting a more recent version of BW. Copied one over for her.
2008.07.15
Started working on producing a comparison of lddmm-landmark performance with and without MKL. This is for the CVRG group. Decided to try to replace the VNL SVD call with a call to a lapack routine so I can link in MKL. The desired lapack function is called dgesdd.

Tilak sent a tutorial on BW AKM segmentation. He wants the functionality to be scriptable for a study of 3000 brains.
2008.07.14
Worked on lddmm-volume CMake file. Looked at how the code is built with MPI and what I need to do to produce a CMake file that specifies different types of builds.
2008.07.11
Built lddmm-volume-lib for x86_64 with a CMake config file. Started creating a CMake file for the src directory of lddmm. This directory will contain executable code for lddmm that is not test or utility in nature.

Worked on travel expense requests for Anqi and Nayoung via SAP.

Slightly smaller jobs ran today on mtvernon, but still required more physical memory than was available. Used het again.
2008.07.10
Copied Tim's lddmm code directory over and started comparing changes he had made to what was in the cvs directory. Several files were changed, but the changes consisted mostly of some reformatting and some debug output statements. The makefile was the only file whose changes warranted an update in cvs.

Worked on travel expense requests for Nayoung and Anqi via SAP.

Noticed significant degradation of performance of my desktop machine mtvernon. Two instances of mm_lddmm were running on the machine, and each required about 16G of memory. Used het.
2008.07.09
Got a few sheets of paper from Can about the general solution of apply deformation for volumes and started to look at how to apply the changes to Can's existing code.
2008.07.08
Discussed at length "apply deformation for volumes" with Can. We discussed extending his current code to provide a general solution to the problem.

Rebuilt lddmm-landmark with covariances for Nayoung (the previous build was in debug).
2008.07.07
Discussed at length a "sulcal depth calculation" with Can. Had a similar discussion with Tilak in the past. Can and I discussed what a proper script command to do this calculation would look like in terms of input and output files. Can agreed to try to perform the calculation using the GenCurve script function and I agreed to write a better function if he couldn't make GenCurve work.
2008.07.03
Determined that the plan for creating Array3D images from vtk data posed a few problems - there is no origin to Array3D (or AnalyzeImage for that matter), and the Array3D doesn't recognize resolution differences in different coordinates. So Can's apply deformation code, which operates on Array3D objects, doesn't have a concept of R3 space. Even extending the solution to operate on AnalyzeImage won't work, because there is no concept of the origin.
2008.07.02
Continued working on apply deformation for volumes. Began testing code for creating Array3D (from lddmm-math-lib) from vtkImageData. Wrote a test program to read in the vtk data, then create the Array3D then write the data back out as an Analyze file. Visualized results with CAWorks. Results looked erroneous.
2008.07.01
Worked with Tilak and Marin Richardson of KKI to help access anonymous ftp so Marin could transfer data sets to Tilak.

Worked with Ray G. to troubleshoot CIS wireless network issues.

Worked with Dawn to help her fix her SAP printing problems.
2008.06.27
Continued working on Apply Deformation for Volumes.
2008.06.26
Renamed the VTKUtils class in LDDMM namespace to VtkPointUtils. Added a new class called VTKVolumeUtils for new apply deformation to volume code.
2008.06.25
Debugged CAWorks interface to Apply Deformation for Points with Joe.
2008.06.24
Stephanie A. presented image files with extension .mda and .mhd, which she wanted to read into a MATLAB program to do some processing. I used CAWorks to create legacy VTK image files, then modified a program she had for reading KiMap files into one that could read a vtk image file and produce a 3D array of the voxel values.
2008.06.23
Worked on LddmmVtkUtils to provide a function for converting vtkImageData into Analyze/Array3D class objects (from lddmm-math-lib).
2008.06.19
Continued working on apply deformation for volumes.
2008.06.18
Continued working on apply deformation for volumes. Elected to try to use lddmm-math-lib (basic classes pulled from lddmm volume), then build a lib from Can's code, which performs apply deformation on lddmm volumes, then link into lddmm-common for the code that produces the Hmap and Kimap from lddmm output.
2008.06.17
Worked with Avinash to help him use VNL for some math calculations he wanted to do in MATLAB. He wanted to perform an SVD, but call VNL from MATLAB. I helped him make the VNL SVD call, but suggested MATLAB would probably be faster and more accurate. It was. I recommended using MKL for the call and he said he would look into it.
2008.06.16
Began creating libraries for apply deformation for volumes.
2008.06.12
Checked in bw modifications for SurfaceExtract.
2008.06.11
Finished modifications for the bw scripting module SurfaceExtract, one of the original modules from the BW scripting wishlist. Received test surfaces from Michael Harms, and verified the results.
2008.06.10
Worked with Can to fix and enhance lddmm-dti code. Fixed a seg fault issue with the code, and got cvs working for Can so he could get the code and add a new output file to the program.
2008.06.09
Finished re-architecting all lddmm point based libs. All lddmm point based matching programs and point based apply transformation classes now have a common parent class which contains trajectories and momentums and also provides a deformation energy value. Lddmm solutions and point based deformations can now be passed around equivalently.
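Schematically, the shared parent exposes something like the following (names here are illustrative of the design, not the actual class names in cvs):

#include <vector>

// Every point-based matcher and apply-transformation class exposes its
// trajectories, momentums, and energy the same way, so solutions and
// point-based deformations can be passed around interchangeably.
class PointSetDeformationBase
{
public:
    virtual ~PointSetDeformationBase() {}

    // Point trajectories, indexed [timestep][point coordinate].
    virtual const std::vector<std::vector<double> > &trajectories() const = 0;

    // Momentum vectors per time step, same layout as trajectories().
    virtual const std::vector<std::vector<double> > &momentums() const = 0;

    // Deformation (regularization) energy of the solution.
    virtual double deformationEnergy() const = 0;
};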

Completed the apply deformation interface that I wanted to integrate into CAWorks and checked it into cvs. Sent e-mail to Joe with instructions for integrating apply-deformation into CAWorks.
2008.06.06
Continued working with Nayoung to help her produce test output for her meeting to discuss her thesis proposal with her advisors and other professors at CIS.
2008.06.05
Wrote CMake files for lddmm-dti tensor and vector and installed under /cis/project/software. Modified the INSTALL file. Checked modifications into cvs repository.
2008.06.04
Worked with Nayoung to help her make some verification data sets for lddmm-landmark with covariances.

Remade lddmm-dti tensor and vector using new CISLibs and installed under /cis/project/software. Added an INSTALL file.
2008.06.03
Worked with Tilak on modifying fibretracking code for DTIStudio.
2008.05.30
Attended the KKI retreat at Mt. Washington. Listened to talks about imaging projects at Hopkins.
2008.05.29
Continued working on the lddmm-landmark with covariances testing and verification.
2008.05.28
Found an issue with the lddmm-landmark with covariances. I missed one of the changes Laurent asked for (he initialed the other changes and I got into the habit of searching for his initials). The change is in the final calculation of the error.
2008.05.27
Produced some lddmm-landmark results for Nayoung using her covariance data sets. Pointed Nayoung to the results and asked her to verify that the answers looked good.
2008.05.26
Continued working on Nayoung's covariance upgrades.
2008.05.22
Started working on GenCurve script command for BW. Having some trouble getting BW to run on mtvernon. Needed to make the bw and bwtest script point to the CentOs 4 libraries.
2008.05.21
Rebuilt the "diffeo" program, which performs ODE 4,5 on surfaces. Made modifications to Makefiles to use CISLib and lddmm-common. Showed Laurent how to make the program.
2008.05.20
Had a lot of difficulty with Nayoung's data sets and the DOS line endings they contain. Worked on bulletproofing the code for these problems. Added error checking so the user is informed if the landmark sets can't be read in. The program runs correctly now and produces output, but crashes upon exit when the data file has DOS line endings. Started debugging this issue.
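The bulletproofing mostly amounts to tolerating a trailing carriage return on each input line; a minimal sketch (the helper name is made up):

#include <istream>
#include <string>

// Read one line and strip a trailing '\r' so landmark files saved with DOS
// line endings parse the same as Unix ones.
bool getCleanLine(std::istream &in, std::string &line)
{
    if (!std::getline(in, line))
        return false;
    if (!line.empty() && line[line.size() - 1] == '\r')
        line.erase(line.size() - 1);
    return true;
}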

Worked with Tilak to show him how VS works, and the various components of a VS solution. He wants to make modifications to the Fibre Tracking part of DTIStudio.
2008.05.19
Got a new machine today (mtvernon) and Anthony showed me how it was set up and how to operate the VMWare.

Rebuilt lddmm-landmark in optimized mode and ran Nayoung's data sets again. The results of the processing were not what I expected because the header of the lmk file didn't indicate that the file contained covariances. Changed the files to be the correct format and started debugging again.
2008.05.15
Finished debugging lddmm-landmark with covariances and ran some of Nayoung's test data.
2008.05.14
Finished working on adding the CurveGen script command. Did some testing but asked Lei to also check it out. He will not be able to do so for some time. The functionality is in bwtest (new version number), and the website is updated. After Lei runs his data through, I will check in the changes to the CIS cvs repository.

Continued working on implementing the valid flags in the Landmark sets.
2008.05.13
Continued working on adding the CurveGen script command.
2008.05.12
Lei Wang asked for a scriptable version of the SurfTrack functionality, so I began working on adding the CurveGen script command.
2008.05.09
Sent some examples of landmark data sets with covariances in it to Nayoung so she could build her data sets in the correct format. She also expressed a desire to use the valid flag field in the data files, which is currently ignored. Began making changes for that implementation.
2008.05.08
Met with Fijoy and Can to discuss lddmm and applying a volume transformation to a DTI data set. Can agreed to help register data sets in preparation for matching. I showed Fijoy how to use Landmarker to landmark his volume data for registration. I pointed him to the MRIStudio website so he could register for and download the program.

Made modifications to lddmm-landmark-lib to implement Nayoung's landmark matching with variabilities functionality. Worked on debugging changes.
2008.05.07
Sent e-mails to Can to ask for lddmm-dti data sets, and to set up a meeting with Fijoy V. to discuss lddmm deformations of dti data.

Made modifications to lddmm-landmark-lib to implement Nayoung's landmark matching with variabilities functionality. Worked on debugging changes.
2008.05.06
Made modifications to lddmm-landmark-lib to implement Nayoung's landmark matching with variabilities functionality. Worked on debugging changes.
2008.05.05
Built lddmm-math-lib in Win32.

Made modifications to lddmm-landmark-lib to implement Nayoung's landmark matching with variabilities functionality.
2008.05.02
Began making mods to lddmm-utils (Can's code) for dependency on lddmm-math-lib.
2008.05.01
Continued working on lddmm-volume architecture. Trying to compile lddmm-volume-lib. Worked out dependencies to lddmm-math-lib, but some issues of old code remain.
2008.04.30
Continued working on lddmm-volume architecture.

Checked in a fairly original version of Can's code into the cvs module lddmm-utils. Began making mods for dependency on lddmm-math-lib.
2008.04.29
Can delivered his code. Started cleaning it up to get rid of executables, added some useful targets to the Makefiles.

Sent an e-mail to Laurent asking for clarification of lddmm-landmark mods for covariances.
2008.04.28
Worked on re-organizing lddmm-volume code to break it out into the following modules:
2008.04.25
Looked through Can's code to attempt to determine how much was used from lddmm, and could be separated out into its own library. Discussed with Can a plan for his organizing his code so it could be checked into cvs.
2008.04.24
Cleaned up code and created new libraries for Xin to link to Landmarker for their new release.
2008.04.23
Stacy Suskauer tested the new Landmarker; it successfully determined the Machine ID.
2008.04.22
Searched the web for ideas about how to get a Machine SID without relying on the name of a standard account. Found a few ideas and implemented them and they seemed to work in my testing here. Sent a new library to Xin for her to integrate and send Stacy a new Landmarker to test.
2008.04.21
Finished debugging Landmarker.cpp for variability IO. Looked at Laurent's modified LmkNewtonsMethod.cpp file and had questions about the implementation, so sent him an e-mail asking for clarifications.

Started four new lddmm-surface jobs on the afni data, with sigmaV values of 15.3, 20.0, 25.0, 30.0.
2008.04.18
Continued working on reading landmark files with variabilities in them. Started debugging code.
2008.04.17
Stacy Suskauer reported the following error from Landmarker:
"Error getting machine ID: Bad call to get SID size, last error = 1332."
This is a "MOT_MAPPED" error, i.e., the account (Administrator) does exist. Apparently some people change the name of this standard account for security purposes.
2008.04.16
Built and sent a 64 bit Release version of the lddmm-landmark-libs to Xin for her to build and give to Stacy.
2008.04.15
Continued working on 64 bit version of test program for Landmarker registration. Problems building x64 version of vtk. Was able to create the version by building to my own C:\ drive (got errors building to target over the network), and by removing references to 32 bit architectures from CMakeLists settings. Decided against trying to do the same for blitz, vxl, etc., and just asked Xin to create a 64 bit version of Landmarker with the libs I sent her.

Worked on a vtk/MATLAB interface. Found an example of a mex file that wraps a vtk function. Got it to compile, then started modifying it to do what I wanted (write a vtkPolyData xml stream). Function is for Nayoung to use in creating her landmark files with variations.
2008.04.14
Updated BW modules webpage.

Continued working on 64 bit version of test program for Landmarker registration. Problems building x64 version of vtk.
2008.04.11
Finished Curvature BW script command. Checked in mods, changed version number, and created new bwtest.
2008.04.10
Worked on Curvature BW script command.
2008.04.09
Added additional error reporting to Landmarker registration test program and sent the executable to Stacy Suskauer. It failed immediately because she is on a 64 bit machine. Started trying to build a 64 bit test program.
2008.04.08
Anqi looked over lddmm-surface results and recommended raising the sigma_v value on subsequent runs, then lowering the sigma_w value and checking for improvements in the match.

Looked into whether the Distance program understood the resolution of the images was 0.5 mm instead of 1.0. Added some debug printouts, cleaned up code and configuration file, and sent code to Suraj.

Sent an e-mail to Xin and Stacy Suskauer to verify that the KKI people can install Landmarker. Stacy continues to get an error when the program tries to read the machine ID. She has agreed to run a test program for me to get more error reporting.

Finished checking in lddmm-surface-lib updates for error and energy value reporting.
2008.04.07
Sent out e-mail of compiled lddmm-surface output.

Began looking over lddmm-surface-lib changes to commit to cvs repository.
2008.04.04
Continued compiling output of lddmm-surface on the afni surfaces.
2008.04.02
Began looking at the output of lddmm-surface on the afni surfaces. Plotted the error and energy values by iteration and started verifying the deformed templates. Began an e-mail by compiling the processing steps and the outputs.
2008.04.01
Had a meeting with Dr. Miller to discuss caworks and what should be done to make it a useful piece of software. Discussed what would be needed for a user like Can to make use of the application for his research. My action items are to make sure the landmark matching to surface matching pipeline exists and can be used, and to look into porting lddmm-volume to windows.
2008.03.31
Ran performance testing on lddmm-surface. Compared the installed version (created before CMake configuration), the CMake created version using the libraries integrated into caworks, and the same version with a number of compiler directives in Marc Vaillant's original makefile. The different speeds on the same problem were:

Installed version:
Start: Mon Mar 31 14:18:18 EDT 2008
Finish: Mon Mar 31 14:27:04 EDT 2008
Elapsed: 8:46


CMake version:
Start: Mon Mar 31 14:56:14 EDT 2008
Finish: Mon Mar 31 15:05:17 EDT 2008
Elapsed: 9:03


Compiler optimization version:
Start: Mon Mar 31 17:06:56 EDT 2008
Finish: Mon Mar 31 17:15:20 EDT 2008
Elapsed: 8:24


It appears some small performance improvements can be achieved through optimizing compiler settings.
2008.03.27
Worked on class Landmarks to add variabilities.
2008.03.26
Worked on class Landmarks to add variabilities.
2008.03.25
Finished a program called Distance which calculates the ContourMeanDistance and ContourDirectedMeanDistance using itk. Copied the program into /cis/project/software/shape_utils/x86_64.

Worked on landmark variability for Nayoung.
2008.03.24
Created a program called Distance which calculates the ContourMeanDistance and ContourDirectedMeanDistance using itk.
2008.03.21
Finished and tested a C++ program to perform the Hausdorff distance function between two image files. Sent the results to Tilak. He asked that I produce similar programs to calculate the ContourMeanDistance and the ContourDirectedMeanDistance using itk.
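The core of the Hausdorff program is small; a sketch along these lines (3D unsigned char label images assumed, error handling omitted):

#include <iostream>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkHausdorffDistanceImageFilter.h"

int main(int argc, char *argv[])
{
    if (argc < 3) return 1;

    typedef itk::Image<unsigned char, 3>    ImageType;
    typedef itk::ImageFileReader<ImageType> ReaderType;

    ReaderType::Pointer reader1 = ReaderType::New();
    ReaderType::Pointer reader2 = ReaderType::New();
    reader1->SetFileName(argv[1]);
    reader2->SetFileName(argv[2]);

    // Computes the Hausdorff distance between the two input images.
    typedef itk::HausdorffDistanceImageFilter<ImageType, ImageType> FilterType;
    FilterType::Pointer filter = FilterType::New();
    filter->SetInput1(reader1->GetOutput());
    filter->SetInput2(reader2->GetOutput());
    filter->Update();

    std::cout << "Hausdorff distance: " << filter->GetHausdorffDistance() << std::endl;
    return 0;
}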

Spoke with Stephanie about helping me compile a brief description of the lddmm-surface algorithms.
2008.03.20
At Tilak's request, worked on a C++ program to link in ITK modules to perform the Hausdorff distance function between two image files.
2008.03.19
Joe relinked caworks for linux with the Linux system glibs instead of the ones he produced from the source code and caworks ran better but still crashed. After looking at some code I determined that the USE_MKL environment variable was not set during the compile/link of the libraries, so the prototypes did not match the calls to mkl, and hence the run-time seg fault. Joe rebuilt with that env var, and caworks for Linux executes properly.

Nayoung needs to add a 3x3 covariance matrix to each landmark point in landmark matching. The purpose is to add directional variability to the data attachment term. Met with Nayoung and Dr. Younes to discuss the requirements. Decided to use a VTK XML format of landmarks to add this data. It can be incorporated into a VTKPolyData structure. I sent the C++ code that performs the Newton's method of Landmark Matching to Dr. Younes so he could add the code modifications directly.
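One way to carry a 3x3 covariance per landmark inside vtkPolyData is a nine-component point-data array; a minimal sketch (the array name, point, and values are made up; SetInput vs. SetInputData depends on the VTK version):

#include <vtkSmartPointer.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPointData.h>
#include <vtkDoubleArray.h>
#include <vtkXMLPolyDataWriter.h>

int main()
{
    vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
    points->InsertNextPoint(10.0, 20.0, 30.0);   // one landmark, made-up coords

    // Nine values per landmark: the row-major 3x3 covariance of that point.
    vtkSmartPointer<vtkDoubleArray> cov = vtkSmartPointer<vtkDoubleArray>::New();
    cov->SetName("covariance");
    cov->SetNumberOfComponents(9);
    double c[9] = { 1, 0, 0,  0, 1, 0,  0, 0, 1 };
    cov->InsertNextTuple(c);

    vtkSmartPointer<vtkPolyData> landmarks = vtkSmartPointer<vtkPolyData>::New();
    landmarks->SetPoints(points);
    landmarks->GetPointData()->AddArray(cov);

    vtkSmartPointer<vtkXMLPolyDataWriter> writer =
        vtkSmartPointer<vtkXMLPolyDataWriter>::New();
    writer->SetFileName("landmarks_with_covariance.vtp");
    writer->SetInput(landmarks);   // SetInputData() in newer VTK
    writer->Write();
    return 0;
}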
2008.03.18
Worked extensively with Brad Beyenhof to determine what was causing the Landmarker registration problems he was experiencing. After giving him a lengthy set of instructions for verifying what he was doing, we determined he was not using the Landmarker Xin had posted for him on yousendit. Once he downloaded and tested with that executable, he was able to register successfully as a restricted user.
2008.03.17
Brad Beyenhof, the sys admin at UCSD, continues to have problems making Landmarker register properly. He reports that the software continues to work as before, which I don't think is possible given the changes I've made.

Joe returned and we discussed the problems with caworks/lddmm integration. He added a new install package while on leave, so I used that to create a functioning caworks on windows. The version for linux continues to fail.
2008.03.14
Built new CISLib and LDDMMLandmark libraries for linking with Landmarker in Release and debug, and for 32 bit and 64 bit archs. Gave them to Xin and helped her with linking/testing issues.
2008.03.13
Continued working on the lddmm-landmark registration for Landmarker. Finished testing with RegistrationTest program. Made mods to build Landmarker with new CISLib and licensing modifications. Tested Landmarker licensing scheme with Admin User and Restricted User. Checked in modifications to CISLib and LDDMMLandmark libraries.
2008.03.12
Continued working on the lddmm-landmark registration for Landmarker. Finished modifications to Landmarker code and began testing with RegistrationTest program.
2008.03.11
Continued working on the lddmm-landmark registration for Landmarker.
2008.03.10
Continued working on the lddmm-landmark registration for Landmarker.
2008.03.07
Checked on lddmm-surface runs on io13. Both the full surface and the decimated surface matches ran through the weekend. The decimated surface match was on iteration 15 (of 100). Reran CMake to turn on full optimization of the libraries and the executable, then ran another instance of lddmm-surface-exec on the decimated surfaces.

Worked on the file based version of lddmm-landmark registration. Worked on reporting registration errors to the LddmmLandmark user.
2008.03.06
Finished coding the output access functions for error and energy. Rebuilt lddmm-surface-exec with the modified library and tested it on hippocampus data.

Started following Tim's AFNI surface matching procedure he described in:
https://wiki.cis.jhu.edu/project/afni/registration/test.
Used Brainworks for the landmarking of the surfaces. These surfaces appear to be grey/white boundary vs. the grey/CSF surfaces described in the wiki, so I tried to place landmarks in anatomically comparable positions on the surfaces. Converted the landmarks to runSim format, then ran runSim on the data, and applied the transformations to the original surfaces. Started lddmm-surface-exec on the full surfaces. Using CAWorks, created surfaces with 75% decimation of vertices, and ran lddmm-surface-exec on those surfaces.
2008.03.05
Worked on processing afni surfaces with lddmm-surface. Worked on building lddmm-surface-exec with CMake. Worked on verifying lddmm outputs (errors and costs by iteration of gradient descent optimization). Discussed lddmm terminology with Stephanie A. The following terms correspond to the deformation regularization term of the lddmm functional: cost, energy, distance. The data attachment term is called "error". Worked on lddmm-surface output access functions for these values.
2008.03.04
Worked on error messaging for lddmm-registration.
2008.03.03
Worked on error messaging for lddmm-registration.
2008.02.29
Was able to update CMake config files for lddmm-landmark-exec, and link with rebuilt vtk for win32.

Built test program for lddmm-landmark registration process and began making modifications to registration process for non-admin users.

Continued working with Malini to determine L1 Distance of her hand segmentation. Used bw to calculate L1 on an older version of the hand segmentation. Essentially what we discovered is that BW will not calc an L1 distance with an AKM segmentation using the 3 peak mode, so we need to use the 3/5 peak method, or another kind of segmentation. We also discovered Analyze added a number of tag values that were not expected in a 3 peak hand segmentation. Malini determined the tissue types of the erroneously tagged voxels, and I wrote a program to set the voxels to the correct tag values.
2008.02.28
Worked on creating a test version of lddmm-landmark registration process. Rebuilt CISLib, lddmm-common, lddmm-landmark-lib for win32. Created a VC project that just performed the lddmm registration process. It wouldn't link to our installed win32 version of vtk. A function was missing. Worked on rebuilding lddmm-landmark-exec. Had to implement windows functionality and vtk for CMake. Program had the same problem linking as DTIStudio. Rebuilt vtk for win32.

Worked on L1Distance calculation issues in BrainWorks for Tilak. Reworked a Makefile to produce some c++ utilities he located. Helped him locate alternate version of some of the supporting classes. Built debug versions of the utilities and helped Malini detect potential problems with the Hand Segmentation data and the statistics files.
2008.02.27
Implemented momentum vectors in lddmm-landmark flow time steps. Finished producing code for integration of lddmm-landmark-lib and lddmm-surface-lib into caworks.
2008.02.26
Implemented momentum vectors in surface flow time steps.
2008.02.25
Finished vtk xml output of deforming template at each time step.
2008.02.22
Worked on vtk xml output of deforming template at each time step.

Worked on L1Distance calculation issues in BrainWorks for Tilak.
2008.02.21
Worked on lddmm-surface-exec code.

Worked on L1Distance calculation issues in BrainWorks for Tilak.
2008.02.20
Built lddmm-surface-lib for windows. Checked in changes.
2008.02.19
Worked on vtk xml output of deformed template from surface matching. Checked in lddmm-surface-lib.
2008.02.18
Worked on lddmm-surface pseudocode.

Worked on vtk xml output of deformed template from surface matching.
2008.02.15
Worked on lddmm-surface pseudocode.
2008.02.14
Worked on lddmm-surface pseudocode.
2008.02.13
Finished exposing momentums and trajectories calculated by lddmm-surface.
2008.02.12
Worked on exposing momentums and trajectories calculated by lddmm-surface.
2008.02.11
Built a library of the lddmm-surface functionality. The interface expects the target and template surfaces in vtk xml stream format, or in a file of any of the formats supported by ivcon, or gts format. Created an executable program that uses this library to match two surfaces.
2008.02.08
Worked on building Makefile for several surface process utility programs provided by Tilak.
2008.02.07
Ran lddmm-surface on vtk xml data.
2008.02.06
Continued debugging lddmm-surface-lib vtk xml stream interface.

Started writing pseudocode for lddmm-surface.
2008.02.05
Found a problem with using the surface copy routine in gts to copy a surface. It could be a function of vertex objects not original to gts.

Modified version numbers for lddmm-surface and BrainWorks. Each new lddmm-surface binary will get a new version number, as will BrainWorks binaries. BrainWorksTest will have a three digit beta number appended to the version number.
2008.02.04
Continued debugging lddmm-surface-lib vtk xml stream interface. Having trouble creating SurfaceMatching object.
2008.02.01
Continued debugging lddmm-surface-lib vtk xml stream interface.
2008.01.31
Continued debugging lddmm-surface-lib vtk xml stream interface. Figured out how to get the XML Poly Data Reader to parse a stream.
2008.01.30
Continued debugging lddmm-surface-lib vtk xml stream interface.
2008.01.29
Tested code for all architectures. Verified results. Some variation occurred because we are using a new version of gts (some of the faces are listed in different order). The surfaces appear to be the same during a visual inspection in CAWorks, and every face I checked appears in both files. Repackaged the output results and updated the website, including a change log entry.
2008.01.28
Looked into ivcon code to determine why it was having problems reading byu files. Discovered a variable was changed from double to float. A subsequent string scan call was not modified to reflect the new variable size. Checked in the modifications and started running tests with the repaired code.
2008.01.25
Rebuilt lddmm-surface with a previous version of the gts library. The program continued to fail in the same place. Began checking through the lddmm-surface code to determine why it was failing. Tracked the problem down to a loop in the computations, where an unsigned int was being decremented and then tested for greater than or equal to zero. An unsigned int is always GE 0, so after being decremented from 0 it wrapped around to 2**32 - 1. The variable had been changed from an int to an unsigned int to avoid errors in the windows compiler.
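A minimal illustration of the failing pattern and one safe rewrite (placeholder names, not the actual lddmm-surface loop):
    #include <cstdio>

    void countdown(unsigned int n)
    {
        // Failing pattern: i >= 0 is always true for an unsigned int, so once i
        // reaches 0 the decrement wraps around to 2**32 - 1 and the loop keeps going.
        // for (unsigned int i = n - 1; i >= 0; --i) { std::printf("%u\n", i); }

        // A safe way to count down with an unsigned index:
        for (unsigned int i = n; i-- > 0; )
        {
            std::printf("%u\n", i);
        }
    }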
2008.01.24
Started working on debugging lddmm-surface. A user outside CIS had problems with the code functioning on ia64 and i686 machines. I ran the code here on our validation sets and it failed. Began by rebuilding with the old version of ivcon-CIS we used before Joe's CMake version. lddmm-surface ran better, then experienced problems (seg fault) later in the processing. Began looking through the ivcon code.
2008.01.23
Continued debugging lddmm-surface-lib vtk xml stream interface.
2008.01.22
Started debugging lddmm-surface-lib vtk xml stream interface.
2008.01.18
Continued working on lddmm-surface-exec and lddmm-surface-lib.
2008.01.17
Continued working on lddmm-surface-exec and lddmm-surface-lib.
2008.01.16
Continued working on lddmm-surface-exec and lddmm-surface-lib.
2008.01.15
Created lddmm-surface-exec directory to begin building an exec to test the lddmm-surface-lib API and to create lddmm-surface.
2008.01.14
Continued working on VTKSurfaceReadWrite.
2008.01.11
Created a subclass VTKSurfaceReadWrite of the existing SurfaceReadWrite class in lddmm-surface-lib. The class will take stream data, feed it into the vtkXMLPolyDataReader to create a vtk PolyData structure, then take the data and put it into the ivcon data structures that are used to create a gts surface.
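A minimal sketch of that path, assuming the string-input switch on vtkXMLReader; the surrounding VTKSurfaceReadWrite and ivcon plumbing is omitted:
    #include <string>
    #include <vtkPolyData.h>
    #include <vtkSmartPointer.h>
    #include <vtkXMLPolyDataReader.h>

    // Parse a vtk xml PolyData document held in memory instead of on disk.
    vtkSmartPointer<vtkPolyData> readPolyDataFromString(const std::string& xml)
    {
        vtkSmartPointer<vtkXMLPolyDataReader> reader =
            vtkSmartPointer<vtkXMLPolyDataReader>::New();
        reader->ReadFromInputStringOn();      // read from the string, not a file
        reader->SetInputString(xml.c_str());
        reader->Update();
        return reader->GetOutput();
    }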
2008.01.10
Continued working on vtk xml interface to lddmm-surface. Figured out how to set the stream attribute in the xml reader for the vtk poly data structure.
2008.01.09
Continued working on lddmm-surface interface. Started looking at vtk xml interface.
2008.01.08
Continued working on file interface to lddmm-surface library.
2008.01.07
Worked on coding interface to lddmm-surface library. Worked on interface for file input to lddmm-surface.
2008.01.04
Worked on coding interface to lddmm-surface library.
2008.01.03
Worked on coding interface to lddmm-surface library.
2008.01.02
Worked on coding interface to lddmm-surface library.
2007.12.20
Worked on coding interface to lddmm-surface library.
2007.12.19
Worked on coding interface to lddmm-surface library.
2007.12.18
Finished coding Lddmm Surface Parameters class. Created new local directory for a lddmm-surface-lib module and began writing CMake file to build the library.
2007.12.17
Started coding Lddmm Surface Parameters class.
2007.12.14
Started designing interface for lddmm-surface-lib.
2007.12.13
Modified autoconf files for lddmm-surface to find its dependencies based on environment variables. Built and installed lddmm-surface for x86_64, ia64, and i686.
2007.12.12
Needed to build cmake version 2.4 for ia64. Anthony installed the curses lib and I was able to build it. Used the new version of cmake to build gts on ia64.
2007.12.11
Checked in the gts changes. Built gts for i686, tried to build it for ia64.
2007.12.10
Decided to use the autoconf files to create a config.h.in file that can (via #ifdef) be used in either the windows or Linux versions of gts. Then modified CMakeLists to configure this file. Built gts for x86_64.
2007.12.07
Continued working on configuring and building gts from CMakeFiles.
2007.12.06
Started working on configuring and building gts from CMakeFiles.
2007.12.05
Attended CIS Software Group meeting.

Started working on building lddmm-surface and its dependencies. Began building glib from scratch.
2007.12.04
Built lddmm-landmark for ia64 and x86_64.

Started an install configuration wiki page at: https://wiki.cis.jhu.edu/software/install_config
2007.12.03
Joe and I discussed the problem of building static executables with CMake and he recommended modifying CMake to look for static libraries if the user wants it to. Decided to use the autoconf files for building the executables on Linux. The executables aren't used for the CAWorks/lddmm integration. Modified the autoconf files and built for i686. Checked in the changes.
2007.11.30
Continued to try to force the "make install" target to specify its runtime library path. Continued to try to build a static lddmm-landmark-exec.
2007.11.29
Determined that CMake creates separate link commands for the "make all" and the "make install" target. The "make all" target has the -rpath variable set in it, but the "make install" does not. The ia64 version of lddmmLandmark was built using "make all" in the installation directory, but this adds all the CMake artifacts in the install directory, and doesn't allow me to select which executables I want in the install directory (such as the make_license program).
2007.11.28
Continued to try to use CMake to build lddmm-landmark-exec on x86_64.
2007.11.27
Worked on building lddmm-landmark executable for x86_64. A dynamically linked executable runs on ia64, but the one compiled in x86_64 can't find an mkl library. I have not been able to build a statically linked version by modifying the CMakeLists.txt file, but I can by setting certain variables while running CMake.
2007.11.26
Began building lddmm-landmark dependencies with modified CMakeLists.txt files for x86_64. Built libraries under /cis/project/software/opt/x86_64.
2007.11.20
Finished working on building lddmm-landmark for ia64 using CMakeLists.txt files. Verified function with monkey brain landmark data.
2007.11.19
Continued working on building lddmm-landmark for ia64 using CMakeLists.txt files.
2007.11.16
Continued working on building lddmm-landmark for ia64 using CMakeLists.txt files.
2007.11.15
Started working on building lddmm-landmark for ia64 using CMakeLists.txt files, and installing builds under /cis/project/software/opt and source under /cis/project/software/src. Built vxl 1.9 and blitz 0.9 for ia64.
2007.11.14
Created a timeline/progress table for lddmm integration with CAWorks in https://wiki.cis.jhu.edu/software/caworks/LddmmIntegration
2007.11.13
Helped Steve Cheng use Landmarker to use lddmm-landmark and transform some volume data.

Wrote down some design ideas for message passing between point based lddmm modules and apply deformation. Sent them in an e-mail to Anqi, Tim, Anthony, and Joe.
2007.11.12
Worked with Joe on module configuration for lddmm in CAWorks.
2007.11.09
Finished modifying Apply Deformation to write out velocities. Found some test data that includes a surface and some landmarks, and ran lddmm-landmark on it. Now I need to figure out a way to run apply deformation on the surfaces using the output from the lddmm-landmark. I don't currently know of a good way to read in surfaces to produce point sets, put the point sets into apply deformation, then write out the point sets as surfaces.
2007.11.08
Discussed with Anqi her equations for producing velocities and momentums during Apply Deformation. We don't currently have the ability to produce the momentums so we discussed modifying the Transformation classes to write out the velocities.
2007.11.07
Worked with Jun to help him compile and run a modified lddmm-surface program. Used CAWorks to provide him with test data sets. At 1000 triangles, his surfaces were large for a test data set for lddmm-surface (approx. 1 hour running time). Using CAWorks to read in the CIS byu files, I ran a "Decimate" filter on both surfaces to get a 90% reduction of vertices in the test data sets. lddmm-surface completed the match in less than 5 minutes.
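For reference, roughly the equivalent VTK pipeline for that reduction (vtkDecimatePro is my assumption for what the Decimate filter wraps; treat this as a sketch rather than the exact CAWorks settings):
    #include <vtkDecimatePro.h>
    #include <vtkPolyData.h>
    #include <vtkSmartPointer.h>

    // Reduce a surface by about 90% of its triangles before running lddmm-surface.
    vtkSmartPointer<vtkPolyData> decimateSurface(vtkPolyData* surface)
    {
        vtkSmartPointer<vtkDecimatePro> decimate = vtkSmartPointer<vtkDecimatePro>::New();
        decimate->SetInputData(surface);      // newer VTK; older releases used SetInput
        decimate->SetTargetReduction(0.90);   // remove ~90% of the triangles
        decimate->PreserveTopologyOn();
        decimate->Update();
        return decimate->GetOutput();
    }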

Finished building lddmm-landmark on i686.
2007.11.06
Modified CISLib to remove the dependency on SurfaceFileRepair class in the CMakeLists.txt, Makefile.in, and configure.ac. This removes the unused ivcon dependency from CISLib.

Continued trying to build lddmm-landmark on i686.
2007.11.05
Finished building BrainWorks on CentOS 5.

Continued trying to build lddmm-landmark on i686.
2007.11.02
Started trying to build lddmm-landmark on i686.

Started trying to build BrainWorks on CentOS 5.
2007.11.01
Changed BW version number, built a new BW test. Went through archive of brainworks related e-mail and the cvs log to create a change log page for the CIS BrainWorks webpages.
2007.10.31
Attended CIS Software Group meeting.

Added output byu file to the archive of output files in the lddmm-surface examples page.

Continued making changes to lddmm-surface to read in a file containing initial velocities and use it as a head start to the solution.
2007.10.30
Started making changes to lddmm-surface to read in a file containing initial velocities and use it as a head start to the solution.
2007.10.29
Finished BrainWorks fixes and testing.
2007.10.26
Worked on fixing BrainWorks to handle/produce "correct" byu.
2007.10.25
Built a ConvertSurface executable for Marc Vaillant.

Finished testing true byu for lddmm-surface.

Discussed with Anqi adding initial conditions to lddmm-surface.
2007.10.24
Attended Software planning meeting to discuss six (five) month plans. Mine is to support true byu in lddmm-surface and BrainWorks, then to integrate lddmm-landmark and lddmm-surface into CAWorks.
2007.10.23
Continued working on ivcon and lddmmSurface. Need to read and write true byu. Also need to write out deformed template in the same format as the original template input file.
2007.10.22
Worked on ivcon.

Attended planning meeting based on BIRN all hands meeting report.
2007.10.19
Continued working on ivcon.
2007.10.18
Started fixing ivcon to read "true" byu.

Continued formulating a software development process to institute here at CIS.
2007.10.17
Fixed links on the faq page of the lddmm-landmarks webpages. A backslash was missing in the address spec.

Continued formulating a software development process to institute here at CIS.
2007.10.16
Downloaded DTIStudio from the website and ran the test data from the validation page of lddmm-landmark. Also checked out lddmm-landmark on a mouse brain example.

Began formulating a software development process to institute here at CIS.
2007.10.15
Prepared for and presented a talk on Computational Anatomy for the IT Fair and Vendor Expo.

Started working out where in the lddmm library structure make license code should be configured.
2007.10.12
Finished validation page of lddmm-landmark. Added examples for the sphere to ellipse test (using CAWorks for visualization) and the monkey brain example. Xin Li put her new Landmarker code on the MRIStudio website.
2007.10.11
Worked on lddmm-landmark website. Updated manual page with new options, output file types, etc. Updated faq with DTIStudio information. Updated validation page with X to H example.
2007.10.10
Spoke to Dr. Miller to confirm that results of lddmm-landmark were acceptable.

Recreated the lddmm-landmark-lib, the lddmm-common, and the CISLib libraries, debug and release, win32 and x64. Created a distribution for Xin Li, e-mailed her the code.

Attended Software Strategy meeting.

Attended DTI Studio working group meeting.
2007.10.09
Worked on vectorized version of ApplyTransformation.

Got an e-mail from Dr. Miller asking to clarify some of the lddmm-landmark results, because of a discrepancy in the error levels between the MATLAB code and the C++ code. Explained the discrepancy as a function of which data set was template and which was target, and re-ran the code for the same comparison, and the discrepancy disappeared.
2007.10.08
Checked in all the modifications in Apply Transformation into the lddmm-common module in the CIS cvs repository. Checked in related changes in the CISLib, lddmm-landmark-lib, and lddmm-landmark-exec modules. Checked out all the changes into the win32 development areas, reworked the CMake configuration files to use environment variables instead of hardcoded paths (where possible) and rebuilt win32 executables.

Worked on vectorized version of ApplyTransformation using Blitz++ and MKL to try to improve performance and accuracy of the ApplyTransformation function.
2007.10.07
Found two problems with the ApplyDeformation:
  1. During matching, I was asking the integrator to return the solution of the ODE (the trajectories and momentums) at 10 points. This was incorrect, because it does not take the end points into account, so dt is not equal to 1/timeSteps.
  2. While flowing the points, I included the time T = 1.0 in my integration.
I changed the code to fix these issues. The results I've gotten with the modifications (on Linux):

Sigma: 2.5
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   0.639321               0.67677
flowed points vs. deformed template    0.424087               0.37528
flowed points vs. target               0.567173               0.37528
Sigma: 5.0
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   0.108183               0.09583
flowed points vs. deformed template    0.219096               0.18414
flowed points vs. target               0.231199               0.18407
Sigma: 7.5
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   0.0572696              0.09583
flowed points vs. deformed template    0.146253               0.26656
flowed points vs. target               0.144779               0.26624
Sigma: 10
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   3.72897                0.06538
flowed points vs. deformed template    0.158509               0.32225
flowed points vs. target               3.61231                0.32193


While matching the H to the X, these are the results:

Sigma: 2.5
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   0.681189               0.67677
flowed points vs. deformed template    0.642074               0.37528
flowed points vs. target               0.448227               0.37528
Sigma: 5.0
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   0.095831               0.09583
flowed points vs. deformed template    0.107201               0.18414
flowed points vs. target               0.151986               0.18407
Sigma: 7.5
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   0.0548289              0.09583
flowed points vs. deformed template    0.193688               0.26656
flowed points vs. target               0.210738               0.26624
Sigma: 10
Type                                   lddmm-landmark linux   Younes
lddmm-landmark error                   0.0653777              0.06538
flowed points vs. deformed template    0.316311               0.32225
flowed points vs. target               0.296179               0.32193
2007.10.05
Dr. Miller said the discrepancy in the results between my ApplyDeformation and Laurent's code is too great and he asked that the problem be tracked down. I proposed the following plan:
  1. Determine why lddmm-landmark is no longer producing the same results as Laurent's code.
  2. Implement MKL vector operations for the Apply Deformation.
  3. Step through the MATLAB and C++ code to determine where the discrepancy occurs.
Was able to determine that the discrepancy in the lddmm errors between the codes was that my code was matching the X to the H and the MATLAB code matches the H to the X. When I changed this the errors were the same for most values of sigma.
2007.10.04
Discussed ApplyDeformation with Dr. Younes and he gave me a new drop of his MATLAB code to use for comparison. Sent an e-mail to Anthony, Dr. Miller, and Dr. Younes with the results of the comparison.
2007.10.03
Dr. Younes gave me some new MATLAB code to compare with Apply Deformation, but I was getting the same error numbers as from the previous code.
2007.10.02
Dr. Miller asked for DTIStudio calculations to be verified as follows:
  1. Make modifications to DTIStudio code to get lddmm-landmark to print out the deformation error, flow the original landmarks according to the lddmm trajectories and momentums, then calculate the distance between the deformed landmarks and flowed points.
  2. Reproduce the landmark set that DTIStudio gives to lddmm-landmark by replicating the landmarks for each slice of the volume in the X and H files.
  3. Produce data for sigmas of 2.5, 5.0, 7.5, 10.0, and compare the results.
I was able to:
2007.10.01
Worked on verifying lddmm-landmark calculations in DTIStudio.
2007.09.30
Built a VolumeTransform class constructor around Marc Vaillant's flow code. Used properties of the Gaussian Kernel to determine the subset of the volume grid that will be influenced by each landmark point. The algorithm uses Marc Vaillant's flow code to transform those neighboring points, then marks them as completed. The algorithm continues to look at grid points in the neighborhood of the other landmarks. Performance was not too exciting, but improvements may be available.
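A rough sketch of the neighborhood idea, assuming a cutoff of about 3*sigma and unit voxel spacing (names are placeholders; the flow call itself is Marc's code and is only indicated by a comment):
    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <vector>

    // Flow only voxels within ~3*sigma of some landmark, each voxel at most once.
    void flowNearLandmarks(const std::vector<std::array<double, 3> >& landmarks,
                           const std::array<int, 3>& dims, double sigma,
                           std::vector<bool>& done)
    {
        const double radius = 3.0 * sigma;    // Gaussian influence is negligible beyond this
        for (const auto& lm : landmarks)
        {
            int lo[3], hi[3];
            for (int d = 0; d < 3; ++d)
            {
                lo[d] = std::max(0, (int)std::floor(lm[d] - radius));
                hi[d] = std::min(dims[d] - 1, (int)std::ceil(lm[d] + radius));
            }
            for (int z = lo[2]; z <= hi[2]; ++z)
                for (int y = lo[1]; y <= hi[1]; ++y)
                    for (int x = lo[0]; x <= hi[0]; ++x)
                    {
                        size_t idx = ((size_t)z * dims[1] + y) * dims[0] + x;
                        if (!done[idx])
                        {
                            // flowPoint(x, y, z);   // hypothetical call into the flow code
                            done[idx] = true;
                        }
                    }
        }
    }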

Continued apply deformation verification. Produced screen shots of DTI output of the X and H data set matches at several sigma values. Sent that data to Dr. Miller. Attempted to execute a 3D landmark match, but DTIStudio prohibits that on landmark data sets constrained to 1 slice (apparently). Built data sets where the landmarks are replicated in each slice (as DTIStudio does). Tried to run Dr. Younes' code on these sets but was having difficulty interpreting the results, or plotting the transformed points.
2007.09.29
Finished incorporating Marc's flow code into apply deformation. This code produces results that are as accurate as the existing apply deformation.
2007.09.28
Supported Xin in building new DTIStudio executables. Built x64 versions of lddmm libraries and sent them to her.

Studied apply deformation performance. Determined that the error values produced by lddmm-landmark were identical to or better than Laurent's matlab version of the code.

Started trying to build Marc Vaillant's flow points program into the lddmm apply deformation framework. Began with trying to flow a set of points.
2007.09.27
Finished implementing registry-based licensing for lddmm applications. Added a new dialog to DTIStudio for display at start up so the user can register for an lddmm license. Modified the make license program to have an option (-r) to enter a SID and produce an authentication string for the user to copy into his start up dialog. Tested the code, checked it into cvs, and sent new libraries and DTI modifications to Xin Li.

Met with Dr. Younes to discuss the apply deformation algorithm, implementation, and performance.
2007.09.26
Attended CIS Software Group meeting.

Attended DTI Studio meeting.

Devised a scheme for licensing lddmm applications over the web, based on our current license file system. In the case of DTIStudio, upon starting an application the user will be told his machine ID and directed to the website where they signed up for a user account for DTI Studio. A page will be available that will ask the user for his username, password and machine ID. Behind the webpage will be a make_license executable that will produce an authentication key for that machine. When the user enters the value into the startup dialog, this value will be validated on the user's machine, and stored in the machine's registry under HKEY_LOCAL_MACHINE:SOFTWARE:LDDMM_LANDMARK_KEY:LDDMM_Landmarks_License. At each DTIStudio startup, or use of lddmm, the key will be verified. Made modifications to win32 software for lddmm to implement the design.
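A minimal sketch of the startup check, assuming the value lives at the registry path noted above (the exact subkey layout and value type are assumptions):
    #include <windows.h>
    #include <string>

    // Returns the stored license string, or an empty string if none is present.
    std::string readStoredLicense()
    {
        HKEY key = NULL;
        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\LDDMM_LANDMARK_KEY",
                          0, KEY_READ, &key) != ERROR_SUCCESS)
            return std::string();
        char buf[256];
        DWORD size = sizeof(buf), type = 0;
        LONG rc = RegQueryValueExA(key, "LDDMM_Landmarks_License", NULL, &type,
                                   (LPBYTE)buf, &size);
        RegCloseKey(key);
        if (rc != ERROR_SUCCESS || type != REG_SZ || size == 0)
            return std::string();
        return std::string(buf, size - 1);    // drop the trailing NUL
    }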
2007.09.25
Plotted error values by iteration for each of the sets of test data. Added to each plot the distance between the points flowed via apply deformation and the deformed template, and to the original target. Stored the plots in the testdata area and printed them for a meeting with Dr. Miller.

Dr. Miller expressed concern about the distance between the flowed points and the deformed target. He asked Laurent to look into the problem.
2007.09.24
Dr. Miller asked for a comprehensive study of the performance of the lddmm landmark matching and apply deformation code. Collected the data sets we discussed that would be useful tests (and that would make good examples on the website) into a directory called ~mbowers/project/lddmm-landmark-testdata. The sets include:
  1. The "X" figure to "H" figure used in testing DTIStudio.
  2. The monkey brain images featured on the lddmm-volume website, and landmarks on these images.
  3. A set of random landmarks on a sphere matched to corresponding points on an ellipse.
  4. Two Botteron hippocampus landmarks sets.
Dr. Miller asked that for testing purposes, the original landmarks be flowed via apply deformation, then their distance to the deformed template be measured. For that purpose, I integrated a class into lddmm-landmark that I had already written that creates PointTransformations from lddmm output. The new option allows the user to flow an arbitrary point set.

Performed lddmm-landmark (with the apply deformation performed on the original landmarks data) on each of the test data sets for sigma = 2.5 and sigma = 10.0.
2007.09.23
Continued looking into using the code implemented in the shape analysis module for flowing an arbitrary set of points based on the results of lddmm landmark matching. Completed integrating the code into the VolumeTransformation class and began testing.
2007.09.22
Began looking into using the code implemented in the shape analysis module for flowing an arbitrary set of points based on the results of lddmm landmark matching. Brought the code into the VolumeTransformation class and began integration.
2007.09.21
Began looking into the difficulty associated with using DTIStudio's existing Volume Transformation algorithm and code in our lddmm apply deformation framework. Attempted to incorporate some of the code into our VolumeTransformation class to see what the dependencies are. Among the issues that we would encounter are that the code still relies on Numerical Recipes (which has been removed from the rest of DTIStudio), and that the algorithm depends on a Laplacian kernel whereas our lddmm code uses a gaussian kernel.
2007.09.20
Began running some of the test data sets from the DTI Studio people through lddmm. Noted some significant issues with how we apply the results of our lddmm deformation. Sent an e-mail stating those concerns and showing result plots to Drs. Qiu and Younes.

Sent an e-mail to Bill Hoffman of Animetrics Corp. requesting ideas about licensing applications via the web.
2007.09.19
Attended Software Group Meeting. Discussed licensing for lddmm in DTI Studio.

Wrote e-mail to Dr. Miller about licensing:
Dr. Miller,

I am trying to figure out a strategy for licensing lddmm-landmark in the Landmarker module of DTI Studio. For our previous implementations of lddmm, and for BrainWorks, we've generally just had to generate licenses for the machines here, or in some cases for machines for which we've known the host ID, so we can run a program to generate a license here and e-mail it to the user.

DTI Studio is a different story. There are hundreds of users on machines all over the world. We need an efficient way to get people to send us their "SID", and an efficient way to send back a license file that can be installed in the right place.

I can think of a few options for licensing:

Don't require a license for running lddmm-landmark from DTI Studio. When they attempt to use lddmm-landmark, if no valid license file exists, have the program display the computer's SID, then the user can:
  1. Send an e-mail to support@mristudio.com with their SID and username, and we can send an e-mail back to their registered e-mail address with the license file and installation instructions.
  2. Go to a webpage on www.mristudio.com and enter the SID and their username/password, and we will e-mail the license back
  3. When they attempt to use lddmm-landmark, Landmarker will automatically create a tcp/ip connection with our server here, send back the SID, username, and password, and receive licensing information which can be written into a file on the machine.
Among the things we need to consider:

We need Susumu and the DTI Studio people to agree with what we decide. Choice #3 entails enough technical risk that it would probably not be available for the Oct. 1 announcement.

Could you please let me know your thoughts on the above ideas or whether you have any of your own? Have you discussed with Susumu our separate licensing requirement for DTI Studio?

Thank you,

Mike B.
Worked the code modifications from the Medical School visit into the repository.
2007.09.18
Went down to the med school to work with Xin for the afternoon. Brought some updated libraries with some bug fixes and performance enhancements. I had the source code so we were able to debug problems that we had. We ironed out some misunderstandings about our interface and made some changes in the code and we have lddmm-landmark up and running in DTIStudio.
2007.09.17
Problems with how Apply Deformation passes the Volume Transformation back to landmarker. Rewrote the interface and continued testing.

Problem with the Write statement to output the VolumeTransformation to disk. I am able to break up the transform into slices and write it out, but writing it in one large chunk fails.
2007.09.14
Continued testing lddmm-landmark in DTI Studio.

Put together some sample data runs to show equivalence between lddmm landmark on Windows and on x86_64.
2007.09.13
Began working on creating an example data set for lddmm-landmark remote processing. Can showed me how to place some reasonable landmarks on the monkey brain image example on the lddmm volume website.
2007.09.12
Finished Apply Deformation performance enhancements.
2007.09.11
Continued working on some processing short cuts for Apply Deformation. Wrote a test program to check equivalence (within tolerance) of the new Apply Deformation.
2007.09.10
Apply Deformation completed. Started stepping through to see DTI Studio apply the transform to the image. Encountered seg fault. Began working on some processing short cuts for Apply Deformation.
2007.09.07
Helped Nayoung put BrainWorksTest on her laptop.

Was able to perform a lddmm landmark on images in landmarker. Waiting on Apply Transformation to complete.
2007.09.06
Got some data from Xin Li and started reading landmarker user manual to figure out how to invoke lddmm landmark processing.
2007.09.05
Attended CIS Software Group meeting.

Attended DTI Studio working group meeting.

Determined that the error term in lddmm landmark (for both Newton's Method and Gradient Descent) is multiplied by lambda, which gave the effect that Nayoung saw. Modified the code so that the error term is not weighted.

Rebuilt VTK so that the VC 7.1 dll was no longer required. My version of DTI Studio with lddmm landmark integrated is operational and ready to test.
2007.09.04
Finished integrating DTI Studio modifications into my local project. Waiting for Xin to check in some new files into the repository so I can build the project.

Looked at some of the Landmark Matching code related to data error. Resent an old e-mail to Nayoung to try to continue progress on determining the effect of the lambda parameter.
2007.08.31
New DTI Studio code was put into the cvs repository. Began trying to merge it with the version of the code I've created to try to build on het.
2007.08.30
Continued trying to build DTI Studio on het. Current project from the cvs repository continues to have references to Numerical Recipes.
2007.08.29
Attended CIS Software Group meeting.

Continued trying to build DTI Studio on het.
2007.08.28
Continued working on integrating lddmm-landmark-lib into DTI Studio Landmarker. Got the project to compile and link with a few undefined references, some related to Numerical Recipes. NR code appears to be integral to DTI Studio, but Xin Li says that she has removed NR from the code. I need to find out whether the code I have is old, or exactly how the dependencies are removed.

Looked over some lddmm landmark issues that Nayoung raised. Spoke with Laurent and he is adamant that lambda is in the numerator of the data error weighting term. Looked over some of Nayoung's data, and LM is performing all 20 iterations on her matching set. It may be the case that:
  1. The Gradient Descent algorithm is being invoked (it always performs the specified number of iterations).
  2. The Newton's Method algorithm is not converging because of the large lambda or noisy data set.
2007.08.27
Continued working on integrating lddmm-landmark-lib into DTI Studio Landmarker. Wrote a new class to interface between DTI Studio and our Lddmm Landmark Interface.

Looked over some lddmm landmark issues that Nayoung raised. She has seen behavior that indicates to her that a larger lambda makes for a larger error term. She has produced some output files that appear to confirm this.
2007.08.24
Continued working on integrating lddmm-landmark-lib into DTI Studio Landmarker.
2007.08.23
Debugged a compile error in a blitz++ test program for Xin Li.

Continued working on integrating lddmm-landmark-lib into DTI Studio Landmarker.
2007.08.22
Attended CIS Software Group meeting.

Attended DTI Studio group meeting.

Continued working on integrating lddmm-landmark-lib into DTI Studio Landmarker.
2007.08.21
Started integrating lddmm-landmark-lib into DTI Studio Landmarker. Started modifications to LmkLDDMMDlg to get the lddmm landmark parameters.

Made a modification to the Surface Editor in Brainworks to display the vertex index and coordinates when the user picks a point. Added the functionality to bwtest.
2007.08.20
Began looking into integrating lddmm-landmark-lib into DTI Studio Landmarker. The FluidLmkTransform class will essentially be replaced by a class that provides an interface to LddmmLandmark. The LmkLDDMMDlg will be modified to get the required parameters.
2007.08.09
Wrote some code to test BLAS calls, then replaced the existing ones with prototypes from mkl_cblas. These calls handle C++ array storage correctly. The functions worked properly, and lddmm-landmark completed in about 3.5 minutes on Nayoung's data, vs. 45 minutes.

Checked in the performance modifications for lddmm-landmark. Checked them out in the win32 directories, made modifications to CMake files, and built and installed lddmm-landmark for win32 and x86_64. Built make_license files and stored them in a directory I share with Nayoung.
2007.08.08
Replaced a number of vxl calls with mkl calls. Program did not run correctly, so there is some problem with the blas calls, though they were significantly faster than vxl. Not sure if the problem stems from FORTRAN vs C storage.
2007.08.07
Compared the performance of Laurent's MATLAB Landmark Matching code vs. lddmm-landmark. lddmm-landmark requires about 40-45 minutes to run Nayoung's data set, whereas Laurent's takes about 5. I have narrowed down the issue to 3 statements that take up about 80% of the processing time in lddmm-landmark. These are function calls to vnl's fastops::inc_X_by_AB, which performs X += A*B, where A and B are matrices. Began working on replacing this call with a call to dgemm, a BLAS routine, and linking in mkl for the definition. Replaced one of the calls in ShootLandmarks.cpp.
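The replacement boils down to a single cblas_dgemm call with beta = 1 so the product accumulates into X; a sketch for row-major double matrices (dimension names are placeholders):
    #include <mkl_cblas.h>

    // X += A * B, where X is m x n, A is m x k, B is k x n, all row-major doubles.
    void incXbyAB(double* X, const double* A, const double* B, int m, int n, int k)
    {
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k,
                    1.0, A, k,     // lda = k for row-major, no transpose
                    B, n,          // ldb = n
                    1.0, X, n);    // beta = 1.0 accumulates into X; ldc = n
    }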
2007.08.06
Created a compressed folder of the files necessary for adding lddmm-landmark to DTI Studio's Landmarker program. Built scripts necessary to create a Visual Studio project file for linking in our lddmm-landmark libraries. Mailed this package to Xin Li.

Fixed a Makefile for Sirong so that he can build his ODE on a surface program on io11.

Continued trying to figure out why Laurent's MATLAB Landmark Matching code performed so much faster than the C++ code. Ran Laurent's code on Nayoung's data, and it did finish very quickly. My code took a couple of hours. One difference I noted was that his ODE has a variable time step whereas mine is constant.
2007.08.05
Added option to lddmmLandmark to allow the user to select the type of optimization (Gradient Descent, Newton's Method, or Automatic) used. Fixed a bug where the derived Landmark Matching Parameters class had another copy of _timeSteps like the one already declared in the parent class LDDMM Matching Parameters.

Continued trying to figure out why Laurent's MATLAB Landmark Matching code performed so much faster than the C++ code. Made one modification to mirror a modification he made in his code: the program no longer steps through decreasing lambdas, so the matching is performed once, at the lambda the user sets.
2007.08.02
Finished an improvement to BrainWorks to load a file of indices to describe a track in SurfaceTracker. Checked in code, put executable in /usr/local/BrainWorks/BrainWorksTest.

Laurent says his MATLAB LandmarkMatching can handle Nayoung's data in five minutes and two iterations. Started looking at what is different between his MATLAB code and my C++ code.
2007.08.01
Attended CIS Software Group meeting. Attended DTI Studio collaboration meeting.

Worked on an improvement to BrainWorks to load a file of indices to describe a track in SurfaceTracker.
2007.07.31
Modified blitz++ to not compile some instances of complex constructors to avoid multiply defined symbols.

Finished windows native version of lddmm-landmark. Tested current executable. Checked in the code to cvs, installed the executables in /cis/project/software/lddmm-landmark.
2007.07.30
Downloaded and installed mkl for windows. Fixed calls in lddmm-landmark-lib code to the lapack/blas functions given the protocols are different. Modified CMake files for lddmm-landmark-lib and lddmm-landmark-exec to find the mkl libraries. Determined which libraries are required to resolve undefined symbols to build lddmm-landmark.
2007.07.27
Worked on implementing netlib functions via vxl. Ran into the issue that not all lapack functions are implemented in vxl. Need to explore MKL and what that will mean for DTI Studio.
2007.07.26
Worked on implementing netlib functions via vxl.
2007.07.25
Looked into various solutions for lapack/blas on windows - narrowed down to two that seemed feasible: MKL libraries, or vxl. The MKL libraries are not freely distributed, and I'm not sure what the licensing ramifications are for that package. VXL on the other hand is open source, but is an f2c conversion of the netlib FORTRAN code. Decided to use vxl.
2007.07.24
Had problems compiling lddmm-landmark in windows. Missing definitions for lapack functions. Began searching around for lapack and blas libs for use with visual c++. Netlib versions require FORTRAN libs I was unable to find.
2007.07.23
Checked code out into windows directories and began trying to compile windows versions of lddmm-landmark.
2007.07.20
Finished access to deformation values per iteration.
2007.07.19
Worked on adding access to the deformation cost and deformation error values for each iteration of the lddmm landmark matching.
2007.07.18
Finished producing data runs for Nayoung.

Finished integrating the gradient descent code into landmark matching. Checked modifications into cvs.
2007.07.17
Continued producing data runs for Nayoung.
2007.07.16
Worked on producing some sample data runs for Nayoung's presentation to Dr. Miller. Varied landmark matching parameters sigma and kernel sigma on data sets provided by the DTI Studio group.
2007.07.11
Worked on integrating Marc Vaillant's landmark matching into current landmark matching framework.
2007.07.10
Continued coding licensing modifications.
2007.07.09
Continued coding licensing modifications.
2007.07.05
Began redesigning licensing code.
2007.07.03
Searched around for ways to cope with the license linking problem. Could not find any way to work around cyclic library dependency once the license setting statics were moved into the LDDMMLandmark library.
2007.07.02
Checked out the code from the windows development area into the x86_64 area. Began to rebuild. Experienced problems with linking licensing.
2007.06.29
Finished unifying the parameters between Laurent's code and Marc's. What Marc has been calling sigma, Laurent calls lambda (weight of error term in functional). What Marc has been calling KernelSigma, Laurent has been calling sigma.
2007.06.28
Began trying to unify the parameters between Laurent's code and Marc's.
2007.06.27
Worked on integrating Marc Vaillant's Landmark Matching code into the lddmm-landmark-lib code. Exposed the "lambda" parameter (data error weight).
2007.06.26
Ran the Gradient Descent Landmark Matching code with different values of sigma (10.0, 1.0, 0.1, 0.01) for Nayoung. We used a KernelSigma value of 10.0. Produced volume transforms and deformed templates for each sigma value.
2007.06.25
Worked on integrating Marc Vaillant's Landmark Matching code into the lddmm-landmark-lib code.
2007.06.19
Sent Marc Vaillant a tar file of the code he needs to look at to help with the description of the lddmm parameters in landmark matching.
2007.06.18
Looked into some surface matching problems with some surfaces from Nayoung. Determined that the problem most likely is surfaces with vertices that are not part of faces. I ran a program I had written previously to fix problems like this, FixSurfaceFiles, on these surfaces and the problems went away.
2007.06.15
Worked on integrating Marc Vaillant's Landmark Matching code into the lddmm-landmark-lib code.
2007.06.14
Worked on integrating Marc Vaillant's Landmark Matching code into the lddmm-landmark-lib code.
2007.06.13
Sent another e-mail to Marc Vaillant about the Landmark Matching parameters.
2007.06.12
Added the licensing to the lddmm-landmark-lib code, in the "run" method. Needed to modify all the calls to sscanf_s and sprintf_s to add buffer lengths to the call lists. Added a make_license project to the CMake file and built a new VC++ project file. Built and ran the make license program on het. Ran lddmmLandmark on win32 with licensing on het and produced results identical to those on x86_64.
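An illustration of the kind of change involved (Windows secure-CRT only; the names and formats here are placeholders):
    #include <cstdio>

    void secureCrtExample(unsigned int hostId, const char* line)
    {
        char buf[64];
        sprintf_s(buf, sizeof(buf), "hostid=%08x", hostId);      // buffer size now explicit

        char token[32];
        sscanf_s(line, "%31s", token, (unsigned)sizeof(token));  // %s needs its buffer size
    }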
2007.06.11
Tested the windows version of lddmm-landmark. Currently it will not run unless the user specifies time steps. Need a default for this value. Also during testing it became apparent that win32 has the same problem with variable parameter lists as ia64, that it can't handle references in the named params.
2007.06.08
Finished the BrainWorks modifications and copied the executable to /usr/local/BrainWorks/BrainWorksTest.
2007.06.07
Built the windows version of lddmm-landmark.

Tim asked for a version of BW that provides some help with selecting landmark files when landmarking on a surface. He would like the program to automatically select a file with the .lmk extension whose base filename is the same as the base filename of the surface.
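A minimal sketch of the lookup Tim described (hypothetical helper, not the actual BrainWorks code):
    #include <filesystem>
    #include <string>

    // Given the surface path, return the sibling file with the same base name and a
    // .lmk extension, or an empty string if no such file exists.
    std::string defaultLandmarkFile(const std::filesystem::path& surfacePath)
    {
        std::filesystem::path lmk = surfacePath;
        lmk.replace_extension(".lmk");
        return std::filesystem::exists(lmk) ? lmk.string() : std::string();
    }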
2007.06.06
Created VC++ project from CMakeLists.txt file. Began trying to build lddmm-landmark-exec. No getopt.h exists in Windows, so I used one that Marc Vaillant employs in lddmm-surface.
2007.06.05
Continued working on lddmm-landmark-exec CMake file. Built a Release version of vxl and blitz++ on win32.
2007.06.04
Checked in all the code changes for CISLib, lddmm-common, and lddmm-landmark-lib from the lddmm-landmark-exec integration effort. Checked in lddmm-landmark-exec directory. Merged the changes with changes made for the win32 builds. Updated the CMake files and rebuilt the libraries in win32. Copied the win32 changes over to the x86_64 side and rebuilt and retested the code. Checked in the changes and installed the libraries on the win32 side.

Created a CMake file for lddmm-landmark-exec to attempt to build the program on win32.
2007.06.01
Finished the lddmm-landmark-exec project on x86_64 and finished debugging code.
2007.05.31
Wrote a new class for the lddmm landmark API. It is an API to a Parameters class in the lddmm-common library. Made the Parameters class in lddmm-landmark-lib a subclass of the new class.
2007.05.30
Continued working on lddmm-landmark-exec.

Chased down a problem with BrainWorks loading more than 10 surfaces into the image viewer. Found several defines that look like MAX_SURFACE, in various places. Changed enough of them to make the program work. Moved the executable into bwtest, but didn't check in the code yet.
2007.05.29
Began creating a module, lddmm-landmark-exec, that will implement landmark matching via the lddmm-landmark-lib API. Created the autoconf and make files for building on linux. Used the previous main program for lddmm-landmark, and made modifications to refer to the new library structure via the API. Began compiling the code.

In BrainWorks, changed the max number of peaks in a segmentation to 15, and the max number of surfaces that can be viewed to 20.
2007.05.25
Finished working on updating the API to the library. Built lddmm-landmark-lib in VC++.
2007.05.24
Built the lddmm-common library in VC++. Wrote the CMakeLists.txt for lddmm-landmark-lib, built the VC++ project. Checked the lddmm-landmark-lib project into the cvs repository. Worked on updating the API to the library.
2007.05.23
Attended CIS IT meeting.

Attended DTI Studio programmers' meeting.

Added code for the "crypt" function in win32 to the licensing code in CISLib. Built the library in VC++. Began writing a CMakeLists.txt file for the lddmm-common library. Tried to get CMake to accept my setting the CMAKE_INSTALL_PREFIX but it would not. I had to set it from the CMake GUI. Set up proper installation targets for the CISLib and lddmm-common libraries. Installed CISLib.
2007.05.22
Used createPointXform, an executable I wrote that uses the PointTransformation class, to apply the lddmm-landmark deformation to a spherical point set generated by the phantom data program. Once again, selecting a sigma was important to getting the desired results. The sigma used for Marc's landmark matching is too small, but the sigma used in Laurent's LM is too large. Used BrainWorks to verify the deformation applied to the sphere resulted in a largely elliptical pattern, like the original mapping.

Added code for getting a 32 bit SID-based "host id" in win32 to the licensing code in CISLib. Built the library in VC++.
2007.05.21
Used BrainWorks to verify the landmark data sets generated by the phantom data program (ellipsePoints) work as desired.

Worked on PointTransformation, a class that, like VolumeTransformation, uses the Transformation class to apply the lddmm deformation. PointTransformation applies to a set of points, vs. an image grid. Finished coding the module, in the lddmm-common library. Added a new executable (createPointXform) to lddmm-utils to apply transformations to a given point set.
2007.05.18
Worked on producing volume transformations with the very small sigma values that seem to be required for lddmm-landmark to work properly. Unfortunately at these very small sigmas, the apply-deformation equation for flowing the volume grid produces basically no movement. Sent those results to Nayoung via e-mail:
Nayoung,

I looked at a bunch of the data. I printed out the trajectories in a more readable format and noticed that during the match, there was basically no movement at all of the point from template to target, therefore the transform matrix was nearly zero everywhere.

So there are two sigma parameters, "Sigma" and "KernelSigma". I have been using the same value for both, 20 I think, which is what Laurent recommended for his version of Landmark Matching on similar data. I don't know the difference between them, do you?

Anyway I tried some other sigma values based on recommendations that Marc had given me for Surface Matching. When I tried 0.05 and 0.025 the trajectories looked reasonable, so I produced a transformation matrix, but it was nearly zero. Of course it uses those sigma values in its interpolation, and basically the max value you might get could be e**-100 or so. So yes, almost zero for everything.

I sent an e-mail to Marc Vaillant asking for an explanation of the values for the sigmas that would be sensible for his matching. Those values may need to be separate from the value I use to produce the transform matrix.
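The near-zero transform follows directly from the kernel falloff; a quick illustration, assuming a Gaussian of the form exp(-r^2 / (2*sigma^2)) (the exact convention in the code may differ):
    #include <cmath>

    double gaussianWeight(double r, double sigma)
    {
        return std::exp(-(r * r) / (2.0 * sigma * sigma));
    }
    // With sigma = 0.05, a grid point one unit from the nearest landmark gets a weight
    // of exp(-1 / 0.005) = exp(-200), which underflows to zero in double precision,
    // hence the nearly zero transform matrix.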
Continued working on phantom data generation code. Finished the code that produces any number of sets of landmarks uniformly randomly placed on ellipses with the a, b, c radii specified by the user. This allows the user to create a set of points on a sphere (a=b=c), and generate corresponding landmark points on the ellipse. The program generates the sets in blitz++ as well as BrainWorks landmark points format. Created some sets of points.
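A minimal sketch of the sphere-to-ellipsoid pairing (hypothetical helper, not the actual ellipsePoints program, which may sample differently):
    #include <array>
    #include <cmath>
    #include <random>
    #include <utility>
    #include <vector>

    typedef std::array<double, 3> Point;

    // Draw 'count' uniformly distributed directions; put the template landmark on a
    // sphere of radius r and the corresponding target landmark on the ellipsoid (a, b, c).
    std::vector<std::pair<Point, Point> > makePhantomPairs(int count, double r,
                                                           double a, double b, double c)
    {
        std::mt19937 gen(12345);
        std::normal_distribution<double> normal(0.0, 1.0);
        std::vector<std::pair<Point, Point> > pairs;
        for (int i = 0; i < count; ++i)
        {
            double x = normal(gen), y = normal(gen), z = normal(gen);
            double len = std::sqrt(x * x + y * y + z * z);
            x /= len; y /= len; z /= len;                   // uniform direction on the unit sphere
            Point onSphere = { r * x, r * y, r * z };       // a = b = c = r
            Point onEllipsoid = { a * x, b * y, c * z };    // corresponding ellipsoid point
            pairs.push_back(std::make_pair(onSphere, onEllipsoid));
        }
        return pairs;
    }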
2007.05.17
Worked with Nayoung on lddmm-landmark issues. Susumu asked her to determine the effect of various parameter settings on matching performance. She showed me some results (volume transforms). The numbers she was showing me were way too small. I made some modifications to Marc's lddmm-landmark to print out some intermediate results in a readable form. The trajectories were basically not moving, so the volume transformation matrix was consistent with that. I began to look over some of the parameters I'm using for landmark matching. They were based on the settings used in Laurent's version of the program. I recalculated sigma values based on some previous e-mails that Marc had sent me about surface matching, but they still produced no movement of points toward the target. I finally used a very small value that Marc recommends if the data is scaled (which it isn't, from what I can see in the code), and the trajectories moved toward the target. Sent the following e-mail to Marc:

Hi, Marc. We are running one of your landmark matching programs here. It came from a bunch of code of yours that I put together and checked into CVS under the module name shape_analysis.
The program is called match_new, and it uses the Lmk_new class for matching. The program has a parameter for "sigma" and a parameter for "kernel sigma". I am wondering what the difference is between these two. Can you tell me?

2007.05.16
Continued working on phantom data generation code.

Worked with Nayoung on lddmm-landmark issues. Susumu asked her to determine the effect of various parameter settings on matching performance. Showed her the required format for Marc Vaillant's version of lddmm-landmark, and we looked at some volume transforms generated from the matchings.
2007.05.15
Continued working on phantom data generation code.
2007.05.14
Began working on a program to generate landmark points on a sphere and corresponding points on an ellipsoid. This phantom generates point sets that can be used to check how well the apply-deformation algorithm works. This specific data set was requested by Anqi. She felt that this would be the best way to verify visually the apply-deformation code.
2007.05.11
Finished working on the executable for Nayoung, which uses streams as its interface between lddmm-landmark and VolumeTransformation generation. The executable is in a local copy of ~/mbowers/projects/shape_analysis.
2007.05.10
Began working on an executable for Nayoung, that combines Marc Vaillant's landmark matching with the apply-deformation code to create a Volume Transformation based on lddmm output.
2007.05.09
Looked over a lot of code for determining a unique id for a machine running Windows. There is a MAC address that can be accessed, but this provides a 48 bit number, which is incompatible with the linux licensing scheme.
2007.05.08
Continued working on the CISLib. Ran into problems with the licensing code because win32 doesn't support gethostid. Basically there is no 32 bit unique identifier as there is in linux to id the machine.
2007.05.07
One of the utility files I created, CISFiles, requires some non standard C++ functions from dirent.h. After looking around on the web for win32 implementations of dirent, and finding some particularly useful code called sanos, I scrapped that approach and found some functions in vxl that do the same things (vul_file_iterator) necessary for the directory functionality.
2007.05.04
Finished working on rebuilding CISLib, lddmm-common, and lddmm-landmark. Created a CISLib module in the CIS CVS repository. Began working on building CIS in win32. Built vxl-0.1.8 for win32.
2007.05.03
Finished external interface to CISLib, lddmm-common, and lddmm-landmark libraries. E-mailed API to Xin Li.
2007.05.02
Worked on rebuilding CISLib, lddmm-common, and lddmm-landmark.
2007.05.01
Worked on rebuilding CISLib, lddmm-common, and lddmm-landmark.
2007.04.30
Finished working on CISLib. Installed the library and required header files in /cis/project/software/opt/`arch`/CISLib.

Put the rest of the code from lddmm-landmark and apply-deformation into its corresponding library and namespace:
2007.04.27
Continued working on CISLib.
2007.04.26
Began creating a CISLib module that will incorporate the current CISLicensing and CISUtils libraries. The CIS Licensing classes and the Provenance class will be put in a CIS namespace.
2007.04.25
Completed the lddmm landmark parameters class. Created a wrapper class to act as an API to the lddmm-landmark library. Sent this file and the class MatchingParameterTypes file to Xin Li so she can get started working on the interface to DTI Studio.
2007.04.24
Began working on the lddmm landmark parameters class. Added an LddmmMatchingParameters class to the lddmm-landmark library and the LDDMMLandmark namespace. It is essentially a copy of the same class from the lddmm-landmark executable project, except that its base class is MatchingParameters from the lddmm-common library and LDDMM namespace. Collected some enumerated types from the various parameter classes into a class MatchingParameterTypes in the lddmm-common library.

Added optimizationType attribute to the parameter class. This allows the user to select Newton's Method (Laurent's code) or Gradient Descent (Marc's code) or to allow the program to automatically select an optimization method based on the landmark set count.
2007.04.23
Began the design and implementation of the lddmm-landmark library. This will provide the API for the landmark matching library to be used by DTI Studio. The tasks I see for this effort are:

Spoke with Anqi today about creating lddmm landmark and apply deformation validation data sets.
2007.04.20
Needed to add the -hostx option to the usage statement in the CIS Licensing library. Did so, checked it in, then rebuilt the library for all the architectures. Then rebuilt all the make_license executables for all the lddmm modules except volume and installed them under /cis/project/software.
2007.04.19
Began working on PointTransformation to apply transformations to point sets.
2007.04.18
Modified VolumeTransformation to use the new Transformation class.

Verification of lddmm-results showed that some problems existed in the Transformation/VolumeTransformation interaction. Fixed the problems.
2007.04.17
Began working on class Transformation, which pushes down all the transformation creation code on a single point into a single class, which can be used to create volume transformations and or transform point sets.
2007.04.16
Created a directory containing verification test data and began running lddmm-landmark and apply-deformation and looking at the results.
2007.04.13
Hao, who works at the med school on DTI Studio, thinks the displacement vectors in the Volume Transformation are very small and that the background shouldn't be zeroes. Continued to work on lddmm-landmark verification issues.

Had yearly review with Dr. Miller, Anthony, and Dawn. Agreed that my priorities for the coming year should be:
2007.04.12
Worked on verification of lddmm-landmark and apply-deformation.
2007.04.11
Looked at the Volume Transform data in DTIStudio. Put the data in a world readable directory and told Nayoung about it.
2007.04.10
Created a program called createXform which is a main module that implements the VolumeTransformation class's functionality. Used the output of the landmark matching to create a volume transform.
2007.04.09
Modified the landmark matching program that exists in the shape_analysis module to write out full trajectories and landmarks, so that this data can be fed into the VolumeTransformation class to compare the xForm computed from this landmark matching program to the xForm computed by lddmm-landmark. Ran the program on Nayoung's lmk data sets.
2007.04.06
Created a new matrix from the one with the bad extents and got it to Nayoung to look at. She was expecting to see non-zero data in the background, but the background is dark. And the pattern of the data is asymmetric, which Nayoung felt is an indication that there is a problem.
2007.04.05
Continued working on the program to fix the xform matrix.
2007.04.04
The transform matrix was the wrong size. The array I produced was indexed from 0-181, but the actual transform goes from 1-181. Began writing a program to convert the existing matrix to one with the correct dimensions.

Attended DTI Studio programmers meeting.
2007.04.03
The lddmm-landmark run completed. The transform data file was created.

Brainworks was having trouble reading contour files, because the stream pointer could not be made to return to the beginning of the file. Instead of rewinding for another pass, the code now closes the stream and reopens it. Made the fix, checked it in, and rebuilt bwtest with the new executable. At this time, no expiration date is required for the BW exe, but the CISLicense library license verification routine always checks for expiration. So the library had to be modified to add the option to bypass the expiration verification.
2007.04.02
The landmark matching continues to run.

Completed my yearly evaluation form and sent it to Dawn.
2007.03.30
Finished the desired testing and started the lddmm-landmark run with Nayoung's data.
2007.03.29
Discussed with Dr. Miller how to test the volume transformation, and he suggested the approach of using two sets of landmarks that are on the grid on a volume (i.e., integer coordinates). The resulting transformation should be simple to verify at those points as the difference between the target and the template points.
2007.03.28
Attended meeting with Dr. Miller to discuss progress of apply-deformation. Discussed with Anqi how to test apply-deformation. She recommended I use an approach for applying deformations to point sets vs. volumes for the testing phase. Began to consider how to create new classes and modify existing ones to implement this approach.
2007.03.27
Wrote a bash script for Tilak to move some data around his test directories.
2007.03.26
Wrote a bash script for Tilak to move some data around his test directories.
2007.03.23
Wrote a bash script for Tilak to move some data around his test directories.
2007.03.22
Added debug information to VolumeTransformation code.
2007.03.21
Led DTI Studio meeting.
2007.03.20
Continued trying to verify volume transformation results.
2007.03.19
Started checking results of volume transformation matrix creation.
2007.03.16
Finished integrating volume transformation function of the lddmm library into lddmm-landmark. Producing a transformation matrix in the format required by DTI Studio.
2007.03.15
Continued integrating lddmm library into lddmm-landmark code.
2007.03.14
Began integrating lddmm library into lddmm-landmark code.
2007.03.13
Created a temporary lddmm library in my home directory.
2007.03.12
Began creating lddmm library module.
2007.03.09
Finished coding VolumeTransformation and began to consider testing and deployment options. Started clearing out current development directory (a copy of lddmm-similitude with some shape_analysis modules) in preparation for creating an lddmm common library module.
2007.03.08
Continued working on Volume Transformation code.
2007.03.07
Attended Software Strategy meeting and DTI Studio Programmers meeting. Will help Xin Li with vxl if she has questions.

Worked on the VolumeTransformation class for apply-deformation.
2007.03.06
Continued development on the VolumeTransformation class. Began writing a constructor for the class that builds the transformation matrix. Added a reference to a MatchingParameters class to each constructor for the class. Created the Matching Parameters class.
2007.03.05
Worked on the Gauss Kernel class. Continued development on the VolumeTransformation class.
2007.03.02
Wrote virtual Kernel Class. Began working on the Gauss Kernel class. Continued development on the VolumeTransformation class. Downloaded and began building vxl-1.8.
2007.03.01
Tim was experiencing problems with lddmm-surface on certain data sets. The program encountered a segmentation fault while trying to read a file with null faces. The problem was a function of the way ivcon indexes arrays of arrays. I modified the way it does some copying and it works now. Checked the mods into the repository, then rebuilt and installed the libraries and the executables.
2007.02.28
Continued work on the apply-deformation code. Worked on a VolumeTransformation class spec. Settled on two ways to make a volume transformation (and therefore two class constructors). The first way is to provide the trajectories and the momentums over time, plus a Kernel and some other lddmm parameters (time step). The second way is to provide the original template points, the initial momentums, and an instance of an ODE solver. I will finish a working version of apply-deformation based on the first method.
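
In outline, the two construction paths look something like this (class and parameter names are illustrative only, not the code as it will be checked in; declarations only):

#include <vector>

namespace LDDMM {

class Kernel;              // e.g., the Gauss kernel under development
class OdeSolver;           // ODE integrator used to shoot from initial momentum
class MatchingParameters;  // time step and other lddmm parameters

class VolumeTransformation {
public:
    // Method 1: build from the trajectories and momenta saved at each
    // time step of an lddmm run, plus a kernel and the lddmm parameters.
    VolumeTransformation(const std::vector< std::vector<double> >& trajectories,
                         const std::vector< std::vector<double> >& momenta,
                         const Kernel& kernel,
                         const MatchingParameters& params);

    // Method 2: build by shooting from the original template points and the
    // initial momenta using an ODE solver.
    VolumeTransformation(const std::vector<double>& templatePoints,
                         const std::vector<double>& initialMomenta,
                         OdeSolver& solver,
                         const MatchingParameters& params);
};

} // namespace LDDMM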

Began moving some classes from lddmm-landmark into the local apply-deformation directory to begin creating an lddmm library. Moved in the ODE classes. Began putting all the lddmm library modules in a namespace called "LDDMM". Began working on a Kernel class, using a Kernel class from lddmm-surface as a template.
2007.02.27
Wrote a VolumeSpace class which will specify the Volume the user wants to create a transformation for. Added input and output functions for using streams to specify the Volume.
2007.02.26
Created project apply-deformation under my local projects area. Looked at utilities Marc Vaillant created (flow, shoot, etc.) to see if any could be used for the apply-deformation functionality.
2007.02.23
Discussed Nayoung's sample data for lddmm-landmark comparison with DTI Studio landmark matching. Got file format information from Xin Li on what a transformation file should look like.
2007.02.22
Helped Felipe get up and running with blitz++. Built blitz++ version 0.9 and installed it in /cis/project/software/opt/win32. Set up Felipe's Visual Studio project to reference the installed library version.
2007.02.21
Attended DTI Studio meeting. Action item to check into license status of supporting libraries.

Started thinking about an lddmm library design. It should include the following pieces:
2007.02.20
Finished modifications to lddmm-landmark. Checked mods into CVS, rebuilt executables for all architectures.
2007.02.19
Continued working on lddmm-landmark.
2007.02.16
Began working on modifications to have lddmm-landmark optionally write out momentums and trajectories, and write out the deformed template and the initial momentum by default.
2007.02.15
Finished modifications to lddmm-landmark to have it write out momentums and trajectories.
2007.02.14
Looked into a project that Lei and Can want to do which involves performing lddmm-landmark, then applying the transformation to a whole head scan. Sent an e-mail to Lei to ask him to specify the outputs he needs from lddmm-landmark.
2007.02.13
Worked on creating classes for specifying lddmm output.

Worked with Anqi on her new Gaussian code.
2007.02.12
Worked on creating classes for specifying lddmm output.
2007.02.09
Modified lddmm-landmark to accept the parameters required for input.

Read grant proposal Tilak asked me to review.
2007.02.08
Got a phone call from Dr. Miller. He asked me to focus on making the existing lddmm modules usable: first so that they provide all the inputs and outputs our researchers require here, then by creating a library interface to them, then by porting them to Windows for integration into DTI Studio.
2007.02.07
Attended DTI Studio collaboration meeting. Xin Li agreed to send me some error messages she was getting while trying to build DTI Studio statically.

Worked on CMake configuration files for Windows port of CISUtils.

Read grant proposal Tilak asked me to review.
2007.02.06
Malini's birthday. Ate cake.

Modified the CISLicense library so that a license creator can specify the host id in hexadecimal, the default base for the "hostid" command. Checked in the mods and rebuilt the library for each architecture. Went to each lddmm for each architecture and modified the Makefile.in so that the make_license executable would install into /cis/project/software/license properly. Checked in all the Makefile.in, then built and installed all the make_license files. Created a script to create licenses for all the lddmm modules on all the CIS machines and copied it into /cis/project/software/license. Documented the process of making the script in the https://wiki.cis.jhu.edu/infrastructure/license wiki page.
2007.02.05
Discussed creating a software project with Anqi. We talked about kdevelop, cvs, /cis/project/software/opt, vxl, vtk and itk.
2007.02.02
Created a CMake input file to build and install the ivcon library. Created the Visual Studio project and made the library. Started working on getting the CISUtils library to see the ivcon include files.
2007.02.01
Spoke with Mike An about creating a CMakeLists.txt file for cross platform builds of the CISUtils library. Created an initial version of CMakeLists.txt using Mike's Dynamic Tracking file as a template. Built a Visual Studio project then attempted to build it. Many warnings were generated about making calls to some standard unix functions (getenv, localtime, etc.). These calls are deprecated in C on Windows. The compiler was also unable to find the ivcon library, so I read the CMake manual to figure out how best to specify include directories in the build.
2007.01.31
Attended IT status meeting. Discussed looking at Mike An's efforts for creating CMake files for creating Visual Studio project files and Linux build files.

Dominic requested that the Surface Tracking function of BrainWorks be modified to add the capability to save all the paths as one long path. I added code to write out the indices and coordinates of the entire path as one curve. I did some testing, but created an executable that Dominic can use to further test the function before I check in the changes.
2007.01.30
Read and suggested corrections to a grant proposal for Joe Hennessey.
2007.01.29
Finished working on adding an expiration date to CIS License files. Checked the code into the cvs. Rebuilt the CISLicense library for all the architectures. Rebuilt all the lddmm modules except volume, installed executables for each architecture. Recreated all the license files with an expiration date of 365 days after today.
2007.01.26
Continued working on adding an expiration date to CIS License files.
2007.01.25
Continued working on adding an expiration date to CIS License files.
2007.01.24
Continued working on adding an expiration date to CIS License files.
2007.01.23
Worked with Joe to create a sample unlabelled point file so that I can have some idea of what a vtk xml PolyData file might look like.

Started working on adding an expiration date to CIS License files.
2007.01.22
Spoke with Joe about VTK XML formats, to determine the idea behind some of the file types. There is an image type (ImageData), and a type which can contain various polygonal types of data (PolyData). ImageData is considered "Structured" in the sense that the data is in a topologically regular array of cells. PolyData is "Unstructured". Joe believes that our Landmarks, Unlabelled Points, Surfaces, and Curves should be in PolyData files.
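
A minimal sketch of writing a landmark set as VTK XML PolyData (VTK 5-era calls; the helper name is made up):

#include "vtkPoints.h"
#include "vtkPolyData.h"
#include "vtkXMLPolyDataWriter.h"

void writeLandmarksVtp(const double* xyz, int nPoints, const char* fileName)
{
    vtkPoints* points = vtkPoints::New();
    for (int i = 0; i < nPoints; ++i)
        points->InsertNextPoint(xyz[3*i], xyz[3*i + 1], xyz[3*i + 2]);

    vtkPolyData* poly = vtkPolyData::New();
    poly->SetPoints(points);

    vtkXMLPolyDataWriter* writer = vtkXMLPolyDataWriter::New();
    writer->SetFileName(fileName);   // conventionally .vtp
    writer->SetInput(poly);          // SetInputData() in later VTK versions
    writer->Write();

    writer->Delete();
    poly->Delete();
    points->Delete();
}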

Used paraview and Mayavi to attempt to create files of data of the type that we would use for lddmm input and output. Under Source there are a variety of shape types a user can create. There is nothing there that looked like it would generate points. I selected "Line" and created an output file of the shape in the vtk xml format.
2007.01.18
Read more of the CMake book.

Read Anqi's LDDMM I/O document. Began modifying it to identify what I believe to be the VTK XML formats we will use to effect the various IO pieces.

Built blitz++ on Windows. The package ships with a zip file that contains the files that constitute a .NET Visual Studio C++ solution. VS 2005 converted this file into 2005 format. The library built with warnings, mostly about converting bool to int and converting numbers to strings with sprintf. Will need to do some testing.
2007.01.17
Attended CIS IT meeting. Discussed the importance of the xml io interface to lddmm. Agreed to discuss it with Joe before proceeding.

Attended DTI Studio/CIS collaboration meeting.

Installed Visual Studio 2005 and the MSDN documentation. Tried to copy the disks but the MSDN disk #2 produced errors during the copy.

Read the CMake manual.
2007.01.16
Started looking at the project of getting all lddmm modules compiled and available on Windows, for use in DTI Studio. Spoke with Joe H. to get ideas about how to go about this process. We discussed the importance of a utility like CMake to creating cross platform build environments. Joe loaned the "Mastering CMake" manual to me and I have begun to read it. I have checked out the CISUtils module into a "win32" directory under my home directory to use as a CMake experiment.

Created a new directory /cis/project/software/opt/cygwin. Moved into this new directory some files that I had built in cygwin and installed in /cis/project/software/opt/win32.
2007.01.12
Finished a utility to fix surfaces which contain points that are not part of any face in the surface. Tested the executable and ran the output through lddmm-surface to ensure it fixed the problem. Checked in the code and installed the new CISUtils library and copied the executables down into the /cis/project/software/opt/arch/CISUtils area for each architecture.
2007.01.11
Looked into a way to programmatically fix surfaces which contain points that are not part of any face in the surface. The fix would involve removing these vertices and renumbering all the vertex indices in the faces, then writing the file out. The motivation for this is that lddmm-surface has problems handling files like this because of the way gts does things. I decided to create a separate routine to do this vs. adding the fix to lddmm-surface because then the deformed template would end up with a different topology than the original template.
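
The clean-up step itself is straightforward; a sketch assuming a simple flat surface representation (not the actual CISUtils class):

#include <vector>

struct Surface {
    std::vector<double> verts;   // x,y,z per vertex
    std::vector<int>    faces;   // 3 vertex indices per triangle
};

void removeUnusedVertices(Surface& s)
{
    int nVerts = static_cast<int>(s.verts.size() / 3);
    std::vector<bool> used(nVerts, false);
    for (size_t i = 0; i < s.faces.size(); ++i)
        used[s.faces[i]] = true;

    // Build old-index -> new-index map and compact the vertex list.
    std::vector<int> remap(nVerts, -1);
    std::vector<double> newVerts;
    int next = 0;
    for (int v = 0; v < nVerts; ++v) {
        if (!used[v]) continue;
        remap[v] = next++;
        newVerts.push_back(s.verts[3*v]);
        newVerts.push_back(s.verts[3*v + 1]);
        newVerts.push_back(s.verts[3*v + 2]);
    }

    // Renumber the face indices against the compacted vertex list.
    for (size_t i = 0; i < s.faces.size(); ++i)
        s.faces[i] = remap[s.faces[i]];
    s.verts.swap(newVerts);
}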

After looking over code from the CIS shape-analysis module, BrainWorks, and SurfaceMatching, decided the best way to create the surface-fixing module would be to create a SurfaceFileUtils class, inherited from IVCON for i/o and multi-format compatibility, that would go in CISUtils. Built the class into CISUtils and modified the autotool files to produce and install the new class. Began working on the standalone program that uses the new class.
2007.01.10
Modified a script for Dominic to perform some runSim and applySimByu tasks.

Worked on debugging Surface Matching problem with reading byu files that have vertices that are not part of faces. Discovered that the vertex in question in the example file is not referred to in the list of polys. Am considering writing a program that removes vertices like this from a given byu file.

Attended Software Planning Meeting.
2007.01.09
Talked to Dominic about applySimByu, so that he can apply his runSim outputs to a surface from which he got his landmarks for registration. Sent him a script that Tim sent me that performs this operation.

Worked on finding solution to the problem encountered by lddmm-surface when it reads in a template file which has vertices that aren't contained in a face.

Showed Nayoung and an intern of hers how to modify surfaces that have the unconnected-vertices problem described above, so that they can be run through lddmm-surface.

Looked further at lddmm-dti-vector to determine why it crashes on x86_64. Discovered that if I link the program dynamically vs. statically, it doesn't crash.
2007.01.08
Tested lddmm-similitude, lddmm-landmark, lddmm-surface, lddmm-dti-vector, and lddmm-dti-tensor. Everything worked as expected, except that lddmm-dti-vector crashes on a call to gethostid from the license library. This call works in all the other executables and on all the other architectures. Did some preliminary testing and some recompiling but solved nothing.

Updated lddmm webpages to reflect new script names and new parameters.

Discussed with Dominic what he needed to do to register four surfaces to one. Showed him the format for landmark files required by lddmm-similitude.

Met with visiting faculty Neils van Strien at lunchtime.
2007.01.05
Tried a variety of different link commands but was still unable to get lddmmSimilitude to build. The problem appears to be that the liblapack.a installed in /usr/lib on the ia64 machine doesn't contain the implementations of the BLAS functions that are contained in the library on x86_64 and i686.

Wrote some draft scripts that take the name of a module, and check out the latest code from the repository, configure the build, then rebuild the executables for that module, then install the executables under /cis/project/software. Wrote another script that sshes into one machine of each of the linux architecture types in use here at CIS, then calls the build script to create and install executables for that architecture.

Installed version 1.6 of blitz++ for x86_64 in the project area.

Built and installed executables for lddmm-dti, lddmm-surface, lddmm-landmark, lddmm-similitude into project area for the architectures that will currently build.
2007.01.04
Finished testing lddmm-dti-vector and lddmm-dti-tensor for the new data provenance and licensing features.

Cleaned up files that don't belong in the cvs repository, then imported a new module, lddmm-dti, into the repository. It contains three subprojects, lddmm-dti-vector, lddmm-dti-tensor, and DTI_Common. The first two contain a license making project.

Created a development area for lddmm-dti under ~mbowers/projects/configResearch/installTest. Created subdirectories for i686, x86_64, ia64, and cygwin. Checked out lddmm-dti into x86_64, configured and built DTI_Common, lddmm-dti-vector, and lddmm-dti-tensor. Corrected problems associated with checking in symbolic links. Built make_license files for lddmm-dti executables. Built licenses and tested data provenance features of executables. Checked in changes and repeated the process for i686 and ia64, fixing problems associated with differences in each architecture. After fixes, went back and updated other architectures and repeated the process of configuring the builds and building the executables. Copied the executables into the /cis/projects/software/lddmm-dti area.

Installed blitz-0.9 for ia64 and cygwin in /cis/project/software/opt.

Went into my configuration area for lddmm-similitude (~/projects/configResearch/installTest/lddmm-similitude) and checked in and merged changes made to Makefile.in for the i686 and x86_64 architectures. Reconfigured and rebuilt these executables. Fixed link problems with the make_license build. Built licensing executables, built licenses, and tested executables for both architectures. Installed executables in /cis/project/software/lddmm-similitude.

Checked out and tried building lddmm-similitude for ia64 architecture. Encountered link problems apparently related to the blas library. Began researching the problem.
2007.01.03
Finished building make_license for lddmm-dti-vector and lddmm-dti-tensor. Began testing licensing and data provenance.
2007.01.02
Continued working on adding licensing and data provenance to DTI LDDMM modules. Finished adding it to lddmm-dti-vector and lddmm-dti-tensor. Began searching for data and creating the make_license files.
2006.12.29
Continued working on adding licensing and data provenance to DTI LDDMM modules. Began adding the licensing and data provenance to lddmm-dti-vector. Modified autotool files to locate and include the CISUtils and CISLicense library modules.
2006.12.28
Continued working on adding licensing and data provenance to DTI LDDMM modules. Renamed the DTI_Vector and DTI_Tensor modules to lddmm-dti-vector and lddmm-dti-tensor, respectively.
2006.12.27
Continued working on adding licensing and data provenance to DTI LDDMM modules. Cleaned up some of the autotool issues in the configuration files.
2006.12.26
Began working on adding licensing and data provenance to DTI LDDMM modules.
2006.12.22
Finished working on adding licensing and data provenance to runSim. Needed to create new autotool files for configuration and building. Created the cvs module lddmm-similitude and checked in the code. Built the executables for x86_64 and i686 and installed them under /cis/projects/software/lddmm-similitude.
2006.12.21
Began working on adding licensing and data provenance to runSim. Changed its name to lddmm-similitude.
2006.12.20
Attended a meeting in which we discussed the future plans for collaborating with Susumu on DTI Studio - lddmm projects. Anthony is setting up dtistudio.org
2006.12.19
Met with Tilak, Joe, and Dominic to discuss the performance of the SurfaceExtract function in BW, with respect to the white matter threshold parameter. We decided that, given how convoluted brain tissue is, the current BW algorithm would not give the most desired results for a researcher trying to isolate the thickness of the cortex, because it will include tissue that is associated with a different fold in the surface. The fix involves assigning each voxel to its closest vertex in the whole surface, paring away the part of the surface that is not in the mask we're using to extract, then including only the voxels assigned to the pared-down surface.
2006.12.18
Heard back from Marc V. regarding lddmm-surface issue. He said that gts will ignore vertices that are not part of a face in the surface. Tilak read the e-mail and concluded the surface might have a vertex like this. We used BrainWorks to visualize the surface in wireframe mode which made it clear where the problem vertex was. Tilak cut the vertex out of the surface, and we re-ran lddmmSurface, and the program ran properly.
2006.12.15
Attended meeting with BIRN administrator Mark Ferber.

Worked on solving issue with SurfaceMatching, where the program crashes given certain input surfaces. The problem surfaces have vertices that are dropped when they are read in via a byu file and converted to a gts structure by ivcon. The dropped vertex causes an initialization problem in the matching program. Sent an e-mail to Marc Vaillant describing the problem, and attached input files that cause the crash.
2006.12.14
Participated in a data provenance phone conference with MGH. Described to them what we hope to do as far as data to/from lddmm, and how our programs could be compatible with theirs. They stated that as long as we provide for a data provenance input file in xml form, and append our provenance data to it, lddmm could operate as part of the pipeline architecture they envision.
2006.12.12
Worked on problem with Surface Matching (lddmm-surface). When the input files are byu, it is possible that a vertex is dropped during the conversion to gts. Debugging output shows the gts representation of the surface contains one fewer vertex than the byu contains.
2006.12.11
Completed all available Hopkins One courses assigned to me.
2006.12.08
Finished creating Provenance class and integrated it into CISUtils. Added printAllInfoXml to lddmm modules. Finished gcc builds and rebuild and redeployment of lddmm modules.
2006.12.07
Worked on gcc build of lddmm on ia64. Had to modify all the autoconf input scripts, and rebuild all libs used by lddmm modules (ivcon, vxl, gts, CISUtils, CISLicense).
2006.12.06
Spent time working on getting a build of lddmm-surface that is static and doesn't rely on shared libs.
2006.12.05
Worked on printAllInfoXML.
2006.12.04
Worked on getting the example data into /cis/project/software for lddmm modules.
2006.12.01
Cleaned up webpages for lddmm-surface and lddmm-landmark. Added allInfo option to manual, added some sigma determination tips to the faq.
2006.11.30
Made license files for lddmm-surface.
2006.11.29
Added the "Min Distance to Surface Mask" command to scripting. Asked Dominic for some test data to verify the implementation.
2006.11.28
Made license files for lddmm-landmark.
2006.11.27
Attended the Software Strategy meeting to discuss CAWorks.
2006.11.22
Got lddmmLandmark to compile on ia64.
2006.11.21
Finished -allInfo and the CISUtils library, at least for x86_64.
2006.11.20
Began working on --allInfo for lddmm modules. Creating a CISUtils library that will contain (to start) the allInfo code which can be used for anything we send to BIRN.
2006.11.17
Built vxl-1.6 by setting CXX=icpc and CXXFLAGS=-cxxlib-icc before ccmake.
2006.11.16
Had difficulty getting ccmake to accept that I wanted icpc as my compiler for the ia64 build of vxl-1.6. It continues to use gcc or c++.
2006.11.15
Worked on building ia64 version of lddmm-landmark. Having difficulty building the module since migrating to ia64.cis.jhu.edu vs. moros. The compiler is new, ccmake was absent, etc. Decided to rebuild vxl because version 1.6 is out.
2006.11.14
Checked in the BW standalone library, sent an e-mail to the BW user's list about it.
2006.11.13
Wrote a standalone program that can use the BW scripting library to run scripts.
2006.11.10
Began working on creating a separate library of BW scripting functionality.
2006.11.09
Added a class, OpMessage, which provides more flexibility in operator messaging. The default op msg will be putting out a string to standard output, but this can be overridden by the usual message box output, GenericWorkProc.
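
Roughly the shape of the idea (simplified; the checked-in OpMessage differs in detail):

#include <iostream>
#include <string>

class OpMessage {
public:
    virtual ~OpMessage() {}
    // Default behavior: print to standard output (suitable for scripting runs).
    virtual void post(const std::string& msg) { std::cout << msg << std::endl; }
};

class GuiOpMessage : public OpMessage {
public:
    // The GUI build overrides this to route through the usual message box
    // (GenericWorkProc in BrainWorks).
    virtual void post(const std::string& msg) { /* show message box */ }
};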

Created a command line option for the BrainWorks executable that the user can use to specify a script file, which is then processed, and the program exits without ever having brought up the GUI.
2006.11.08
Began some clean up of the Brainworks scripting code. Began the process of pushing out all references to GUI code used by scripting. Modified all scripting code to use data structures from the library, vs. classes like LoadedVolume and LoadedSurface, which contain many GUI references. Verified memory cleanup in the code.
2006.11.07
Copied existing executables and test data into the /cis/project/software/lddmm-landmark area.

Renamed surfacematching to lddmm-surface in the CVS repository. Modified repository housekeeping files so that the history would be retained. Added a tag to preserve a snapshot of the module after this transition. Renamed the main program and modified the Makefiles to reflect this. Added licensing to the module, and created a make_license program in a separate subdirectory.
2006.11.06
Worked on building lddmmLandmark for all architectures. Was able to build it for all but ia64.
2006.11.03
Added _contactInfo to CISLicense settings. Checked it in and recompiled for all our platforms.

Began working on birn deliveries of lddmm-landmark and lddmm-surface. Added _contactInfo to lddmm-landmark, added -v and -h options. Checked code in.
2006.11.02
E-mailed Steve Pieper about how Slicer supports landmarks in XML. He replied that they use MRML format, and that landmarks are called "fiducials". The name of the data section is "FiducialList".
2006.11.01
Met with Joe to discuss VTK XML and how we will support it for lddmm modules.
2006.10.31
Discussed with Tim and Anthony how our Licensing scheme works and how it can be used in a supercomputing environment.
2006.10.30
Began looking at an XML format for landmarks.
2006.10.27
Checked in SvSStats command.
2006.10.26
Worked on adding SvSStats, a script command to implement the DottySurf class processing from a script. Required the separation of DottySurf into a GUI and a computational component (the comp. component forming a base class for the GUI component).
2006.10.25
Finished adding the ability to apply landmarks on one surface to any given surface of the same size. Checked in the modifications for that and for multiple volume file loading.
2006.10.24
Checked in the 3D2D lmk functionality.

Worked on adding the ability to apply landmarks on one surface to any given surface of the same size (i.e., let the scripter use two byu files for input).

Provided Jun Ma with data to use as input to his program which computes a template based on several volumes. The data is Birn Sasha hippocampus data (101 left and 101 right hippocampus volumes). Jun's program needed the files to all be in the same directory, and have a certain naming convention. I wrote a script to copy them to a designated area, with the desired filenames.
2006.10.23
Tested the 3D2D lmk functionality.
2006.10.19
Pulled a function for loading flat maps out of LoadedSurface (which contains a number of GUI functions I don't want in scripting) and created a new class file that contains a flat map data structure and this function. Used this class in the 3D2DLmk scripting and in LoadedSurface. Finished the 3D2DLmk functionality.

Implemented the multiple file selection capability for Volumes. Began looking into making the file list in the file selection dialog resizable.
2006.10.18
Modified the form that performs 3D2DLmks in BW. Created a base class that performs the core functions of 3D2D, called SurfLmk3D2DBase. SurfLmk3D2D is now derived from this class, which has no GUI functionality.
2006.10.17
Began implementing the 3D2DLmk command in the BrainWorks scripting functionality.
2006.10.16
Finished working on multiple surface file selection. Tested the functionality, checked in the changes, checked the changes out into my build area, built a new executable and replaced BrainWorksTest.
2006.10.13
Continued working on multiple surface file selection.
2006.10.12
Began working on multiple file selection in BrainWorks so that users can open more than one surface file at a time from the File->Load Surface menu item.
2006.10.11
Finished working on the documentation for the CIS Licensing library in the CIS Internal webpages (in /cis/local/www_cis/www_cis_ssl/software/libraries).
2006.10.10
Began working on the documentation for the CIS Licensing library in the CIS Internal webpages.
2006.10.09
Moved the lddmm-landmark module into the CIS cvs repository.
2006.10.06
Instead of attempting to rebuild the landmarkmatching module in the cvs repository, I have created a module with the more correct name lddmm-landmark. A number of file names and autotool (configuration) files had to be modified. I have been using a local cvs repository for these experiments.
2006.10.05
In the process of conducting cvs experiments to see if I could relegate the license builder executable to a separate branch repository, I tried to delete a branch accidentally added to the actual CIS cvs repository. This deletion caused the unintended removal of several files from the module. I saved a copy of the latest version of the code before attempting this operation, but the revision history may have been lost. I will try to restore these.

Finished the branching experiments successfully.
2006.10.04
Attended our IT meeting. Discussed a grant proposal for BW maintenance.

Finished implementation of CISLicense for lddmm-landmark. Checked these changes into cvs.

Created (via kdevelop) a subproject of lddmm-landmark to contain the license building executable target. Wrote this program and tested it out and it worked.

Began cvs experiments to see if I could relegate the license builder executable to a separate branch of the lddmm-landmark module in the cvs repository, to make it possible to share the source code, but not the code for making a license.
2006.10.03
Modified the BW Makefile to use CISLicense. Made the modifications and verified that my local copy worked okay. Replaced the executable used in bwtest, but experienced problems using the license file pointed to by BRAINWORKS_DIR. Some code modifications had to be made to account for the differences between C file pointers and the fstreams I used. After verification, checked in the mods for the new licensing scheme and replaced /usr/local/BrainWorks/BrainWorksTest.

Installed fftw in /cis/project/software/opt and modified the BW Makefile to find the lib there instead of in my local directory.

Began implementing CISLicensing in lddmm-landmark.
2006.10.02
Finished the implementation detailed below and checked all the code into a cvs module called CISLicense.
2006.09.29
Decided to separate the CIS Licensing classes the following way:
2006.09.28
Finished implementing the use of CIS Licensing in BW.
2006.09.27
Finished the LicenseValidator class and began working to implement the functionality in BW.
2006.09.26
Finished the Singleton class LicenseValidator and began working on creating CISLicense, a class of static data and functions which provide the settings and functionality for licensing.
2006.09.25
Continued working on CIS Licensing.
2006.09.22
Continued working on CIS Licensing class. I have decided to build a Singleton class for the user, and this will have behind it a utilities class consisting of static members (data and methods) that will perform most of the CIS Licensing functions. This class can also be used by the utility that will create the licenses.
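
A minimal sketch of the Singleton part (the static-utilities layer and the real license checks are omitted; names are illustrative):

class LicenseValidator {
public:
    static LicenseValidator& instance()
    {
        static LicenseValidator theInstance;   // created once per process
        return theInstance;
    }

    // Return true if a valid license file authorizes the named module.
    bool moduleAuthorized(const char* /*moduleName*/) const
    {
        // Real code opens the license file and checks the host id (and,
        // later, the expiration date); stubbed out here.
        return true;
    }

private:
    LicenseValidator() {}                        // construct via instance() only
    LicenseValidator(const LicenseValidator&);   // not copyable
    LicenseValidator& operator=(const LicenseValidator&);
};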

Tim wanted the deformation timesteps to be 20 vs. 10 for the trajectories, so I reproduced the surface matching and created the byu files (via shell script this time) for that.
2006.09.21
Continued working on CIS Licensing class. I hope to make a Singleton pattern class that a user can instantiate to ensure a single instance of the CIS Licensing class is created throughout program execution. This object will be used to verify that a valid license file exists, and that the user only accesses modules he is authorized to use. Did a bunch of experimentation to figure out how the Singleton could be implemented to meet the requirements of CIS licensing.

Modified LDDMM-Surface to write out trajectories for each time step of the optimized deformation. Produced byu files for each of the timesteps on a matching between two model hippocampi.
2006.09.20
Read Joe's grant proposal for his surface library. Discussed it at our (mostly) weekly IT meeting - particularly the aspects of tying the development to acceptance of the commercial third-party software package Amira.

Also discussed stream type interface for CIS lddmm modules to be used by collaborators such as Susumu Mori to hook lddmm into DTI Studio.

Also discussed CIS licensing issues. Began work on writing a CIS Licensing class library that our researchers here can use to control who executes their binaries (and where). Basing the code on the licensing of BrainWorks.
2006.09.19
Produced a byu file of a red blood cell surface for Zirong from a file that was in a different surface format.

Finished Tensor3D. Continued to work on utility functions required for the calculation of the data attachment term for unlabelled point matching. I think the best way to do this is to include the blitz++ package into the code. Blitz++ gives a lot of useful indexing, slicing, and other multi-dimensional capability that I could otherwise spend a lot of time building into the lddmm-framework.
2006.09.18
Continued working on Tensor3D and data attachment for Unlabelled Points.
2006.09.15
The LDDMM framework needs a usable 3D data structure so I created Tensor3D. It is based on Marc Vaillant's DblTensor from his Surface Matching code, except that it is derived from vnl_matrix, and it is a template that can use any (numeric) data type.
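
In rough outline (a guess at the interface; here the third index is simply flattened onto the vnl_matrix columns):

#include <vnl/vnl_matrix.h>

template <class T>
class Tensor3D : public vnl_matrix<T> {
public:
    Tensor3D(unsigned nx, unsigned ny, unsigned nz)
        : vnl_matrix<T>(nx, ny * nz), _ny(ny), _nz(nz) {}

    // 3D indexing layered over the underlying 2D vnl_matrix storage.
    T& operator()(unsigned i, unsigned j, unsigned k)
    { return vnl_matrix<T>::operator()(i, j * _nz + k); }

    const T& operator()(unsigned i, unsigned j, unsigned k) const
    { return vnl_matrix<T>::operator()(i, j * _nz + k); }

    unsigned ny() const { return _ny; }
    unsigned nz() const { return _nz; }

private:
    unsigned _ny, _nz;
};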
2006.09.14
Decided to work on building the overall framework of Joan's LDDMM code vs. all the different options. For starters I hope to get landmark matching and unlabelled point matching working using the adaptive descent optimizer. Started working on the data attachment calculation code and its supporting subroutines.
2006.09.13
Finished working on AdaptiveDescent class. Finished AdaptiveDescentParameters class. Began working on ConjugateGradient class (another class derived from the Optimization class). Finished the Parameters class for ConjugateGradient.
2006.09.12
Continued working on AdaptiveDescent.
2006.09.11
Continued writing the Optimization class, then started on an OptimizationParameters parent class for the different sets of parameters used by the different types of optimizations. Wrote a FixedDescent class and began working on an AdaptiveDescent class.
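
In outline, the hierarchy looks something like this (simplified; the real parameter classes carry more settings):

class OptimizationParameters {
public:
    OptimizationParameters() : stepSize(0.1), maxIterations(100) {}
    virtual ~OptimizationParameters() {}
    double stepSize;
    int    maxIterations;
};

class Optimization {
public:
    virtual ~Optimization() {}
    // Each derived optimizer (FixedDescent, AdaptiveDescent, ConjugateGradient)
    // implements one descent scheme.
    virtual void minimize(double* x, int n) = 0;
};

class FixedDescent : public Optimization {
public:
    explicit FixedDescent(const OptimizationParameters& p) : _params(p) {}
    virtual void minimize(double* /*x*/, int /*n*/)
    {
        // fixed-step gradient descent loop would go here
    }
private:
    const OptimizationParameters& _params;
};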
2006.09.08
Started writing an Optimization parent class for the different optimizing algorithms Joan has in his point matching code.
2006.09.07
Read more of Marc Vaillant's code, in particular the optimization algorithm (gradient descent).
2006.09.06
Met with Jason Minion to discuss DART. Decision was made to use the LDDMM-Volume code as an initial DART experiment (in particular because it already has some test case programs written).

Worked with Sirong to help debug his ODE on surfaces code. Showed him kdevelop and how to use it. Loaded his code into a kdevelop project, compiled it in debug with optimizations turned off, and showed him how to use the debugger to find errors, the editor to correct them, and how to build the new code through kdevelop. Found and fixed a couple of errors with him.
2006.09.05
Looked through itk, with particular interest in whether the surface representation, itkMesh, would be better (easier to implement, read, understand) than the gts representation, or my own implementation. Started looking more into Marc Vaillant's Current Matching code to see how gts is woven into the program's structure. The Gradient Descent optimizing template doesn't rely on it, but the Distribution Matching class (which is the parent class of Current Match) does, so of course Current Match does also. It leaves a choice between pulling gts out of the parent class (possible by making the class of the target and atlas representations a template parameter) or creating a more faithful implementation of Joan's MATLAB code and using vnl for matrix representations.
2006.09.04
Read about various math libraries that I have been giving consideration to using for Joan's matching code, particularly blitz++.
2006.09.01
Wrote a Makefile for Sirong that compiles and links together vxl, gts, and ivcon with his code that computes ODEs on a surface.
2006.08.30
Met with Dr. Miller to discuss lddmm issues. Anqi has written a Curve Matching program that Dr. Miller wants converted to C++.
2006.08.29
Pointed Sirong to some of the vxl and gts libraries we have built under /cis/projects/software/opt.
2006.08.28
Built a 64 bit, optimized version of SubSD, the command-line BW module that computes distances between a surface and segmented voxels in a volume. Preliminary testing done by Dominic indicates that the new SubSD runs twice as fast as the previous version.
2006.08.25
Worked on LDDMM framework classes.
2006.08.24
Checked in LDDMM-Surface code modifications. Rebuilt modified code for IA64 processors.

Worked on LDDMM framework classes.
2006.08.23
For LDDMM-Surface, added access methods to set and get the initial and final cost values. Wrote them out in the call to outputResults. Did some testing on a file Tim Brown recommended checking.
2006.08.22
Looked at LDDMM-Surface to determine how to capture the initial and final cost mismatch values. Exchanged e-mails and had a phone conversation with Tim and Marc Vaillant to discuss how to do this.
2006.08.21
Gathered the results of landmark matching on the VETSA data.

Met with Sirong to discuss the ODE Solver and the related classes I created for landmark matching. He is going to use it for solving ODEs on surfaces.
2006.08.17
Worked on the LDDMM framework code in C++.

Kicked off scripts for landmark matching (with the new code to output initial and final cost mismatch values) for the VETSA data. Created scripts for Tim Brown to kick off the next night (at the beginning of the weekend) on whatever machines were available for processing.
2006.08.16
Attended a meeting with Joe H., Tim Brown, Anthony, and Dr. Miller to discuss the requirements of the next generation BrainWorks program, and how existing visualization packages provide some of the required functionality.
2006.08.15
Finished the LDDMM-Landmark updates.
2006.08.14
Started working on code modifications to LDDMM-Landmark to write out initial cost data. Worked on providing an option that causes the program to write out only the initial cost value. Modified the program so that the user no longer has to use the -d option to specify that the target is a directory of files, against each of which the template should be matched; the program now just determines whether the target filespec is a directory.
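
The directory test itself is simple (a sketch, not the checked-in code):

#include <sys/stat.h>

// True if the given path exists and is a directory; used to decide whether
// the target filespec names a directory of target files.
bool isDirectory(const char* path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return false;              // path does not exist or is unreadable
    return S_ISDIR(st.st_mode) != 0;
}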
2006.08.11
Started working on the LDDMM framework code in C++.
2006.08.10
Continued to look at a problem in LDDMM-Surface where the code asserts an error on certain template data files. Joan determined that if he ran a program that removes very small facets in the triangulation of the template surface, LDDMM-Surface will run on the previously unusable template files. He is using ivcon to read in byu surface files. I have pored over the code that creates the gts surfaces from the byu input, but haven't found any code that looks wrong, or even takes into account the quality of the triangulation. I did notice that gts seems to think the triangulation has one fewer vertex than ivcon thinks there should be, so the next thing to do is account for the discrepancy.
2006.08.09
Joan found a problem in LDDMM-Surface where the code asserted an error on certain template data files. Created a debug version of LDDMM-Surface (and ivcon.a and gts.a) to try to determine what the problem is.
2006.08.08
Found a bug in the 2D/3D Landmarking function in BW that produces an error when 3D landmarks are loaded. Built a new bwtest.
2006.08.07
Wrote the BW mailing list about the Surface Generation script function.
2006.08.04
Pointed Tim to the LDDMM-Landmark data so he could produce scatter plots of symmetry findings for Dr. Miller.
2006.08.03
Finished combining cost function results from running LDDMM-Landmark on the Botteron data.

Finished coding the "SurfaceGeneration" script function in BW. Tested it by creating a surface in the GUI, then creating the same surface via the script function, and comparing the resulting files. They are identical. Checked the code modifications in, then deployed the new executable to /usr/local/BrainWorks/BrainWorksTest
2006.08.02
Worked on combining results of running LDDMM-Landmark on the Botteron data to determine the cost function values. The resulting files are columns of a matrix of size 170x170 (for each side).

Malini requested the addition of a "SurfaceGeneration" script function in BW. Used the existing Surface Generation function (given an image and a Threshold, generate a surface) found in the Image Viewer.
2006.07.28
Modified lddmm-surface to write the cost file. Checked the code in and produced an i686 version which I copied to /cis/project/software/lddmm-surface.

Created scripts to rerun the Botteron data through lddmm-landmark to produce cost function values on all the data. Determined that about 10 CPUs could handle the processing over the weekend, so I divided the job into ten, then started it on ten processors to run until finished.

Vacation 7/31-8/1.
2006.07.27
Finished working to fix BW code that writes a carriage return at the end of every landmark label for landmark files that it reads in. Checked code in, moved new executable to BrainWorksTest.
2006.07.26
Checked in the modified lddmm-landmark code that writes the cost function out. Produced and copied lddmm-landmark executables to the /cis/software area for x86_64 and i686.

Exchanged e-mails with Marc V. to pinpoint where the cost function value could be determined in the lddmm-surface code. Made modifications to the lddmm-surface software to write this value into a file.

Began working to fix BW code that writes a carriage return at the end of every landmark label for landmark files that it reads in.
2006.07.25
Modified lddmm-landmark to write cost function at each iteration during the solution. The cost by iteration file will have extension .lci; the final cost output file has extension .lcs.

Started looking at getting the cost function value from the lddmm-surface code.
2006.07.24
Modified lddmm-landmark code to write out the cost mismatch value. Began working on writing out the cost function at each iteration during the solution.
2006.07.21
Continued working with ArgoUML to construct UML class diagrams for Joan's momentum minimization point matching program. Continued to read his code, and his description of the Matching Framework.

Met with Joan to discuss C++ design of his Point Matching code. Discussed the Fast Gauss Transform, and how it is used to quickly produce kernel sums.
2006.07.20
Continued working with ArgoUML to construct UML class diagrams for Joan's momentum minimization point matching program. Continued to read his code, and his description of the Matching Framework.

Met with Joan to discuss C++ design of his Point Matching code. Discussed the formulas for the functional that Joan minimizes to perform the matching, and the gradients for the data regularization terms, and the data attachment terms. Discussed the ODE solver he implemented to solve for the momentums.
2006.07.14
Continued working with ArgoUML to construct UML class diagrams for Joan's momentum minimization point matching program. Continued to read his code, and his description of the Matching Framework.

Met with Joan to discuss C++ design of his Point Matching code. Folded his comments back into design.

Vacation from 7/17-7/19.
2006.07.13
Continued working with ArgoUML to construct UML class diagrams for Joan's momentum minimization point matching program. Continued to read his code, and his description of the Matching Framework.

Met with Joan to discuss C++ design of his Point Matching code.
2006.07.12
Continued working with ArgoUML to construct UML class diagrams for Joan's momentum minimization point matching program. Continued to read his code, and his description of the Matching Framework.

Read chapter about the Composite pattern in the Design Patterns book. The collection of data structures that comprise the template and target objects could be represented by this pattern.
2006.07.11
Continued working with ArgoUML to construct UML class diagrams for Joan's momentum minimization point matching program. Continued to read his code, and his description of the Matching Framework.
2006.07.10
Started using ArgoUML, a java based UML design tool. Began constructing some preliminary designs based on what Joan and I had discussed.
2006.07.07
Downloaded a variety of UML design tools for the Point Matching design effort. Started out by trying StarUML. It has some strange problems with Copy and Paste operations, which gave me concerns about going forward with the design using it. Tried a program called BoumiUML. BoumiUML did not have a graphical interface that made sense to me.
2006.07.06
Started looking around for UML design tools to produce a visual design artifact for Joan's Point Matching code.
2006.07.05
Met with Joan to discuss his Matching program and how it works. It performs a general matching problem, where the data can be unlabelled points, curves, landmarks, or triangulated surfaces. It groups all the data together into one large point set, calculates the diffeomorphism, then computes the functional of each component of the matching separately. The energy function is the sum of the functionals of each of the components, plus the data regularization term (from the diffeomorphism).
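
In rough notation (my shorthand, not Joan's exact formulation), the quantity being minimized is

E(v) = \int_0^1 ||v_t||_V^2 dt + \sum_i D_i( phi_1 . X_i , Y_i )

where the integral is the regularization term coming from the diffeomorphism phi_1 generated by the velocity field v, and each D_i is the data attachment functional for one component X_i (unlabelled points, a curve, landmarks, or a surface) matched against its target Y_i.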

Met with Joe to discuss Joan's program, and how it could be wrapped into a C++ wrapper.

After considering the project of converting Joan's code into C++, I think the lddmm problem can be generalized into class components of data sets, diffeomorphisms, optimization functions, and matcher objects that contain any number of these. I began downloading UML tools to draw up some of these concepts graphically.
2006.07.03
Finished LDDMM-Landmark webpages.
2006.06.30
Worked on LDDMM-Landmark webpages.
2006.06.29
Worked on LDDMM-Landmark webpages.
2006.06.28
Looked at several publications and webpages to try to familiarize myself with the mathematical concepts involved in landmark matching, and lddmm in general.
2006.06.27
Began working on lddmm-landmark webpage.
2006.06.26
Attended a meeting to discuss CIS IT issues. Discussed a 6AM to 6PM plan for coverage at CIS. Discussed the importance of building up the CIS core value software, and Joe being involved in the process. Discussed how Joan's programs combine surface, image, point matching into one cost function.

Read a document Joan wrote to describe the lddmm framework. Looked over Joan's Point Matching code again in preparation for working with him to discuss what we want to do with his code.
2006.06.23
Vacation.
2006.06.22
Vacation.
2006.06.21
Vacation.
2006.06.20
Added the SurfaceExtractRegion function to scripting. Did some testing and built a new version of BrainWorksTest for Malini and her student to test.

Built versions of ivcon, the image file format converter, for x86_64, ia64, and i686.
2006.06.19
Worked on creating a scripting interface to the Surface Extract by Region functionality. Added a class that inherits the SurfaceExtractRegionBase class and provides a public method calculate() that sets all the inherited data members of the class, then calls the calculate method of the parent class.
2006.06.16
Vacation.
2006.06.15
Completed separating the Surface Extract by Region functionality from the GUI class that currently performs this function. Did some quick testing to make sure GUI operation still works as before with the new code. Began building in the scripting capability for this function.
2006.06.14
Began working on an enhancement to BW scripting capability. Malini has requested that the Extract Surface by Region be made scriptable. The class that performs this functionality is a GUI class, so all the functionality that performs the actual calculation has to be isolated into a base class that can be called by the GUI class and the scripting class alike. Began the process of separating out this functionality.
2006.06.13
Fixed a brainworks problem that Dominic was having with the surface cut functionality. The problem had to do with code clearing out a list of GLPrimitives and then updating the SurfaceViewer, but the SurfaceViewer was unaware the Primitives had been deleted. The fix was to make the SurfaceViewer aware the list had been deleted before the update.

Built gts-CIS in /cis/project/software for ia64 and i686.
2006.06.12
Finished working on autotool files to build a static lddmm-landmark on ia64. Copied static lddmm-landmark to /cis/project/software/lddmm-landmark. Checked in code and autotool file changes to CIS cvs repository.

Built blitz++ in /cis/project/software for ia64, x86_64. Built gts-CIS in /cis/project/software for x86_64.
2006.06.09
Began working on autotool files to build a static lddmm-landmark on ia64.

Built blitz++ in /cis/project/software for x86_64.
2006.06.08
Unable to build vxl-1.5.1 for ia64. Sent a message to the vxl-users group with the error. Built vxl-1.4.0 to link with ia64 version of lddmm-landmark.
2006.06.07
Finished building vxl-1.5.1 for x86_64, i686.
2006.06.05
Began trying to build the latest version of vxl (1.5.1) and install it in /cis/project/software/opt. Need to build versions for x86_64, i686, and ia64.
2006.06.02
Finished autotool file modifications to support an ia64 build with icpc compiler. Checked in modifications to CIS cvs repository.
2006.06.01
Merged ia64 configuration mods to lddmm-surface into cvs repository. Broke out the ivcon code into a subdirectory (it was previously a zip file). Having the code in the repository that way allows me to handle code changes more effectively.
2006.05.31
Finished the SurfStats BrainWorks script command.
2006.05.30
Continued working on merging lddmm-surface code and config changes back into cvs repository files for building on ia64 architecture.

Started working on a new BrainWorks script command: SurfStats. The command writes the surface area and volume of a surface to a text file.
2006.05.24
Began working on merging lddmm-surface code and config changes back into cvs repository files for building on ia64 architecture.
2006.05.23
Finished building lddmm-dti for IA64. Copied the statically linked executables into /cis/project/software/DTI/ia64. Noted the new architecture on the lddmm-dti webpage. Checked in modifications to the code and the autotool input files for the ia64 architecture.
2006.05.22
Continued working on lddmm-dti on IA64. Rebuilt module and third party libs using g++.
2006.05.19
Worked on building lddmm-dti on IA64. Attempted to build third party software (blitz++) and CIS code using the icc compilers but the linker reports a large number of undefined references.
2006.05.18
Finished the CIS modules time line.
2006.05.17
Began working on a timeline for the schedule of the configuration effort of the CIS modules. The timeline will include the configuration of existing lddmm modules and the deployment of Third Party libraries used by CIS.
2006.05.16
Finished building the lddmm-surface modules for i686, x86_64, and cygwin. Copied them to /cis/project/software/lddmm-surface and modified the lddmm-surface webpage to tell users of these copies.
2006.05.15
Provided distributions of gts (modified by Marc V.) and ivcon to Zhang.
2006.05.12
Continued producing statically linked lddmm-surface executables to add to the CIS software area for public use. Created a version for the i686 architecture. Had problems with ar in rebuilding the ivcon library on cygwin. Tried to uninstall and reinstall cygwin but the problem continued. Had to copy the object files used to build the libraries onto the C: disk so ar could handle them. It's a little complicated, but apparently ar can't handle input and temporary files on different file systems (the U: drive contains the object files while ar uses the C: drive for temporary files).
2006.05.11
Began producing statically linked lddmm-surface executables to add to the CIS software area for public use. Created a version for the x86_64 architecture.
2006.05.10
Introduced Yan to cvs and kdevelop. We went over checking out a module, using configure to create the build environment, modifying code, and checking in changes.
2006.05.09
Officially released lddmm-dti for CIS use.
2006.05.08
Created Doxygen documentation on lddmm-landmark and lddmm-surface for Tilak.

Copied lddmm-dti executables to the cis software area for x86_64 and i686 architectures.
2006.05.05
Finished lddmm-dti webpages.
2006.05.04
Worked on lddmm-dti webpages.
2006.05.03
Checked out DTI source code to verify build. Worked on mods for building on a 32 bit machine.

Began working on lddmm-dti webpages.
2006.05.02
Finished the code configuration process on DTI_Vector. Wrote autoconf input and Makefile.in for use in configuring builds of DTI_Tensor and DTI_Vector. Checked in initial version of code.

Spoke with Joan Glaunes at length about Point Matching. He is going to send me a recent version of his matching routines (MATLAB) that have a lot of obsolete code removed from them.
2006.05.01
Finished beautifying DTI_Tensor Code. Worked on a Makefile to build the code.
2006.04.28
Began the code configuration process on DTI_Tensor.
2006.04.27
DTI Vector and DTI Tensor are directories that contain some of the same code files. I created a new directory (DTI_Common) and copied the common files into it. I started configuring the code according to the CIS practice. I wrote a sed command script file to properly space the code. I wrote a configuration file for the DTI_Common library.
2006.04.26
Met with Dr. Miller, Joe, Tim, and Anthony about strategy. Dr. Miller said that it was very valuable to have the various lddmm modules available to CIS researchers in a simply usable way. My responsibility is to provide a command line interface to the various lddmm modules that I work on. I met with Yan to discuss her DTI programs, and started the code configuration process on her code. She has written DTI vector and DTI tensor programs which perform lddmm matching. I was able to get the DTI vector code to build and run on my machine.
2006.04.25
Continued to work with Anqi's code. Built a 32 bit version of "boost". Was able to build the program on a 32 bit machine (omega) and it ran properly. Something about the code is incompatible with 64 bits. I built a static version (used the -static switch during the link) so that Anqi can use the code on different machines at CIS.
2006.04.24
Checked the new landmark matching code into the CIS cvs repository.

Began working on Anqi's surface eigenvalue code. Had to build the "boost" libraries. Was able to build the code on my machine in 64 bit. It failed to run correctly. GTS is reporting errors in the construction and manipulation of some GTS objects. Ran the program in debug mode and noted the errors were initially reported in a class called the GtsAdapter, which Anqi wrote to interface with GTS.
2006.04.21
Finished the directory option for landmark matching. Added code to write two output files: one for the distances and one for the input file names. A zero is written into the output file if the template file name is in the directory of target files. Copied an executable into an area that Tim can read.
2006.04.20
Worked on further improvements to the landmark matching program to support Tim. He has a directory for each subject, and in the directory is the runsim output between the subject and every other subject in the study. I am working on modifying landmark matching to allow the user to specify a file name and a directory name, and matching will be performed between the specified file and every file in the directory.
2006.04.19
Began working on improvements to the landmark matching program to support Tim's VETSA effort. Added the ability to read landmark files in the runsim output format.
2006.04.18
Modified lddmm-landmark to reflect what I discovered about stdarg. The problem call was changed to pass a pointer instead of a reference for the first parameter. The program runs considerably slower on moros (36 s) than on my machine het (15 s), but this is in debug mode, so an optimized build may perform better.

Checked in several changes that were not related to getting the program to compile with icc on ia64. Decided to add the ia64 version to cvs on a separate branch, based on the belief that the ia64 may have a limited lifetime.

Worked with Anqi for a while to try to help her get her surface eigenvector program built and operating properly. She has left a copy of the code in ~anqi/Programs for me to continue to work on.

Spoke with Yan Cao about her providing her DTI Tensor and DTI Vector for me to look at and incorporate into our mapping structure. She copied the code to ~yan/outgoing.
2006.04.17
The stdarg test program that I worked on showed that using a reference as the first argument (the only named one) in a call to a variable parameter list function causes the rest of the args to be read incorrectly. This isn't a problem on programs compiled with gcc, whether on ia64 or x86_64. Tarred my results and sent them to my vxl-users correspondent.
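
A small illustration of the pitfall (the State type and the exact signature here are made up):

#include <cstdarg>
#include <cstdio>

struct State { int id; };

// Problematic form: a reference as the last named parameter.
// void createState(const State& s, ...);   // va_start(ap, s) is undefined in C++

// Safe form: pass a pointer instead, as in the fix described above.
void createState(const State* s, ...)
{
    va_list ap;
    va_start(ap, s);
    int extra = va_arg(ap, int);   // subsequent arguments now read correctly
    va_end(ap);
    std::printf("state %d, extra %d\n", s->id, extra);
}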
2006.04.14
Made more efforts to get stdarg to work properly. Received a response to my post on vxl users asking that I provide a minimal test program to demonstrate the problem. Began working on that code.
2006.04.13
Tried a number of different means to get the call to createState, which has a variable parameter list, to work properly. I posted a message to the vxl users group (since I was using vcl_cstdarg.h) to ask for help.
2006.04.12
I built lddmm-landmark on x86_64. It runs successfully. I began running the program on moros in debug mode using ddd, a graphical debugger. It appears that the program crashes while trying to pass variables in a variable argument list.
2006.04.11
The version of lddmm-landmark compiled with the original vbl_array_1d code crashes. Since I do not have kdevelop on the cluster I decided to fold the changes into the x86_64 version and test on het. I needed to rebuild the original version of vxl for x86_64.
2006.04.10
During the landmark matching effort I made mods to a vxl class vbl_array_1d to make it work more like vbl_array_2d. I decided it would be a good idea to use vbl_array_1d as distributed in the library, so I made changes to the landmark matching code to handle the standard version.
2006.04.07
After struggling this week with building Third Party code and CIS code on ia64, I discovered that most issues were solved by setting the following environment variable before running configure or CMake. Generally this forces compilation to look at the Intel libraries vs. the gcc libraries. This clears up a lot of undefined reference problems with the std namespace.

Other undefined symbol problems are addressed by linking in libcxa and libcprts.
2006.04.05
Began working on building lddmm-surface and lddmm-landmark on the cluster. Needed to rebuild vxl, which didn't exist on the cluster. The ia64 binaries are not available for CMake 2.2, so I tried to download and build it from source but had numerous problems. Anthony downloaded the 2.0 binaries, so I was able to build the newer CMake from source by using the older version to configure its build.
2006.04.04
Continued working on modifications to BW requested by Tim Brown. Was able to order the LM list by means of setting the index of the push button when it is added to the container. Repeatedly running this operation, however, causes crashes in the software which I have not been able to fix.

Continued to test and work on adding multiple surfaces to the Surface Editor for Dominic's work. This is also causing crashes during operation, which I have not been able to figure out.
2006.03.31
Had a meeting with Dr. Miller, Anthony, Tim, Tilak, Joe Hennessey. Discussed the next generation of Brainworks software. Joe has the responsibility to develop the next generation of Brainworks software. My responsibility will be to provide as modules the lddmm programs written by CIS researchers. Joe will specify the interface to which these modules will conform.
2006.03.30
Downloaded and built open source software called load5, which allows a user to perform partial loads on MATLAB files. Malini found the module on the internet to help Neeraja load large image files - which were taking a long time to read, but from which Neeraja only wanted a small region of interest.

Began working on modifications to BW requested by Tim Brown. I was able to add a confirmation dialogue to protect against unwanted clearing of all landmarks in the Surface Landmarking module. Began working on allowing the user to insert Landmarks into the lm set at any position desired, versus only at the end of the list. None of the structures used in the Add Landmarks code support insert operations, so this had to be built in. I have not yet been able to reorder the LM list that appears in the GUI.
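
As an aside, a minimal sketch of the kind of arbitrary-position insert that had to be added - the Landmark struct and container here are placeholders, not the actual BW classes:

  #include <cstddef>
  #include <vector>

  struct Landmark { double x, y, z; };   // placeholder for the BW landmark type

  // Insert a landmark at an arbitrary position; out-of-range positions append.
  void insertLandmark(std::vector<Landmark>& lms, std::size_t pos,
                      const Landmark& lm)
  {
      if (pos > lms.size())
          pos = lms.size();
      lms.insert(lms.begin() + pos, lm);
  }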
2006.03.28
Continued working on Surface Editor.
2006.03.27
Began working on the capability to add additional surfaces to the Surface Cut module of BrainWorks (a Tilak request on behalf of Dominic).
2006.03.24
Executing and observing Joan's Point Matching MATLAB code to try to determine how it would best fit into the existing Matching code structure.
2006.03.22
Final testing of scripting, then examination of code mods in tkcvs, then checked in the script functionality. Informed BW users of new capability via the BW mailing list.
2006.03.21
Two days testing of BW script functionality.
2006.03.17
Folded Antsy3P back into the BW scripting code I had been working on. Cleaned up code for other BW script commands, and added boolean variables to circumvent GUI calls that we don't want being executed while a script is running.
2006.03.16
The full-blown Antsy program can be broken down into:
  1. Resegment the image from 5 to 3 peaks.
  2. Create an smap (unless already given one)
  3. Write out the distance files for the different tissue types.
I began working on simple programs that do those steps separately, in the hopes of making them simple enough to fold back into BW as a new scripting capability. Finished up Antsy3P and ResegMRB. Began folding this functionality into my existing BW scripting efforts.
2006.03.15
Tilak gave me a program that Clare had written (in idl) where if the user inputs a 3 peak segmentation and an smap file (put out from SD) the Antsy processing is very simple. I created a new directory /cis/project/bwwrap/Antsy3P to implement this program.
2006.03.14
I copied the old /cis/project/bwwrap/Antsy directory into Antsy5or3 to begin working on fixing the 3 peak Antsy problem. I cleaned up the code and began looking into fixing the problem.
2006.03.13
Tilak, Malini, and I spent a while discussing how the Antsy command line program does/doesn't work with a 3 peak segmentation. I created a version of the code which could be run in the debugger and Malini and I stepped through some of the code to determine what problems there were.
2006.03.09
Set up a kdevelop project to add Point Matching to the existing Landmark Matching project. Called it AllMatching. Wrote a class to contain the parameters needed for Point Matching.
2006.03.07
Malini had previously written to the BrainWorks mailing list about a problem the program had with displaying Landmark data. It consistently displayed the landmarks in the wrong slices in the image viewer. I made modifications to the code so that the landmarks were displayed properly, then checked in my fixes for this problem, and the ones noted in the list below.
2006.03.06
The problem noted by Dominic is caused by an X windows call with an illegal parameter. This problem needs to be hunted down and fixed, but for now it was observed that the problem doesn't occur on machines running CentOS, so Dominic can continue his efforts on those machines for the time being.

In the matrix of metric distances that I created for the Botteron data (left side), there appears to be a row of data that lies well outside the normal range of distances (row 78). I scanned the landmark values and noted that the L1 and M1 landmarks appear to have been switched. I modified existing perl scripts to create MATLAB scripts to generate the values for row and column 78 (since the matrix is essentially symmetric). I ran the scripts and created the required metric distance data. I created a new distance matrix with the new data and mailed the results to Tim Brown.
2006.03.03
A student (Dominic) was having trouble with BrainWorks crashing when he would do the following on hpmarc:
  1. Load an image file.
  2. Load two surface files.
  3. View one of the surfaces, then close the viewer.
  4. Try to view the other surface.
To try to debug this problem I built BrainWorks (with debugging information) on hpmarc. While running the program, I noticed and fixed the following issues:
2006.03.02
Continued studying Joan's Point Matching code.
2006.03.01
Compiled a debug version of the Reseg command line program, and stepped through its execution. Tilak noticed that the PV1 and PV2 variables were of type integer. Since the values are between 0 and 1, the integer type truncated them, which is what caused the problem. The problem existed in the Antsy program too. Created new directories for the two programs, fixed the problems in the code, and recompiled the executables.
2006.02.28
Worked with Tilak to produce phantom data sets to try to determine why some data he is preparing for a proposal didn't look right. After going over the data and looking through BW code, Tilak thought it would be best to produce test data sets. Compiled existing phantom data set code which produces a variety of shapes, then wrote one which produces an isocontour that is sin(x)*sin(y), to test Tilak's theory that the bad numbers are produced by the highly convoluted surface of the brain.
2006.02.24
Looked through BW code to answer some questions Lei Wang had about the blurring function.

Continued studying the Point Matching code.
2006.02.22
Spent the day attempting to reproduce two slides that Dr. Miller used in his presentation. I was unable to locate many of the data files referred to in a number of the scripts in /cis/project/wholebrainmapping/MATCH. I suspect that they reside on a laptop that Malini uses. I began trying to reproduce these data files, but I had to download and build a program called qslim in order to produce a downsampled byu file that I could work with effectively in the MATLAB OpenGL renderer. I was finally able to reproduce images that looked something like what I needed, but I was unable to determine where the correct data set was in time to modify Dr. Miller's slides.
2006.02.21
Continued looking at Joan's Point Matching code and created an html document to write down observations of what I saw, as I did with Landmark Matching.

Attended a dry run of the NIH on-site visit. Dr. Miller wanted to modify some slides of his so I began looking into how those slides were created so that I could make the desired edits.
2006.02.16
Did new performance testing on the vectorized version of the code and determined that the processing steps that take the most time overall are the matrix multiplies inside the calculation of the next variation of the landmarks and momentums. These multiplications take place inside calls to the vnl fastops class, so they probably can't be coded any faster. The only possibility for performance improvement would be to check the data matrices for sparseness, diagonality, identity, symmetry, etc., and make adjustments to the processing if that is the case.

Checked in the latest version of the landmark matching software.

Began looking at Joan's Point Matching MATLAB code. The code resides in /cis/project/wholebrainmapping/MATCH. Malini says that the code uses C++ modules that Joan links in.
2006.02.14
Implemented all the vector data in the integrator as vnl_vectors. Modified code to use vector calls instead of explicit loops.
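
A minimal sketch of the kind of change, assuming vnl_vector<double> data; the function names are illustrative only:

  #include <vnl/vnl_vector.h>

  // Explicit-loop version: y = y + a*x, element by element.
  void accumulateLoop(const vnl_vector<double>& x, double a, vnl_vector<double>& y)
  {
      for (unsigned i = 0; i < y.size(); ++i)
          y[i] += a * x[i];
  }

  // vnl version: the same update expressed with vector calls.
  void accumulateVnl(const vnl_vector<double>& x, double a, vnl_vector<double>& y)
  {
      y += x * a;
  }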
2006.02.13
Made modifications to the Function method of the Integrator so that it no longer does a complete copy of the input and output state for each function call.

Made modifications to the ShootLandmarks class so that the space for the input and output components of the state vector don't need to be reallocated on each function call.
2006.02.09
Finished modifying ShootLandmarks to try to save memory allocations. The improved program required about 22 seconds to process the test data, a savings of approximately 10%. The memory allocations went from 2Gb to about 275Mb. Checked in the modified code.
2006.02.08
Produced scripts so that a user can check out landmark matching, then configure and build the optimized and the debug profiles of the code without using kdevelop. Checked in these scripts as well as the test data I have been using. Also checked in the results of the valgrind memory allocation statistics so I can keep track of how attempts to save memory allocation operations are progressing. The first cut of the code resulted in nearly 2Gb of data allocation.

Began to make an effort to use vxl more efficiently (i.e., save on memory allocations during matrix math operations). Started modifying ShootLandmarks code.
2006.02.07
Produced an optimized build target for Landmark Matching. It runs in almost exactly the same time as the MATLAB code (about 24.5 seconds on the test data I've been using) and produces the same results.

Checked out a utility called valgrind which is mostly used to check for memory leaks in C++ programs running on LINUX. The one we have compiled in /usr/bin is an old version that doesn't work on 64 bit executables. I downloaded the source distribution for a more recent version and built it locally on my machine. The utility found an issue with a variable being referenced before being initialized (a variable I added for debugging purposes). It is flagging some memory allocations in some of the vnl and vbl class destructors, but they look correct to me. I will do some more testing to see if perhaps the memcheck utility is having trouble following the destructions.

Cleaned up the landmarkmatching directory. Checked in the initial version of the code into our CVS repository.
2006.02.06
The C++ version of the Landmark Matching code produces the same Metric Distance and energy values as the MATLAB code on the Landmark Set I've been using.
2006.02.02
Debugged the problem with the integrator. The code matches the MATLAB code results through the svd calculation and the determination of the new momentum.

Wrote a function to put out a formatted version of the state vector expected by ShootLandmarks.
2006.01.31
Looked at intermediate results of ShootLandmarks which looked okay. Wrote a function to create a default identity 4D matrix, since this is the input for the first iteration of integration. Straightened out some problems with the integration and looked at the ode solver results. The landmark data looked perfect and the momentum data was correct (all zeros). The variation in momentum data was all zeros and shouldn't have been, and the variation in landmarks data was not right but close. Need to look at why the intermediate results look okay but the final results are no good. But the fact that two of the four components of the state vector look correct is a good sign that the integrator is working...
2006.01.30
Needed to specifically write the << operator for the dVectorMatrix class (which is actually vbl_array_1d<vnl_matrix<double> >, a template class built from a template class type) because the existing one wouldn't work (undefined reference in link). Did the same for dMatrixMatrix.
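
A minimal sketch of what such an operator can look like, assuming vbl_array_1d exposes size() and operator[]; this is illustrative, not the checked-in code:

  #include <iostream>
  #include <vbl/vbl_array_1d.h>
  #include <vnl/vnl_matrix.h>

  typedef vbl_array_1d<vnl_matrix<double> > dVectorMatrix;

  // Print each matrix in the array, one after the other, with its index.
  std::ostream& operator<<(std::ostream& os, const dVectorMatrix& a)
  {
      for (unsigned i = 0; i < a.size(); ++i)
          os << "element " << i << ":\n" << a[i] << '\n';
      return os;
  }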

Found what I believe is an error in the reserve function in vbl_array_1d. Joined the vxl-users mailing list and reported the problem. I haven't gotten a response to the post, but I changed my code anyway.

Began debugging the ShootLandmarks function that is being integrated.
2006.01.27
Completed writing the code and have started looking into testing. Malini and I determined how to step through MATLAB scripts a line at a time. Began working on adding lines to the C++ code to put out intermediate results. The goal is to step through the MATLAB and C++ code side by side, at least where the two programs are similar enough to expect similar results at intermediate points.
2006.01.25
Looked up some lddmm functionality for Runge Kutta integral estimation. The lddmm file, FluidMatchRungeKuttaInterpolateMap.C, implements a fourth order Runge-Kutta method on field data. There are no matrix container classes, and it appears to resolve the question of whether the data is 2 or 3 dimensional at compile time. I think this code may be general purpose enough to support a fourth order RK integration of a given function.

This code is not actually referred to by any other code in lddmm. I am building a general purpose ODE solver package based on vxl. Specific types of integrators, such as RK45 are derived from this. It is possible that a fourth order RK integrator based on the lddmm code could be built into the framework. I am not sure that it would have advantages over the RK45 that I am building. When the complete Landmark Matching package is published, I can look into using the ODE solver class in various places in lddmm.
2006.01.24
Wrote a Change Log to detail my efforts on the creation of the C++ version of Dr. Younes' Landmark Matching code. The Change Log starts at roughly the beginning of the year, so previous entries in this log may overlap with the Change Log. Future development of that code will be logged there and not on this page...
2006.01.23
Wrote a function for converting ode output into the format required for the svd in the newtonLandmarksMeta method. Used the vnl class vnl_svd for the svd calculation on the "variation in momentums" data.
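
A minimal sketch of the vnl_svd usage pattern; the matrix name and wrapper function are illustrative, not data or code from the run:

  #include <vnl/vnl_matrix.h>
  #include <vnl/vnl_diag_matrix.h>
  #include <vnl/vnl_svd.h>

  // Decompose a matrix of momentum variations as M = U * W * V^T and
  // return the number of nonzero singular values.
  unsigned decompose(const vnl_matrix<double>& dMomentum)
  {
      vnl_svd<double> svd(dMomentum);
      vnl_matrix<double>      U = svd.U();   // left singular vectors
      vnl_diag_matrix<double> W = svd.W();   // singular values (diagonal)
      vnl_matrix<double>      V = svd.V();   // right singular vectors
      return svd.rank();
  }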
2006.01.20
Spent more time studying newtonLandmarksMeta.m and how it uses the output of the ode solution for landmark matching.
2006.01.19
Began working on coding the newtonLandmarksMeta method into the LandmarkMatch class. Coded the initial call to the ode solver, studied conversion of the output into a format for singular value decomposition.
2006.01.18
Finished reading newtonLandmarksMeta.m to make sure all the data objects required are in the LmkMatchingParameters and LandmarkMatch classes. Moved the counter of singular values in the matrix of momentum variation values from LmkParameters to the LandmarkMatching class itself. Changed the _momentumOverTime and _trajectories variables to be vectors of matrices, where time is the vector index.
2006.01.17
Finished writing the system of differential equations solved during Landmark Matching.

Combined runMatching.m and landmarkMatching.m into a nested loop in a single method LandmarkMatch::run(). Began reading newtonLandmarksMeta.m to make sure all the data objects required are in the LmkMatchingParameters and LandmarkMatch classes.
2006.01.16
Looked over the BW code to try to determine why command line performance is not as good as from the GUI. The code is essentially the same. The two possibilities I can see are that the libraries that are linked in to my GUI version are 32-bit libs, and for the command line version they are 64-bit libs. Another thing that could be looked at is whether some processing is done on Surfaces when they are loaded through the SurfaceLoading class (as done in the GUI version) vs. loaded directly from a file (as in the command line version).

Continued study of Laurent's Landmark Shooting equations. Began writing the code that calculates some of the factors in the equation that are used over and over.
2006.01.13
Continued study of Laurent's Landmark Shooting equations.

Worked with Lei to try to determine why the command line version of the Surface-Surface Distance Extract function was so much slower than the GUI version. We checked to make sure that the functionality was the same, and that all the compiler options (particularly the optimization settings) were the same. Saw no difference between the two configurations.

Read over more of the vxl documentation. Discovered a class called vnl_vector_ref, which is inherited from vnl_vector, with an added constructor that allows the creation of a vector from an already existing block of memory, without copying. I hope to use this for Landmark Matching, particularly in the ODE solver class, so the "state vector" won't need to be copied in each step of the solution.
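
A minimal sketch of how vnl_vector_ref can wrap part of an existing state buffer without copying; the function name and buffer layout are assumptions, not the actual solver code:

  #include <vnl/vnl_vector_ref.h>

  void scaleMomenta(double* state, unsigned n)
  {
      // Wrap the two halves of the state buffer as vnl vectors; no copies
      // are made, so changes through the refs modify 'state' directly.
      vnl_vector_ref<double> landmarks(n, state);
      vnl_vector_ref<double> momenta(n, state + n);

      momenta *= 0.5;            // scales state[n..2n-1] in place
      landmarks += momenta;      // updates state[0..n-1] in place
  }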
2006.01.12
Worked on implementation of Landmark shooting function. Read and tried to understand Laurent's system of differential equations, and how he implemented it in MATLAB.
2006.01.11
Decided on the 4 types of output the ODE solver should provide. They are: Finished writing the "integrate" function. Provided a constructor and a destructor for the RungeKutta45 class. Built the project in its current state.
2006.01.10
Continued working on adding IntegratorT into the ODE framework I have created. Worked on the "integrate" function which performs the RK45 algorithm on the ode system.
2006.01.09
Worked on integrating IntegratorT into the ODE framework I have created. Finished determineInitialStep, which determines an initial integration step size if the user does not furnish one in his RK45Parameters object.
2006.01.06
Renamed RungeKutta class to RungeKutta45 class. Created the ODESystem virtual class, and the ShootLandmarks class, which describes the system of differential equations that need to be solved for LandmarkMatching. Need to fill in the "functionEval" method for ShootLandmarks, which is the system of equations. Created a class RK45Parameters derived from ODEParameters which provides params specific to non-stiff integrators.
2006.01.05
Had a meeting with Anthony and Tim about timelines for the year ahead. Rewrote my timeline. Moved

Started building IntegratorT code into Landmark Matching code.
2006.01.04
Unit tested a C++ version of a 4D container, based on the vbl_array_3d.h template class from the VXL library.

Discussed the ODE solver with Tilak and Laurent. Laurent was able to provide the equations he is using for his system of differential equations that he solves during Landmark Matching. I can code these equations directly in C++ vs. decoding the MATLAB syntax to determine what is being calculated.

Began reworking class structure of Landmark Matching. I will create an ODESystem virtual class that the ODE virtual class will provide a solution to. A class called RungeKutta45, derived from the ODE class, will solve a system of diff. eqs. defined in the class ShootLandmarks, derived from ODESystem.
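
A minimal sketch of that hierarchy as described in these notes; the class and method names (ODESystem, ODE, RungeKutta45, ShootLandmarks, ODEParameters, functionEval, integrate) come from this log, while the signatures and bodies are assumptions:

  #include <vnl/vnl_vector.h>

  struct ODEParameters { double startTime, endTime; };

  // The system of differential equations to be solved: dy/dt = f(t, y).
  class ODESystem {
  public:
      virtual ~ODESystem() {}
      virtual void functionEval(double t, const vnl_vector<double>& y,
                                vnl_vector<double>& dydt) = 0;
  };

  // Generic solver interface; concrete integrators derive from this.
  class ODE {
  public:
      virtual ~ODE() {}
      virtual void integrate(ODESystem& system, const ODEParameters& params,
                             vnl_vector<double>& state) = 0;
  };

  // Adaptive Runge-Kutta 4(5) integrator (the RK45 of these notes).
  class RungeKutta45 : public ODE {
  public:
      void integrate(ODESystem& system, const ODEParameters& params,
                     vnl_vector<double>& state)
      {
          // Adaptive RK45 stepping would go here; omitted in this sketch.
      }
  };

  // The landmark-shooting equations, solved during Landmark Matching.
  class ShootLandmarks : public ODESystem {
  public:
      void functionEval(double t, const vnl_vector<double>& y,
                        vnl_vector<double>& dydt)
      {
          // The landmark/momentum shooting equations would go here.
          dydt.fill(0.0);
      }
  };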
2006.01.03
Discussed the ODE solver with Tilak. We looked at the original MATLAB ode45 code and IntegratorT (the C++ version of DIPRO5, which ode45 is based on). We have set up a meeting with Laurent at 2PM on 01/04/2006 to discuss his code.

The differential equation that is solved in Landmark Matching, represented in Laurent's MATLAB script nextShootForLandmarks.m, uses 4D arrays to contain intermediate results and do calculations. I wrote a C++ version of a 4D container, based on the vbl_array_3d.h template class from the VXL library. Began writing test code to test this class.
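
A minimal sketch of one way to get a 4D container out of vbl_array_3d; the class name Array4D and its interface are assumptions for illustration, not the class I checked in:

  #include <vector>
  #include <vbl/vbl_array_3d.h>

  // A 4D array stored as a vector of 3D slabs, indexed (i, j, k, l).
  template <class T>
  class Array4D
  {
  public:
      Array4D(unsigned n1, unsigned n2, unsigned n3, unsigned n4)
          : slabs_(n1, vbl_array_3d<T>(n2, n3, n4)) {}

      T& operator()(unsigned i, unsigned j, unsigned k, unsigned l)
          { return slabs_[i](j, k, l); }

      const T& operator()(unsigned i, unsigned j, unsigned k, unsigned l) const
          { return slabs_[i](j, k, l); }

  private:
      std::vector<vbl_array_3d<T> > slabs_;   // one 3D slab per first index
  };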
2006.01.02
Continued working on the Ordinary Differential Equation solver. Laurent uses a standard one in MATLAB (called ode45). I have tried directly translating this code into C++ but it's gotten very complicated to do that so I have considered starting from scratch and writing my own. In the meantime I have found an implementation of this type of ODE solver (non-stiff, Runge Kutta 4,5) in C++ which is based on the FORTRAN code that ode45 was written from. I am looking this over because I think I can use it. I will need to understand it well to go over it with Laurent, which I hope to do this week.

In the meantime I switched out the matrix library I was using (MatrixLib), which I selected because it was specifically built to translate MATLAB to C++. It's not open source, it's not that great, I don't have the source code, etc., etc. I think it makes sense to use the numerical libs that itk uses - vxl. They've been great so far and I think it is the right library for this job, and will make the integration of landmark matching into itk simpler.
2005.12.22
Continued work on Landmark Matching C++. Worked on ODE class functionality. Derived a Runge-Kutta class from the base ODE class.
2005.12.21
Continued work on Landmark Matching C++. Worked on ODE class functionality.
2005.12.20
Continued work on Landmark Matching C++. Worked on ODE class functionality.
2005.12.19
Continued work on Landmark Matching C++. Began a search for an implementation of multidimensional arrays in C++. The function nextShootForLandmarks.m requires 4D Arrays which are not supported by MatrixLib. The available libs are:
  1. TNT - currently only supports up to 3D arrays but it may be possible to build higher dimensionality into the library.
  2. blitz++ - I have tried blitz++ in the past for the Landmark Matching problem and found it very difficult and complicated to work with. Blitz++ has a lot of mathematical functions implemented, as well as a lot of useful access methods on arrays > 2D.
  3. Boost - looks simpler to use but I am not sure how much math capability is built in.

Another approach is to implement the 4D arrays in the same way I implemented the Array3D, by building collections of 2D objects.
2005.12.16
Continued work on Landmark Matching C++. Began working on the ODE class implementation. Created an ODE parameters class.
2005.12.15
Continued work on Landmark Matching C++. Continued searching through publicly available code samples of implementations of Runge-Kutta solutions of ODEs. Found several examples but nothing I thought would be a useful place to start. Decided to create an ODE class for general purpose solutions, with Runge-Kutta implemented as a subclass.
2005.12.14
Continued work on Landmark Matching C++. Discovered that the LAPACK triangularization routine destroys the lower (or upper, depending on input params) triangle of the symmetric matrix.

Started searching for suitable ODE code for the matching. Laurent's code uses ode45, a fifth order Runge-Kutta solver included, I believe, with MATLAB. The code examples I've seen are all in one dimension. My plan is to continue to search around the web, then decide whether 1) I have found some suitable open source code that meets the requirements, 2) I need to translate ode45 from MATLAB, or 3) I should start from scratch...
2005.12.13
Continued work on Landmark Matching C++. In an effort to solve the memory corruption problem I have decided to swap out the gsl cblas lib on the theory that it does things in a column major way when row major is what is required. It seems to solve the problem but the results are wrong.
2005.12.12
Continued work on Landmark Matching C++. I have been experiencing a lot of trouble with the interface to LAPACK/BLAS. The call I am using to triangularize a symmetric matrix appears to be corrupting memory in a way I don't understand. I have looked over the parameters to the call, how MatrixLib works, and a great number of other things to try to determine what the problem is.
2005.12.09
Continued work on Landmark Matching C++. Began testing rigid registration.
2005.12.08
Continued work on Landmark Matching C++. Worked on rigid registration.
2005.12.07
Continued work on Landmark Matching C++. Created code to interface with LAPACK and BLAS. Worked on rigid registration.
2005.12.06
Continued working on Landmark Matching C++.

Built LDDMM - Surface on moros with gcc for Can. Required mods to the configure script to find the correct libraries and mkl. Checked it in to cvs.
2005.12.05
Finished building a Landmarks class based on ml_matrix.
2005.12.02
Built a class for 3D arrays based on the 2D ml_matrix.

Starting building a Landmarks class based on ml_matrix.

Added some hierarchical structure to classes to make room for putting future shape/surface matching efforts into the same code framework.
2005.12.01
Began another effort on C++ version of Landmark Matching (previous attempts used blitz++ and Brainworks libraries). Used previous BW effort as basis - its existing classes are the best fit.

Added some hierarchical structure to classes to make room for putting future shape/surface matching efforts into the same code framework.
2005.11.30
Worked with Can to get him started on Landmark Matching and Surface Matching. Provided him with Laurent's Landmark Matching MATLAB code, a wrapper I had written for this code, and a perl script to write the scripts for a complete i, j experiment for all the files in a given directory, as well as instructions on how to run it. For Surface Matching, I checked out Marc's code from the repository into Can's local directory. Marc had added the code to write out the metric distance - I added code to write the distance to a file. I also made some changes Marc requested that affected template instantiation, and a change so that the code would compile without warnings. Checked in everything.

Can's machine does not have the MKL libraries, so I had to set the DONT_USE_MKL switch for the compile. Some checks in the configure script (and hence the configure.ac file) didn't work right for this, so I had to make a fix there and check that back in.

Can is also using some shell other than bash, but the pathConfig.sh script I write as part of the config process doesn't work right with that. An improvement would be to check what the current shell is and change the syntax of this file accordingly...
2005.11.29
Downloaded and installed MatrixLib, a product designed as a code base for projects requiring translation of MATLAB code to C++. Provides 2D matrices and MATLAB functionality. Installation required LAPACK and BLAS. I have substituted gsl for ATLAS in the installation. Built and ran the included test program, which ran successfully.
2005.11.28
Completed study of Landmark Matching MATLAB code.
2005.11.24-25
Holiday.
2005.11.23
Continued the study of the Landmark Matching functionality.
2005.11.22
Continued the study of the Landmark Matching functionality.
2005.11.21
Continued the study of the Landmark Matching functionality. Created an html page which can be viewed here.
2005.11.18
Agreed to terms of an eval license for the MatrixLib package of C++ code and libraries that will be useful in converting from MATLAB code to C++. Company is Princeton Satellite and contact is William Turner. Typical eval package allows up to 4x4 arrays. Mr. Turner agreed to give us a package which contains full MatrixLib functionality, for one month.

Continued the study of the Landmark Matching functionality.
2005.11.17
Worked with Lei to determine why SSD was performing so much more slowly in its command line form vs. from the BW GUI. Rebuilt SSD from an archived version of the code. Determined the code was slow because it was not compiled with optimization switches turned on. Put the code in a separate directory for Lei to use.

Worked with Malini to explore why Surface Cut functionality is causing BW to crash much more often on her 64 bit machine than it used to on her 32 bit machine. Had difficulty getting my recompiled version to even perform the Cut functionality. When I was able to get it to crash it appeared there were a number of high bits set in a pointer value, which may or may not have been the problem. Problem needs further study.
2005.11.16
Explored some more C++ matrix packages that might be very helpful in converting MATLAB to C++. They included MET, MTL, and IML. Found a few on-line lists of existing packages and bookmarked them in my browser.

Found a commercial package called MatrixLib created by Princeton Satellite specifically to convert code from MATLAB to C++. E-mailed them to discuss getting an eval copy of the software to test with.

Attended a meeting with Dr. Botteron and Tom Mishino and CIS staff and faculty to discuss the twin study processing we're conducting here. They will be getting much more data to us in the December time frame that will require Landmark Matching Processing. The C++ version of the code needs to be ready for the additional data sets.
2005.11.15
Began working on a description of all Landmark Matching MATLAB modules. The method of plowing through and trying to convert to C++ is causing some confusion while I try to build objects. I expect this analysis of the existing modules will help that process.
2005.11.14
Had a meeting with Dr. Miller at which he expressed his displeasure at the difficulty CIS had in producing interesting visuals for two different teams that were here to film CIS's work on its projects. Dr. Miller asked that we prepare some things that could be ready to be used to show visitors to CIS. I will produce:

Completed Curve Matching module. The CurveMatching class inherits from some XWindowForm class, so I needed to create a CurveMatchingBase class that contained the non-GUI data and method class members. Tested the code, then put together a directory of the existing script functionality and wrote an e-mail to Neha to ask her to give it thorough testing.
2005.11.11
Continued to work on Curve Matching module
2005.11.10
Did some performance testing on SSD for Lei Wang of Wash U. Began working on completing more of the BW scripting code modules to hand off to Neha. Discovered some problems with the way I implemented Antsy and Reseg. Commented out these routines and began finishing up Curve Matching.
2005.11.09
Continued working on MATLAB to C++ conversion of Landmark Matching code. Began looking into the possibility of using MATLAB C++ libraries and matrix representations for MATLAB - C++ conversion.
2005.11.08
Continued working on MATLAB to C++ conversion of Landmark Matching code. Spoke with Laurent about the desired parameters for running LM, and worked his comments into the code.
2005.11.07
Finished Landmark Matching processing of left side data sets. Compiled the results matrix for the left and right sides, and e-mailed them in tab delimited ASCII format to Youngser.

Continued working on MATLAB to C++ conversion of Landmark Matching code. Wrote the Landmark Matching parameters class and integrated it into the Landmark Matching class.
2005.11.04
Continued Landmark Matching processing of left side data sets. After looking through previous e-mails I noticed that some of the data on the right side was not processed properly. Specifically, the points that were wrong in the B1B10 file were wrong not just in the x coordinate; all the coordinates were swapped. Modified the data and re-ran the data sets.

Wrote a script to re-run the bad left side data set as a target. Repaired the bad data in the left side data set and ran the repair scripts.

Continued working on MATLAB to C++ conversion of Landmark Matching code. Used existing BW library functions to apply a rigid registration if the Landmark Processing parameters call for it.
2005.11.03
Continued Landmark Matching processing of left side data sets. Discovered a problem with the data in one of the data sets. Contacted Casey Babb at Wash. U. to verify the point was bad.

Added the BW library code to my existing Landmark Matching C++ code project. Began modifying existing project to use BW libs.
2005.11.02
Continued Landmark Matching processing of left side data sets.

Continued working on MATLAB to C++ conversion of Landmark Matching code. Worked on building BW libraries separately from the rest of the Brainworks structure to be sure they could be used independently.
2005.11.01
Got started on the processing for the left side landmarks for the Botteron project. Looked in the B1B10 file for the left side to see if any obvious errors existed. Ran my i,j perl script to set up MATLAB processing scripts for 34 processors, so I could have scripts that would reliably finish by morning (if started in the evening). Began to run these scripts on available processors.

Worked on conversion of Landmark Matching from MATLAB to C++. The Brainworks Library has Vector, Matrix, and Array processing capability, along with some math functionality. I decided to use these container classes for the basic structure of Landmark Matching. I should be able to link in gsl routines if necessary for any math functions I can't get through Brainworks/BLAS/LAPACK.
2005.10.31
Looked at the results of Landmark Matching Processing set up to run over the weekend. Everything completed as expected except for the first set of 17 templates. Looked to see why those had not completed, and noticed the one problematic landmark set, B1B10, took about 1.5 days to complete, vs. the usual two hours. Looked through that set and found another switched x coordinate value.

Created a script that ran the corrected data set as a template, and started it on my local machine. Then I needed to modify my perl script to match all the other data sets against the corrected dataset as a target. Then I assembled all the matching outputs into one (170, 170) array, wrote it out as a tab delimited text file, and sent it to Youngser.
2005.10.28
Set up processing for Landmark Matching on MATLAB 7 on Windows machines at CIS. Needed 10 CPUs to complete the processing over the weekend. Used het, tet, gimel (dual CPU machines here in Clark 305), and a 4 CPU machine in Clark 307.

Discovered errant datapoints in file B1B10. The x axis points were swapped in two landmarks.
2005.10.27
Checked out which machines can support MATLAB 7. Determined how to run MATLAB in a simple command line interface. This interface unfortunately stops updating the screen once its window loses focus. It doesn't come back when it gets back into focus. But the program will run to completion this way. Discovered the processor counts for the available machines - then ran my script to create the necessary MATLAB scripts.

Looked into C conversion issues for the Landmark Matching code. Started looking into GSL (Gnu Scientific Library) as a basis for Matrix and vector structures and operations as well as math routines including odes. Only issue is that the code is entirely C based. Considered possibilities for creating C++ wrappers for desired structures.
2005.10.26
Finished the script that writes the MATLAB scripts for an i,j dataset landmark study. Did some testing and determined Linux processors running MATLAB 6.0 take about 10 times as long to process Landmark Matching. Seems like a waste of time to split the processing into 10 Linux processors since it will take as long as one PC.
2005.10.25
Worked on perl script for creating MATLAB scripts for i, j processing. Added parameters for:

Began to modify perl script to generate a MATLAB script which will gather all the partial results into one MATLAB array and write it into tab delimited form.
2005.10.24
Discussed with Tim & Anthony how to do Landmark Matching processing here on CIS resources. Anthony e-mailed a list of Linux processors that can be used for the processing. Unfortunately at CIS our Linux machines only have MATLAB 6.0 which is very slow for this processing (Laurent said he optimized the code for 7.0).
2005.10.21
Worked with Neha on testing Brainworks scripting functionality.

Continued to work on the Reseg scripting function.
2005.10.20
Sent an e-mail to the help desk at Teragrid to ask if and how I could use Teragrid resources to do the Landmark Matching processing. They responded that they do not have a lot of MATLAB licenses that I could use, and MATLAB isn't supported for their 64 bit machines, so it runs very slow.

Did some testing of output file formats for Landmark Matching processing.

Continued to work on the Reseg scripting function.
2005.10.19
I wrote a short MATLAB function that will take two file names and call Laurent's MATLAB Landmark matching function, and save the deformation distance between the two landmark sets. I finished a perl script to write a MATLAB script that will perform (i, j) matching on all the files from each side. The output is a native MATLAB file that contains an NxN matrix of the deformation distances. To do the 4X4 sample (12 matches), the processing took approximately 9 minutes on my desktop machine. To complete the 169x169 for both sides will require about 30 days.

Spoke with help desk at NCSA to have my password reset. Faxed the Acceptance statement for the Teragrid User Responsibility Form to NCSA.

Supported IT needs of the CIS visitors.
2005.10.18
Worked on adding Reseg script capability to BW. Discussed the resegmentation function and segmentation data in general with Malini. Resegmentation is a part of what happens in the Antsy function. The command line version of this function creates a new class called Reseg, by using a copy of the Antsy class and modifying the functionality to strip away non reseg functionality. My plan is to use the Antsy class that exists in BW (which I modified for the Antsy script command) and modularize the Reseg function to make it publicly callable.

Supported IT needs of the CIS visitors.
2005.10.17
Worked on getting wireless up and running in Clark 110. Took laptop computer downstairs but couldn't find signal. Determined location of wireless router but couldn't get a signal for CIS network. At Tim's suggestion, unplugged router, then plugged it back in. Seemed to solve the problem.

Supported IT needs of the CIS visitors.

Finished adding newlmk script capability to BW. Command line version created new classes and input/output code for landmarks and surfaces. I used existing BW landmark and surface classes to achieve the functionality. Did some initial testing.
2005.10.14
Worked on adding newlmk BW scripting function.
2005.10.13
Met with Joan to discuss the theoretical framework of various matching programs. He provided a summary of what he described here.

Met with Malini and Neha to get Neha started on testing some of the BW scripting functionality I have built in. Copied my existing BW code base over to her machine and rebuilt it. Showed her how to use Kdevelop. Explained how the scripting will work and gave her a couple of example script files.

Added the BW script command SubSD to the new BW scripting framework.
2005.10.12
Met with Laurent to discuss how Landmark matching MATLAB code can support Botteron project. Laurent agreed to look at some of the Landmark sets and tweak the parameters the code uses to better fit the data. He also agreed to have the program output a metric distance between the two landmark input sets. This metric distance will provide the basis for Youngser's statistical analysis.

Marc is trying to modify his Surface Matching code to put out a distance metric between the two surfaces he's trying to match. He was having problems building the gts package so he asked me to log in to his account and try to build it. He had the environment variable CC set to "g++". CC specifies the C Compiler, so it's best to leave this unset, or use gcc.

Added the BW script command SubSD to the new BW scripting framework.
2005.10.11
Discussed Botteron project with Youngser, Tim, and Laurent. I am trying to find out what output Youngser is looking for from Landmark Matching.

Helped Joan with looking at the Surface Matching cvs repository.

Continued working on a perl script to create MATLAB scripts for the Landmark matching part of the Botteron project. Need script code to take into account right side/left side naming conventions.
2005.10.10
Met with Dr. Miller to discuss "public" access to CIS processing capabilities. He considers LDDMM a CIS "brand". This includes Surface Matching with Currents, which should be renamed to LDDMM - Surface.

Began working on a perl script to create MATLAB scripts for the Landmark matching part of the Botteron project.
2005.10.07
Worked on Antsy. Finished getting the GUI code separated out. Was able to compile the rest. I have set aside a directory for Neha to begin testing SD, SSD, and Antsy when she's ready.

Spoke with Dr. Younes about input formats to his Landmark matching program. He provided a couple of file i/o scripts for .lmk files. He also provided some C++ matrix classes.
2005.10.06
Worked on Antsy. Need to modify some of the Brainworks code associated with Antsy because it wants to call GUI functions for operator input in the middle of processing.
2005.10.05
Completed work on SD code. Began working on Antsy script.
Some status for a couple of my projects: BW Scripting - I have built and tested the code for poly_Redist and it looks good. I have recently added Surface-to-Surface distance (SSD) and Surface Distance (SD), but they are as yet untested. One of the issues with SSD is that I am not sure what exactly people expect it to do. There is a Brainworks menu selection to do Surface to Surface Extraction, which essentially takes two surfaces, and if any of the points on surface 1 are more than the specified maximum distance from surface 2, it removes those points. I have to send an e-mail message to the BW mailing list to find out exactly what people are looking for. I also need to find some decent test files for SSD and SD. I have finished an attempt at coding the Antsy script command. Neha is going to help me test some of these modules so I need to set up a test environment for her.

Landmark Matching Code - I have been working on converting the MATLAB to C++. I basically haven't gotten anywhere yet. I originally decided to try to plow through and use an open source third party math package called blitz++ (used by Marc V. in his Surface Matching code) to handle some of the math functionality, but thus far have found it very time consuming to try to understand how to make it work. I have gone back to the drawing board to look at some other packages that seem more promising - MATPACK in particular.

But people still want to use Laurent's new code for current processing so I am going to support the MATLAB version when needed. Tim and I are going to get together and do some processing this afternoon.
2005.10.04
Worked on testing the Surface-to-Surface distance calculation added to BrainWorks scripting functionality. Started working on SD (Surface Distance) code.
2005.10.03
Worked on adding Surface-to-Surface distance calculation to BrainWorks scripting functionality. Started testing SSD output.
2005.09.29,30
Vacation.
2005.09.28
Continued working on new Landmark Matching program. Began looking at some other math packages because of the complexity of blitz++. A possible candidate could be the MatPack Package. This provides the one and two dimensional arrays used by the Landmark Matching code. Also discussed with Dr. Younes some matrix implementation he had coded, and the possibility of producing a home grown code base that would provide an easy way to convert MATLAB matrix operations into C++ code.
2005.09.27
Continued working on new Landmark Matching program. Trying to use blitz++ to do MATLAB array operations.
2005.09.26
Continued working on new Landmark Matching program. Trying to use blitz++ to do MATLAB array operations.
2005.09.23
Continued working on new Landmark Matching program.
2005.09.22
Began working on conversion of the new Landmark matching program from MATLAB to C++. Looked at various math packages to use - decided to use blitz++. It's already being used in the shape analysis code, and it provides some data structures that correspond to MATLAB arrays.
2005.09.21
Created a KDevelop project for Surface Matching. Read over some of the code.

Spoke with Dr. Younes about his new Landmark matching MATLAB code. He agreed to provide a zip file with the scripts, for conversion to c++.
2005.09.20
Finished testing and debugging calling the first BW script command, poly_Redist, from a script. Verified output visually, as well as error handling and reporting code. Also can call shell commands from scripts.
2005.09.19
Started testing BW scripting using Kdevelop in debug mode.
2005.09.16
Exploring KDevelop. Communicating with Tim to get familiar with using this application for a code development and debugging environment.

Finished building the first command line scripting routine into BrainWorks. Working on building a script and getting data for testing.
2005.09.15
Met with Dr. Miller, Laurent, and Tilak to discuss progress and plans for Shape Analysis code. Showed them our progress and our timeline. Dr. Miller recommended, as part of the next phase of shape analysis code configuration, that I take Laurent's latest Landmark matching code and replace Marc's landmark matching code with it. The code is in MATLAB and shall be converted to C++ and modularized, according to our current plan for shape analysis code.
2005.09.13
Worked on BW script functionality prototype code. Working on getting a single command line module built into the new scripting framework (poly_reDist).
2005.09.12
Worked on BW script functionality prototype code.
2005.09.09
Began some BW script functionality prototype code.

Participated in a meeting with Dean Jones to discuss IT issues at the Whiting School.
2005.09.07
Posted the BrainWorks scripting writeup and requested comments from Tim, Anthony, and Malini. Got a freshly checked out version of BrainWorks, added the changes I had already made to Main.C, and started writing a script reader class.
2005.09.06
Spent more time looking at BrainWorks scripting functionality. Produced this summary of what needs to be done and about how long I think it might take.
2005.09.05
Looked into BrainWorks and its code today. Built and ran on het, and made a few test mods to the menus just to look at the main program and how it works. Looked at one of the batch mode code files to see how they work, and spent some time thinking about building some generalized scripting functionality into BrainWorks, based on what already exists.
2005.08.31
Finished on-line compliance training.

Meeting with Malini to discuss status of BrainWorks batch mode modules. Meeting is summarized here.
2005.08.30
Worked with Anthony and Tim to get Surface Matching web pages in CIS webpage directory, and to check Surface Matching code into CIS repository.

Looked at BrainWorks code in response to a question on the BrainWorks mailing list from Lei Wang. Question was related to whether certain BrainWorks functionality (extract from surface) had been encapsulated in a batch mode program. Malini was aware of an incomplete effort to do this. She is looking into the status of that project.

Loaded the Paxinos/Franklin Mouse Brain Atlas onto het.

Began study on the final compliance on-line training course I need to take.
2005.08.26
Began looking into 64 bit debugger issues (i.e., gdb64).
2005.08.25
Received and incorporated Surface Matching website comments.

Worked on visualization of Surface Matching results. Researched some conversion programs for gts data. Found gts2stl, a program that converts gts to the "Stereo Lithography" format, which can be loaded into Paraview. Looked at some of the Surface Matching inputs and results with Paraview.
2005.08.24
Finished first draft of SurfaceMatching webpages. Notified Marc and CIS-CC and requested amendments/additions.

Read more about Tilak's Mouse BIRN study.
2005.08.23
Finished testing Surface Matching code (ran on Cygwin). Began creating (from LDDMM webpages) webpages for Surface Matching.
2005.08.22
Started testing Surface Matching code. Checked out and built on Linux 32 and 64 bit machines, and on Cygwin. Ran CurrentMatch on Linux machines, ran ApplyDeformation on Linux 64 on a third surface, to verify that these surfaces would make good candidates for the examples section on the Surface Matching webpage.
2005.08.18
Began final clean up and configuration of Surface Matching code. Added tests to configure.ac in Surface Matching distribution to make sure user has the required CIS modified version of GTS. Worked on eliminating compiler warnings from ivcon, the image data format conversion third party library required by Surface Matching. Created autoconf and automake files for ivcon, in keeping with the principle that users should not have to edit makefiles. Added mkinstalldirs support, which automatically creates installation directory structure from an invocation of make, to ivcon and Surface Matching modules. Eliminated files not necessary for distribution from all source directories. Wrote a new INSTALL file for Surface Matching and ivcon. Tarred and gzipped the ivcon and the gts distributions and moved them underneath the Surface Matching distribution. Imported the clean distribution directory into a new SurfaceMatching test module in a repository in my home directory. Checked out the module on hpmarc to test building and installing ivcon, gts, and SurfaceMatching using hpmarc, a 32 bit Linux machine with gcc 3.2. After ironing out installation issues and updating INSTALL file, checked in changes to the distribution. Checked out latest version on het, a 64 bit Linux machine with gcc 3.4
2005.08.15
Finished building gtsview visualization utility - visually verified output of surface matching program run on 32 and 64 bit Linux, and on Cygwin.

Worked with Tilak on mouse brain project. Still having problems getting test data sets to process properly with "match".
2005.08.11
Tried to build gtsview visualization utility to view results of various CurrentMatch runs. Was unable to get gtkglare to build on Unix.

Started working on building Shape Analysis programs on Cygwin. Was unable to get blitz++ to build on Cygwin.

Worked with Tilak on mouse brain project. Read code and provided Tilak with overview of landmark matching processing (inputs and outputs). Created test landmark template and target data sets to force the matching program to perform compressions, and to perform translations.

I am looking into how best to include our CIS modified version of the GTS library into the distribution of SurfaceMatching. I have tarred and compressed the source code and placed it under the SurfaceMatching code directory. I am considering whether it makes sense to have the configure script automatically install the library (including configuring its Makefile) if it can't detect a previously installed CIS modified GTS library. Same concept (with a few different details) applies for ivcon.
2005.08.10
Produced LAPACK prototypes to get the Surface Matching programs compiled and running on Cygwin. Ran CurrentMatch on a test set on Cygwin.
2005.08.09
Finished Aug 2005 - Aug 2006 timeline and posted it. Met with Anthony and Tim to discuss and coordinate timelines for Computational Core.

Continued working on producing LAPACK prototypes for alternative to calls to mkl.
2005.08.08
Continued to work on building surface matching code. Modified autoconf input file configure.ac so that it looks for GSL includes and libs instead of mkl, if necessary. Worked on producing LAPACK prototypes for alternative to calls to mkl.

Worked on a timeline of goals for Aug 2005 - Aug 2006.
2005.08.04
Continued to work on building surface matching in Cygwin. Ran into difficulty using MKL. Can't get a free version of MKL for Windows. Modified all the mkl calls in Surface Matching to match the BLAS/LAPACK protocol in mkl_cblas (instead of mkl_blas). This should make it easy to build on the Cygwin side by using standard BLAS/LAPACK libraries, and will provide the builder the option of using MKL or standard libraries on LINUX. Tested Surface Matching code on LINUX.
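
A minimal sketch of the cblas-style call form this refers to; cblas_dgemm itself is the standard CBLAS/MKL interface, but the wrapper function and matrix sizes are illustrative:

  extern "C" {
  #include <cblas.h>     // or mkl_cblas.h when building against MKL
  }

  // C = A * B for n x n row-major matrices, via the portable CBLAS interface.
  void multiply(const double* A, const double* B, double* C, int n)
  {
      cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                  n, n, n,
                  1.0, A, n,
                  B, n,
                  0.0, C, n);
  }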
2005.08.03
Completed work on autoconf/automake for CurrentMatching code. Finished testing on hpmarc. Downloaded and installed cygwin. Began testing build for Surface Matching on cygwin.

Added brief sections on workflows and file browsing to the CIS Visualization Goals webpage. Put the Goals in the Visualization wiki.
2005.08.02
Continued work on autoconf/automake for CurrentMatching code. Added checks for necessary header files. Modified Makefile.am so that it properly uses LDFLAGS, LIBS, and CPPFLAGS values set by configure. Modified test for mkl libs to try to prevent undesirable directories from being considered our mkl lib dir. Began testing on hpmarc.
2005.08.01
Worked on autoconf/automake for CurrentMatching code. Used Jason's ac/am work done so far and made some mods to implement user overrides of the configure script's attempt to guess at library locations. Added overrides for MKL, GTS, and IVCON installation locations, and an override for the GLIB version number. If overrides are absent, and an install directory is specified, add the install dir to the paths for library and include file searches. Straightened out checks for libraries. Capture lib and include file path information in the LDFLAGS, LIBS, and CPPFLAGS variables.
2005.07.29
Checked the shape analysis module into cvs. Wrote an INSTALL instructions file. Wrote a README.html file that can be used as the basis for a webpage on the Shape Analysis module.
2005.07.27
Worked on the Shape Analysis makefile to make it simpler to understand and modify.

Tried to bring byu2img code under shape analysis. The code is built from BrainWorks functionality. I was able to make it build, but only on Marc's machine, and only by bringing in already compiled libraries from Can's brainworks directory.
2005.07.26
Finished modifying ShapeAnalysis code and makefile to build on het with gcc 3.4 and hpmarc with gcc 3.2.
2005.07.25
Began the process of creating a CVS module of source code for "Shape Analysis". Brought over all of ~vaillant/research/initial/src/c++. Cleaned up the directory, and began trying to organize the makefile to be easily modifiable for a user to build the ShapeAnalysis executables. Worked on the code to make it compatible with gcc 3.4.
2005.07.22
Read some more visualization info. Attended a meeting with Anthony and Steve Pieper to discuss Slicer 3D and BIRN AHM and visualization issues in general.

Searched selected home directories (~vaillant, ~clare, ~younes) for versions of runSim source code dated last August but found nothing. Continued to work with source code found under ~mvaillant/research/initial/src/c++. Built runSim on my machine (het.cis.jhu.edu). Needed to download and install new version of blitz (0.8). Tested to make sure this version could be used to build runSim on hpmarc (32 bit, g++ 3.2) and het (64 bit, g++ 3.4). Worked on building two different executables (runSim and runRigid) from the same source code (runRigid will be built with the compiler switch -DRUNRIGID, which is tested by the cpp).

Attended undergrad presentation on LDDMM scripting for BIRN.
2005.07.21
Completed initial survey of visualization development environments, rendering and networking technologies.

Started code config for runSim executable.
2005.07.20
Surveyed visualization technologies via the Internet.

Read NA-MIC website.
2005.07.19
Surveyed visualization technologies via the Internet.

Attended CIS Seminar - Bayesian Modeling of Dynamic Scenes for Object Detection

Produced an index file for the MindView directory under the CIS Software Website.
2005.07.18
Worked more on visualization goals for CIS.
2005.07.15
Got the landmark matching code to compile on hpmarc. Found scripts and data sets archived by Clare. Sent an e-mail to Tilak to discuss working on running landmark matching code.

Put together a package of Surface Matching code for Jason Minion to get started working on autoconf/automake. Sent him an e-mail to describe the requirements for configuring the package.
2005.07.14
Began trying to gather landmark matching code for Tilak. Copied source files from ~vaillant/research/initial/src/c++, built blitz++ (required for Lmk), created a makefile, and tried to compile the code. One of the source files is unreadable because of its permissions.

Met with Jason Minion to discuss producing autoconf and automake input files for surfaceMatching. Started working on configure.ac.
2005.07.13
Added a little more structure to the webpages under ~mbowers.

Continued to brainstorm visualization goals.
2005.07.12
Wrote a web page to summarize my efforts to build/configure/document the Surface Matching Code. Added some links on my index.html file for the visualization effort page and the source code configuration effort page.
2005.07.11
Worked on Marc Vaillant's Surface Matching program. Fixed the glibc problem. Running ldd on the executable showed that both the glib-1.2 and glib-2.0 libraries were being linked in. Specifying -lglib-2.0 (vs. -lglib) forces the linker to pick up the 2.0 version (required since gts already links against 2.0).
Ran the program, and though I didn't time either run, this one appeared much faster (I would estimate one hour vs. four hours).
Had a brief discussion with Jason Minion on creating some automake/autoconf files for Surface Matching.
Read about FreeSurfer and its segmentation process.
2005.07.08
Continued work on CIS Visualization Goals.
2005.07.07
Completed the set of online Compliance Courses.
2005.07.06
Exchanged e-mails with Anthony regarding the visualization effort. Began working on a web page that will describe the current state of visualization/research applications at CIS, what the researchers' visualization needs are, and the features that will be required of the next generation of visualization software.

Started taking the set of online Compliance Courses.
2005.07.05
I was able to get Marc Vaillant's code to build on my 64 bit machine, though several changes were needed. The program (CurrentMatch) will not run. It fails with the error:
*** glibc detected *** realloc(): invalid size: 0x000000000065cc60 ***
Aborted

2005.07.01
Read web pages on the subject of VTK and visualization from a list of links e-mailed to me by Anthony.
Prepared a list of issues that I think need to be resolved to move forward on the next generation of visualization software at CIS.
Met with Anthony to discuss these issues.
Worked to compile Marc Vaillant's surface matching code on my (64 bit) machine. The compiler issues errors when it encounters unqualified references to protected members of a template class's template base class. This looks like gcc 3.4's stricter two-phase name lookup rather than a compiler bug, but I need to look at the issue further; a minimal sketch of the problem is below.
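A minimal sketch of the issue (invented class and member names, not Marc's code); gcc 3.4 enforces two-phase name lookup, so an unqualified reference to a member of a dependent base class is rejected unless it is qualified with this-> or the base class name:

// lookup.cxx -- gcc 3.4 rejects unqualified use of a dependent base member
template <class T>
class Base
{
public:
    Base() : value_(T()) {}
protected:
    T value_;
};

template <class T>
class Derived : public Base<T>
{
public:
    T get() const
    {
        // return value_;       // error with gcc 3.4: 'value_' was not declared in this scope
        return this->value_;    // accepted: this-> defers the lookup to instantiation time
    }
};

int main()
{
    Derived<int> d;
    return d.get();             // 0
}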
Worked with Tilak to try to debug a script of his that generates a LISP program producing graphical web output.
Expanded on some of the issues in the list mentioned above, and posted it on the visualization wiki.
2005.06.27
I have downloaded and installed tkcvs locally on my machine. There are some issues with tkcvs, mainly involving access to the remote server. If the remote server issues can be worked out, I think the group should use tkcvs as its cvs client. Issues that were a problem when I last used it seem to have been resolved, and new features have been added.

My main interest in using cvs at this time was to try to understand the modifications that Marc Vaillant made to the GNU Triangulated Surface library, and to see if the configuration of these changes, with respect to GNU updates to the library, could be managed in a relatively automated way by using CVS branches. To do this, I checked the original distribution (from Marc) into my repository. Then I created a branch tagged CISMods and added Marc's modifications to this branch directory. Then I simulated a change to the original distribution and checked it in; the modification went to the main trunk. Then I opened, via tkcvs, the Directory Branch Diagram and Merge Tool, and I was able to merge the changes in the trunk into my locally modified (CISMods) branch. So it looks like the basic concept may work.

Some things I would like to try include:
2005.06.21
Brief discussion with Lei Wang (?) from Wash. U. about Brainworks. Lei had a number of comments about what BW needs.
2005.06.20
Marc Vaillant sent me a paper describing the research that his program supports. He is trying to build a test dataset with a relatively small surface for me to use as an example on the web pages dedicated to the Surface matching program.

I have attempted to survey the currently available (open source) source code formatting utilities that we could use to automate compliance with Tim Brown's coding standards document, at least in terms of white space in program code. One, called Artistic Style, is easy to use, understand, and modify. It does not do everything we need in terms of coding standards; much of that may need to be done by hand (e.g., variable/function naming standards).
Marc's code already generally conforms to the standard. I will use Artistic Style to format the source and apply the other rules by hand. I may revisit some of these other tools for source code that is not already close to compliant with CIS coding standards.

An issue to consider before initial code check in is how (or whether or not it is necessary) to restructure the code so that desirable algorithms can be turned into library modules with an API that can be used by other CIS researchers...
2005.06.16
Anthony, Tim, and I informally discussed what needs to be done with Marc Vaillant's surface matching code.
2005.06.14
Met with Joe H. to discuss BW. Joe has a number of different operating systems on which he would like to attempt to build BW. Joe recommended that I partition my desktop to boot in several different OSs so I can test builds of BW on all of them.

Joe is attempting to compile BW using gcc 4.0 on a machine in the undergraduate lab. It appears to be failing as a result of inability to generate object code for implicitly instantiated template classes. I sent him some links that discuss this problem and possible solutions.
2005.06.13
Met with Jon Morra to discuss Mindview.

Mindview is a program written primarily in MATLAB that provides visualization and allows the user to process and edit datasets. The User/Installation Manual is here. MindView works only on Windows.

MindView can handle a number of different data set types thanks to MATLAB classes that were built to handle volume data sets in XML, tif, and Analyze formats. Data in these formats are converted to the internal VTIXML format which MindView operates on.

The MATLAB code is based on a single basic viewer called "orthogonal". The other GUIs are "inherited" from orthogonal in the sense that when they are invoked, they call the orthogonal script to set up the basic GUI, then they add their items to the GUI menus.

MATLAB performance issues (particularly in the case of loops in MATLAB scripts) sent the MindView developers looking for ways to speed up some of the processing code. This was done by linking in Java modules to implement some of the processing. The MATLAB/Java interface is good, so this solution worked well.

Point selection was a problem in the MATLAB based GUIs so the MindView developers had to come up with a solution. They were able to implement a MindView -> Java -> VTK solution that permitted this. The current distribution of VTK required modification for this, so this version of VTK needs to be archived. Java support for the functions used in this implementation is also ending, so this solution may have problems in the future.

Marc Vaillant's surface matching code.

Marc Vaillant packaged up the source code necessary for building his surface matching program and placed it in his home directory. My first attempt to build the code and the third party libraries it requires on my machine is postponed until I am finished building it on his machine.

I encountered some issues while trying to build the code on my machine, but I was able to build it on Marc's machine. I have scouted around for some data, but have decided to ask Marc what data sets would be good to use...
2005.06.09
Tilak suggested I meet with Clare to discuss DTI visualization and processing tools. Met with Clare and Mike An (undergrad, CS?) to discuss a project that Clare was presenting in Toronto, which essentially compares the effectiveness of two algorithms used to produce brain fibre tracks from Diffusion Tensor Image (DTI) data.

Clare explained to me that DTI data shows the directional diffusion of water molecules through brain tissue. The imaging device pulses and then collects readings in many different directions (30 in this case). The readings are used to mathematically create tensor matrix data throughout the brain. FA (fractional anisotropy), the measure of the directionality of the tensor, is calculated and can be visualized (mapped); the standard formula is sketched below.
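For reference, the standard fractional anisotropy formula, computed from the three eigenvalues of the diffusion tensor (a generic sketch, not Clare's IDL code):

// fa.cxx -- fractional anisotropy from diffusion tensor eigenvalues
#include <cmath>
#include <iostream>

// FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l3-l1)^2) / sqrt(l1^2 + l2^2 + l3^2)
double fractionalAnisotropy(double l1, double l2, double l3)
{
    double num = (l1 - l2) * (l1 - l2) + (l2 - l3) * (l2 - l3) + (l3 - l1) * (l3 - l1);
    double den = l1 * l1 + l2 * l2 + l3 * l3;
    if (den == 0.0)
        return 0.0;             // zero tensor: no diffusion, no directionality
    return std::sqrt(0.5 * num / den);
}

int main()
{
    // Hypothetical eigenvalues (mm^2/s): diffusion mostly along one axis -> FA near 0.84
    std::cout << fractionalAnisotropy(1.7e-3, 0.3e-3, 0.2e-3) << std::endl;
    return 0;
}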

In this project, two different algorithms were compared for fibretracking effectiveness. Clare describes the research protocol here.

There are two principal pieces of software used in Clare's research. The data from these two programs is not compatible, so Clare wrote programs to convert the data sets where necessary. The programs were written in IDL.

Downloaded, configured (with CMake), and compiled itk (the Insight Segmentation and Registration Toolkit).
2005.06.08
Met with Dr. Barta and Anthony to discuss current state of visualization capability here, and future direction.
Dr. Barta has been overseeing the process of extracting functionality from BrainWorks. His goal has been to extract and modularize the BW processing functions that he sees as being worth saving. Dr. Barta identified the data processing functions he thinks should be mined from BW, noted that there are problems with some of the BW processes, and feels that once this mining process is complete BW should be scrapped.

Jon Morra has been doing much of the mining and modularization. He should be back Monday. A good step would be to contact him and see the state of the processing modules he has, with an eye toward including them in CIS infrastructure and documenting them.
Dr. Barta described his issues with BW and noted the positive aspects of using MATLAB for research. We also discussed MindView, which provides MATLAB classes for visualization data. Dr. Barta says that these classes inherit serializability automatically, which makes for easy disk I/O. These classes exist for the types of data sets used in research here at CIS.

I mentioned the concept of a research framework that combines a MATLAB interface, a visualization component, and availability of all the processing modules in the CIS infrastructure. Other requirements would be scriptability and data format intercompatibility. Anthony mentioned that a visual workflow interface for script building would be desirable.
2005.06.07
Meeting with Dr. Miller, Tilak, Anthony

Dr. Miller proposes two concurrent projects.
BW Demonstration by Clare Poynton

Clare Poynton demonstrated Brainworks functionality and how she uses the product to support her research.
Data Types
Data Formats - depend on the scanner. Data is converted to Analyze Format outside of BW.

Clare's project requires her to perform a number of processing steps.
Clare recommended several improvements in Brainworks.
Meeting with Joe Hennessey, Anthony, Tim Brown

Anthony summarizes the meeting here.
2005.06.06
Meeting with Anthony and Tim to discuss CIS projects.
Anthony: Infrastructure.
Tim: BIRN, related projects
Mike: BrainWorks

Infrastructure and BrainWorks issues were discussed.

Joe Hennessey's summary of the current state of the Brainworks software and its prospects for the future:

1. Overview of Brainworks
a. who uses it
Malini, Clare, Tilak, some students, and a couple of other people, as far as I know.

b. what it's used for
Visualization and quantitative analysis of volumetric and triangular mesh data.

2. The good things in BrainWorks
It can handle 8-bit voxel data and triangular mesh data fairly well, and it has a number of routinely used algorithms built in.

3. Compiling and CVS repository
It will compile with the latest version of gcc that is in Fedora Core 3 or CentOS 4 (i.e., gcc 3.0 - 3.4). CVS access is available at cvs.cis.jhu.edu:/cis/home/cvsroot under the project BrainWorks.


4. What's missing
a. scriptability
b. API for adding modules
Too many things to name. The program is showing its age badly. It only supports 8-bit voxel data, and this is hard coded everywhere. There is no documentation to speak of, either external or in the source code itself. It is not worth trying to add an API to it, and scriptability of anything beyond a limited subset not requiring user interaction is next to impossible.

5. Background on Blox and Matlab modules
Blox is our next generation software for imaging that Pat Barta and I are designing in Java. It is mostly voxel based, with limited support for polygonal data (mostly it can just convert the polygonal data formats from one type to another). Mindview (our Matlab-based version of Blox) is basically a conversion of some pieces of Blox to use Matlab as a front end, since there are so many people using Matlab. Most of the work is still done in Java, as Matlab can be very slow when trying to do loops. Matlab calls Java very seamlessly and the combination works well.

6. Opinions on the next generation of BrainWorks
BrainWorks should be scrapped and its key algorithms pulled out and put into the latest generation of software. Even this takes significant effort, as there is no documentation of BrainWorks and the algorithms are quite often crippled to allow them to function with just 8-bit voxels (hard coding of 256 everywhere, for example); the sketch below illustrates the pattern. BrainWorks is also fairly buggy, with a number of fundamental flaws in its design. Converting it to a modern program would be more work than starting from scratch. It would be wiser to base future work off some other software base. We are going to need 64-bit capabilities soon, for example, and this would be a nightmare to try to do with BrainWorks.
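A hedged illustration (invented function names, not actual BrainWorks code) of the hard-coded 8-bit pattern described above, contrasted with a templated version that admits other voxel types:

// voxels.cxx -- hard-coded 8-bit histogram vs. a generic one
#include <cstddef>
#include <vector>

// The pattern described above: the voxel type (unsigned char) and the
// constant 256 are baked in, so 16-bit or floating point data cannot be used.
void histogram8(const unsigned char* voxels, std::size_t n, int counts[256])
{
    for (std::size_t i = 0; i < n; ++i)
        ++counts[voxels[i]];
}

// A generic alternative: voxel type, intensity range, and bin count are parameters.
template <class VoxelType>
std::vector<int> histogram(const VoxelType* voxels, std::size_t n,
                           VoxelType minVal, VoxelType maxVal, int bins)
{
    std::vector<int> counts(bins, 0);
    double range = double(maxVal) - double(minVal);
    if (range <= 0.0 || bins < 1)
        return counts;
    for (std::size_t i = 0; i < n; ++i)
    {
        int b = int((double(voxels[i]) - double(minVal)) / range * (bins - 1));
        if (b >= 0 && b < bins)
            ++counts[b];
    }
    return counts;
}

int main()
{
    unsigned char img8[4] = { 0, 10, 10, 255 };
    int counts8[256] = { 0 };
    histogram8(img8, 4, counts8);

    float imgF[4] = { 0.0f, 0.5f, 0.5f, 1.0f };   // not representable in the 8-bit version
    std::vector<int> countsF = histogram(imgF, std::size_t(4), 0.0f, 1.0f, 64);
    return 0;
}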