Weekly Post #8 & #9 - When I got locked out of my blog

Apologies for no blog last week (and this one being posted 2 days later than usual) - I got locked out of my own blog and none of the vendors could help me regain access. So this is actually a new blog, with all the old data copied in and back-dated.

This post may be a long one, as I try to recall everything which has happened over the past 14 days.

  • Assignment hand-in for CS38220 (the other module)
  • Lovelace Colloquium poster completed
  • Finished the report on De Luca & Termini's alignment metric
  • Mid-Project Demo
  • Vectorised (and therefore sped up) functions
  • Work on the GUI
  • Started implementing Hybrid Entropy
  • Some more work on the GUI

Taiga update

I said I would try to keep using Taiga.io and I have done just that! Here's where I'm at!

(It's not that I did nothing in Sprint 2 - I carried the work over into the next Sprint, so Taiga counts it as no progress :( )


Lovelace Colloquium Poster

Poster

Since this poster was created I have gone on to make quite a lot more progress, which I will cover in this blog post (and maybe another).

The Colloquium is this coming Thursday, so I will need to reduce the number of story points being brought into this week, as I will be travelling to and from Sheffield on Wednesday and Friday.


Mid-project Demo

On Thursday 17th March, I met with Dr. Patricia Shaw, my second marker, to give her a progress report and a demo of where the project currently stood.

She was looking to mark me out of 5 in 3 key areas:

  • Project Description
  • Technical Work
  • Technical Issues

I am pleased to report I received 5, 4 and 5 respectively. This equals 4.6% of my final mark completed - yay!


Vectorise

Something which I had noted time and time again is the poor performance of my De Luca & Termini algorithm (which I should start calling 'Non-probabilistic Entropy', as that is its real name). This was caused by the use of for loops in both the De Luca and the membership functions.

To get an idea of what Vectorisation is, please visit the MATLAB page - http://uk.mathworks.com/help/matlab/matlab_prog/vectorization.html
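To give a flavour of the change, here is a minimal sketch of a loop-based versus a vectorised per-pixel calculation. This is not my actual membership function - the scaling of 8-bit intensities by 255 is just an illustrative assumption - but it shows the kind of rewrite involved.

    % Loop-based version: visits every pixel individually
    function mu = membershipLoop(img)
        mu = zeros(size(img));
        for r = 1:size(img, 1)
            for c = 1:size(img, 2)
                mu(r, c) = double(img(r, c)) / 255;   % scale 8-bit intensities to [0, 1]
            end
        end
    end

    % Vectorised version: the same calculation as one element-wise operation
    function mu = membershipVec(img)
        mu = double(img) ./ 255;
    end

The vectorised form removes the nested for loops entirely and lets MATLAB operate on the whole matrix at once, which is where the speed-up comes from.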

To illustrate the power of vectorising my code, I produced some graphs comparing no vectorisation, partial vectorisation (membership function only) and full vectorisation.

TTR

binaryCongeal is the function which calls de-luca, which in turn calls the membership functions.

Total time

As you can see, I have more than halved the total run-time by vectorising the code, making it much more user-friendly.

There is still some way to go to bring its speed up to that of Shannon Entropy (and Hybrid - I'll speak on this later), but it's a marked improvement over where I was initially.


GUI

If I had thought carefully about my predicament of being locked out of my blog, I would've taken screenshots as I went along building my GUI - unfortunately, I forgot. So I will include screenshots of where I am now.

Load up

Upon loading, you are presented with a plain screen. The user can either choose to generate the large pgm file (from a folder of mammograms) or load in one they have generated before.

This could be modified to be one load button, but for ease of implementation it is currently two. The naming of the buttons will most likely need to be tweaked to be clearer to people other than myself.
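For illustration, here is a rough sketch of the two load paths. The helper name generateLargePgm and the variable names are assumptions for this sketch, not my actual code.

    % 'Generate' path: build the large pgm from a folder of mammograms
    folder = uigetdir('', 'Select a folder of mammograms');
    bigImage = generateLargePgm(folder);    % hypothetical helper name

    % 'Load' path: read in a pgm that was generated previously
    [file, path] = uigetfile('*.pgm', 'Select a previously generated pgm');
    bigImage = imread(fullfile(path, file));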

Load in image

When an image is loaded in, the metadata is displayed on the right-hand side. The 'View' and 'Clear' buttons also become active: 'View' allows the user to view the image in a new figure, and 'Clear' clears the image axes and the metadata, ready to load in a new image.
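As a rough idea of what 'Clear' does behind the scenes, here is a minimal sketch assuming a GUIDE-style handles struct; the component tags (imageAxes, metaDataText, viewButton, clearButton) are made up for illustration.

    function clearButton_Callback(hObject, eventdata, handles)
        cla(handles.imageAxes);                            % wipe the displayed image
        set(handles.metaDataText, 'String', '');           % wipe the metadata panel
        set([handles.viewButton handles.clearButton], 'Enable', 'off');   % grey out until a new image is loaded
    end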

Output

Here is the output after running 3 iterations with the Hybrid alignment metric (again, more on this later!). This section is still somewhat under construction, with plans to add entropy information per iteration and the option to run all 3 current alignment metrics so the outputs can be compared side by side.
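The 'Run all' idea would look something like the sketch below - assuming 'images' already holds the loaded stack of mammograms, and with a hypothetical signature for binaryCongeal (metric name and iteration count as inputs), neither of which reflects my real code yet.

    % Planned 'Run all' comparison: run each metric and show its final mean image
    metrics = {'Shannon', 'Non-probabilistic', 'Hybrid'};
    for m = 1:numel(metrics)
        meanImage = binaryCongeal(images, metrics{m}, 3);   % hypothetical signature
        subplot(1, numel(metrics), m);
        imshow(meanImage);       % the three results appear side by side
        title(metrics{m});
    end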

The information which used to appear automatically after running the Congealing algorithm is still available via the 'View all mean images' and 'Show adjusted input' buttons.


Hybrid Entropy

I have kept mentioning it throughout this blog post - however, I will most likely dedicate an entire post to it, along with some timed comparisons against the other metrics.

Hang tight everyone, I'll post it up soon!


Next week

As I mentioned before, the Lovelace Colloquium is this week, so progress may be slightly slower; however, I do have a couple of things I would like cracked by the end of Sprint 7.

  • Figure out the rotation of the scans - why have they done this?
  • Implement the 'Run all' function
  • Adjust the final saved mean image to include the metric name and iteration number - for usability's sake
  • Check Hybrid Entropy output is indeed correct
  • Write Hybrid Entropy report
  • Continue with Final Report write up