Coding Phase 2.1

Hello!

This week I worked on developing a method for the quantitative evaluation of the U-Net model. In the last meeting, it was decided that the Intersection-over-Union (IOU) of the segmented image generated by this model and the corresponding ground truth would be an appropriate metric for model evaluation.

The IOU metric, also called the Jaccard index, measures the number of pixels common between the ground truth and inferred (predicted) masks divided by the total number of pixels present across both masks. 

[Figure: illustration of IOU, from "Metrics to Evaluate your Semantic Segmentation Model" by Ekin Tiu]


The intersection (Ground truth ∩ Predicted image) consists of the pixels found in both the prediction mask and the ground truth mask, whereas the union (Ground truth ∪ Predicted image) consists of all pixels found in either the prediction or the ground truth mask.
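
Written in the same set notation used for the Dice coefficient later in this post, with A as the set of ground-truth foreground pixels and B as the set of predicted foreground pixels:
                                 IOU(A,B) = | intersection(A,B) | / | union(A,B) |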

An easy way to calculate this value is to convert the images to matrices and then find the intersection and union via element-wise multiplication and addition, respectively.

To load the image as a matrix, we can use
BufferedImage image = ImageIO.read(new File(pathToImage));
// 'loader' is a DataVec image loader used to convert the image to an INDArray
INDArray imageMatrix = loader.asMatrix(image);

For finding the union of the ground truth and the inferred image:
INDArray resultAdd = gTruth.add(inferred);
int union = resultAdd.scan(Conditions.greaterThan(0.0)).intValue();

Similarly, for finding the intersection of the ground truth and the inferred image:
INDArray resultMul = gTruth.mul(inferred);
int intersection = resultMul.scan(Conditions.greaterThan(0.0)).intValue();

Finally, the IOU is calculated as:
float iou = (float)intersection/union;
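
Putting these pieces together, a minimal sketch of the whole computation could look like the following (assuming both masks have the same shape and any non-zero pixel counts as foreground; the class and method names are only for illustration):

import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.indexing.conditions.Conditions;

public class IoUCalculator {
    // IOU of two binary masks of the same shape
    public static float computeIoU(INDArray gTruth, INDArray inferred) {
        // A pixel belongs to the union if it is non-zero in either mask
        int union = gTruth.add(inferred).scan(Conditions.greaterThan(0.0)).intValue();
        // A pixel belongs to the intersection if it is non-zero in both masks
        int intersection = gTruth.mul(inferred).scan(Conditions.greaterThan(0.0)).intValue();
        // Guard against an empty union to avoid division by zero
        return union == 0 ? 0.0f : (float) intersection / union;
    }
}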




To verify the correctness of the code, the following images were used as a test case:



It was already known that the IOU for these images is 0.142857. After using these images as input to the code, the same value for IOU was obtained.
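
As an extra sanity check on hand-made data (these are not the actual test images, just a made-up pair of 3x3 masks with the same overlap ratio), two tiny binary masks can be fed through the computeIoU sketch above (Nd4j.create is from org.nd4j.linalg.factory.Nd4j):

// Made-up 3x3 binary masks: intersection = 1 pixel, union = 7 pixels
INDArray gTruth = Nd4j.create(new float[][] {
        {1, 1, 0},
        {1, 1, 0},
        {0, 0, 0}});
INDArray inferred = Nd4j.create(new float[][] {
        {1, 0, 0},
        {0, 0, 1},
        {0, 1, 1}});
// Prints 1/7 ≈ 0.142857
System.out.println(IoUCalculator.computeIoU(gTruth, inferred));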


Furthermore, I found the IOU values for 20 inferred images and the corresponding ground truth (refer to the datasets: https://drive.google.com/drive/folders/1mPoAuW98VMYrqDFtz7QdX9ZA43iuUylU and https://drive.google.com/drive/folders/1ZetKvrxNPULv_AejnTvnxEi0zlk2Zg4B).
(Many thanks to Mr. Yuta Tokuoka for helping).


I also observed that the Dice coefficient is another commonly used metric for evaluation of image segmentation. The Dice similarity coefficient of two sets A and B is expressed as:
                                 dice(A,B) = 2 * | intersection(A,B) | / ( | A | + | B | )
where |A| represents the cardinality of set A.


Note: The Dice index is related to the Jaccard index according to:
                                        dice(A,B) = 2 * jaccard(A,B) / (1 + jaccard(A,B) )
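
For reference, with the intersection and union counts from the IOU code above, the Dice value follows directly; this is only a small sketch, not the exact implementation linked below:

// For binary masks, |A| + |B| = intersection + union
float dice = (2.0f * intersection) / (intersection + union);
// Equivalently, from the Jaccard (IOU) value
float diceFromIou = (2.0f * iou) / (1.0f + iou);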

The code for the Dice coefficient can be found at https://github.com/Medha-B/unetdl4j/blob/master/src/main/java/org/sbml/spatial/segmentation/UnetIOU.java (commit 4fb0cc4 and commit 07da0ac).

Besides this, I also ran the model with 1 input channel by converting the images to grey-scale instead of the RGB / buffered BGR type.

// Per-pixel grey-scale conversion using the standard luminance weights,
// applied to each pixel (column j, row i) inside nested loops over the image
Color c = new Color(image.getRGB(j, i));
int red = (int) (c.getRed() * 0.299);
int green = (int) (c.getGreen() * 0.587);
int blue = (int) (c.getBlue() * 0.114);
int grey = red + green + blue;
Color newColor = new Color(grey, grey, grey);
bufferedImage.setRGB(j, i, newColor.getRGB());
 
There was, unfortunately, no change in the output.

Also, various statistics related to training the model, such as the score vs. iteration chart (the value of the loss function on the current minibatch), model and training information, and the ratio of updates to parameters (by layer) for all network weights vs. iteration, can be visualized in real time in the browser.
This can be done using the following snippet of code:

// Start (or get) the DL4J training UI server and back it with in-memory stats storage
UIServer uiServer = UIServer.getInstance();
StatsStorage ss = new InMemoryStatsStorage();
uiServer.attach(ss);
// Collect stats from the network during training and route them to the storage
model.addListeners(new StatsListener(ss));

The code for this can be found in the repository (commit 10346c0).
Note: I was unable to view the UI at http://localhost:9000/train/overview; the issue seems to have something to do with my Eclipse configuration or Windows configuration. Adding this code does not give any compilation errors, so I will keep it in the code and try to work around the problem.
In the meantime, I was able to save the stats (along with some gibberish) while training a model with 30 images over 10 epochs.
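
For what it is worth, DL4J also provides a FileStatsStorage backend that writes the stats to a file instead of keeping them only in memory. I have not yet verified that this is the right way around my issue, but a rough sketch (with an arbitrary file name) would be:

// Persist training stats to a file instead of holding them only in memory
StatsStorage ss = new FileStatsStorage(new File("unetTrainingStats.dl4j"));
model.addListeners(new StatsListener(ss));
// The same file can later be attached to the UI for visualization:
// UIServer.getInstance().attach(new FileStatsStorage(new File("unetTrainingStats.dl4j")));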

I am also looking at the possibility of testing the U-Net model using ROCMultiClass in DL4J.
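
A first sketch of what that might look like (assuming a DataSetIterator named testIter over the test data; I have not actually run this yet):

// Evaluate the trained network with multi-class ROC metrics
ROCMultiClass roc = new ROCMultiClass();
model.doEvaluation(testIter, roc);
System.out.println(roc.stats());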

Until the next time...
Farvel!
