All posts by choosehappy

Download TCGA Digital Pathology Images (FFPE)

Digital pathology image analysis requires high-quality input images. While a large number of images are available in The Cancer Genome Atlas (TCGA), the ones currently exposed in the data portal are frozen specimens and are *not* suitable for computational analysis. This post discusses how to download the Formalin-Fixed Paraffin-Embedded (FFPE) slides for the corresponding patients.
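For flavor, here is a rough sketch of how such a query could look against the current GDC API; this is an assumption on my part (the endpoint and the filter fields, e.g. experimental_strategy = "Diagnostic Slide" for FFPE), not necessarily the route the post itself takes.

    # A minimal sketch, assuming the current GDC API; field names/values below
    # (experimental_strategy = "Diagnostic Slide" for FFPE) are assumptions,
    # and the post's actual download procedure may differ.
    import json
    import requests

    filters = {
        "op": "and",
        "content": [
            {"op": "=", "content": {"field": "cases.project.project_id", "value": "TCGA-BRCA"}},
            {"op": "=", "content": {"field": "data_format", "value": "SVS"}},
            {"op": "=", "content": {"field": "experimental_strategy", "value": "Diagnostic Slide"}},
        ],
    }
    params = {"filters": json.dumps(filters), "fields": "file_id,file_name", "size": "5"}
    hits = requests.get("https://api.gdc.cancer.gov/files", params=params).json()["data"]["hits"]

    for h in hits:
        # stream each whole-slide image to disk
        with requests.get(f"https://api.gdc.cancer.gov/data/{h['file_id']}", stream=True) as r:
            with open(h["file_name"], "wb") as f:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    f.write(chunk)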

Continue reading Download TCGA Digital Pathology Images (FFPE)

Using Matlab, Pytables (hdf5) and (a bit of) Pytorch

As we test out new deep learning frameworks for a possible migration, one of the questions that remained was dataset interoperability. Essentially, we want to be able to create a dataset for training a deep learning framework from as many applications as possible (python, matlab, R, etc), so that our students can use a language that is familiar to them, as well as leverage all of the existing in-house code we have for data manipulation.
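As a flavor of what this looks like (a minimal sketch with hypothetical file and array names, not the post's exact code), one can write patches into an HDF5 file with PyTables and read them back through a PyTorch Dataset:

    # Minimal sketch: store image patches with PyTables, read them in PyTorch.
    # File name "patches.h5" and array name "imgs" are illustrative only.
    import numpy as np
    import tables
    import torch
    from torch.utils.data import Dataset

    # --- write side: an extendable array of uint8 patches ---
    patch_shape = (32, 32, 3)
    h5 = tables.open_file("patches.h5", mode="w")
    storage = h5.create_earray(h5.root, "imgs",
                               atom=tables.UInt8Atom(),
                               shape=(0,) + patch_shape)
    storage.append(np.random.randint(0, 255, (10,) + patch_shape, dtype=np.uint8))
    h5.close()

    # --- read side: a thin PyTorch Dataset over the HDF5 file ---
    class H5PatchDataset(Dataset):
        def __init__(self, fname):
            self.fname = fname
            with tables.open_file(fname, "r") as f:
                self.n = f.root.imgs.shape[0]

        def __len__(self):
            return self.n

        def __getitem__(self, idx):
            with tables.open_file(self.fname, "r") as f:
                img = f.root.imgs[idx]
            return torch.from_numpy(img)

    ds = H5PatchDataset("patches.h5")
    print(len(ds), ds[0].shape)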

Continue reading Using Matlab, Pytables (hdf5) and (a bit of) Pytorch

Notes on Transfer Learning in Caffe

This is a very straightforward practical approach:

https://github.com/NVIDIA/DIGITS/tree/master/examples/fine-tuning

One trick I’ve learned from somewhere (can’t find the link, unfortunately), which is a break from the above tutorial, is to simply reduce the base learning rate by an order of magnitude when transferring, while simultaneously setting the “new” layers to have their lr_mult an order of magnitude higher than the rest of the network.

so, to initially train:

  1. learning rate = 0.01
  2. each layer's lr_mult = 1

to transfer learn:

  1. learning rate = 0.001
  2. each previous layer's lr_mult = 1
  3. new layers' lr_mult = 10

This saves a lot of editing of the files and allows for small amounts of adjustment to the existing layers, while focusing the bulk of the learning on the newer layers, yet still resisting overfitting on small datasets.
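As a rough illustration (layer names, sizes, and file names here are hypothetical, not an actual network definition), the same trick expressed with pycaffe's NetSpec looks something like:

    # Sketch of the lr_mult trick with pycaffe's NetSpec (illustrative names only):
    # pre-trained layers keep lr_mult = 1, the replaced "new" layer gets 10.
    import caffe
    from caffe import layers as L, params as P

    n = caffe.NetSpec()
    n.data, n.label = L.Data(source="train_lmdb", backend=P.Data.LMDB,
                             batch_size=32, ntop=2)
    # existing layer: standard multipliers so it only fine-tunes gently
    n.conv1 = L.Convolution(n.data, num_output=32, kernel_size=3,
                            param=[dict(lr_mult=1), dict(lr_mult=2)])
    n.relu1 = L.ReLU(n.conv1, in_place=True)
    # "new" layer (renamed so its weights are re-initialized): 10x multiplier
    n.fc_new = L.InnerProduct(n.relu1, num_output=2,
                              param=[dict(lr_mult=10), dict(lr_mult=20)])
    n.loss = L.SoftmaxWithLoss(n.fc_new, n.label)

    with open("train_val.prototxt", "w") as f:
        f.write(str(n.to_proto()))
    # in the solver, base_lr would then be dropped from 0.01 to 0.001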

This is pretty informative:

https://cs231n.github.io/transfer-learning/

And, as far as I remember, this is the original transfer learning paper:

http://www.datascienceassn.org/sites/default/files/How%20Transferable%20are%20Features%20in%20Deep%20Neural%20Networks.pdf

On Stain Normalization in Deep Learning

Just wanted to take a moment and share some quick stain normalization experimental results. We have an in-house trained nuclei segmentation model which works fairly well when the test images have similar stain presentation properties, but when new datasets arrive which are notably different, we tend to see decreased classifier performance.

Here we look at one of these images and ways of improving classifier robustness.
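For context, one common baseline is Reinhard-style color normalization; below is a minimal sketch of that idea (my own illustration, not necessarily the method evaluated in the post):

    # Minimal Reinhard-style normalization sketch (not the post's method):
    # match the LAB-space mean/std of a source image to a target image.
    import numpy as np
    from skimage import color

    def reinhard_normalize(source_rgb, target_rgb):
        """Map source_rgb toward target_rgb's stain appearance; returns float RGB in [0, 1]."""
        src = color.rgb2lab(source_rgb)
        tgt = color.rgb2lab(target_rgb)
        for c in range(3):
            # standardize each LAB channel, then rescale to the target's statistics
            src[..., c] = (src[..., c] - src[..., c].mean()) / (src[..., c].std() + 1e-8)
            src[..., c] = src[..., c] * tgt[..., c].std() + tgt[..., c].mean()
        return np.clip(color.lab2rgb(src), 0, 1)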

Continue reading On Stain Normalization in Deep Learning

Real time Data Augmentation using Nvidia Digits + Python Layer

One of the common ways of increasing the size of a training set is to augment the original data with a set of modified patches. These modifications often include (a) rotations, (b) mirroring, (c) lighting adjustment, (d) affine transformations (shearing, etc), (e) magnification modification, (f) addition of noise, etc. This blog post discusses how to do the most trivial modification, rotation, in real time using a python layer through Nvidia Digits. Given this code, it should be easy to add on other desired augmentations.
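For a sense of the structure involved, a Caffe Python layer that applies random 90-degree rotations might look roughly like the sketch below (class and blob handling are illustrative, not the post's code):

    # Sketch of a Caffe Python layer doing random 90-degree rotation augmentation.
    # Assumes square patches so the blob shape is unchanged by the rotation.
    import random
    import numpy as np
    import caffe

    class RotationAugmentLayer(caffe.Layer):
        def setup(self, bottom, top):
            pass

        def reshape(self, bottom, top):
            # output blob has the same shape as the input (N x C x H x W)
            top[0].reshape(*bottom[0].data.shape)

        def forward(self, bottom, top):
            data = bottom[0].data
            for i in range(data.shape[0]):
                k = random.randint(0, 3)  # number of 90-degree rotations
                # rotate in the spatial (H, W) plane of the C x H x W patch
                top[0].data[i] = np.rot90(data[i], k, axes=(1, 2))

        def backward(self, top, propagate_down, bottom):
            pass  # pure augmentation layer: nothing to back-propagate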

Continue reading Real time Data Augmentation using Nvidia Digits + Python Layer

Revised Deep Learning approach using Matlab + Caffe + Python

Our publication, “Deep learning for digital pathology image analysis: A comprehensive tutorial with selected use cases”, showed how to use deep learning to address many common digital pathology tasks. Since then, many improvements have been made, both in the field and in my implementation of them. In this blog post, I re-address the nuclei segmentation use case using the latest and greatest approaches.

Continue reading Revised Deep Learning approach using Matlab + Caffe + Python

Dividing and re-merging large images (Humpty Dumpty)

One of the challenges of working in digital pathology is that the associated images can be excessively large: too large to load fully into memory, and too large to use in common pipelines. For example, an Aperio SVS file that we’ll look at today is 60,000 x 42,600 pixels. If we tried to load such an image uncompressed, in RGB space, it would require ~7GB, making it too large to consider using in our deep learning pipelines, as there wouldn’t be enough RAM on the GPU for both the data and the filter activations.
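The basic split-and-merge bookkeeping, stripped of the whole-slide-image reading (which would go through something like OpenSlide), can be sketched as follows; the function names are mine, not the post's:

    # Minimal sketch: split a large array into tiles and stitch them back together.
    import numpy as np

    def split_into_tiles(img, tile_size):
        """Yield (row, col, tile) for non-overlapping tiles; edge tiles may be smaller."""
        h, w = img.shape[:2]
        for r in range(0, h, tile_size):
            for c in range(0, w, tile_size):
                yield r, c, img[r:r + tile_size, c:c + tile_size]

    def merge_tiles(tiles, full_shape):
        """Reassemble tiles produced by split_into_tiles into one image."""
        out = np.zeros(full_shape, dtype=np.uint8)
        for r, c, tile in tiles:
            out[r:r + tile.shape[0], c:c + tile.shape[1]] = tile
        return out

    # tiny demo on a fake "slide"
    img = np.random.randint(0, 255, (100, 130, 3), dtype=np.uint8)
    tiles = list(split_into_tiles(img, 32))
    assert np.array_equal(merge_tiles(tiles, img.shape), img)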

Continue reading Dividing and re-merging large images (Humpty Dumpty)

Tutorial: A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images

This post provides a tutorial on using the code and data for our paper “A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images” by Andrew Janowczyk, Scott Doyle, Hannah Gilmore, and Anant Madabhushi.

Continue reading Tutorial: A resolution adaptive deep hierarchical (RADHicaL) learning scheme applied to nuclear segmentation of digital pathology images

Efficient pixel-wise deep learning on large images

This blog post is based on the net surgery example provided by Caffe. It takes the concept and expands it into a working example that produces pixel-wise output images, generating output in ~2 seconds (simple approach) or ~35 seconds (advanced approach) for a 2,000 x 2,000 image, an improvement over the ~15 hours of a naive pixel-wise approach.
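At its core, the net surgery idea is to transplant trained fully-connected weights into equivalent convolutional layers, so the patch classifier can be run densely over a whole image in a single forward pass; a rough pycaffe sketch (with hypothetical file, layer, and blob names) follows:

    # Sketch of the net surgery idea; prototxt/caffemodel and layer names
    # ("fc6"/"fc6-conv", "prob") are hypothetical placeholders.
    import caffe

    # original patch-wise net and its fully-convolutional counterpart
    net = caffe.Net("patch_net.prototxt", "patch_net.caffemodel", caffe.TEST)
    net_full = caffe.Net("full_conv_net.prototxt", "patch_net.caffemodel", caffe.TEST)

    # transplant fc weights into the matching conv layers, reshaped to conv filters
    params = {"fc6": "fc6-conv", "fc7": "fc7-conv"}
    for fc, conv in params.items():
        w, b = net.params[fc]
        net_full.params[conv][0].data[...] = w.data.reshape(net_full.params[conv][0].data.shape)
        net_full.params[conv][1].data[...] = b.data

    # a single forward pass now yields a (coarse) per-pixel probability map
    net_full.blobs["data"].reshape(1, 3, 2000, 2000)
    out = net_full.forward()["prob"]  # shape: 1 x classes x H' x W'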

Continue reading Efficient pixel-wise deep learning on large images