Anti-cancer immunotherapy is dramatically changing the clinical management of many tumour types, shifting towards treatment plans that are less harmful and more personalized than conventional chemotherapy or radiation. Precise analysis of the spatial distribution of immune cells in tumourous tissue is necessary to select the patients who would respond best to treatment. Here, we introduce a deep learning-based workflow for cell nuclei segmentation and subsequent immune cell identification in routine diagnostic images. We applied our workflow to a set of hematoxylin and eosin (H&E) stained breast cancer and colorectal cancer tissue images to detect tumour-infiltrating lymphocytes. First, to segment all nuclei in the tissue, we applied the multiple-image input layer architecture (Micro-Net, Dice coefficient (DC) $0.79\pm 0.02$). We supplemented Micro-Net with a newly introduced texture block to increase segmentation accuracy (DC = $0.80\pm 0.02$), while preserving the shallow architecture of the segmentation network with only 280 K trainable parameters (compared with, e.g., U-Net with ∼1900 K parameters, DC = $0.78\pm 0.03$). We then added an active contour layer to the ground truth images to further increase performance (DC = $0.81\pm 0.02$). Second, to discriminate lymphocytes from the set of all segmented nuclei, we explored a multilayer perceptron and achieved a classification f-score of 0.70. Remarkably, colour normalization significantly improved the binary classification of segmented nuclei (f-score = 0.80). To inspect model generalization, we evaluated the trained models on a public dataset that was not used during training. We conclude that the proposed workflow achieves promising results and, with little effort, can be employed in multi-class nuclei segmentation and identification tasks.
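The Dice coefficient (DC) used throughout the abstract to score segmentation quality is a standard overlap measure between a predicted mask and the ground truth. A minimal NumPy sketch (the function name and the convention of returning 1.0 for two empty masks are our assumptions, not details taken from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice coefficient between two binary segmentation masks.

    DC = 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1
    (perfect agreement).
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: masks sharing 2 of their 3 and 2 foreground pixels
a = np.zeros((4, 4)); a[0, 0] = a[0, 1] = a[1, 1] = 1
b = np.zeros((4, 4)); b[0, 1] = b[1, 1] = 1
print(dice_coefficient(a, b))  # 2*2 / (3+2) = 0.8
```

In practice the per-image DC values would be averaged over a test set to obtain figures such as the reported $0.81\pm 0.02$.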
Published online: 1 Jan 2018 · Research Article · Open Access
Volume 29, Issue 1 (2018), pp. 75–90
The recent introduction of whole-slide scanning systems has enabled the accumulation of high-quality pathology images into large collections, opening new perspectives in cancer research as well as new analysis challenges. Automated identification of tumour tissue in the whole-slide image enables further use of established grading systems that classify tumour cell abnormalities and predict tumour development. In this article, we describe several ways to achieve epithelium-stroma classification of tumour tissues in digital pathology images by employing annotated superpixels to train machine learning algorithms. We emphasize that annotating superpixels, rather than manually outlining tissue classes in raw images, is a less time-consuming and more effective way of producing ground truth for computational pathology pipelines. In our approach, the feature space for supervised learning is created from superpixels with assigned tissue classes by extracting colour and texture parameters and applying dimensionality reduction methods. Alternatively, to train a convolutional neural network, the labelled superpixels are used to generate square image patches by moving a fixed-size window around each superpixel centroid. The proposed method simplifies the process of ground truth data collection and should minimize the time a skilled expert spends on manual annotation of whole-slide images. We evaluate our method on a private dataset of colorectal cancer images. The obtained results confirm that the method produces accurate reference data suitable for use with different machine learning-based classification algorithms.