Pub. online: 30 Oct 2024 · Type: Research Article · Open Access
Journal: Informatica
Volume 35, Issue 4 (2024), pp. 751–774
Abstract
To address U-Net's limited local receptive field and its underuse of spatial context information in MRI brain tumour segmentation, this paper proposes a novel 3D multi-scale attention U-Net, MAU-Net. Firstly, a Mixed Depth-wise Convolution (MDConv) module is introduced in the encoder and decoder; it leverages convolution kernels of various sizes to extract multi-scale features of brain tumour images and effectively strengthens the feature expression of the brain tumour lesion region during up- and down-sampling. Secondly, a Context Pyramid Module (CPM) combining multi-scale features and attention is embedded at the skip connections to combine multi-scale local feature enhancement with global feature correlation. Finally, MAU-Net adopts Self-ensemble in the decoding process so that detailed features of brain tumour images sampled at different scales complement each other, further improving segmentation performance. Ablation and comparison experiments on the publicly available BraTS 2019/2020 datasets validate its effectiveness: it achieves Dice Similarity Coefficients (DSC) of 90.6%/90.2%, 82.7%/82.8%, and 77.9%/78.5% on whole tumour (WT), tumour core (TC), and enhanced tumour (ET) segmentation, respectively. Additionally, on the BraTS 2021 training set, the DSC for WT, TC, and ET reached 93.7%, 93.2%, and 88.9%, respectively.
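As a rough illustration of the mixed depth-wise convolution idea, the sketch below splits the channels into groups and convolves each group depth-wise with a different kernel size. The fixed mean-filter kernels and the particular group split are assumptions standing in for the module's learned weights and actual design:

```python
import numpy as np

def depthwise_conv2d(x, kernel):
    """Per-channel 2D convolution with 'same' zero padding. x: (C, H, W)."""
    k = kernel.shape[0]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), mode="constant")
    C, H, W = x.shape
    out = np.zeros((C, H, W), dtype=float)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernel)
    return out

def mixed_depthwise_conv(x, kernel_sizes=(3, 5, 7)):
    """Split channels into groups; each group gets its own kernel size."""
    groups = np.array_split(np.arange(x.shape[0]), len(kernel_sizes))
    out = np.empty(x.shape, dtype=float)
    for idx, k in zip(groups, kernel_sizes):
        kernel = np.full((k, k), 1.0 / (k * k))  # mean filter stands in for learned weights
        out[idx] = depthwise_conv2d(x[idx], kernel)
    return out
```

In the actual module the kernels are learned and the operation is 3D, but the channel-group/multi-kernel structure is the same.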
Journal: Informatica
Volume 35, Issue 2 (2024), pp. 283–309
Abstract
In recent years, Magnetic Resonance Imaging (MRI) has become a prevalent medical imaging technique, offering comprehensive anatomical and functional information. However, MRI data acquisition is time-consuming, susceptible to motion artifacts, and limited by hardware constraints. To address these limitations, this study proposes a novel method that leverages generative adversarial networks (GANs) to generate multi-domain MRI images from a single input MRI image. Two primary generator architectures, the ResUnet and StarGAN generators, were incorporated into this framework. Furthermore, the networks were trained on multiple datasets, augmenting the available data and enabling the generation of images with contrasts from one dataset given an input image from another. Experimental evaluations on the IXI and BraTS2020 datasets substantiate the efficacy of the proposed method compared to an existing method, as assessed by the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Normalized Mean Absolute Error (NMAE). The synthesized images hold substantial potential as resources for medical professionals engaged in research, education, and clinical applications. Future work will extend the experiments to larger datasets and the proposed approach to 3D images, enhancing medical diagnostics in practical applications.
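The image-quality metrics named above can be sketched as follows. Note that the NMAE normalization by the reference intensity range is one common convention and an assumption here, not necessarily the paper's exact definition:

```python
import numpy as np

def psnr(ref, syn, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((ref - syn) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def nmae(ref, syn):
    """Mean absolute error normalized by the reference intensity range."""
    return np.mean(np.abs(ref - syn)) / (ref.max() - ref.min())
```

Higher PSNR and lower NMAE indicate a synthesized image closer to the reference contrast.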
Pub. online: 12 Jan 2021 · Type: Research Article · Open Access
Journal: Informatica
Volume 32, Issue 1 (2021), pp. 23–40
Abstract
Anti-cancer immunotherapy is dramatically changing the clinical management of many types of tumours towards less harmful and more personalized treatment plans than conventional chemotherapy or radiation. Precise analysis of the spatial distribution of immune cells in the tumourous tissue is necessary to select the patients that would best respond to treatment. Here, we introduce a deep learning-based workflow for cell nuclei segmentation and subsequent immune cell identification in routine diagnostic images. We applied our workflow to a set of hematoxylin and eosin (H&E) stained breast cancer and colorectal cancer tissue images to detect tumour-infiltrating lymphocytes. Firstly, to segment all nuclei in the tissue, we applied the multiple-image input layer architecture (Micro-Net, Dice coefficient (DC) $0.79\pm 0.02$). We supplemented the Micro-Net with an introduced texture block to increase segmentation accuracy (DC = $0.80\pm 0.02$), while preserving the shallow architecture of the segmentation network with only 280 K trainable parameters (compared to, e.g., U-Net with ∼1900 K parameters, DC = $0.78\pm 0.03$). Subsequently, we added an active contour layer to the ground truth images to further increase the performance (DC = $0.81\pm 0.02$). Secondly, to discriminate lymphocytes from the set of all segmented nuclei, we explored a multilayer perceptron and achieved a 0.70 classification f-score. Remarkably, the binary classification of segmented nuclei was significantly improved (f-score = 0.80) by colour normalization. To inspect model generalization, we evaluated the trained models on a public dataset that was not used during training. We conclude that the proposed workflow achieved promising results and, with little effort, can be employed in multi-class nuclei segmentation and identification tasks.
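The Dice coefficient (DC) used throughout this evaluation is a standard overlap measure between a predicted and a ground-truth mask; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice overlap between two binary masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks count as perfect agreement.
    return 1.0 if denom == 0 else 2.0 * inter / denom
```

DC ranges from 0 (no overlap) to 1 (identical masks), which is how figures such as $0.80\pm 0.02$ above should be read.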
Journal: Informatica
Volume 21, Issue 3 (2010), pp. 409–424
Abstract
The paper addresses the problem of over-saturated protein spot detection and extraction in two-dimensional electrophoresis gel images. An effective technique for the detection and reconstruction of over-saturated protein spots is proposed. The paper presents: an algorithm for median filter mask adaptation for initial filtering of the gel image; the over-saturation models used for gel image analysis; several protein spot models used for reconstruction; and a technique for automatic over-saturated protein spot search and reconstruction. Experimental investigation confirms that the proposed search technique finds up to 96% of over-saturated protein spots. Moreover, the proposed flexible protein spot shape models for reconstruction are faster and more accurate than the flexible diffusion model.
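As a loose sketch of the detection step, over-saturated regions can be found by flagging pixels clipped at the sensor's saturation level and grouping them into connected plateaus. The threshold, 4-connectivity, and minimum-area parameters below are illustrative assumptions, not the paper's actual search algorithm:

```python
import numpy as np
from collections import deque

def find_saturated_spots(img, sat_level=255, min_area=4):
    """Return 4-connected components of saturated pixels with at least min_area pixels."""
    mask = img >= sat_level
    visited = np.zeros(mask.shape, dtype=bool)
    H, W = mask.shape
    spots = []
    for si in range(H):
        for sj in range(W):
            if mask[si, sj] and not visited[si, sj]:
                # Breadth-first flood fill of one saturated plateau.
                comp, queue = [], deque([(si, sj)])
                visited[si, sj] = True
                while queue:
                    i, j = queue.popleft()
                    comp.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and mask[ni, nj] and not visited[ni, nj]:
                            visited[ni, nj] = True
                            queue.append((ni, nj))
                if len(comp) >= min_area:
                    spots.append(comp)
    return spots
```

Each returned component is a candidate over-saturated spot whose clipped interior would then be reconstructed from a fitted spot shape model.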
Journal: Informatica
Volume 14, Issue 4 (2003), pp. 431–444
Abstract
In this paper, we propose a new method for the copyright protection of digital images. To embed the watermark, the method partitions the original image into blocks and uses PCA to project these blocks onto a linear subspace. A watermark table, computed from the projection points, is kept by the method. When extracting a watermark, the method projects the blocks of the modified image with the same PCA function, and both the newly projected points and the watermark table are used to reconstruct the watermark. In our experiments, we tested the scheme on original images modified by JPEG lossy compression, blurring, cropping, rotation, and sharpening; the results show that the method is robust and workable.
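The block-wise PCA projection at the core of the embedding step can be sketched as follows, assuming non-overlapping square blocks and an SVD-based PCA; the block size and number of components are illustrative choices, not the paper's parameters:

```python
import numpy as np

def block_pca_projections(img, block=8, n_components=4):
    """Flatten non-overlapping blocks and project them onto the top principal axes."""
    H, W = img.shape
    blocks = np.array([
        img[i:i + block, j:j + block].ravel()
        for i in range(0, H - block + 1, block)
        for j in range(0, W - block + 1, block)
    ], dtype=float)
    centered = blocks - blocks.mean(axis=0)
    # SVD of the centered block matrix yields the principal axes in Vt.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T  # one projection point per block
```

During extraction the same projection is applied to the blocks of the (possibly attacked) image, and the resulting points are compared against the stored watermark table.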