
Best Posters Voting

1 - 27
Machine learning based label-free fluorescence lifetime skin cancer screening.
Renan A. Romano*, Ramon G. T. Rosa, Ana Gabriela Salvio, Javier A. Jo, Cristina Kurachi.
Renan Arnon Romano
Skin cancer is the most common cancer type worldwide. Early detection is critical and can increase survival rates. Well-trained dermatologists are able to diagnose it accurately through clinical inspection and biopsy; however, clinically similar lesions are often misclassified. This work aims to discriminate similar benign and malignant lesions of both pigmented and non-pigmented types. Fluorescence lifetime imaging measurements were carried out on patients with lesions diagnosed by a dermatologist and confirmed by biopsy. The technique does not require the addition of any markers and can be performed noninvasively. Metabolic fluorescence lifetime images were acquired using a Nd:YAG laser emitting at 355 nm to excite the skin fluorophores. The collagen/elastin, NADH and FAD emission spectral bands were analyzed for nodular basal cell carcinomas and intradermal nevi, as well as for melanomas and pigmented seborrheic keratoses. Features extracted from the lifetime decays were used as the input of a simple partial least squares discriminant analysis (PLS-DA) model. After splitting the data into training and test sets, a ROC area of around 0.8 was achieved on the test set for both the melanoma and the basal cell carcinoma discrimination tasks.
label-free imaging, fluorescence lifetime imaging, computer aided diagnosis
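A minimal sketch of the classification step described in this abstract, using scikit-learn's PLSRegression as a PLS-DA classifier whose continuous output is scored with ROC AUC. The feature matrix, labels, number of latent components and train/test split are placeholders, not values reported by the authors.

```python
# PLS-DA sketch on placeholder lifetime features (benign vs. malignant).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))      # lifetime-decay features, one row per lesion (placeholder)
y = rng.integers(0, 2, size=120)    # 0 = benign, 1 = malignant (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# PLS-DA: partial least squares regression against the class label,
# with the continuous prediction used as a discriminant score.
pls = PLSRegression(n_components=2)
pls.fit(X_train, y_train)
scores = pls.predict(X_test).ravel()

print("ROC AUC on the test set:", roc_auc_score(y_test, scores))
```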
1 - 23
High-throughput phenotyping of plant roots in temporal series of images using deep learning
Nicolas Gaggion, Thomas Roule, Martin Crespi, Federico Ariel, Thomas Blein, Enzo Ferrante
Rafael Nicolas Gaggion Zulpo
Root segmentation in plant images is a crucial step when performing high-throughput plant phenotyping. This task is usually performed in a manual or semi-automatic way, delineating the roots in pictures of plants growing vertically on the surface of a semisolid agar medium. Temporal phenotyping is generally avoided because of the technical difficulty of capturing such pictures over time. In this project, we employ a low-cost device composed of 3D-printed plastic parts, low-price cameras and infrared LED lights to obtain a photo-sequence of growing plants. We propose a segmentation algorithm based on convolutional neural networks (CNNs) to extract the plant roots, and present a comparative study of three different CNN models for this task. Our goal is to generate a reliable graph representation of the root system architecture, useful for deriving descriptive phenotyping parameters.
plant root segmentation, high-throughput phenotyping, cnns
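As a rough illustration of the kind of CNN segmentation model compared in this work, the sketch below defines a small encoder-decoder that maps one frame of the photo-sequence to per-pixel root/background logits. The architecture, input resolution and PyTorch framework are assumptions made for illustration; the three models actually compared are not specified here.

```python
# Tiny encoder-decoder for binary root segmentation (illustrative only).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),            # per-pixel root/background logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
frame = torch.randn(1, 1, 256, 256)         # one grayscale frame of the photo-sequence (placeholder)
mask_logits = model(frame)                  # (1, 1, 256, 256) segmentation logits
loss = nn.BCEWithLogitsLoss()(mask_logits, torch.zeros_like(mask_logits))  # dummy target
```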
1 - 35
Skin tone and undertone determination using a Convolutional Neural Network model
M. Etchart, J. Garella, G. De Cola, C. Silva, J. Cardelino
Emanuele Luzio
In the makeup industry, skin products are recommended to a guest based on their skin color and personal preferences. While the latter plays a key role in the final choice, accurate skin color and foundation matching is a critical starting point of the process. Skin color and foundation shades are categorized in the industry by their tone and undertone. Skin tone is typically divided into 6 categories linked to epidermal melanin, known as the Fitzpatrick scale, ranging from fair to deep, while undertone is usually defined by 3 categories: cool, neutral and warm. Other scales exist, such as the Pantone Skin Tone Guide, which reaches 110 combinations of tone and undertone. Both tone and undertone can be well represented by a two-dimensional continuum or discretized into as many ordered categories as desired. Non-uniform illumination, auto exposure, white balance and skin conditions (spots, redness, etc.) all pose important challenges when determining skin color from direct measurements of semi-controlled face images. Previous work has shown good results for skin tone classification into 3 or 4 categories, while undertone classification has not yet been addressed in the literature. We propose a solution for inferring skin tone and undertone from face images by training a CNN that outputs a two-dimensional regression score representing skin tone and undertone. The CNN was trained on face images labeled with the 6 discrete tone and 3 undertone categories, mapped into a score for regression. This approach achieves an accuracy of 78% for skin tone and 82% for undertone. In addition, the score allows for a simplified matching scheme between skin tone/undertone and the foundation colors.
skin tone, regression, convolutional neural network
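A hedged sketch of the label-to-score mapping and two-dimensional regression described above, written in PyTorch. The mapping of the 6 tone and 3 undertone categories onto [0, 1], the toy backbone and the MSE loss are illustrative assumptions; the authors' actual network and target encoding are not given.

```python
# Map discrete tone/undertone labels to a 2-D score and regress it with a toy CNN.
import torch
import torch.nn as nn

def labels_to_score(tone_idx, undertone_idx):
    """Map (tone in 0..5, undertone in 0..2) to a 2-D target in [0, 1]^2 (assumed encoding)."""
    return torch.stack([tone_idx / 5.0, undertone_idx / 2.0], dim=-1)

# Tiny CNN regressor standing in for the (unspecified) backbone used by the authors.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                       # (tone score, undertone score)
)

faces = torch.randn(4, 3, 128, 128)         # cropped face images (placeholder)
targets = labels_to_score(torch.tensor([0., 2., 4., 5.]),
                          torch.tensor([0., 1., 2., 1.]))
loss = nn.MSELoss()(model(faces), targets)  # two-dimensional regression objective
```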
1 - 42
Unsupervised domain adaptation for brain MR image segmentation through cycle-consistent adversarial networks
Julián Alberto Palladino, Diego Fernandez Slezak, Enzo Ferrante
Julián Alberto Palladino
"Image segmentation is one of the pilar problems in the fields of computer vision and medical imaging. Segmentation of anatomical and pathological structures in magnetic resonance images (MRI) of the brain is a fundamental task for neuroimaging (e.g brain morphometric analysis or radiotherapy planning). Convolutional Neural Networks (CNN) specifically tailored for biomedical image segmentation (like U-Net or DeepMedic) have outperformed all previous techniques in this task. However, they are extremely data-dependent, and maintain a good performance only when data distribution between training and test datasets remains unchanged. When such distribution changes but we still aim at performing the same task, we incur in a domain adaptation problem (e.g. using a different MR machine or different acquisition parameters for training and test data). In this work, we developed an unsupervised domain adaptation strategy based on cycle-consistent adversarial networks. We aim at learning a mapping function to transform volumetric MR images between domains (which are characterized by different medical centers and MR machines with varying brand, model and configuration parameters). This technique allows us to reduce the Jensen-Shannon divergence between MR domains, enabling automatic segmentation with CNN models on domains where no labeled data was available."
unsupervised domain adaptation, cyclegans, biomedical image segmentation
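The sketch below illustrates the cycle-consistency constraint at the core of cycle-consistent adversarial networks, with two placeholder generators mapping between the two MR domains. It omits the adversarial (discriminator) losses and uses 2-D slices instead of volumes, so it is only a simplified view of the mechanism, not the authors' implementation.

```python
# Cycle-consistency loss between two MR domains (simplified, 2-D, no discriminators).
import torch
import torch.nn as nn

def cycle_consistency_loss(G_AB, G_BA, real_A, real_B, weight=10.0):
    """L1 penalty on the A -> B -> A and B -> A -> B reconstructions."""
    l1 = nn.L1Loss()
    rec_A = G_BA(G_AB(real_A))
    rec_B = G_AB(G_BA(real_B))
    return weight * (l1(rec_A, real_A) + l1(rec_B, real_B))

# Placeholder generators (single convolutions) just to make the sketch run.
G_AB = nn.Conv2d(1, 1, 3, padding=1)
G_BA = nn.Conv2d(1, 1, 3, padding=1)
mr_A = torch.randn(2, 1, 128, 128)   # slices from medical center / scanner A (placeholder)
mr_B = torch.randn(2, 1, 128, 128)   # slices from medical center / scanner B (placeholder)
loss = cycle_consistency_loss(G_AB, G_BA, mr_A, mr_B)
```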
2 - 4
Anatomical Priors for Image Segmentation via Post-Processing with Denoising Autoencoders
Agostina Larrazabal, César Martinez, Enzo Ferrante
Agostina Larrazabal
"We introduce Post-DAE, a post-processing method based on denoising autoencoders to improve the anatomical plausibility of arbitrary biomedical image segmentation algorithms. Some of the most popular segmentation methods still rely on post-processing strategies like conditional random fields to incorporate connectivity constraints into the resulting masks. Even if it is a valid assumption in general, these methods do not offer a straightforward way to incorporate more complex priors like convexity or arbitrary shape restrictions. Post-DAE leverages the latest developments in manifold learning via denoising autoencoders. We learn a low-dimensional space of anatomically plausible segmentations, and use it to impose shape constraints by post-processing anatomical segmentation masks obtained with arbitrary methods. Our approach is independent of image modality and intensity information since it employs only segmentation masks for training. We performed experiments in segmentation of chest X-ray images. Our experimental results show that Post-DAE can improve the quality of noisy and incorrect segmentation masks obtained with a variety of standard methods, by bringing them back to a feasible space, with almost no extra computational cost."
anatomical segmentation, autoencoders, post-processing
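A minimal sketch of the Post-DAE idea: a denoising autoencoder trained only on segmentation masks, later applied at test time to project a noisy mask produced by an arbitrary method back onto the learned space of plausible shapes. Layer sizes, the corruption model and the BCE objective are illustrative assumptions, not the authors' configuration.

```python
# Denoising autoencoder over segmentation masks (mask in, plausible mask out).
import torch
import torch.nn as nn

class MaskDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, mask):
        return self.decoder(self.encoder(mask))

dae = MaskDAE()
clean_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()                  # ground-truth mask (placeholder)
noisy_mask = (clean_mask + 0.3 * torch.randn_like(clean_mask)).clamp(0, 1)  # corrupted input
loss = nn.BCELoss()(dae(noisy_mask), clean_mask)                       # denoising objective
# At test time: plausible_mask = dae(mask_from_arbitrary_segmentation_method)
```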
2 - 19
Graph Feature Regularization: Combining machine learning models with graph data
Federico Albanese, Esteban Feuerstein, Leandro Lombardi
Federico Albanese
"In recent years, the amount of available data has drastically increased. However, labelling such data is hugely expensive. In this scenario, semi-supervised learning emerge as a vitally important tool, which combines labelled data (supervised machine learning) and unlabelled data (unsupervised learning) in order to make better predictions. In particular, graph based algorithms takes into account the relationships between the instances of the data and the underlying graph structures to make those predictions. In addition, in the context of data analysis, there are scenarios that can be naturally think as graphs. This occurs in situations where in addition to individual properties, connectivity between the elements of the data set is also important. Therefore, it is logical that machine learning models include information from both a node and its neighbours when making a prediction. This works propose adding graph feature regularization terms (GFR) to the the objective function to maximize. This new regularization terms depends on the structure of the network, the weight of the edges and the features of the node. We conclude that adding this terms to gradient boosted trees can outperform complex network architectures such as the Graph Convolutional Networks."
graph, machine learning, regularization
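As a hedged illustration of the mechanism, the sketch below adds a graph-Laplacian smoothness penalty on the predictions to an XGBoost custom objective. The poster's actual GFR terms also involve edge weights and node features in a formulation not specified here, so this only shows how a graph-dependent term can enter a gradient boosted trees objective.

```python
# Gradient boosted trees with a graph-Laplacian penalty on the predictions (illustrative).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))            # node features (placeholder)
y = rng.normal(size=n)                 # regression targets (placeholder)

# Toy adjacency matrix over the training instances and its Laplacian L = D - A.
A = (rng.random((n, n)) < 0.02).astype(float)
A = np.maximum(A, A.T)
L = np.diag(A.sum(axis=1)) - A
lam = 0.1                              # strength of the graph regularization

def squared_error_with_graph_reg(preds, dtrain):
    """Gradient/Hessian of 0.5*(p - y)^2 + lam * p^T L p (diagonal Hessian approximation)."""
    labels = dtrain.get_label()
    grad = (preds - labels) + 2.0 * lam * (L @ preds)
    hess = np.ones_like(preds) + 2.0 * lam * np.diag(L)
    return grad, hess

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=squared_error_with_graph_reg)
```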