Posters

Poster sessions

1 - 1
(De)Constructing Bias on Skin Lesion Datasets
Alceu Bissoto, Michel Fornaciali, Eduardo Valle, Sandra Avila
Alceu Emanuel Bissoto
Melanoma is the deadliest form of skin cancer. Automated skin lesion analysis plays an important role in early detection. Nowadays, the ISIC Archive and the Atlas of Dermoscopy dataset are the most employed skin lesion sources to benchmark deep-learning-based tools. However, all datasets contain biases, often unintentional, due to how they were acquired and annotated. Those biases distort the performance of machine-learning models, creating spurious correlations that the models can unfairly exploit, or, contrarily, destroying cogent correlations that the models could learn. We propose a set of experiments to investigate bias, positive and negative, in existing skin lesion datasets. We show that models can correctly classify skin lesion images without clinically meaningful information: disturbingly, a machine-learning model trained on images where no information about the lesion remains achieves an accuracy above an AI benchmark curated with dermatologists' performance. That strongly suggests spurious correlations guiding the models. We fed the models additional clinically meaningful information, which failed to improve the results even slightly, suggesting the destruction of cogent correlations. Our main findings raise awareness of the limitations of models trained and evaluated on small datasets, and may suggest future guidelines for models intended for real-world deployment.
bias, skin lesion analysis, deep learning
1 - 2
A Latent Space of Protein Contact Maps
Emerson C. F. Lima, Fábio L. Custódio, Laurent E. Dardenne
Emerson Correia Freitas Lima
"Finding a good homologous protein is crucial to predicting a target protein structure with good quality. The search for remote homologous is given by looking for target's neighbors in a given protein space. Deep Convolutional Generative Adversarial Networks (DCGANs) are deep learning models capable of learning meaningful embedded representation of data. Current methods are based on sequence alignments or contacts alignments. In this work, we build a latent space of protein folds through DCGANs with the aim of contributing to the problem of remote homology detection."
protein remote homology detection, computational biology, generative adversarial networks
1 - 3
A novel dynamic asset allocation system using Feature Saliency Hidden Markov models for smart beta investing
Elizabeth Fons, Paula Dawson, Jeffrey Yau, Xiao-jun Zeng, John Keane
Elizabeth Fons
The financial crisis of 2008 generated interest in more transparent, rules-based strategies for portfolio construction, with smart beta strategies emerging as a trend among institutional investors. Whilst they perform well in the long run, these strategies often suffer from severe short-term drawdown (peak-to-trough decline) with fluctuating performance across cycles. To manage short-term risk (cyclicality and underperformance), we build a dynamic asset allocation system using Hidden Markov Models (HMMs). We use a variety of portfolio construction techniques to test our smart beta strategies, and the resulting portfolios show an improvement in risk-adjusted returns, especially on more return-oriented portfolios (up to 50% return in excess of the market, adjusted by relative risk, annually). In addition, we propose a novel smart beta allocation system based on the Feature Saliency HMM (FSHMM) algorithm, which performs feature selection simultaneously with the training of the HMM, to improve regime identification. We evaluate our systematic trading system with real-life assets using MSCI indices; further, the results (up to 60% return in excess of the market, adjusted by relative risk, annually) show model performance improvement with respect to portfolios built using full-feature HMMs.
hidden markov model, portfolio optimization, feature selection
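A minimal sketch related to the abstract above: regime detection on asset returns with a plain Gaussian HMM from the hmmlearn library. The returns, the number of regimes and the feature count are made-up assumptions, and the Feature Saliency HMM variant described by the authors is not part of hmmlearn and is not shown.

    # Minimal sketch: regime detection on synthetic factor returns with a Gaussian HMM.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    # Hypothetical daily returns for a few smart beta factors, shape (T, n_features)
    returns = rng.normal(0.0, 0.01, size=(1000, 3))

    hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200, random_state=0)
    hmm.fit(returns)
    regimes = hmm.predict(returns)          # most likely regime per day
    print("last regime:", regimes[-1])      # could drive a dynamic allocation rule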
1 - 4
AI education in Latin America
Lesly Zerna
Lesly Zerna
A short study of different initiatives in Latin America that are working to democratize AI education (foundations). Many tech communities across Latin America, led by volunteers, are seeking to help people learn and apply AI knowledge.
education, ed tech, ai education
1 - 5
Automatic emotion recognition from psychophysiological data: a preliminary bilateral electrodermal activity study.
Tomás Ariel D’Amelio & Alberto Andrés Iorio
Tomás Ariel D'Amelio
"Affective computing as a field of study has the objective of incorporating information about emotions and other affective states into technology. One of its areas of study is the automatic recognition of emotions. This can be achieved by different means, one of them being the application of psychophysiological methods. The aim of the present poster is to present the implementation of models of emotional recognition from bilateral electrodermal activity signals. In this way, the impact of introducing new bilateral features will be analyzed as a possible contribution to the existing affective state recognition models. "
affective computing, emotion recognition, bilateral electrodermal activity
1 - 6
Multiscale Pain Intensity Estimation from Facial Expressions using CNN-RNN
Jefferson Quispe Pinares, Guillermo Cámara Chávez, Rensso Victor Hugo Mora Colque
Jefferson Quispe Pinares
Deep Learning methods have achieved impressive results in several complex tasks such as pain estimation from facial expressions in video sequences. Pain is difficult to measure because it depends on subjective and person-specific features; however, its estimation is important for clinical evaluation processes. This work proposes the use of Convolutional Neural Networks (CNN) with transfer learning and a sequence model based on Gated Recurrent Units (GRU) in order to obtain accurate pain estimates on different scales. Beforehand, preprocessing is performed using facial landmarks. The Prkachin and Solomon Pain Intensity (PSPI) score, based on Action Units (AU), has been widely investigated, but other scales exist for estimating pain. For correct automatic estimation of pain intensity from video, we use the Visual Analog Scale (VAS) and other scales, which allows us to report results that take into account the evaluation metrics used by specialists.
pain, personalized, deep learning
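A minimal sketch of the kind of architecture described in the abstract above: a per-frame CNN followed by a GRU that emits one sequence-level pain score (e.g. a VAS value). All layer sizes, the tiny encoder and the regression head are illustrative assumptions, not the authors' model.

    # Minimal sketch: per-frame CNN features + GRU for a sequence-level pain score.
    import torch
    import torch.nn as nn

    class CnnGruPain(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            self.cnn = nn.Sequential(                    # tiny frame encoder (stand-in
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # for a pretrained CNN)
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> (B*T, 32)
            self.gru = nn.GRU(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)             # sequence-level score

        def forward(self, video):                        # video: (B, T, 3, H, W)
            b, t = video.shape[:2]
            feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
            _, h = self.gru(feats)                       # h: (1, B, hidden)
            return self.head(h[-1]).squeeze(-1)          # (B,)

    model = CnnGruPain()
    out = model(torch.randn(2, 16, 3, 64, 64))           # 2 clips of 16 frames
    print(out.shape)                                      # torch.Size([2])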
1 - 7
Bayesian encoding and decoding properties of neurons in the dentate gyrus of the hippocampus
Diego M. Arribas, Antonia Marin Burgin, Luis G. Morelli
Diego Arribas
The intrinsic properties of neurons in a population are diverse, and distinct outputs may arise from this heterogeneity, reducing redundancy in the population. Due to the continuous birth of neurons in the dentate gyrus, neurons of different ages receive, process and convey information at any given point in time. While maturing, neurons develop their intrinsic properties in a stereotyped way, producing heterogeneity in the population. We hypothesize that young neurons play an active role in the processing of information already in their immature state. We study how neurons of different ages transform their input into a spiking response by performing patch clamp recordings and injecting fluctuating currents. By fitting Bayesian models called Generalized Linear Models to our data, we can predict responses for a given stimulus with a high degree of accuracy while obtaining a reduced characterization of the recorded neurons. We use these characterizations to compare the encoding properties of different age populations. How do different neurons represent stimuli? We explore this question by using the neurons' encoding models and recorded responses to estimate stimuli through model-based decoding. We can explore what features of the stimuli are preserved in the different neurons' responses and compute information measures.
neuroscience, bayes, coding
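A minimal sketch of the encoding-model step mentioned above: a Poisson Generalized Linear Model fit to binned spike counts with a lagged-stimulus design matrix. The data, filter shape and bin/lag choices are synthetic assumptions used only to make the example self-contained.

    # Minimal sketch: Poisson GLM encoding model on synthetic binned spikes.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    T, lags = 5000, 10
    stim = rng.normal(size=T)                         # fluctuating input current (binned)
    true_filter = np.exp(-np.arange(lags) / 3.0)      # assumed ground-truth filter

    # Design matrix: stimulus history over the last `lags` bins
    X = np.column_stack([np.roll(stim, k) for k in range(lags)])
    X[:lags] = 0.0
    rate = np.exp(X @ true_filter * 0.5 - 1.0)
    spikes = rng.poisson(rate)                        # synthetic spike counts

    glm = sm.GLM(spikes, sm.add_constant(X), family=sm.families.Poisson())
    fit = glm.fit()
    print(fit.params[1:4])                            # recovered filter weights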
1 - 8
Bayesian model of human visual search in natural images
Melanie Sclar, Sebastián Vita, Gastón Bujia, Guillermo Solovey, Juan E Kamienkowski
Melanie Sclar
The ability to efficiently find objects in the visual field is essential for a number of everyday life activities. In the last decades, there has been extensive development of models that accurately predict the most likely fixation locations (saliency maps), and their performance is reaching a ceiling. Today, one of the biggest challenges in the field is to go beyond saliency maps and predict a sequence of fixations related to a given task. Visual search can be seen as an active process in which humans update the probability of finding a target at each point in space as they acquire more information. In this work, we build a Bayesian model for visual search in natural images. The model takes a saliency map as prior and computes the most likely fixation location given all the previous ones, taking several visual features into account. We recorded eye movements while participants looked for an object in a natural interior image. Our model was indistinguishable from humans in several measures of human behavior, in particular the Scanpath Similarity. Thus, we were able to reproduce not only the general performance but also the entire sequence of eye movements.
visual search, neuroscience, bayesian modeling
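A minimal sketch of the Bayesian update idea behind search models of this kind: start from a saliency map as the prior over target location and, after each fixation that does not find the target, down-weight the locations covered by that fixation. The grid size, fovea radius and detection probability are illustrative assumptions, not the authors' parameters.

    # Minimal sketch: posterior over target location, updated after unsuccessful fixations.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, radius, p_detect = 32, 32, 4.0, 0.9

    prior = rng.random((H, W))                          # stand-in saliency map
    posterior = prior / prior.sum()

    def update_after_miss(post, fix):
        yy, xx = np.mgrid[0:H, 0:W]
        covered = (yy - fix[0])**2 + (xx - fix[1])**2 <= radius**2
        like = np.where(covered, 1.0 - p_detect, 1.0)   # P(miss | target here)
        post = post * like
        return post / post.sum()

    for fixation in [(5, 5), (20, 12)]:                 # two unsuccessful fixations
        posterior = update_after_miss(posterior, fixation)
    next_fix = np.unravel_index(posterior.argmax(), posterior.shape)
    print("next fixation:", next_fix)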
1 - 9
Bayesian multilevel models for assessing the quality of NMR resolved protein structures
Agustina Arroyuelo, Jorge A. Vila, Osvaldo A. Martin
Agustina Arroyuelo
"On the present work we exploit the benefits of multilevel Bayesian models trained with protein structural observables, with the purpose of protein structure validation. The Bayesian models trained in this work allow us to estimate a reference for the observables that is unique for each protein structure thus revising the computation of these values. Through a data set of high quality protein structures we obtain reference curves for the expected differences between observed and corrected magnitudes, and benchmark NMR resolved structures against these curves. We also present a computational tool for graphic validation of NMR resolved protein structures based on the presented Bayesian framework. "
bayesian, multilevel models, protein structure
1 - 10
Combining Deep Learning and Prior Knowledge for Crop Mapping in Tropical Regions from Multitemporal SAR Image Sequences
Laura Elena Cué La Rosa, Raul Queiroz Feitosa, Patrick Nigri Happ, Ieda Del’Arco Sanches and Gilson Alexandre Ostwald Pedro da Costa
Laura Elena Cué La Rosa
Accurate crop type identification and crop area estimation from remote sensing data in tropical regions are still considered challenging tasks. The more favorable weather conditions, in comparison to the characteristic conditions of temperate regions, permit higher flexibility in land use, planning, and management, which implies complex crop dynamics. Moreover, the frequent cloud cover prevents the use of optical data during large periods of the year, making SAR data an attractive alternative for crop mapping in tropical regions. This paper evaluates the effectiveness of Deep Learning (DL) techniques for crop recognition from multi-date SAR images from tropical regions. Three DL strategies are investigated: autoencoders, convolutional neural networks, and fully-convolutional networks. The paper further proposes a post-classification technique to enforce prior knowledge about crop dynamics in the target area. Experiments conducted on a Sentinel-1 multitemporal sequence of a tropical region in Brazil reveal the pros and cons of the tested methods. In our experiments, the proposed crop dynamics model was able to correct up to 16.5% of classification errors and managed to improve the performance up to 3.2% and 8.7% in terms of overall accuracy and average F1-score, respectively.
crop mapping, multitemporal image analysis, deep learning
1 - 11
Convolutional-LSTM for multi image to single medical diagnostics
Luis Leal, Erick Ramirez, Fernando Juarez, Diana Letona, Mildred Aspuac, Marvin Castillo
Luis Leal
Deep learning has been successfully applied to computer vision problems in medicine, commonly generating a medical diagnostic per image (in a supervised learning setting) and then combining the diagnostics using statistical techniques into a general diagnostic per patient. This requires a training set with a diagnostic per image, but for many medical situations, such as head scans, it is uncommon to have one diagnostic per image; instead, doctors emit a single diagnostic for the patient based on an unknown and variable number of images. We designed a convolutional-LSTM architecture and a variation of the stochastic gradient descent training pipeline to create a model for head scan diagnostics that is able to take a sequence of CT scans (of unknown and variable size) and emit a single diagnostic per patient, trained in a multi-image, single-diagnostic setting.
medical, computer vision, sequence model
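A minimal sketch of the idea in the abstract above: encode each CT slice with a small CNN, run an LSTM over the variable-length slice sequence, and emit one diagnostic per patient from the final hidden state. The layer sizes, the binary label and the per-patient loop are assumptions for illustration, not the authors' architecture or training pipeline.

    # Minimal sketch: CNN slice encoder + LSTM over a variable-length CT study.
    import torch
    import torch.nn as nn

    class ScanSequenceClassifier(nn.Module):
        def __init__(self, hidden=64, n_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())          # slice -> 16-dim feature
            self.lstm = nn.LSTM(16, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, slices):                              # (T, 1, H, W), one patient
            feats = self.encoder(slices).unsqueeze(0)           # (1, T, 16)
            _, (h, _) = self.lstm(feats)
            return self.head(h[-1])                             # one diagnostic per patient

    model = ScanSequenceClassifier()
    for n_slices in (12, 30):                                   # variable-length studies
        logits = model(torch.randn(n_slices, 1, 64, 64))
        print(n_slices, logits.shape)                           # (1, 2) in both cases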
1 - 12
Data analytics opportunities for an education service provider: The case of Plan Ceibal
Germán Capdehourat, Federica Bascans, Fabián Frommel
Germán Capdehourat
"Plan Ceibal was created in 2007 as a plan for inclusion and equal opportunities with the aim of supporting Uruguayan educational policies with technology. Since then, it became the first nationwide ubiquitous educational computer program in the world based on the 1:1 model, providing every student and teacher in the K-12 public education system with a laptop or tablet and internet access at each school. In addition, Plan Ceibal also provides digital educational content and resources to enhance the teaching and learning process, most notably a Learning Management System, a Mathematics Adaptive Platform, a remote English teaching program and an online library. Today, with close to 700,000 beneficiaries, Plan Ceibal manages a large amount of data collected from many different sources such as the end user devices, the network infrastructure and the educational platforms. This fact presents a great challenge, but also the huge opportunity to convert the massive data into rich information, which can be used to support and improve the current technology and learning educational policies."
education, laptops, wi-fi
1 - 13
Decoding neurophysiological information from Convolutional Neural Network layers in single-trial EEG classification
Ramiro Gatti, Yanina Atum and José Biurrun Manresa
Ramiro Gatti
Predicting movement from brain signals is crucial for many biomedical applications. Careful engineering and domain expertise are required to extract electroencephalogram (EEG) features into a representation suitable for the classification stage. Representation learning methods can automatically perform feature extraction and classification through optimization algorithms. In particular, Convolutional Neural Networks (ConvNets) recently showed promising results in the prediction of movement speed and force from single-trial EEG. The downside of this approach is that it does not provide direct insight into the neurophysiological phenomena underlying a decision. In this regard, there has been some progress recently in the field of network visualization, but more research is still required in order to come up with strategies for extracting neurophysiological information. In this work, we analyzed the resulting layers of ConvNets that were trained to predict movement from single-trial EEG recorded from several channels over the motor cortex. Our results show that the discriminative information is predominantly decoded in the spatial filter layer, and that the network structure can be substantially reduced by taking this knowledge into consideration.
neurophysiology, convolutional neural network, single-trial eeg
1 - 14
Deep Learning based parameterization for lightning prediction
Mailén Gómez Mayol, Luciano Vidal, Pablo Mininni
Mailén Gómez Mayol
Lightning poses a serious hazard to the public and is responsible for several fatalities and damages; therefore, it is important to improve the community's ability to forecast lightning. Predicting the spatio-temporal location of lightning is a complex problem. The usual methods for forecasting meteorological phenomena are based on the numerical integration of the equations that describe them on a spatial grid. Lightning occurs at time and space scales smaller than the usual grids, and therefore a "parameterization" is used. The nature of lightning leads us to try a new approach: deep neural networks. In this work, we train a deep neural network to generate an empirical parameterization of lightning from available data. In particular, we present a model based on the training of a GAN with numerical forecast model data and lightning observations, without knowing the equations that govern the phenomenon. The network thus trained is capable of returning the lightning rate from physical variables that describe the state of the atmosphere. Comparisons of the network output with a lightning parameterization commonly used in numerical weather forecasting show that the deep learning model can match or improve the results obtained by the usual methods.
lightning prediction, gan, atmosphere model
1 - 15
Deep Learning for Image Sequence Classification of Astronomical Events
Rodrigo Carrasco-Davis, Guillermo Cabrera-Vives, Francisco Förster, Pablo A. Estévez, Pablo Huijse, Pavlos Protopapas, Ignacio Reyes, Jorge Martínez-Palomera, Cristóbal Donoso
Rodrigo Carrasco Davis
We propose a new sequential classification model for astronomical objects based on a recurrent convolutional neural network (RCNN) which uses sequences of images as inputs. This approach avoids the computation of light curves or difference images. This is the first time that sequences of images are used directly for the classification of variable objects in astronomy. We generate synthetic image sequences which take into account the instrumental and observing conditions, obtaining a realistic set of movies for each astronomical object. The simulated data set is used to train our RCNN classifier. Using a simulated data set is faster and more adaptable to different surveys and classification tasks. Furthermore, the simulated data set distribution is close enough to the real distribution, so the RCNN with fine tuning has a similar performance on the real data set compared to the light curve classifier. The RCNN approach presents several advantages, such as a reduction of the data pre-processing, faster online evaluation, and easier performance improvement using a few real data samples. The results obtained encourage us to use the proposed method for astronomical alert broker systems that will process alert streams generated by new telescopes such as the Large Synoptic Survey Telescope.
astronomy, image simulation, fine tuning
1 - 16
DockThor-VS: A Free Docking Server For Protein-Ligand Virtual Screening using the Supercomputer SDumont
Isabella A Guedes, André M. S. Barreto, Eduardo K da Silva, Camila S Magalhães and Laurent E Dardenne
Isabella Alvim Guedes
Receptor-ligand molecular docking is a structure-based approach widely used by the scientific community in Medicinal Chemistry to assist the process of drug discovery, searching for new lead compounds against relevant therapeutic targets with known three-dimensional structures. The DockThor program, developed by our group GMMSB/LNCC, has obtained promising results in comparative studies with other well-established docking programs for pose prediction of distinct ligand chemical classes and molecular targets. Recently, we developed machine learning-based scoring functions with protein-ligand interaction-driven features for predicting binding affinities of protein-ligand complexes. The competitive performance of the DockThor program for binding mode prediction and the accuracy of the affinity functions recently developed encouraged us to develop the DockThor-VS portal as a free and reliable tool for virtual screening. The DockThor-VS portal utilizes the computational facilities provided by the SINAPAD Brazilian High-performance Platform and the petaflop supercomputer SDumont (freely available to the scientific community at http://www.dockthor.lncc.br).
drug design, molecular modeling, machine learning-based affinity prediction
1 - 17
Estimating deforestation events in the Semiarid Chaco Forest in Argentina using GIS, remote sensing and machine learning models.
Veronica Barraza, Vanesa Douna, Francisco Grings, Esteban Roitberg, Estefania Piegari
Veronica Barraza Berradas
Semi-arid forest ecosystems play an important role in seasonal carbon cycle dynamics; however, these ecosystems are prone to heavy degradation. In subtropical Argentina, the Chaco region has the highest absolute deforestation rates in the country (200,000 ha/year), and at the same time, it is the least represented ecoregion in the national protected areas system. There is a critical need for methods that enable the analysis of satellite image time series to detect forest disturbances, especially in developing countries (e.g. Argentina). The Forest Management Unit (UMSEF) in Argentina provides annual deforestation maps based on visual inspection of Landsat images (Landsat 7 ETM+ and Landsat 8 OLI), which takes long processing times and the intensive and coordinated participation of many human resources. In this research, we assess the potential of the Random Forest (RF) algorithm with the Landsat dataset and geographic information system (GIS) information to detect cover change over the Dry Chaco Forest (DCF) in Argentina. To identify the factors that drive agricultural expansion, we calculated feature importances. Results indicate that distance to previous deforestation areas, distance to rivers and remote sensing vegetation indices are sufficient to predict deforestation events.
deforestation, random forest, remote sensing
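A minimal sketch of the workflow described above: a Random Forest classifier over pixel features such as distances and vegetation indices, followed by feature importances. The feature names and the synthetic labels are assumptions used only to make the example runnable.

    # Minimal sketch: Random Forest deforestation classifier + feature importances.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    features = ["dist_prev_deforestation", "dist_rivers", "ndvi", "evi"]  # hypothetical
    X = rng.random((5000, len(features)))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.2, 5000) < 0.8).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_tr, y_tr)
    print("accuracy:", rf.score(X_te, y_te))
    for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")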
1 - 18
Evaluation of methodologies for classifying mutations in genomics
Camila Simoes, Lucía Spangenberg, Hugo Naya, Juan Cardelino
Camila Simoes
"Human genome sequencing has become a frequent tool in the clinical practice, facilitating the determination of a large number of genetic variants. The interpretation of these variants remains a great challenge and even though the development of rules and tools for variant interpretation has increased, many variants remain unclassified or with conflicting interpretation of pathogenicity . The ClinVar public database has become an indispensable resource for clinical variant interpretation, where clinical laboratories, researchers and others share their classifications of variants with evidence, documenting the clinical impact of more than 400,000 variants. ClinVar uses standard terms for pathogenicity level (recommended by ACMG/AMP), and differences in interpretation among submitters within those levels are reported as 'Conflicting interpretations of pathogenicity’. In this sense, the goal of this work is to carry out an evaluation of several Machine Learning techniques in order to reclassify variants with conflicting interpretations. To that end, we use the variants classified as ‘Benign’, ‘Likely Benign’, ‘Pathogenic’ and ‘Likely pathogenic’ from the ClinVar database, previously annotated with ANNOVAR as training set. We believe this approach could be helpful to disambiguate the interpretation of genomic variants, and improving the analysis of variants in pursuit of new insights into pathogenicity. "
genomics, variant classification, conflicting variants
1 - 19
Evaluation of the use of artificial neural networks in the classification of diabetic retinopathy with retinal fundus images obtained using smartphones
"Marilia Rosa Silveira, Juliana Herbert, Manuel Augusto Pereira Vilela "
Marilia Rosa Silveira
Diabetic retinopathy (DR) is a microvascular complication resulting from occlusion of retinal vessels caused by diabetes. Fundoscopy is used to identify clinical findings to classify DR into degrees according to the evolution of the findings. Machine Learning (ML) techniques, such as Convolutional Neural Networks (CNNs), have been used to recognize patterns in images. This project aims to apply a machine learning approach to classify fundoscopy images in order to establish priority care by an ophthalmology specialist, according to clinical imaging characteristics that converge to a higher or lower risk of blindness related to diabetic retinopathy. The project is the final work of a Biomedical Informatics undergraduate course. We are conducting a systematic review to identify, appraise and synthesize studies about the analysis of fundoscopy images using artificial neural networks to identify and classify DR. At the same time, we are preparing the database with annotated images, image preprocessing, and training and validation of the neural network. The database is composed of classified images obtained by the project "Analysis of Retinal Images Obtained with a Mobile Phone", approved by the Ethics Committee of Santa Casa de Misericórdia Hospital in Porto Alegre, as well as public data to complement this base.
fundus image, diabetic retinopathy, deep learning
1 - 20
External validation and characterization of cancer subtypes using SBC
Duitama, C.; Ahmad, A.; Fröhlich, H.
Camila Duitama
The Survival-Based Bayesian Clustering (SBC) by Ahmad and Fröhlich (2017) infers clinically relevant cancer subtypes by jointly clustering molecular and survival data. Originally, the model was tested on real Breast Cancer and Glioblastoma (GBM) data sets, without external validation. The main objective of this project was to perform an external validation of the SBC based on the Verhaak samples, along with a rigorous feature engineering process, and to characterise the clusters and signature with other clinical and omics data. A patient cohort of 421 samples (160 training, 261 validation) from the TCGA-GBM data set was retrieved with RTCGAToolbox and pre-processed. The feature engineering approaches with the most distinct Kaplan-Meier curves were Block HSIC-Lasso (p-value training = 1.08e-05, p-value validation = 0.05) and a PAFT model on a collection of oncogenic gene sets (p-value training = e+00, p-value validation = 1.8e-02). In both cases there was an improvement of the initial Predictive C-Index (Block HSIC-Lasso = +1.5%, PAFT = +27.6%) and Recovery C-Index (Block HSIC-Lasso = +8.7%, PAFT = +5.0%). The SBC has proven to perform successfully on an external TCGA-GBM patient cohort.
survival, glioblastoma, clustering
1 - 21
Flavor tagging algorithms for Jets in ATLAS, at CERN
ATLAS Collaboration
Maria Roberta Devesa
The identification of jets containing a b-quark (b-tagging) is an important component of the physics program of the ATLAS experiment at CERN. Several searches for New Physics significantly increase their sensitivity when identifying these jets. b-tagging algorithms are presented, some of them using machine learning techniques, and their performances are compared.
b-tagging, atlas, b-jets
1 - 22
Fraud Detection in Electric Power Distribution: An Approach that Maximizes the Economic Return.
Pablo Massaferro
The detection of Non-Technical Losses (NTL) is a very important economic issue for power utilities. Diverse machine learning strategies have been proposed to support electric power companies tackling this problem. Methods' performance is often measured using standard cost-insensitive metrics such as accuracy, true positive ratio, AUC, or F1. In contrast, we propose to design an NTL detection solution that maximizes the effective economic return. To that end, both the income recovered and the inspection cost are considered. Furthermore, the proposed framework can be used to design the infrastructure of the division in charge of performing customer inspections, thus assisting not only short-term decisions, e.g., which customer should be inspected first, but also the elaboration of long-term strategies, e.g., planning the company's NTL budget. The problem is formulated in a Bayesian risk framework. Experimental validation is presented using a large dataset of real users from the Uruguayan utility (UTE). The results obtained show that the proposed method can boost the company's profit and provide a highly efficient and realistic countermeasure to NTL. Moreover, the proposed pipeline is general and can be easily adapted to other practical problems.
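A minimal sketch of the expected-return idea implied by the abstract above: inspect a customer only when the expected recovered income exceeds the inspection cost, and rank inspections by expected profit rather than by fraud probability alone. The probabilities, amounts and cost below are made-up numbers, and this is not the authors' Bayesian risk formulation.

    # Minimal sketch: expected-profit inspection rule for NTL detection.
    import numpy as np

    p_fraud = np.array([0.05, 0.40, 0.75, 0.10])        # model's fraud probability per customer
    recoverable = np.array([300., 1200., 800., 5000.])  # income recovered if fraudulent
    inspection_cost = 100.0

    expected_return = p_fraud * recoverable - inspection_cost
    inspect = expected_return > 0                        # inspect only when profitable in expectation
    order = np.argsort(-expected_return)                 # prioritize by expected profit
    print("inspect:", inspect, "priority:", order)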
1 - 23
High-throughput phenotyping of plant roots in temporal series of images using deep learning
Nicolas Gaggion, Thomas Roule, Martin Crespi, Federico Ariel, Thomas Blein, Enzo Ferrante
Rafael Nicolas Gaggion Zulpo
Root segmentation in plant images is a crucial step when performing high-throughput plant phenotyping. This task is usually performed in a manual or semi-automatic way, delineating the root in pictures of plants growing vertically on the surface of a semisolid agarized medium. Temporal phenotyping is generally avoided due to technical limitations in capturing such pictures over time. In this project, we employ a low-cost device composed of plastic parts produced with a 3D printer, low-price cameras and infrared LED lights to obtain a photo-sequence of growing plants. We propose a segmentation algorithm based on convolutional neural networks (CNNs) to extract the plant roots, and present a comparative study of three different CNN models for this task. Our goal is to generate a reliable graph representation of the root system architecture, useful for obtaining descriptive phenotyping parameters.
plant root segmentation, high-throughput phenotyping, cnns
1 - 24
Identifying pathogenic variants in non-coding regions of the human genome
Ben Omega Petrazzini, Fernando Lopez-Bello and Lucia Spangenberg
Ben Omega Petrazzini
"The World Health Organization (WHO) reports around 300M cases of rare diseases worldwide1, half of them are affecting children2. There are over 7000 different types3, most of which are genetically caused by low frequency Single Nucleotide Polymorphisms (SNPs). This makes them really hard to diagnose for traditional medicine4, which hinders any possible treatment. Fortunately, this same characteristic makes them suitable for bioinformatics approaches. Classic in-silico techniques work well for coding regions of the genome. However, non-coding (NC) regions are 49 times bigger and variants are difficult to classify due to the lack of biological evidence. This results in a massive dataset with little information, which makes it very difficult to asses. To address this problem we are developing a Machine Learning algorithm in order to prioritize pathogenic variants in NC regions of the genome. To that end, we use the ClinVar public database annotated with ANNOVAR as training set. Once the best model is fitted we will use it to reduce the number of NC variants of interest in patients with an undiagnosed rare disease. We believe this approach will accelerate the diagnosis process in rare disease patients, giving a mayor relief for the individual and its family."
genomic diagnosis, rare diseases, snp prioritization
1 - 25
In silico prediction of drug-drug interactions
Nigreisy Montalvo Zulueta, María Elena Ochagavía Roque, María Elena García Ochagavía
Nigreisy Montalvo Zulueta
Drug-drug interaction (DDI) is a change in the therapeutic effects of a drug when another drug is co-administered. Early detection of these interactions is mandatory to avoid inadvertent side effects that can lead to failure of clinical treatments and increase healthcare costs. Computational prediction of DDIs has been approached as a classification problem where two classes are defined: interacting drug pairs (positive class) and non-interacting drug pairs (negative class). Positive DDIs are usually obtained from public databases that contain a list of validated DDIs; however, negative DDIs are drug pairs generated randomly, due to the lack of a "gold standard" non-DDI dataset. In the present work we propose to perform a disproportionality analysis of the FDA Adverse Event Reporting System (FAERS) with the aim of finding drug pairs that are often co-administered and did not generate a signal of interaction. We selected these pairs as negative-class examples. We calculated drug-drug pair similarity using nine biological features and finally applied five machine learning-based predictive models: decision tree, Naïve Bayes, Support Vector Machine (SVM), logistic regression and K Nearest Neighbors (KNN). SVM obtained the highest AUC value (0.77) based on ten-fold cross-validation.
bioinformatics, drug-drug interactions, supervised machine learning
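A minimal sketch of the evaluation described above: an SVM scored with AUC under ten-fold cross-validation on drug-pair similarity features. The nine feature columns and the labels are synthetic placeholders, not the FAERS-derived data.

    # Minimal sketch: SVM + ten-fold cross-validated AUC on drug-pair features.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((600, 9))                      # 9 biological similarity features per pair
    y = rng.integers(0, 2, 600)                   # 1 = interacting pair, 0 = negative pair

    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    auc = cross_val_score(svm, X, y, cv=10, scoring="roc_auc")
    print(f"mean AUC over 10 folds: {auc.mean():.2f}")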
1 - 26
Machine Learning Applied to Social Sciences: New Insights to Understand Human Behavior
Francielle M. Nascimento, Marielli Bittencourt, Henrique Carlos de Castro, Dante A. C Barone
Francielle Marques
"The research in Social Sciences is fundamental to the study of human behavior. Beliefs and motivations play an important role in people's decision-making and choices. This relationship is relevant to explain the behavior in a population, and therefore, it allows for outlining social actions to improve the community. Knowing this, we proposed a way to discover meaningful patterns from a database of social studies using state-of-the-art techniques of Artificial Intelligence and Social Sciences. In this context, we selected Social Activism to perform classification using the extensive Word Values Survey (WVS) database. The database used contain a survey applied in several countries, divided into periods called Waves. The Waves handled in this study were Wave 5 (2005-2009), Wave 6 (2010-2014), and Wave 7 (2018-2022). Thus, we discovered the patterns in the databases in the longitudinal view that make sense from the perspective of the Social Sciences. These patterns indicate the tendency of peoples of the world is concerned with issues of moral-ethical than other aspects, such as politics, for example."
application, explainable, computational social science
1 - 27
Machine learning based label-free fluorescence lifetime skin cancer screening.
Renan A. Romano*, Ramon G. T. Rosa, Ana Gabriela Salvio, Javier A. Jo, Cristina Kurachi.
Renan Arnon Romano
Skin cancer is the most prominent cancer type all over the world. Early detection is critical and can increase survival rates. Well-trained dermatologists are able to diagnose accurately through clinical inspections and biopsies. However, clinically similar lesions are usually incorrectly classified. This work aims to screen similar benign and malignant lesions for both pigmented and non-pigmented types. Fluorescence lifetime imaging measurements were carried out on patients with dermatologist- (and biopsy-) diagnosed lesions. This technique does not require the addition of any markers and can be performed noninvasively. Metabolic fluorescence lifetime images were acquired using a Nd:YAG laser emitting at 355 nm to excite the skin fluorophores. Collagen/elastin, NADH and FAD emission spectral bands were analyzed for nodular basal cell carcinomas and intradermic nevi, as well as for melanoma and pigmented seborrheic keratosis. Features were extracted from these lifetime decays and used as the input to a simple partial least-squares discriminant analysis model. After splitting the data into train and test sets, it was possible to achieve a ROC area of around 0.8 on the test set, both for melanoma and basal cell carcinoma discrimination.
label-free imaging, fluorescence lifetime imaging, computer aided diagnosis
1 - 28
Machine Learning for Humanoid Robot Soccer
Alexandre Muzio, Luckeciano Melo, Lucas Steuernagel, Marcos Maximo
Marcos Ricardo Omena de Albuquerque Maximo
ITANDROIDS is a robot soccer team from the Aeronautics Institute of Technology, Brazil. This poster presents recent efforts by the team in applying machine learning to robot soccer. We present a convolutional neural network (CNN) based on the You Only Look Once (YOLO) system for detecting the ball and the goal posts on the soccer field of a humanoid robot competition. We also show efforts in applying deep reinforcement learning to learn motions and behaviors for a simulated humanoid robot. We used Proximal Policy Optimization (PPO), which is suited for continuous domain tasks, to learn a dribbling behavior which surpassed our own hand-coded behavior. Moreover, we used the same algorithm to learn a high-performance kick motion. In the latter case, behavior cloning was used to bootstrap the training, which helped the algorithm converge to a better local minimum.
machine learning, robotics, robot soccer
1 - 29
Neural Networks applied to small datasets: efficiency evolution of natural gas networks
de Meio Reggiani, Martin C.; Chiarvetto Peralta, Lucila L.; Viego, Valentina N.; Brignole, Nélida B.
Chiarvetto Peralta Lucila Lourdes
In January 2002, price updates of Argentinian public services were halted. Based on a dataset provided by the natural gas regulatory authority, an Artificial Neural Network was employed to study the efficiency change in the natural gas transport system. Gas leakage, which served as a proxy for operating inefficiency, was estimated by a Multilayer Perceptron. The model was trained using technology-related data from 2002 onwards, and the previous information was employed for leakage prediction, allowing comparison against the real values.
natural gas transport system, efficiency, artificial neural network
1 - 30
Optimizing classifier parameters for insect datasets
Bruno Gomes Coelho, Andre Maletzke, Gustavo Batista
Bruno Gomes Coelho
Motivated by the real-life problem of identifying and then selectively capturing dangerous insects that transmit various diseases, this poster analyzes the random search method for optimizing the parameters of two of the most recommended machine learning algorithms, Support Vector Machines (SVM) and Random Forest (RF).
parameter optimization, insect classification, random search
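A minimal sketch of random-search hyper-parameter optimization for an SVM and a Random Forest using scikit-learn's RandomizedSearchCV; the search spaces, budget and synthetic data are illustrative assumptions, not the insect datasets or settings from the poster.

    # Minimal sketch: random search over SVM and Random Forest hyper-parameters.
    from scipy.stats import loguniform, randint
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    searches = {
        "SVM": RandomizedSearchCV(
            SVC(), {"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-4, 1e0)},
            n_iter=30, cv=5, random_state=0),
        "RF": RandomizedSearchCV(
            RandomForestClassifier(random_state=0),
            {"n_estimators": randint(50, 500), "max_depth": randint(2, 20)},
            n_iter=30, cv=5, random_state=0),
    }
    for name, search in searches.items():
        search.fit(X, y)
        print(name, search.best_score_, search.best_params_)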
1 - 31
Photometric redshifts for S-PLUS using machine learning techniques
Erik V. R. Lima, Marcus V. Costa-Duarte, Laerte Sodré Jr. & the S-PLUS collaboration
Erik Vinicius Rodrigues de Lima
The distance to celestial objects is a fundamental quantity for studies in astronomy and cosmology. Until recently, the only way to obtain this information was via spectroscopy, but increasingly large surveys, with enormous amounts of data, have made this approach infeasible. Therefore, a new way to estimate the distance to objects, based on photometry, was developed. Photometric redshifts can be acquired for many objects in a time-inexpensive manner. In this work, the objective is to investigate how machine learning methods perform when using the 12-filter system of S-PLUS. A comparison with the currently used method for this purpose (BPZ) is also performed. The results show that S-PLUS has the potential to acquire accurate photometric redshifts using machine learning techniques.
photometric redshifts, galaxies, machine learning
1 - 32
Predicting Diabetes Disease Evolution Using Financial Records and Recurrent Neural Networks
Rafael T. Sousa, Lucas A. Pereira, Anderson S. Soares
Rafael Texeira Sousa
Managing patients with chronic diseases is a major and growing healthcare challenge in several countries. A chronic condition, such as diabetes, is an illness that lasts a long time and does not go away, and often leads to the patient's health gradually getting worse. While recent works use raw electronic health records (EHR) from hospitals, this work uses only financial records from health plan providers to predict diabetes disease evolution with a self-attentive recurrent neural network. Financial data are used because they can act as an interface to international standards, as the records standard encodes medical procedures. The main goal was to assess high-risk diabetics, so we predict records related to acute diabetes complications such as amputations and debridements, revascularization and hemodialysis. Our work succeeds in anticipating complications 60 to 240 days in advance, with an area under the ROC curve ranging from 0.81 to 0.94. This assessment will give healthcare providers the chance to intervene earlier and head off hospitalizations. We aim to deliver personalized predictions and recommendations to individual patients, with the goal of improving outcomes and reducing costs.
diabetes, rnn, self-attention
1 - 33
Prediction of Frost Events Using Machine Learning and IoT Sensing Devices
Ana Laura Diedrichs, Facundo Bromberg, Diego Dujovne
Ana Laura Diedrichs
"In this poster, I would like to introduce a frost prediction already published in IEEE IoT Journal, https://doi.org/10.1109/JIOT.2018.2867333, preprint available in https://anadiedrichs.github.io/files/publications/2018-IoT-Diedrichs.pdf If there are time and space, I would like to share my experience using LSTM or GRU for a frost prediction system, a WIP project, so I can have the chance to get feedback about time series deep learning implementations."
frost, machine learning, precision agriculture
1 - 34
Reinforcement learning for Bioprocess Optimisation
Ilya Orson Sandoval Cárdenas, Panagiotis Petsagkourakis, Eric Bradford, Antonio del Rio-Chanona, Dongda Zhang
Ilya Orson Sandoval Cárdenas
"Bioprocesses have recently received attention to produce clean and sustainable alternatives to fossil-based materials. However, they are generally difficult to optimize due to their unsteady-state operation modes and stochastic behaviours. Furthermore, plant-model mismatch is often present. In this work we leverage a model-free Reinforcement Learning optimisation strategy. We apply a Policy Gradient method to tune a control policy parametrized by a recurrent neural network. We assume that a preliminary model of the process is available, which is exploited to obtain an initial optimal control policy. Subsequently, this policy is partially updated based on a variation of the starting model to simulate the plan-model mismatch."
reinforcement learning, optimization, bioprocesses
1 - 35
Skin tone and undertone determination using a Convolutional Neural Network model
M. Etchart, J. Garella, G. De Cola, C. Silva, J. Cardelino
Emanuele Luzio
In the makeup industry, skin products are recommended to a guest based on their skin color and personal preferences. While the latter plays a key role in the final choice, accurate skin color and foundation matching is a critical starting point of the process. Skin color and foundation shades are categorized in the industry by their tone and undertone. Skin tone is typically divided into 6 categories linked to epidermal melanin, called the Fitzpatrick scale, ranging from fair to deep, while undertone is usually defined by 3 categories: cool, neutral and warm. Other scales exist, such as the Pantone Skin Tone Guide, reaching 110 combinations of tone and undertone. Both tone and undertone can be well represented by a two-dimensional continuum or be discretized into as many ordered categories as desired. Non-uniform illumination, auto exposure, white balance and skin conditions (spots, redness, etc.) all pose important challenges in determining skin color from direct measurements of semi-controlled face images. Previous work has shown good results for skin tone classification into 3 or 4 categories, while undertone classification has not yet been addressed in the literature. We propose a solution for inferring skin tone and undertone from face images by training a CNN which outputs a two-dimensional regression score representing skin tone and undertone. The CNN was trained on face images with tone and undertone labeled in the discrete 6 tone and 3 undertone categories, mapped into a score for regression. This approach achieves an accuracy of 78% for skin tone and 82% for undertone. In addition, the score allows for a simplified matching scheme between skin tone/undertone and the foundation colors.
skin tone, regression, convolutional neural network
1 - 36
Stream-GML: a Generic and Adaptive Machine Learning Approach for the Internet
Juan Vanerio, Pedro Casas, Federico Larroca
Juan Vanerio
The application of AI/ML to Internet measurement problems has increased greatly in the last decade; however, the complexity of the Internet as a learning environment has so far hindered the wide adoption of AI/ML in practice. We introduce Stream-GML, a generic stream-based, ensemble learning model for the analysis of network measurements. Stream-GML deals with two major challenges in networking: the constant occurrence of concept drifts, and the lack of generalization of the obtained learnings. To deal with concept drifts, Stream-GML relies on adaptive memory sizing strategies, periodically retraining the underlying models according to changes in the empirical distribution of incoming samples, or based on performance degradation over time. To deal with generalization of learning results and (partially) counterbalance catastrophic forgetting issues, Stream-GML uses as its underlying model a novel stacking ensemble learning meta-model known as the Super Learner (SL). The SL model performs asymptotically as well as the best input base learner, and provides a powerful approach to tackle multiple problems in parallel while minimizing the likelihood of over-fitting. The SL meta-model is extended to the dynamic, stream setting, controlling the exploration/exploitation trade-off through reinforcement learning and no-regret learning principles.
stream learning, ensemble learning, network attacks
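A minimal sketch of a stacking ensemble in the spirit of the Super Learner mentioned above: several base learners combined by a meta-learner through cross-validated predictions, here with scikit-learn's StackingClassifier. The choice of base learners and the synthetic data are assumptions, and the stream/adaptive-memory machinery of Stream-GML is not shown.

    # Minimal sketch: stacking ensemble (Super Learner-style) with scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

    stack = StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("knn", KNeighborsClassifier())],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5)                       # out-of-fold predictions feed the meta-learner
    print("stacked AUC:", cross_val_score(stack, X, y, cv=3, scoring="roc_auc").mean())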
1 - 37
Supervised Learning Study of Changes in the Neural Representation of Time
Estevão Uyrá, Gabriela C. Tunes, Eliezyer F. de Oliveira, Marcelo S. Caetano, Marcelo B. Reyes
Estevão Uyrá Pardillos Vieira
"When counting time is essential for optimal behavior, animals must make use of some inner temporal representation to guide responses. This temporal representation can be instantiated in the neural activity, measured via intracranial recordings, and assessed in a parsimonious manner by machine learning techniques. We aimed to shed some light on the process of learning to time by studying how temporal representations develop in two relevant areas: the medial Pre Frontal Cortex (mPFC) and the Striatum. For this purpose, we used regression techniques to predict the time elapsed since a sustained response has started based on the neural activity. We then measured the performance of the regression algorithm, associating higher performance with better time representation. As expected, we found patterns of activity consistent with time representation in both areas. However, the effect of training was inverted between the areas, with mPFC's representation weakening while the Striatum's enhances, thus indicates a migration of dependencies from mPFC to the Striatum. Our findings, consistent with habit formation, suggest new directions of research for the timing community and illustrate the potential of machine learning in the study of neuroscience. Experiments were approved at CEUA-UFABC with protocol numbers 2905070317 and 1352060217."
representation, timing, neuroscience
1 - 38
Testing a simple Random Forest approach to predict surface evapotranspiration from remote sensing data
V. Douna, V. Barraza, F. Grings, A. Huete, N. Restrepo-Coupe and J. Beringer
Vanesa M. Douna
Evapotranspiration (ET), which is the sum of the water evaporated and transpired from the land surface to the atmosphere, is crucial to ecosystems as it affects the soil, the vegetation, the atmosphere and mediates their interaction. Modelling and quantifying it accurately is critical for sustainable agriculture, forest conservation, and natural resource management. Although ET cannot be remotely sensed directly, remote sensing provides continuous data on surface and biophysical variables, and thus it has been an invaluable tool for estimating ET. In this work, we have evaluated the potential of a Random Forest regressor to predict daily evapotranspiration in three sites in Northern Australia from daily in-situ meteorological data, and satellite data on leaf area index and land surface temperature. We have obtained satisfactory performances with RMSE errors around 1 mm/day (rRMSE around 0.3), which are comparable to those obtained in previous works by different methods. Sensitivity to variations in the training sample and the importance of the input variables have been analyzed. Our promising results and the simplicity of the method reinforce the relevance of deeply exploring this approach in other ecosystems at different temporal and spatial scales, aiming to develop a versatile and operative ET product.
evapotranspiration, remote sensing, random forest
1 - 40
Towards an AI Ecosystem in Bolivia
Oscar Contreras Carrasco
Oscar Contreras Carrasco
As a country, we are currently facing many challenges in the adoption of Artificial Intelligence at different levels. While at this moment we do not have a formal plan for AI implementation, there are several initiatives that intend to address key issues, such as education on AI, as well as industry adoption. Additionally, several communities and study groups are tackling education on AI, as well as spreading the word about the benefits it can provide. At an institutional level, there have also been initial discussions to tackle AI adoption nationwide with key strategies at different levels. All in all, the purpose of this presentation will be to discuss these initiatives, as well as the current challenges and future plans for the adoption of Artificial Intelligence in Bolivia.
artificial intelligence, bolivia, ecosystem
1 - 41
Towards Bio-Inspired Artificial Agents
Maria-Jose Escobar, Mauricio Araya, Hans Lehnert, Rodrigo Carrasco, Cristóbal Nettle, Arthur Leblois, Pablo Reyes
María-José Escobar
The study of biological and sensory systems allows us to understand the principles of computation used to extract information from the environment, inspiring new algorithms and technologies. Inspired by retinal computation, we propose visual sensors for automatic image/video equalization (tone mapping) and autonomous robot navigation. On the other hand, we also analyze the cortical circuit associated with decision making, the cortico-basal ganglia loop, to incorporate it into a robot controller. For this, we propose a model including tonic dopamine type D1 receptors, which modulates the robot's behavior, in particular the balance between exploitation and exploration.
bio-inspired computation, retina, decision-making, artificial agents
1 - 42
Unsupervised domain adaptation for brain MR image segmentation through cycle-consistent adversarial networks
Julián Alberto Palladino, Diego Fernandez Slezak, Enzo Ferrante
Julián Alberto Palladino
"Image segmentation is one of the pilar problems in the fields of computer vision and medical imaging. Segmentation of anatomical and pathological structures in magnetic resonance images (MRI) of the brain is a fundamental task for neuroimaging (e.g brain morphometric analysis or radiotherapy planning). Convolutional Neural Networks (CNN) specifically tailored for biomedical image segmentation (like U-Net or DeepMedic) have outperformed all previous techniques in this task. However, they are extremely data-dependent, and maintain a good performance only when data distribution between training and test datasets remains unchanged. When such distribution changes but we still aim at performing the same task, we incur in a domain adaptation problem (e.g. using a different MR machine or different acquisition parameters for training and test data). In this work, we developed an unsupervised domain adaptation strategy based on cycle-consistent adversarial networks. We aim at learning a mapping function to transform volumetric MR images between domains (which are characterized by different medical centers and MR machines with varying brand, model and configuration parameters). This technique allows us to reduce the Jensen-Shannon divergence between MR domains, enabling automatic segmentation with CNN models on domains where no labeled data was available."
unsupervised domain adaptation, cyclegans, biomedical image segmentation
1 - 43
Using Deep Learning to make Black Hole Weather Forecasting
Roberta Duarte, Rodrigo Nemmen, João Paulo Navarro
Roberta Duarte
One way to describe black holes and how they affect the environment around them is to use numerical simulations to solve, for example, the Navier-Stokes equations. The evolution of this system is turbulent and has a very high computational cost. We suggest using deep learning to describe the evolution of these systems using inputs and outputs from previous simulations. In this way, we can train convolutional neural networks to understand the system and predict how it will evolve in the future. In our project, we already have promising results in predicting the environment around a black hole using convolutional neural networks.
black holes, convolutional neural networks, turbulence
1 - 44
UWB Radar for dielectric characterization
Magdalena Bouza, Andrés O. Altieri, Cecilia G. Galarza
Magdalena Bouza
Ultra-Wideband (UWB) radar signals are characterized by having both a high-frequency carrier and high bandwidth. This makes the field scattered by targets irradiated with UWB pulses highly dependent on the composition and shape of the target. In particular, we focus on recovering the permittivity of the target based on the measured scattered field. We are currently working on moisture detection, classifying samples into different categories. To this end, we designed and built an impulse UWB radar testbed that transmits a Gaussian pulse of approximately 1 ns duration and captures the target's response to this excitation. We then process the measured signals and use them as input to our classification algorithms.
ultra-wideband, classification, electromagnetic scattering
1 - 45
Visualizing the viral evolution for untangling and predicting it
"G. Martínez, D. Simón, F. Tambasco, G. Moratorio, M. Vignuzzi, F. Lecumberry, M.I. Fariello "
Maria Ines Fariello
"Viral emergence of drug resistance can be monitored by deep sequencing over short periods of time. Due to its high mutation rate and short generation time, viruses represent a great model to study this phenomena. As it is highly probable to find several alleles of a viral population in a random position of the genome just by chance, the consensus allele will appear with high frequency and several codons at low frequency. We use Shannon's Entropy to represent codons' frequencies variability, reducing the data dimensionality significantly without losing key information related with underlying evolutionary processes. Entropy was decomposed given its rate of temporal evolution into two processes: Leading and Random Variations. Several statistical and machine learning analysis were applied to this data to clusterize sites in the genome based on their evolutionary behavior, and to differentiate among the three viral variants. Some of the outliers pinpointed by these methods were shown to be sites under selection by other authors. Altogether, we are testing new analysis tools and visualization methods for detecting relevant sites under ongoing selection in a rapid way. For example, to differentiate the evolution of viruses under a new environment, such as a new drug treatment. "
viral evolution, visualization, classificiation
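A minimal sketch of the per-site Shannon entropy used in the abstract above: from codon counts at one genome position across sequencing time points, compute H = -sum(p * log2 p). The counts are made-up examples; the decomposition into leading and random variations is not shown.

    # Minimal sketch: Shannon entropy of codon frequencies at one genome site.
    import numpy as np

    def site_entropy(codon_counts):
        """Shannon entropy (bits) of codon frequencies at one site."""
        p = np.asarray(codon_counts, dtype=float)
        p = p[p > 0] / p.sum()
        return float(-(p * np.log2(p)).sum())

    # One site across three time points: consensus codon plus low-frequency variants
    timepoints = [[980, 10, 10], [900, 70, 30], [700, 200, 100]]
    entropies = [site_entropy(c) for c in timepoints]
    print(np.round(entropies, 3))   # rising entropy flags a site worth inspecting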
1 - 46
BeMyVoice - Bringing down communication barriers
Diego G. Alonso, Alfredo Teyseyre, Luis Berdun
Diego G. Alonso
Nowadays, deaf-mute people face different communication problems, not only because of their condition but also because only a few people know sign language. These communication problems affect the education, employment, and social development of these people. With BeMyVoice, we aim to give deaf-mute people a way to improve their communication and, thus, their quality of life. In short, we propose a mobile app connected to a sensor that allows the automatic recognition of hand signs and their translation into text and voice.
hand gesture recognition, deep learning, natural user interfaces, egocentric vision
2 - 1
A Budget-Balanced Tolling Scheme for Efficient Equilibria under Heterogeneous Preferences
Gabriel de O. Ramos, Roxana Radulescu, Ann Nowé
Gabriel de Oliveira Ramos
Multiagent reinforcement learning has shown its potential for tackling real-world problems, like traffic. We consider the toll-based route choice problem, where self-interested drivers need to repeatedly choose routes that minimise their travel times. A major challenge here is to deal with agents' selfishness when competing for a common resource, as they tend to converge to a substantially far-from-optimum equilibrium. In traffic, this translates into higher congestion levels. Road tolls have been advocated as a means to tackle this issue, though typically assuming that (i) drivers have homogeneous preferences, and that (ii) collected tolls are kept by the traffic authority. In this paper, we propose Generalised Toll-based Q-learning (GTQ-learning), a multiagent reinforcement learning algorithm capable of realigning agents' heterogeneous preferences with respect to travel time and monetary expenses. GTQ-learning neutralises agents' preferences, thus ensuring that congestion levels are minimised regardless of agents' selfishness levels. Furthermore, GTQ-learning achieves approximate budget balance by redistributing a fraction of the collected tolls. We perform a theoretical analysis of GTQ-learning, showing that it leads agents to a system-efficient equilibrium, and provide empirical results, evidencing that GTQ-learning minimises congestion on realistic road networks.
multiagent reinforcement learning, route choice, marginal-cost tolling
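A minimal, generic sketch of a toll-based Q-learning update in the spirit described above, where each agent's perceived cost mixes travel time and toll according to a heterogeneous preference; this is an illustration, not the authors' GTQ-learning algorithm:

```python
import random
from collections import defaultdict

class TollRouteAgent:
    """Generic toll-sensitive route-choice learner (not the authors' GTQ-learning)."""

    def __init__(self, routes, w_money, alpha=0.1, eps=0.1):
        self.q = defaultdict(float)     # estimated cost per route
        self.routes = routes
        self.w = w_money                # heterogeneous preference for money vs. time
        self.alpha, self.eps = alpha, eps

    def choose(self):
        if random.random() < self.eps:
            return random.choice(self.routes)
        return min(self.routes, key=lambda r: self.q[r])  # lower cost is better

    def update(self, route, travel_time, toll):
        cost = travel_time + self.w * toll                 # perceived cost
        self.q[route] += self.alpha * (cost - self.q[route])

# Toy usage with hypothetical feedback from a simulated network
agent = TollRouteAgent(routes=["r1", "r2", "r3"], w_money=0.5)
r = agent.choose()
agent.update(r, travel_time=12.0, toll=3.0)
```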
2 - 2
A Hierarchical Two-tier Approach to Hyper-parameter Optimization in Reinforcement Learning
Juan Cruz Barsce, Jorge Palombarini, Ernesto Martínez
Juan Cruz Barsce
Optimization of hyper-parameters in reinforcement learning (RL) algorithms is a key task, because they determine how the agent will learn its policy by interacting with its environment, and thus what data is gathered. In this work, an approach that uses Bayesian optimization to perform a two-step optimization is proposed: first, categorical RL structure hyper-parameters are taken as binary variables and optimized with an acquisition function tailored for such variables. Then, at a lower level of abstraction, solution-level hyper-parameters are optimized by resorting to the expected improvement acquisition function, while using the best categorical hyper-parameters found in the optimization at the upper-level of abstraction. This two-tier approach is validated in a simulated task.
reinforcement learning, hyper-parameter optimization, bayesian optimization
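The lower level of the approach relies on the expected improvement acquisition function; below is a minimal sketch of EI for a minimisation problem, assuming a Gaussian-process posterior mean and standard deviation are already available (the tailored acquisition for the categorical upper level is not shown):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.01):
    """Generic EI acquisition for minimisation, given GP posterior mean/std arrays.

    Illustrative sketch only; the two-tier method also uses a separate acquisition
    for binary structural hyper-parameters, which is not shown here.
    """
    sigma = np.maximum(sigma, 1e-9)           # avoid division by zero
    imp = best_so_far - mu - xi               # expected gain over the incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage on a grid of candidate hyper-parameter values
mu = np.array([0.30, 0.25, 0.40])
sigma = np.array([0.05, 0.10, 0.02])
print(expected_improvement(mu, sigma, best_so_far=0.28))
```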
2 - 3
A Novel Deviation Bound via Mutual Information for Cross-Entropy Loss
Matias Vera, Pablo Piantanida and Leonardo Rey Vega
Matias Alejandro Vera
Machine learning theory has mostly focused on generalization to samples from the same distribution as the training data. However, a better understanding of generalization beyond the training distribution, where the observed distribution changes, is also fundamentally important to achieve a more powerful form of generalization. In this paper, we attempt to study, through the lens of information measures, how a particular architecture behaves when the true probability law of the samples is potentially different at training and testing times. Our main result is that the testing gap between the empirical cross-entropy and its statistical expectation (measured with respect to the testing probability law) can be bounded with high probability by the mutual information between the input testing samples and the corresponding representations generated by the encoder obtained at training time. These results of a theoretical nature are supported by numerical simulations showing that the mentioned mutual information is representative of the testing gap, capturing qualitatively the dynamics in terms of the hyperparameters of the network.
mutual information, deviation bound, generalization
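Schematically, the type of bound described above relates the testing gap to the mutual information between inputs and representations; the following is an illustrative form only, with placeholder constants rather than the paper's exact statement:

```latex
% Schematic form of the deviation bound described above. The constants and the
% exact dependence on the sample size $n$ and confidence level $\delta$ are
% placeholders, not the paper's statement.
\[
\Big| \widehat{\mathcal{L}}_n(Q) \;-\; \mathbb{E}_{P_{\mathrm{test}}}\!\big[-\log Q(Y \mid U)\big] \Big|
\;\lesssim\; \sqrt{\frac{I(X;U) + \log(1/\delta)}{n}}
\qquad \text{with probability at least } 1-\delta,
\]
% where $U$ is the representation produced by the encoder obtained at training time
% and $I(X;U)$ is the mutual information between testing inputs and representations.
```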
2 - 4
Anatomical Priors for Image Segmentation via Post-Processing with Denoising Autoencoders
Agostina Larrazabal, César Martinez, Enzo Ferrante
Agostina Larrazabal
"We introduce Post-DAE, a post-processing method based on denoising autoencoders to improve the anatomical plausibility of arbitrary biomedical image segmentation algorithms. Some of the most popular segmentation methods still rely on post-processing strategies like conditional random fields to incorporate connectivity constraints into the resulting masks. Even if it is a valid assumption in general, these methods do not offer a straightforward way to incorporate more complex priors like convexity or arbitrary shape restrictions. Post-DAE leverages the latest developments in manifold learning via denoising autoencoders. We learn a low-dimensional space of anatomically plausible segmentations, and use it to impose shape constraints by post-processing anatomical segmentation masks obtained with arbitrary methods. Our approach is independent of image modality and intensity information since it employs only segmentation masks for training. We performed experiments in segmentation of chest X-ray images. Our experimental results show that Post-DAE can improve the quality of noisy and incorrect segmentation masks obtained with a variety of standard methods, by bringing them back to a feasible space, with almost no extra computational cost."
anatomical segmentation, autoencoders, post-processing
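A minimal sketch of the post-processing idea, assuming a small convolutional denoising autoencoder; the architecture and sizes are illustrative, not the authors' Post-DAE network:

```python
import torch
import torch.nn as nn

# Illustrative only: a tiny DAE trained on plausible masks maps an arbitrary
# (possibly noisy) segmentation back towards that space.

class MaskDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, mask):
        return self.dec(self.enc(mask))

def post_process(dae, mask):
    """mask: (1, 1, H, W) binary tensor produced by any segmentation method."""
    with torch.no_grad():
        return (dae(mask) > 0.5).float()

dae = MaskDAE()                                    # in practice: trained on anatomical masks
noisy = (torch.rand(1, 1, 64, 64) > 0.5).float()   # placeholder input mask
print(post_process(dae, noisy).shape)              # torch.Size([1, 1, 64, 64])
```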
2 - 5
Vehicle speed assistant as a control agent in urban environments
Rodrigo Velázquez
Rodrigo Manuel Velázquez Galeano
This work proposes the model of a vehicle speed assistant capable of identifying coordinates in the vehicle and comparing them, by means of a web app implementing Google Maps APIs, against coordinates stored in a database of urban speed-limit marks, alerting the driver whenever a limit is exceeded along the route. This can contribute greatly to the development of this line of research and noticeably improve the chances of implementing, in the not-too-distant future, a car that can be fully assisted by a computer, based on the principles mentioned here.
assistant, speed, maps
2 - 6
Assisted Optimal Transfer of Excitation Energy by Deep Reinforcement Learning
Joseph Vergel-Becerra and Leonardo A. Pachón
Joseph Vergel
"The high efficiency of energy transfer is one of the main motivations in the study of light-harvesting systems. The accurate description of these complexes can be formulated in the framework of open quantum systems which comprises the interaction among their fundamental units called chromophores and the interaction with the environment. Maximizing energy transfer involves optimally controlled system dynamics and at the same time, getting optimal configurations that achieve this objective. Therefore, this research proposes the implementation of reinforcement learning (RL) as a mechanism for quantum optimal control of excitation energy transfer (EET) in light-harvesting systems and, in turn, obtaining configurations that maximize efficiency through a classical agent that even can tolerate environments with high noise levels. "
reinforcement learning, open quantum systems, excitation energy
2 - 7
BERT's behavior evaluation using stress tests
Vladimir Araujo, Carlos Aspillaga
Vladimir Araujo
"Recently, several machine learning based models have been proposed for Natural Language Processing (NLP), achieving outstanding results, by using powerful architectures like “Transformer” (Vaswani et al., 2017) and pretraining on large text corpus, as is the case of BERT (Devlin et al., 2018). However, it has been shown that language models are fragile (they are easily broken) and biased (instead of an actual comprehension of the text, they tend to take advantage of data biases). To the best of our knowledge, this is the first time a Transformer-based model is systematically put to test."
natural language processing, language models, evaluation
2 - 8
Biomarker discovery on multi-omic data using Kernel Learning and Autoencoders
Martin Palazzo, Patricio Yankilevich, Pierre Beauseroy
Martin Palazzo
Molecular data from cancer patients is characterized by tens of thousands of gene features and by different modalities or 'omics', like genomics, transcriptomics and proteomics. These samples are also labeled with clinical information like patient survival, tumor stage and tumor subtype. The initial high-dimensional input space is noisy and makes it complicated to find useful patterns, like similarities between tumor types and subtypes. For clinical reasons, this work aims to learn meaningful, lower-dimensional representations of tumors that keep biological signal and contribute to classifying tumor subtype or stage, using Variational Autoencoders (VAE) and Kernelized Autoencoders (KAE). Then, a feature selection strategy based on Multiple Kernel Learning is executed with the objective of approximating, as much as possible, the representation based on the selected features to the one learned by the autoencoders. The selected features are also evaluated to classify tumor samples based on clinical labels and to discover tumor subtypes. Preliminary results show that the learned representations drive the selection of meaningful genes associated with the clinical outcome of the patient and thus provide evidence for potential biomarkers.
kernel learning, autoencoders, cancer genomics
2 - 9
Bottom-Up Meta-Policy Search
Luckeciano C Melo, Marcos Máximo, Adilson Cunha
Luckeciano C. Melo
Despite the recent progress in agents that learn through interaction, there are several challenges in terms of sample efficiency and generalization to behaviors unseen during training. To mitigate these problems, we propose and apply a first-order meta-learning algorithm called Bottom-Up Meta-Policy Search (BUMPS), which works with a two-phase optimization procedure: first, in a meta-training phase, it distills a few expert policies to create a meta-policy capable of generalizing knowledge to tasks unseen during training; second, it applies a fast adaptation strategy named Policy Filtering, which evaluates a few policies sampled from the meta-policy distribution and selects the one that best solves the task. We conducted all experiments in the RoboCup 3D Soccer Simulation domain, in the context of kick motion learning. We show that, given our experimental setup, BUMPS works in scenarios where simple multi-task Reinforcement Learning does not. Finally, we performed experiments to evaluate each component of the algorithm.
imitation learning, meta-learning, robotics
2 - 10
Classification of SAR Images using Information Theory
Eduarda T. C. Chagas, Alejandro C. Frery and Heitor S Ramos
Eduarda Tatiane Caetano Chagas
The classification of regions, especially urban areas, in polarimetric synthetic aperture radar (PolSAR) data is a challenging task. Texture analysis is highly informative about the spatial properties of the main elements of an image, being one of the most important techniques in image processing and pattern recognition. The first task of this analysis is the extraction of discriminant features capable of efficiently incorporating information about the characteristics of the original image. Based on this principle, in this paper we propose a new classification technique. Through the analysis of the textures of these images, ordinal pattern transition graphs, and information theory descriptors, we achieved a high discriminatory power in the characterization and classification of the regions under study.
sar image, classification, information theory
2 - 11
Clustering of climate time series
Y. Barrera, M. Jonckheere, V. Lefieux, D. Picard, A. Umfurer, E. Smucler
Matthieu Jonckheere
Fluctuations in temperature have a strong influence on electric consumption. As a consequence, identifying and finding groups of possible climate scenarios is useful for the analysis of the electric supply system. The scenario data that we consider are time series of hourly measured temperatures over a grid of geographical points in France and neighboring areas, used by the French company RTE. Clustering techniques are useful for finding homogeneous groups of time series, but the challenge is to find a suitable data transformation and distance metric. In this work, we used several transformations (Fourier, wavelets, autoencoders) and distance metrics (DTW and Euclidean, among others) and found consistent groups of climate scenarios using clustering techniques. We give several performance indicators and find that k-shape performs the best according to some of them.
clustering, performance, time series
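One of the distance metrics mentioned above is dynamic time warping; a minimal textbook implementation for two univariate series (not the project's code) is:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-time-warping distance between two 1-D series (textbook sketch)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: two hourly temperature-like profiles, one slightly shifted in phase
a = np.sin(np.linspace(0, 2 * np.pi, 24))
b = np.sin(np.linspace(0, 2 * np.pi, 24) + 0.5)
print(dtw_distance(a, b))
```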
2 - 12
Complex Data Relevance Analysis for Event Detection
Caroline Mazini Rodrigues, Luis Pereira, Anderson Rocha, Zanoni Dias
Caroline Mazini Rodrigues
Considering the occurrence of an event with high social impact, it is important to establish a space-time relation over the available information and thus answer some questions about the event, such as “who”, “how”, “where” and “why”. This work is part of the thematic FAPESP project “DéjàVu: Feature-Space-Time Coherence from Heterogeneous Data for Media Integrity Analytics and Interpretation of Events”. It proposes to determine, from data collected on social networks, their relevance for the analyzed event, allowing the correct construction of relationships among these data during a later analysis phase. The main challenges of this work are the characteristics of the data to be used: heterogeneity, as they come from different sources; multi-modality, such as texts, images and videos; unlabeled data, as they do not present a straightforward relevance label for the event; and unstructured data, as they do not possess characteristics that could be used directly during learning.
event detection, data mining, features engineering
2 - 13
Conceptual Attention Networks for Action Recognition
Andrés Espinosa, Alain Raymond, Julio Hurtado
Alain Raymond
"We introduce Concept Attention Networks (CAN) for Action Recognition. CANs seek to provide more interpretability by providing attention for both visual features as well as concepts associated to the action we want to recognize. CANs are modelled on the MAC architecture - which has produced great results on VQA through the use of sequential reasoning- with two main differences: 1) The knowledge base is modified to take video features. 2) We introduce attention over concepts via an auxiliary task that tries to guess the concepts associated to the predicted class on each reasoning step. We expect that taking visual features and word features to the same space might provide both similar accuracy as well as greater interpretability; since CAN - as the MAC architecture on which it is based- divides its reasoning in steps, we are able to see on which parts of the video and on which concepts the model is focusing to generate its predictions. We present results on the Something to Something v2 dataset against a C3D baseline. "
attention, action recognition, sequential reasoning
2 - 14
Deep Reinforcement Learning for Humanoid Walking
Dicksiano Carvalho Melo, Adilson Marques da Cunha, Marcos Ricardo Omena de Albuquerque Máximo
Dicksiano Carvalho Melo
"The work consists in applying Deep Reinforcement Learning algorithms in order to the improve a robot's walking engine. Therefore, the final goal is to implement a Push Recovery Controller, which is a bio-inspired controller that stabilizes the agent under external perturbations in order to achieve a more stable and also faster walking movement. Proximal Policy Optimization algorithm has already been used in different domains and had success to solve many Continuous Control problems, being considered one of the state-of-art techniques of Deep Reinforcement Learning, therefore this is the main technique used in this work. Given the nature of Policy Gradient methods, we applied distributed training in order to Speed Up the learning process. We have used Intel AI DevCloud Cluster in order to have many agents running in parallel."
deep reinforcement learning, humanoid walking, robotics
2 - 15
Multitask Learning on Graph Neural Networks: Learning Multiple Graph Centrality Measures with a Unified Network
Pedro HC Avelar, Marcelo OR Prates, Henrique Lemos, Luis C Lamb
Pedro Henrique da Costa Avelar
The application of deep learning to symbolic domains remains an active research endeavour. Graph neural networks (GNN), consisting of trained neural modules which can be arranged in different topologies at run time, are sound alternatives to tackle relational problems which lend themselves to graph representations. In this paper, we show that GNNs are capable of multitask learning, which can be naturally enforced by training the model to refine a single set of multidimensional embeddings and decode them into multiple outputs by connecting MLPs at the end of the pipeline. We demonstrate the multitask learning capability of the model in the relevant relational problem of estimating network centrality measures, focusing primarily on producing rankings based on these measures. We then show that a GNN can be trained to develop a lingua franca of vertex embeddings from which all relevant information about any of the trained centrality measures can be decoded. The proposed model achieves 89% accuracy on a test dataset of random instances with up to 128 vertices and is shown to generalise to larger problem sizes. The model is also shown to obtain reasonable accuracy on a dataset of real world instances with up to 4k vertices, vastly surpassing the sizes of the largest instances with which the model was trained ($n=128$). Finally, we believe that our contributions attest to the potential of GNNs in symbolic domains in general and in relational learning in particular.
graph neural networks, graph networks, centrality measures, network centrality
2 - 16
End-To-End Imitation Learning of Lane Following Policies Using Sum-Product Networks
Renato Lui Geh, Denis Deratani Mauá
Renato Lui Geh
Recent research has shown the potential of learning lane following policies from annotated video sequences through the use of advanced machine learning techniques. These techniques, however, require high computational power, prohibiting their use in low-budget projects such as educational robotic kits and embedded devices. Sum-product networks (SPNs) are a class of deep probabilistic models with clear probabilistic semantics and competitive performance. Importantly, SPNs learned from data are usually several times smaller than deep neural networks trained for the same task. In this work, we develop an end-to-end imitation learning solution to lane following using SPNs to classify images into a finite set of actions. Images are obtained from a monocular camera, which is part of a low-cost, custom-made mobile robot. Our results show that our solution generalizes beyond training conditions with relatively little data. We investigate the trade-off between computational and predictive performance, and conclude that sacrificing accuracy for the benefit of faster inference results in improved performance in the real world, especially in resource-constrained environments.
machine learning, robotics, sum-product networks
2 - 17
Friend or Foe: Studying user trustworthiness for friend recommendation in the era of misinformation
Antonela Tommasel
Antonela Tommasel
" The social Web, mainly represented by social networking sites, enriches the life and activities of its users by providing new forms of communication and interaction. Even though most of the time, the use of Internet is safe and enjoyable, there are risks that involve communication through social media. The unmoderated nature of social media sites often results in the appearance and distribution of unwanted content or misinformation. Thus, although social sites provide a great opportunity to stay informed about events and news, it also produces skepticism among users, as not every piece of shared information can be trusted. Moreover, the potential for automation and the low cost of producing fraudulent sites, allows the rapid creation and dissemination of unwanted content. Thus, current information dissemination processes pose the challenge of determining whether it is possible to trust on recommendations. The goal of this work is to define a profile to describe and estimate the trustworthiness or reputation of users, to avoid making recommendations that could favour the propagation of unreliable content and polluting users. The final aim is to reduce the negative effects of the existence and propagation of such content, and thus improving the quality of the recommendations."
recommender systems, trustworthiness, misinformation
2 - 18
Global Sensitivity Analysis of MAP inference in Selective Sum-Product Networks
Julissa Villanueva Llerena and Denis deratani Mauá
Julissa Villanueva Llerena
"Sum-Product Networks (SPN) are deep probabilistic models that have exhibited state-of-the-art performance in several machine learning tasks. As with many other probabilistic models, performing Maximum-A-Posteriori (MAP) inference is NP-hard in SPNs. A notable exception is selective SPNs, that allows MAP inference in linear time. Due to the high number of parameters, SPNs learned from data can produce unreliable and overconfident inference. This effect can be partially detected by performing a Sensitivity Analysis of the model predictions to changes in the parameters. In this work, we develop efficient algorithms for global quantitative analysis of MAP inference in selective SPNs. In particular, we devise a polynomial-time procedure to decide whether a given MAP configuration is robust with respect to changes in the model parameters. Experiments with real-world datasets show that this approach can discriminate easy- and hard-to-classify instances, often more accurately than criteria based on the probabilities induced by the model."
sensitivity analysis, sum-product networks, tractable probabilistic models.
2 - 19
Graph Feature Regularization: Combining machine learning models with graph data
Federico Albanese, Esteban Feuerstein, Leandro Lombardi
Federico Albanese
"In recent years, the amount of available data has drastically increased. However, labelling such data is hugely expensive. In this scenario, semi-supervised learning emerge as a vitally important tool, which combines labelled data (supervised machine learning) and unlabelled data (unsupervised learning) in order to make better predictions. In particular, graph based algorithms takes into account the relationships between the instances of the data and the underlying graph structures to make those predictions. In addition, in the context of data analysis, there are scenarios that can be naturally think as graphs. This occurs in situations where in addition to individual properties, connectivity between the elements of the data set is also important. Therefore, it is logical that machine learning models include information from both a node and its neighbours when making a prediction. This works propose adding graph feature regularization terms (GFR) to the the objective function to maximize. This new regularization terms depends on the structure of the network, the weight of the edges and the features of the node. We conclude that adding this terms to gradient boosted trees can outperform complex network architectures such as the Graph Convolutional Networks."
graph, machine learning, regularization
2 - 20
AI and HPC Convergence
Mariza Ferro, Vinícius Klôh, Felipe Bernardo, Bruno Schulze
Mariza Ferro
The convergence of High-Performance Computing (HPC) and Artificial Intelligence (AI) has become a promising approach to major performance improvements. Each field has much to offer the other, and the combination is giving users unprecedented research capabilities. In this interaction, HPC can be used by AI (HPC for AI) to execute and enhance the performance of its algorithms. This involves using and evaluating different HPC architectures to train AI algorithms, and understanding and optimizing their performance on different architectures. AI for HPC can be further subdivided into AI after HPC and autotuning. In the first, ML algorithms are used to understand and analyze the results of simulations on HPC. This involves using ML to understand scientific applications, how they relate to different HPC architectures, and the impact of this relationship on performance and power consumption. It is more related to knowledge discovery, and its results can be used in autotuning. In autotuning, AI is used to configure HPC, choosing the best set of computations and parameters to achieve some goal, for example energy saving. ML is also used for the prediction of performance and energy consumption, job scheduling, and frequency and voltage scaling.
hpc, performance, autotune
2 - 21
l0-norm feature LMS algorithms
Hamed Yazdanpanah, José A. Apolinário Jr., Paulo S. R. Diniz, Markus V. S. Lima
Hamed Yazdanpanah
A class of algorithms known as feature least-mean-square (F-LMS) has been proposed recently to exploit hidden sparsity in adaptive filter parameters. In contrast to common sparsity-aware adaptive filtering algorithms, the F-LMS algorithm detects and exploits sparsity in linear combinations of filter coefficients. Indeed, by applying a feature matrix to the adaptive filter coefficient vector, the F-LMS algorithm can reveal and exploit their hidden sparsity. However, in many cases the unknown plant to be identified contains not only hidden but also plain sparsity, and the F-LMS algorithm is unable to exploit it. Therefore, we can incorporate sparsity-promoting techniques into the F-LMS algorithm in order to allow the exploitation of plain sparsity. In this paper, by utilizing the l0-norm, we propose the l0-norm F-LMS (l0-F-LMS) algorithm for sparse lowpass and sparse highpass systems. Numerical results show that the proposed algorithm outperforms the F-LMS algorithm when dealing with hidden sparsity, particularly in highly sparse systems, where the convergence rate is sped up significantly.
lms algorithm, hidden sparsity, plain sparsity
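A minimal sketch of an LMS-type update with an l0-norm-inspired zero attractor applied to the feature-transformed coefficients; the attractor uses the common exponential approximation of the l0 norm and may differ in details from the paper's l0-F-LMS algorithm:

```python
import numpy as np

# Illustrative sketch only: LMS update plus a zero-attraction term on F @ w,
# following the usual (1 - exp(-beta*|.|)) approximation of the l0 norm.

def l0_feature_lms(x_buf, d, w, F, mu=0.01, kappa=1e-4, beta=5.0):
    """One adaptation step: x_buf is the input regressor, d the desired sample."""
    e = d - x_buf @ w                                   # a priori error
    fw = F @ w                                          # feature-domain coefficients
    attractor = beta * np.sign(fw) * np.exp(-beta * np.abs(fw))
    w = w + mu * e * x_buf - kappa * (F.T @ attractor)  # LMS step + zero attraction
    return w, e

# Toy usage: identify a lowpass-like plant whose coefficient differences are sparse
rng = np.random.default_rng(0)
N = 16
w_true = np.ones(N)                 # constant coefficients -> F @ w is sparse
F = np.eye(N) - np.eye(N, k=1)      # first-difference feature matrix
w = np.zeros(N)
for _ in range(2000):
    x = rng.standard_normal(N)
    d = x @ w_true + 0.01 * rng.standard_normal()
    w, e = l0_feature_lms(x, d, w, F)
print(np.round(w[:5], 2))           # should approach 1.0
```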
2 - 22
Learning to Solve NP-Complete Problems
Marcelo Prates, Pedro Avelar, Henrique Lemos, Luis Lamb, Moshe Vardi
Marcelo Prates
Graph Neural Networks are a promising technique for bridging differentiable programming with combinatorial domains. In this paper we show that GNNs can learn to solve, with very little supervision, the decision variant of the Traveling Salesperson Problem.
graph neural networks, np-complete, traveling salesperson problem
2 - 23
Loco: A toolkit for RL research in locomotion
Wilbert Santos Pumacay Huallpa
Wilbert Santos Pumacay Huallpa
Recent advances in the field of Deep Reinforcement Learning have achieved impressive results in various tasks. One key component of these achievements is the simulated environments used to train and test DeepRL-based agents; for locomotion tasks there are various benchmarks that can be used, built on top of popular physics engines. However, these locomotion benchmarks do not offer the functionality required to train and evaluate agents in more diverse and complex tasks, exposing only relatively simple ones, e.g. traversing flat terrain. This work presents an engine-agnostic toolkit for locomotion tasks that provides such functionality, allowing users to create a wide range of diverse and complex environments. We provide support for various physics engines via a physics abstraction layer, allowing users to easily switch between engines as required.
locomotion benchmarks, deeprl, simulated environments
2 - 24
Machine Learning-Based Pre-Routing Timing Prediction with Reduced Pessimism
E. Carvajal, N. Shukla, Y. Chen, J. Hu.
Erick Carvajal Barboza
Optimizations at placement stage need to be guided by timing estimation prior to routing. To handle timing uncertainty due to the lack of routing information, people tend to make very pessimistic predictions such that performance specification can be ensured in the worst case. Such pessimism causes over-design that wastes chip resources or design effort. In this work, a machine learning-based pre-routing timing prediction approach is introduced. Experimental results show that it can reach accuracy near post-routing sign-off analysis. Compared to a commercial pre-routing timing estimation tool, it reduces false positive rate by about 2/3 in reporting timing violations.
integrated circuit design, static timing analysis, machine learning
2 - 25
Memory in Agents
Meire Fortunato, Melissa Tan, Ryan Faulkner, Steven Hansen*, Adrià Puigdomènech Badia, Gavin Buttimore, Charlie Deck, Joel Z Leibo, Charles Blundell
Meire Fortunato
Memory is an important aspect of intelligence and plays a role in many deep reinforcement learning models. However, little progress has been made in understanding when specific memory systems help more than others and how well they generalize. The field also has yet to see a prevalent, consistent and rigorous approach for evaluating agent performance on holdout data. In this paper, we aim to develop a comprehensive methodology to test different kinds of memory in an agent and assess how well the agent can apply what it learns in training to a holdout set that differs from the training set along dimensions that we suggest are relevant for evaluating memory-specific generalization. To that end, we first construct a diverse set of memory tasks that allow us to evaluate test-time generalization across multiple dimensions. Second, we develop and perform multiple ablations on an agent architecture that combines multiple memory systems, observe its baseline models, and investigate its performance against the task suite.
memory, rl, generalization
2 - 26
Model-Based Reinforcement Learning with Deep Generative Models for Industrial Applications
Ângelo Gregório Lovatto, Thiago Pereira Bueno, Leliane Nunes de Barros
Ângelo Gregório Lovatto
Industrial applications, such as those in the process industry or power generation, could benefit from reinforcement learning (RL) agents to reduce energy consumption and lower emissions. However, the systems involved in these applications usually have high usage costs, while RL algorithms generally require too many trials to learn a task. A promising approach to this inefficiency problem is model-based RL, which allows agents to learn a predictive model of the environment to extract more information from the available data. Given that industrial applications generally feature complex stochastic behavior, we propose investigating novel integration schemes between the model-based approach and deep generative models, a class of neural networks specially designed to handle sophisticated probability distributions. We will test these interventions in existing and novel benchmark tasks aimed at assessing a learning system's capacity to handle state changes governed by complex conditional probability distributions. We expect that our approach will lead to better model predictions and faster learning.
reinforcement learning, generative models, deep learning
2 - 28
On the optimization of the regularization parameters selection in sparse modeling
Victoria Peterson and Rubén D. Spies
Victoria Peterson
Tikhonov functionals are commonly used as regularization strategies for severely ill-posed inverse problems. Besides the type of penalization induced in the solution, the proper selection of the regularization parameters is of utmost importance for accurate estimation. In this work, we analyze several data-driven regularization parameter estimation methods in a mixed-term discriminative framework. Numerical results for P300 detection in Brain-Computer Interface classification are presented, showing the impact of regularization parameter estimation on classification performance.
generalized tikhonov regularization, tuning parameter selection, sparse modeling
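For reference, a generic mixed-penalty (generalized Tikhonov) functional of the kind discussed above can be written as follows; the specific penalties and discriminative term used in the work may differ:

```latex
% Generic mixed-penalty Tikhonov functional; illustrative notation only.
\[
\hat{x}(\lambda_1,\lambda_2) \;=\; \arg\min_{x}\;
\|Ax - b\|_2^2 \;+\; \lambda_1 \,\|x\|_1 \;+\; \lambda_2\,\|Lx\|_2^2 ,
\]
% where $\lambda_1,\lambda_2 > 0$ are the regularization parameters whose
% data-driven selection is analyzed in the work, and $L$ is a penalty operator.
```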
2 - 29
Pajé - End-to-End Machine Learning
Edesio Alcobaça, Davi Pereira-Santos, André Carvalho
Edesio Alcobaça
"The number, variety, and complexity of Data Science applications are rapidly increasing along with automated solutions. This kind of solution, called automated machine learning, makes data science accessible to non-specialists. On the other hand, from the specialist standpoint, automated machine learning can spare him/her manual and repetitive work, speeding up research. In the last years, there has been a strong interest in the development of tools able to automate data science. While the existing frameworks mainly focus on inducing accurate models through hyperparameter tuning, they disregard or forgo, for instance, the data preprocessing step, reproducibility, and explainability. Nevertheless, this kind of task expends the majority of human resources. In this paper, we present an overview of ideas behind Pajé, an open tool for automated data science. Pajé includes all the core processes of the data science pipeline, from data acquisition to model interpretation, and at the same time, addresses important aspects of machine learning, such as reproducibility and explainability."
automl, meta-learning, machine learning
2 - 30
Preliminary results of supervised models trained with charge density data from Cruzain-inhibitor complexes
Roxana Noelia Villafañe, Adriano Martín Luchi, Emilio Luis Angelina, Nélida María Peruchena
Roxana Noelia Villafañe
"Proteins are the most versatile biological molecules, with diverse functions. Recently, the AI community have developed interest in specific topics related to proteins as: protein folding, structural analysis, protein-ligand affinity estimation, among others. Cruzain is a cysteine protease involved in chagas disease with several Cz-inhibitor complexes deposited in the Protein Data Bank (PDB). Unfortunately, the number of structures solved up-to-date is scarce for the requirements of a machine learning optimization algorithm. Another issue is the high dimensionality of the data involved in structure-based approaches for drug design. In this work, charge density-based data was employed as input for a classification algorithm with the protein-ligand interactions as columns and ligands as rows. A support vector machine with recursive feature elimination was employed to uncover the most relevant features involved in the protein-inhibitor complexes. This approach is the first step for further analysis of topological data of Cz-ligand complexes under study. We hope that results will shed light to understand the inhibition mechanism of Cruzain."
support vector machines, qtaim, feature selection
2 - 31
Probability distributions of maximum entropy on Wasserstein balls and their applications
Luis Felipe Vargas and Mauricio Velasco
Mauricio Velasco
We introduce a cutting plane method for efficiently finding the probability distribution of maximum entropy contained in a Wasserstein ball. Such distributions are the most general (i.e. minimizers of the amount of prior information) in the ball and are therefore of central importance for statistical inference. We generalize these results to the problem of minimizing cross-entropy from a given prior distribution and use them to propose 1-parameter families of learning algorithms that are naturally resilient to biases.
wasserstein metric, maximum entropy, minimum cross-entropy
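Schematically, the problem described above can be stated as follows (notation is illustrative):

```latex
% Maximum-entropy distribution within Wasserstein radius $\varepsilon$ of an
% empirical distribution $\hat{\mu}$ (illustrative statement of the problem above):
\[
\max_{\mu}\; H(\mu)
\quad \text{subject to} \quad W_p(\mu, \hat{\mu}) \le \varepsilon ,
\]
% and, for the cross-entropy variant, $\min_{\mu} D_{\mathrm{KL}}(\mu \,\|\, \pi)$
% over the same ball, for a given prior distribution $\pi$.
```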
2 - 32
Random Projections and $\alpha$-shape to Support the Kernel Design
"Daniel Moreira Cestari Rodrigo Fernandes de Mello"
Daniel Cestari
We automatically design kernels from data by projecting points onto either random hyperplanes or the boundaries forming the $\alpha$-shape. We interpret such a transformation as an explicit strategy a kernel uses to extract features from data, so an SVM applied on this transformed space should be capable of correctly separating class instances. We first applied this method on two different synthetic datasets to assess its performance and parameter sensitivity. Those experimental results confirmed a considerable improvement over the original input space, as well as robustness to noise and parameter changes. Second, we applied our approach to well-known image datasets in order to evaluate its ability to deal with real-world data and high-dimensional spaces. Afterwards, we discuss how this novel approach could be plugged into Convolutional Neural Networks, helping to understand the effects and the impact of adding units to layers. Our proposal has a low computational cost and is parallelizable, working directly on the transformed space; when memory constraints hold, its resulting kernel matrix might be used instead. This approach considerably improved the classification performance in almost all scenarios, supporting the claim that it could be used as a general-purpose kernel transformation.
random projections; alpha-shape; kernel design
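A minimal sketch of the random-hyperplane variant described above (the $\alpha$-shape variant is not shown): points are projected onto random hyperplanes, a simple rectification is applied to the projections (one possible choice, not necessarily the paper's), and a linear SVM is trained on the transformed space:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Illustrative sketch only: random-hyperplane features followed by a linear SVM.
rng = np.random.default_rng(0)
X, y = make_moons(n_samples=500, noise=0.15, random_state=0)

n_planes = 200
W = rng.standard_normal((X.shape[1], n_planes))   # random hyperplane normals
b = rng.standard_normal(n_planes)                 # random offsets
Z = np.maximum(X @ W + b, 0.0)                    # rectified signed distances

Xtr, Xte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = LinearSVC(C=1.0, max_iter=5000).fit(Xtr, ytr)
print("accuracy on transformed space:", clf.score(Xte, yte))
```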
2 - 33
Regular Inference over Recurrent Neural Networks as a Method for Black Box Explainability
Franz Mayr, Sergio Yovine
Franz Mayr
This work explores the general problem of explaining the behavior of recurrent neural networks (RNN). The goal is to construct a representation which enhances human understanding of an RNN as a sequence classifier, with the purpose of providing insight on the rationale behind the classification of a sequence as positive or negative, but also to enable performing further analyses, such as automata-theoretic formal verification. In particular, an active learning algorithm for constructing a deterministic finite automaton which is approximately correct with respect to an artificial neural network is proposed.
recurrent neural networks, regular inference, explainability
2 - 34
Scalable methods for computing state similarity in deterministic Markov Decision Processes
Pablo Samuel Castro
Pablo Samuel Castro
We present new algorithms for computing and approximating bisimulation metrics in Markov Decision Processes (MDPs). Bisimulation metrics are an elegant formalism that capture behavioral equivalence between states and provide strong theoretical guarantees on differences in optimal behaviour. Unfortunately, their computation is expensive and requires a tabular representation of the states, which has thus far rendered them impractical for large problems. In this paper we present a new version of the metric that is tied to a behavior policy in an MDP, along with an analysis of its theoretical properties. We then present two new algorithms for approximating bisimulation metrics in large, deterministic MDPs. The first does so via sampling and is guaranteed to converge to the true metric. The second is a differentiable loss which allows us to learn an approximation even for continuous state MDPs, which prior to this work had not been possible.
markov decision processes, reinforcement learning, bisimulation metrics
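For a small deterministic MDP, the bisimulation metric can be computed exactly by fixed-point iteration, as in the tabular sketch below; the paper's sampling-based and differentiable approximations, which are what make the metric scalable, are not shown:

```python
import numpy as np

def bisimulation_metric(R, T, gamma=0.9, iters=200):
    """Exact fixed-point iteration of the bisimulation metric in a deterministic MDP.

    R[s, a]: reward; T[s, a]: deterministic next state. Returns the matrix d[s, t].
    Tabular illustration only of the quantity the paper approximates at scale.
    """
    n_states, n_actions = R.shape
    d = np.zeros((n_states, n_states))
    for _ in range(iters):
        new_d = np.zeros_like(d)
        for s in range(n_states):
            for t in range(n_states):
                new_d[s, t] = max(
                    abs(R[s, a] - R[t, a]) + gamma * d[T[s, a], T[t, a]]
                    for a in range(n_actions)
                )
        d = new_d
    return d

# Tiny 3-state, 2-action example (placeholder rewards and transitions)
R = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
T = np.array([[1, 2], [1, 2], [0, 0]])
print(np.round(bisimulation_metric(R, T), 3))
```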
2 - 35
See and Read: Detecting Depression Symptoms in Higher Education Students Using Multimodal Social Media Data
Paulo Mann, Aline Paes, Elton H. Matsushima
Paulo Mann
Mental disorders such as depression and anxiety have been increasing at alarming rates in the worldwide population. Notably, major depressive disorder has become a common problem among higher education students. While the reasons for this alarming situation remain unclear (although widely investigated), students already facing this problem must receive treatment, and to that end it is first necessary to screen the symptoms. The traditional way of doing so relies on clinical consultations or questionnaires. However, nowadays, the data shared on social media is a ubiquitous source that can be used to detect depression symptoms even when the student is unable to afford or seek professional care. In this work, we focus on detecting the severity of depression symptoms in higher education students by comparing deep learning with feature engineering models induced from Instagram data. The experimental results show that students presenting a BDI score higher than 20 can be detected with 0.92 recall and 0.69 precision in the best case, reached by a fusion model. Our findings show the potential to help further investigation of depression by bringing at-risk students to light, guiding them to access adequate treatment.
deep learning, depression, students
2 - 36
Solving Linear Inverse Problems by Joint Posterior Maximization with a VAE Prior
Mario González, Andrés Almansa, Mauricio Delbracio, Pablo Musé
Mario González
"We address the problem of solving ill-posed inverse problems in imaging where the prior is a neural generative model. Specifically we consider the decoupled case where the prior is trained once and can be reused for many different degradation models without retraining. Whereas previous MAP-based approaches to this problem lead to highly non-convex optimization algorithms, our approach computes the joint (space-latent) MAP that naturally leads to alternate optimization algorithms and to the use of a stochastic encoder to accelerate computations. We show theoretical and experimental evidence that the proposed objective function may be quite close to bi-convex, which would pave the way to show strong convergence results of our optimization scheme. Experimental results also show the higher quality of the solutions obtained by our approach with respect to non-convex MAP approaches."
inverse problems, variational autoencoder, maximum a posteriori
2 - 37
Stream-based Expert Ensemble Learning for Network Measurements Analysis
Juan Vanerio, Pedro Casas, Federico Larroca
Juan Vanerio
"The application of machine learning to Network Measurements Analysis problems has largely increased in the last decade; however, it remains difficult to say today which is the most fitted category of models to address these tasks in operational networks. We work on Stream-GML2, a generic stream-based (online) Machine Learning model for the analysis of network measurements. The model is a stacking ensemble learning algorithm, in which several weak or base learning algorithms are combined to obtain higher predictive performance. In particular, Stream-GML2 is an instance of a recent model known as Super Learner, which performs asymptotically as good as the best input base learner. It provides a very powerful approach to tackle multiple problems with the same technique while minimizing over-fitting likelihood during training, using a variant of cross-validation. Additionally, stream-GML2 copes with concept drift and performance degradation by relying on Reinforcement Learning (RL) principles, no-regret learning and online-convex optimization. The model resorts to adaptive memory sizing to retrain the system when required, adjusting its operation point dynamically according to distribution changes in incoming samples or performance degradation over time."
stream learning; ensemble learning; network attacks
2 - 39
Synthesizing Atmospheric Radar Images from IR Satellite Channel Using Generative Deep Neural Networks
Maximiliano A. Sacco, Guillermo Scheffler, Juan Ruiz
Maximiliano Sacco
We present a novel application to infer atmospheric radar reflectivity images from infrared satellite images. Given the high cost of radar instruments, data-oriented image reconstruction appears as an attractive option. We compared the output of fully connected networks, convolutional-deconvolutional networks and generative adversarial networks trained with synthetically generated radar/satellite image pairs from numerical weather model simulations. Results are comparable with state-of-the-art statistical methods. The application shows promising results for short-term weather prediction.
satellite, radar, gans
2 - 40
Towards self-healing SDNs for dynamic failures
Cristopher G. S. Freitas, André L. L. Aquino
Cristopher Gabriel de Sousa Freitas
Legacy IP networks are currently a huge problem for Internet Service Providers: as demand grows exponentially, profit doesn't follow. With the emergence of Software-Defined Networks (SDN), providers are hoping to improve their service while lowering operational expenses. In this work, we focus on self-healing SDNs, which require fault-tolerant mechanisms and intelligent network management to enable the system to perceive its incorrect states and act to fix them. As fault tolerance is a huge issue, we narrow our proposal to dynamic failures only, as these are usually the best target for machine learning approaches, deterministic solutions being sub-optimal or too complex. Thus, we develop a solution using Deep Reinforcement Learning (DRL) for routing and load balancing, considering highly dynamic traffic, and we show the viability of a model-free solution and its efficiency.
deep reinforcement learning, fault tolerance, software-defined networks
2 - 41
Towards the Education of the Future: Challenges and Opportunities for AI?
Germán Capdehourat, Federica Bascans, Fabián Frommel, María Catalina Piana, Fiorella Nahmias, Cecilia Bisogno, Cecilia Marconi, Alessia Zucchetti, Fiorella Haim, Enrique Lev.
Germán Capdehourat
As in other verticals, the application of data science to education opens up new possibilities. An example is the growing research community in learning analytics. Different goals, such as looking for tools for a more personalized education and the detection of particular difficulties at early ages, are relevant challenges that are being addressed in the area. In this context, we present the case of Plan Ceibal, an institution that assists the education system in Uruguay, providing technological solutions for the support of education.
education, learning analytics, ai literacy
2 - 42
Transfer in Multiagent Reinforcement Learning
Felipe Leno da Silva, Anna Helena Reali Costa
Felipe Leno da Silva
Reinforcement learning methods have successfully been applied to build autonomous agents that solve challenging sequential decision-making problems. However, agents need a long time to learn a task, especially when multiple autonomous agents are in the environment. This research aims to propose a Transfer Learning framework to accelerate learning by combining two knowledge sources: (i) previously learned tasks; and (ii) advice from a more experienced agent. The definition of such a framework requires answering several challenging research questions, including: How to abstract and represent knowledge, in order to allow generalization and posterior reuse?, How and when to transfer and receive knowledge in an efficient manner?, and How to consistently combine knowledge from several sources?
machine learning, multiagent reinforcement learning, transfer learning
2 - 43
Transformers are Turing Complete
Jorge Pérez, Javier Marinkovic, Pablo Barceló
Jorge Perez
"Alternatives to recurrent neural networks, in particular, architectures based on attention, have been gaining momentum for processing input sequences. In spite of their relevance, the computational properties of these alternatives have not yet been fully explored. We study the computational power of one of the most paradigmatic architectures exemplifying the attention mechanism, the Transformer (Vaswani et al., 2017). We show that the Transformer is Turing complete exclusively based on their capacity to compute and access internal dense representations of the data. Our study also reveals some minimal sets of elements needed to obtain these completeness results."
attention, transformers, turing completeness
2 - 44
Uncovering differential equations
Agustin Somacal
Many branches of science and engineering require differential equations to model the dynamics of the systems under study. Traditionally, the identification of the appropriate terms in the equation has been done by experts. Brunton, Proctor, and Kutz (2016) developed a method to automate this task using the data itself. In this work, we extend the applicability of this method to situations where not all variables are observed, by adding higher-order derivatives to the model space search. We first test the approach using only one variable of the Lorenz system and then apply the same methodology to temperature time series. We found that the proposed approach is enough to recover equations with R² > 0.95 in both cases. We also propose an algebraic method to obtain future values of the system and compare it with traditional integrative methods, finding that our approach is more stable, giving high-accuracy predictions in the case of the Lorenz system.
differential equations, dynamical systems
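A minimal sketch of the sparse-regression step in the spirit of Brunton, Proctor, and Kutz (2016): sequentially thresholded least squares over a library of candidate terms (the higher-order-derivative extension would simply add more columns to the library); the data and library here are toy placeholders:

```python
import numpy as np

def sequential_thresholded_lstsq(Theta, dXdt, threshold=0.1, iters=10):
    """Find sparse coefficients Xi such that dXdt ~ Theta @ Xi (illustrative sketch)."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0                                   # drop small terms
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():                                 # refit the remaining terms
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# Toy usage: recover dx/dt = -2x + 3y from noisy placeholder data
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 1))
y = rng.standard_normal((500, 1))
dxdt = -2 * x + 3 * y + 0.01 * rng.standard_normal((500, 1))
Theta = np.hstack([np.ones_like(x), x, y, x * y, x**2, y**2])  # candidate library
print(np.round(sequential_thresholded_lstsq(Theta, dxdt), 2).ravel())
```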
2 - 45
Aurea Soriano-Vargas, Bernd Hamann, Maria Cristina F. de Oliveira
Aurea Soriano-Vargas
We present an integrated interactive framework for the visual analysis of time-varying multivariate datasets. As part of our research, we performed in-depth studies concerning the applicability of visualization techniques to obtain valuable insights. We consolidated the considered analysis and visualization methods in one framework, called TV-MV Analytics. It effectively combines visualization and data mining algorithms providing the following capabilities: i) visual exploration of multivariate data at different temporal scales; and ii) a hierarchical small multiples visualization combined with interactive clustering and multidimensional projection to detect temporal relationships in the data. We demonstrate the value of our framework for specific scenarios, by studying three use cases that were validated and discussed with domain experts.
visual analytics, time-varying multivariate data, visual feature selection
2 - 46
AI-enabled applications with social and productivity impact
Digital Sense Technologies
Álvaro Pardo
dSense is a specialized R&D studio that provides consultancy and development services in Computer Vision, Machine Learning and Image Processing for projects with an important component of innovation. Our team of 4 PhDs, 5 MScs and experienced engineers has authored more than 175 papers and 4 US patents. By taking advantage of our research background, we have been able to develop valuable custom AI-enabled solutions across industries with a positive social and productivity impact. We introduce some of the most recent ones in this poster.
Computer Vision, Machine Learning, Image Processing
2 - 47
FLambé: A Customizable Framework for Machine Learning Experiments
Jeremy Wohlwend, Nicholas Matthews, Ivan Itzcovich
Carolina Rodriguez Diz
Flambé is a machine learning experimentation framework built to accelerate the entire research life cycle. Flambé's main objective is to provide a unified interface for prototyping models, running experiments containing complex pipelines, monitoring those experiments in real-time, reporting results, and deploying a final model for inference. Flambé achieves both flexibility and simplicity by allowing users to write custom code but instantly include that code as a component in a larger system which is represented by a concise configuration file format. We demonstrate the application of the framework through a cutting-edge multistage use case: fine-tuning and distillation of a state of the art pretrained language model used for text classification.
pytorch, experiment, research
3 - 1
A Comparative Study of Segmentation Methods for Historical Documents
Nury Yuleny Arosquipa Yanque
Nury Yuleny Arosquipa Yanque
Great efforts have been made in recent years to digitize ancient handwritten and machine-printed text documents, yet Optical Character Recognition (OCR) systems still do not work well on them for a variety of reasons (paper aging defects, faded ink, stains, uneven lighting, folds, bleed-through, ghosting, poor contrast between text and background, among others). An important step in most OCR systems is the segmentation of text and background (binarization), which is particularly sensitive to the typical artifacts of historical documents; over the last 8 years, competitions on the segmentation of historical documents have been held. Here we compare several segmentation methods and propose a new one, based on machine learning, that combines the advantages of heuristic and texture methods. The study covered both handwritten and typographic historical documents, and we compared the segmentations via the standard DIBCO metrics and an open OCR system. The results of the proposed method are comparable with the state of the art with respect to the DIBCO metrics, and it has advantages with respect to the OCR system.
document binarization, thresholding, text segmentation
3 - 2
A continuous model of pulse clarity
Martin A. Miguel, Mariano Sigman, Diego Fernandez Slezak
Martin Miguel
"Music has a unique capability to evoke emotions. One of the mechanisms used is through the manipulation of expectations in time. Specifically in rhythms, two concepts have been explored that relate to the comfort and understanding of music: pulse clarity and rhythm complexity. Several computational models have been introduced to analyze these concepts but most do not consider how they evolve in time. We present a novel beat tracking model, that given a rhythmic passage provides continuous information of which tacti are most reasonable and how salient they are. Here we evaluate the output of the model as a proxy for pulse clarity. We performed a beat tapping experiment which consisted in asking participants (N=27) to tap the subjective beat while listening to 30 rhythmic passages. A pulse clarity score was calculated as the mean certainty of the model. After each trial participants were asked about task difficulty. We also calculated the within subject tapping precision as an empirical measurement of pulse clarity. The proposed metric correlated with similar spearman correlation coefficient than previous models with both collected measures. This positive result allows us to inspect music emotions that arise from changes in rhythm perception."
computational models; cognitive musicology; beat perception
3 - 3
A cross linguistic study of the production of turn taking cues in Slovak, Argentinian Spanish, and American English.
Pablo Brusco, Jazmin Vidal, Štefan Beňuš, Agustín Gravano
Pablo Brusco
"Humans produce and perceive acoustic and prosodic cues during dialogue. However, little of the dynamics and the cross-linguistic validity of these cues is known. In this work we explore and show the effect of different acoustic/prosodic cues preceding different turn transitions (holds, switch, and backchannels) using machine learning techniques as a descriptive tool in three languages: Slovak, American English, and Argentine Spanish. Results suggest that the three languages share acoustic/prosodic resources to signal turn transitions. We also rank the features in each language by order of contribution to the separation of classes of turn transitions. This study demonstrates that machine learning methods provide a powerful and efficient means for explaining how the dynamics of prosodic features relate to conversation flow."
descriptive machine learning, turn-taking, prosody
3 - 4
A Multi-Armed Bandit Approach for House Ads Decisions
Nicolás Aramayo, Mario Schiappacase, Marcel Goic
Nicolás Aramayo
In recent years, many websites have started to use a variety of recommendation systems to decide the content to display to their visitors. In this work, we address this problem using a contextual combinatorial multi-armed bandit approach to select the combination of house ads to display on the homepage of a large retailer. House ads correspond to promotional information displayed on the retailer's website to highlight some category of products. As retailers are continuously changing their product assortment, they can benefit from dynamically deciding which products are more effective; treating this as a reinforcement learning problem provides the ability to learn efficiently which images perform well and to quickly discard the least attractive ones. Moreover, the number of clicks the ads receive depends not only on their own attractiveness, but also on how attractive the other products displayed around them are. Finally, using previous purchases of a fraction of customers, we implemented another version of our algorithm that personalizes recommendations. We tested our methods in a controlled experiment where we compared them against an experienced team of managers. Our results show that our method leads to a more active exploration of the decision space, and also to significant increases in conversion rates.
recommendation systems, reinforcement learning, a/b test
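A minimal LinUCB-style sketch of a contextual bandit for choosing which house ad to display given a context vector; this is a generic illustration, not the retailer's production algorithm:

```python
import numpy as np

class LinUCB:
    """Generic contextual bandit with per-ad linear payoff models (illustrative sketch)."""

    def __init__(self, n_ads, dim, alpha=1.0):
        self.A = [np.eye(dim) for _ in range(n_ads)]   # per-ad design matrices
        self.b = [np.zeros(dim) for _ in range(n_ads)]
        self.alpha = alpha                             # exploration strength

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                          # ridge estimate of payoff weights
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))                  # upper confidence bound choice

    def update(self, ad, x, clicked):
        self.A[ad] += np.outer(x, x)
        self.b[ad] += clicked * x

# Toy usage with placeholder contexts and feedback
rng = np.random.default_rng(0)
bandit = LinUCB(n_ads=5, dim=8)
for _ in range(100):
    x = rng.standard_normal(8)                         # context (e.g., page features)
    ad = bandit.choose(x)
    clicked = float(rng.random() < 0.1)                # placeholder click feedback
    bandit.update(ad, x, clicked)
```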
3 - 5
A multi-task relational model for conversation classification
Felipe del Río, Álvaro Soto
Felipe Del Río
Every day, millions of users interact online with each other, generating a large number of interactions all over the internet. These discussion data pose a great opportunity as a source for a variety of insights, as well as a great challenge on how to correctly obtain them. Most existing models focus only on classification, without explaining why they classify in a certain way. This limits our ability to get insights from our models and reduces trust in them. To attack this issue, we build two datasets. The first, based on a Reddit corpus of discussion threads on the platform, is composed of the main task of classifying a thread into its subreddit, plus an auxiliary task. The second is a dataset of news with their subsequent discussion, based on the Chilean news outlet EMOL, in which the main task is to classify the controversiality of the news, complemented with a variety of other auxiliary tasks. We propose a Transformer-based model that can learn multiple tasks jointly and can effectively be used on both of these datasets, and we tested it on the Reddit dataset. We verified that our model achieves better performance than our baseline while paying attention to relevant interactions in a conversation.
abstractive summarization, transformers, nlp
3 - 6
A Place to Go: Locating Damaged Regions after Natural Disasters through Mobile Phone Data
"Galo Castillo-López, María-Belén Guaranda, Fabricio Layedra, and Carmen Vaca"
María Belén Guaranda
Large-scale natural disasters involve budgetary problems for governments even when local and foreign humanitarian aid is available. Prioritizing investment requires near-real-time information about the impact of the hazard in different locations. However, such information is not available through sensors or other devices, especially in developing countries that do not have such infrastructure. A rich source of information is the data resulting from the mobile phone activity that citizens in affected areas resume as soon as the network becomes available post-disaster. In this work, we exploit this source of information to conduct different analyses in order to infer the affected zones in the Ecuadorian province of Manabí after the 2016 earthquake, whose epicenter was in the same province. We propose a series of features to characterize a geographic area, as granular as a canton, after a natural disaster, and label its level of damage using mobile phone data. Our methods result in a classifier based on the K-Nearest Neighbors algorithm that detects affected zones with 75% accuracy. We compared our results with official data published two months after the disaster.
spatio-temporal analysis, mobile phone activity, disaster management
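A minimal sketch of a K-Nearest Neighbors classifier of the sort described above, assuming cantons are represented by a small vector of features engineered from mobile phone activity; the features and labels below are synthetic placeholders:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# placeholder features per canton, e.g. post-quake call volume drop,
# night-time activity change, outgoing mobility ratio
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))        # 60 cantons, 3 engineered features
y = rng.integers(0, 2, size=60)     # 1 = heavily damaged, 0 = not

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean())
```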
3 - 7
Advanced Transfer Learning Approach for Improving Spanish Sentiment Analysis
Daniel Palomino and Jose Ochoa-Luna
JOSE EDUARDO OCHOA LUNA
In recent years, innovative techniques such as transfer learning have had a strong impact on Natural Language Processing, massively advancing the state of the art in several challenging tasks. In particular, the Universal Language Model Fine-Tuning (ULMFiT) algorithm has shown impressive performance on several English text classification tasks. In this paper, we aim to develop an algorithm for Spanish sentiment analysis of short texts that is comparable to the state of the art. To do so, we adapted the ULMFiT algorithm to this setting. Experimental results on benchmark datasets (InterTASS 2017 and InterTASS 2018) show how this simple transfer learning approach performs well when compared to more elaborate deep learning techniques.
sentiment analysis, transfer learning, spanish
3 - 8
Aggressive Language Identification in Social Media using Deep Learning
Errol Wilderd Mamani Condori
Errol Wilderd Mamani Condori
The growing influence of users on social media has caused aggressive content to propagate across the internet. As a way to control and tackle this problem, recent work on aggressive and offensive language detection has found that deep learning techniques perform well, as does the novel Bidirectional Encoder Representations from Transformers (BERT). This work presents an overview of offensive language detection in English, and of aggressive content detection using this Transformer-based approach for the case study of Mexican Spanish. Our preliminary results show that the pre-trained multilingual BERT model also performs well compared with recent approaches in the aggressiveness detection track at MEX-A3T.
aggressive language, deep learning, social media
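A minimal fine-tuning sketch for the pre-trained multilingual BERT model mentioned above, using the Hugging Face transformers library; the two in-line examples and their labels are placeholders, not the MEX-A3T data:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# placeholder data: 1 = aggressive, 0 = not aggressive
texts = ["ejemplo de tuit agresivo", "un comentario neutral"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, max_length=64,
                  return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                       # a few illustrative steps
    out = model(**batch, labels=labels)  # cross-entropy loss computed internally
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print("final loss:", out.loss.item())
```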
3 - 9
CharCNN Approach to Aspect-Based Sentiment Analysis for Portuguese
Ulisses Brisolara Corrêa and Ricardo Araújo
Ulisses Brisolara Correa
Sentiment Analysis was developed to support individuals in the harsh task of obtaining significant information from large amounts of non-structured opinionated data sources, such as social networks and specialized review websites. A yet more challenging task is to point out which part of the target entity an opinion addresses; this task is called Aspect-Based Sentiment Analysis. The majority of work in the literature focuses on English text, while other languages lack resources, tools, and techniques. This paper focuses on Aspect-Based Sentiment Analysis for accommodation service reviews written in Brazilian Portuguese. Our proposed approach uses Convolutional Neural Networks with character-level inputs. Results suggest that our approach outperforms lexicon-based and LSTM-based approaches, displaying state-of-the-art performance for binary Aspect-Based Sentiment Analysis.
aspect-based sentiment analysis, char-level convolutional neural networks
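A minimal sketch of a character-level CNN classifier in the spirit of the approach above, assuming ASCII-coded review characters and a single convolutional layer; the architecture details are illustrative, not the authors' exact model:

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character-level CNN for aspect polarity classification (illustrative)."""

    def __init__(self, n_chars=128, emb_dim=32, n_filters=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=5, padding=2)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, char_ids):                 # (batch, seq_len)
        x = self.emb(char_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))             # (batch, n_filters, seq_len)
        x = x.max(dim=2).values                  # max-over-time pooling
        return self.fc(x)                        # (batch, n_classes)

def encode(text, max_len=100):
    """Map a review to ASCII codes, truncated/padded to a fixed length."""
    ids = [min(ord(c), 127) for c in text[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

model = CharCNN()
batch = torch.stack([encode("o quarto era limpo"), encode("café da manhã ruim")])
print(model(batch).shape)   # torch.Size([2, 2])
```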
3 - 10
Clustering meteorological scenarios
"Matthieu Jonckheere Dominique Picard Vincent Lefieux Alfredo Umfurer Agustin Somacal Yamila Barrera"
Yamila Barrera
"The fluctuations in the temperature have a strong influence in the electric consumption. As a consequence, identifying and finding groups of possible climate scenarios is useful for the analysis of the electric supply system. The scenarios data that we are considering are time series of hourly measured temperatures over a grid of geographical points in France and neighboring areas. Clustering techniques are useful for finding homogeneous groups of times series but the challenge is to find a suitable data transformation and distance metric. In this work, we used several transformations (fourier, wavelets, autoencoders) and distance metrics (DTW and euclidean among others) and found consistent groups of climate scenarios using clustering techniques (k-medoids and k-means). We found that k-shape performs the best according a within cluster dispersion index. This is a joint work with RTE (Réseau de Transport d’Électricité), the electricity transmission system operator of France. "
unsupervised learning, clustering, temperature time series
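A minimal sketch of one of the pipelines listed above, Fourier-magnitude features followed by k-means, on synthetic hourly temperature series; the wavelet, autoencoder, DTW, k-medoids, and k-shape variants compared in the poster are not shown:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# synthetic stand-in: 200 scenarios of hourly temperature over one week
hours = np.arange(24 * 7)
scenarios = np.stack([
    10 + 5 * np.sin(2 * np.pi * hours / 24 + rng.normal(0, 0.3))
    + rng.normal(0, 1, hours.size) + rng.choice([0, 8])   # two regimes
    for _ in range(200)
])

# transform: keep low-frequency Fourier magnitudes as features
fourier = np.abs(np.fft.rfft(scenarios, axis=1))[:, :20]
features = StandardScaler().fit_transform(fourier)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```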
3 - 11
Compare OCR Services
Orietha Castillo
Orietha Marcia Castillo Zalles
"Abstract Start a startup these days is not an easy job, there are many things we need to consider on this like choose a partner. There are entrepreneurs who have chosen Twitter as silent partner, beside that the startup needs press, growth, customer acquisition and rabid fans but entrepreneurs do not have time and money which makes the path more easy. Twitter is the guy who supports the entrepreneur have traffic to their sites for free, have the opportunity to network with potential clients, a Marketing expert Mark Schaefer said in his book “Known” that if you had to choose one distribution channel for your content, use Twitter. Is an excellent channel to communicate ideas, increase a positive branding is a key to stay in clients mind which will increase the potential clients, but these benefits are not the best thing Twitter can provide. Twitter by itself knows your client's thoughts what is really the most important thing. In this paper will review this topic in detail "
text and topic analysis
3 - 12
Conditioning visual reasoning through query based mechanisms
Sebastián Amenabar, Raimundo Manterola, Julio Hurtado, Francisco Rencoret and Alvaro Soto
Francisco Rencoret
"Deep Neural Networks learn to solve datasets which may contain different reasoning tasks, for example, VQ&A where questions may rely on counting, positioning, or existence reasoning. Even though they may sometimes learn effectively for simple tasks, they usually lack generalization capabilities for complex tasks which demand several reasoning steps. Their fixed weight structure limits the network to use different neurons for different types of reasoning. In this work, we propose a method to adaptively condition Neural Network's reasoning based on the information retrieved of an input. The proposed method helps the model carry out a variety of reasoning tasks generalizing better for complex tasks. Based on VQ&A, we test our hypothesis by conditioning visual reasoning in models that rely on iterative reasoning. On each reasoning step, the model attends the input and radically alters their visual reasoning. By transforming each convolutional filter, the model learns to specialize their visual reasoning for the arbitrary input and reasoning step."
conditional visual reasoning, selective feature transformation, vq&a
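A minimal sketch of query-conditioned feature transformation in the spirit of the abstract above (close in flavor to FiLM-style modulation): a question embedding predicts per-channel scales and shifts that alter the visual features at each reasoning step; shapes and layer choices are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class QueryConditionedBlock(nn.Module):
    """Modulate convolutional features with parameters predicted from the query."""

    def __init__(self, channels=64, query_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.film = nn.Linear(query_dim, 2 * channels)  # predicts gamma and beta

    def forward(self, feats, query):                 # feats: (B, C, H, W)
        gamma, beta = self.film(query).chunk(2, dim=-1)
        h = self.conv(feats)
        h = gamma[:, :, None, None] * h + beta[:, :, None, None]
        return torch.relu(h) + feats                 # residual connection

block = QueryConditionedBlock()
feats = torch.randn(2, 64, 14, 14)   # image features
query = torch.randn(2, 128)          # question / reasoning-step state
print(block(feats, query).shape)     # torch.Size([2, 64, 14, 14])
```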
3 - 13
Contextual Hybrid Session-based News Recommendation with Recurrent Neural Networks
Gabriel Moreira, Dietmar Jannach, Adilson Marques da Cunha
Gabriel
Recommender systems help users deal with information overload by providing tailored item suggestions to them. The recommendation of news is often considered to be challenging, since the relevance of an article for a user can depend on a variety of factors, including the user’s short-term reading interests, the reader’s context, or the recency or popularity of an article. Previous work has shown that the use of RNNs is promising for the next-in-session prediction task, but has certain limitations when only recorded item click sequences are used as input. In this work, we present a hybrid, deep learning based approach for session-based news recommendation that is able to leverage a variety of information types. We evaluated our approach on two public datasets, using a temporal evaluation protocol that simulates the dynamics of a news portal in a realistic way. Our results confirm the benefits of considering additional types of information, including article popularity and recency, resulting in significantly higher recommendation accuracy and catalog coverage than other session-based algorithms. Additional experiments show that the proposed parameterizable loss function used in our method also allows us to balance two conflicting quality factors: accuracy and novelty.
recommender systems, deep learning, session-based news recommendation
3 - 14
Cost-sensitive Machine Learning
Emanuele Luzio
Emanuele Luzio
What is the difference between classification and decision making? We show how to calibrate a classifier, incorporating the economic context information into the model and transforming a classification model into a decision-making model.
decision-making, classification, economics
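A worked example of the calibration idea above: with calibrated probabilities, the economically optimal rule compares expected costs instead of using a fixed 0.5 threshold; the costs below are invented for illustration:

```python
def decide(p_positive, cost_false_positive, cost_false_negative):
    """Act 'positive' only when it has lower expected cost than acting 'negative'.

    Expected cost of acting positive:  (1 - p) * cost_false_positive
    Expected cost of acting negative:   p      * cost_false_negative
    """
    threshold = cost_false_positive / (cost_false_positive + cost_false_negative)
    return p_positive >= threshold

# e.g. rejecting a good client costs 10, accepting a fraudster costs 500
print(decide(0.10, cost_false_positive=10, cost_false_negative=500))  # True
print(decide(0.01, cost_false_positive=10, cost_false_negative=500))  # False
```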
3 - 15
Deep Learning for Meteor Classification
Yuri Galindo, Ana Carolina Lorena
Yuri Galindo
The EXOSS (Exploring the Southern Sky) non-profit organization manages a network of cameras across Brazil that automatically capture meteor images. More often than not, the captures are of non-meteor objects such as birds and planes, and they are currently filtered by volunteers. Our research targets the classification of these images by applying Convolutional Neural Networks to the automatic captures, which are black and white, noisy, uncentered, and largely different from publicly available datasets. The objective is to develop a system capable of automatically filtering the captures, reducing human intervention to cases with uncertain classifications.
computer vision, image classification, deep learning
3 - 16
Deep Multiple Instance Learning for the Acoustic Detection of Tropical Birds using Limited Data
Jorge Castro, Roberto Vargas-Masis, Danny Alfaro Rojas
Jorge Castro
Deep learning algorithms have produced state-of-the-art results for acoustic bird detection and classification. However, thousands of bird vocalizations have to be manually tagged by experts to train these algorithms. We use three strategies to reduce this manual work: simpler labels, fewer labels, and less labeled data. The Multiple Instance Learning (MIL) approach provides the framework to simplify and reduce the number of labels, as each recording (bag) is modeled as a collection of smaller audio segments (instances) and is associated with a single label that indicates whether at least one bird was present in the recording. In this work, we propose a deep neural network architecture based on the MIL framework to predict the presence or absence of tropical birds in one-minute recordings. As only a relatively small number of training observations (1600) is used to train the algorithm, we compare the performance of the network using several hand-crafted features.
deep learning, multiple instance learning (mil), bird detection
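A minimal sketch of the MIL formulation described above, assuming instance-level scores are aggregated by max pooling to produce a bag-level prediction; the feature dimensions and segment counts are placeholders:

```python
import torch
import torch.nn as nn

class MILBirdDetector(nn.Module):
    """Bag-level prediction from instance scores via max pooling (illustrative)."""

    def __init__(self, n_features=40):
        super().__init__()
        self.instance_scorer = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, bag):                    # bag: (n_instances, n_features)
        scores = self.instance_scorer(bag)     # one score per audio segment
        return torch.sigmoid(scores.max())     # bag is positive if any instance is

model = MILBirdDetector()
recording = torch.randn(12, 40)   # 12 five-second segments x 40 acoustic features
label = torch.tensor(1.0)         # at least one bird heard in the minute
prob = model(recording)
loss = nn.functional.binary_cross_entropy(prob, label)
print(float(prob), float(loss))
```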
3 - 17
Deep Q-Learning in ROS for navigation with TurtleBot3.
Leopoldo Agorio, Juan Bazerque
Leopoldo Carlos Agorio Grove
"Our group is getting involved in the use of robots with the ROS operating system for machine learning applications and distributed algorithms. ROS connects with a simulation environment -Gazebo- with good fidelity in terms of the dynamics of actual robot platforms. This connection allows the robots to be trained offline through a series of simulated episodes, and then use the results online in the real world. In our poster we explain the Deep Q-Learning method developed by the ROBOTIS machine learning team, in which a robot learns to navigate towards a target avoiding a series of obstacles. We implement this technique using a Turtlebot3 platform and design our own robot world in which the robot is trained."
robotics, q-learning, navigation
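A minimal sketch of the Q-learning update at the core of the method described above; the poster uses a deep Q-network in Gazebo, so the tabular Q-table, toy environment, and hyperparameters below only stand in for that setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 25, 4            # e.g. a discretized 5x5 world, 4 moves
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

def step(state, action):
    """Placeholder environment: random transition, reward only at the goal state."""
    next_state = rng.integers(n_states)
    reward = 1.0 if next_state == n_states - 1 else -0.01
    return next_state, reward

state = 0
for _ in range(10000):
    if rng.random() < epsilon:                      # epsilon-greedy exploration
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])  # Q-learning update
    state = next_state

print("greedy action per state:", np.argmax(Q, axis=1))
```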
3 - 18
Detecting Spatial Clusters of Disease Infection Risk Using Sparsely Sampled Social Media Mobility Patterns
Roberto Nalon, Renato Assuncao, Daniel Neill, Wagner Meira
Renato Assuncao
Standard spatial cluster detection methods used in public health surveillance assign each disease case a single location (typically, the patient’s home address), aggregate locations to small areas, and monitor the number of cases in each area over time. However, such methods cannot detect clusters of disease resulting from visits to non-residential locations, such as a park or a university campus. Thus we develop two new spatial scan methods, the unconditional and conditional spatial logistic models, to search for spatial clusters of increased infection risk. We use mobility data from two sets of individuals, disease cases and healthy individuals, where each individual is represented by a sparse sample of geographical locations (e.g., from geo-tagged social media data). The methods account for the multiple, varying number of spatial locations observed per individual, either by non-parametric estimation of the odds of being a case, or by matching case and control individuals with similar numbers of observed locations. Applying our methods to synthetic and real-world scenarios, we demonstrate robust performance on detecting spatial clusters of infection risk from mobility data, outperforming competing baselines.
spatial scan statistics, social media data, spatial cluster detection
3 - 19
Detecting conditions using a multi-modal deep learning approach
Diana Mosquera
Diana Mosquera
"Analyzing the voice behind words represents a central aspect of human communication, and therefore key to intelligent machines. In the field of computational linguistics, research has addressed human-computer interaction by allowing machines to recognize features such as loudspeakers, social and non-verbal signals, speech emotion and prosody estimation. These concepts added to the sequence modeling, allows us to generate an early diagnosis of the cognitive condition of the human being. "
cognitive impairment, prosody models, sequence models
3 - 20
Diagnosing Mental Health Disorders using Deep Learning and Probabilistic Graphical Models
Juan Pavez, Simón Michell, Diego Acuña, Héctor Allende
Juan Pavez
" Mental illnesses are becoming one of the most common health concern among the world population, with important effects on the life of people suffering them. Despite the evidence of the efficacy of psychological and pharmacological treatments, mental illnesses are largely underdiagnosed and untreated, especially in developing countries. One important cause of this is the scarcity of mental health providers that can correctly diagnose and treat people in need of help. In this work, we developed a deep learning system to help in the differential diagnosis of mental disorders. Our system can analyze a patient description of symptoms written in natural language, and based on that, it can ask questions to confirm or refine the initial diagnosis made by the deep learning model. We trained our model on thousands of anonymous symptoms descriptions that we collected from various sources on the internet. The initial prediction is refined by asking symptoms confirmation questions that are extracted from a probabilistic graphical model built by experts based on the diagnostic manual DSM-5. Preliminary studies both on symptoms descriptions from the internet and on clinical vignettes extracted from psychiatry exams show very encouraging results."
deep learning, healthcare, natural language processing
3 - 21
Efficient Data Sampling for Product Title Classification with Unbalanced Classes and Partially Unreliable Labels
Tobias Veiga
Tobias Mesquita Silva da Veiga
Having a large corpus for training can be a great asset for developing efficient machine learning models. Despite that, if a corpus is too large, computational problems may arise. Sampling the data is a reasonable approach, but it becomes more complex when the problem has additional restrictions. In the MercadoLibre Challenge 2019, not only was the corpus large, but the classes were also very unbalanced and most of the labels were unreliable. The method presented here is a simple way to sample from a large corpus while taking these restrictions into account. Using this sampling method and a simple SGDClassifier from scikit-learn, the public score was 90.38% (enough to rank 13th on the public leaderboard). Internal validation and leaderboard scores were very similar, with less than a 0.1% difference. To improve the score further to 90.13% (2nd place), an ensemble was used, combining a similar variation of the sampling method and a few different models.
text-classification, unbalanced-classes, unreliable-labels
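A minimal sketch of per-class capped sampling followed by a scikit-learn SGDClassifier over TF-IDF features, in the spirit of the abstract above; the cap, the toy titles, and the feature choice are placeholders, not the competition setup:

```python
import random
from collections import defaultdict

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

def cap_per_class(titles, labels, max_per_class=2, seed=0):
    """Keep at most `max_per_class` examples of each class (downsampling big ones)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for t, y in zip(titles, labels):
        by_class[y].append(t)
    sampled_titles, sampled_labels = [], []
    for y, group in by_class.items():
        keep = rng.sample(group, min(max_per_class, len(group)))
        sampled_titles += keep
        sampled_labels += [y] * len(keep)
    return sampled_titles, sampled_labels

titles = ["iphone 8 64gb", "funda iphone", "celular samsung a10",
          "heladera no frost", "bicicleta rodado 29", "iphone xr usado"]
labels = ["CELLPHONES", "CELLPHONES", "CELLPHONES",
          "FRIDGES", "BIKES", "CELLPHONES"]

X, y = cap_per_class(titles, labels)
model = make_pipeline(TfidfVectorizer(), SGDClassifier())
model.fit(X, y)
print(model.predict(["iphone 11 128gb"]))
```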
3 - 22
Efficiently Improved Hierarchical Text with External Knowledge Integration
Kervy Rivas, Gina Bustamante, Arturo Oncevay, Marco Sobrevilla
Gina Bustamante
Hierarchical text classification has recently been addressed with increasingly complex deep neural networks, taking advantage solely of the annotated corpus and of raw monolingual data for pre-training embeddings and language-model transfer. We turn the focus towards the potential semantic information in the target class definitions at the different layers of the hierarchy, and proceed to exploit it directly from word embedding spaces. We find that a less complex deep neural network can achieve state-of-the-art results by integrating the target class embeddings in an on-the-fly prediction from the highest levels. We also analyse the relevance of integrating this kind of external knowledge into a flat text classification scenario. Even with a straightforward approach to interconnecting external semantic information with the model, we surpass flat text classification baselines and previous work with more complex neural architectures on two well-known datasets.
text classification, knowledge integration, semantic information
3 - 23
EpaDB: analysis of a database for automatic assessment of pronunciation
Jazmin Vidal, Luciana Ferrer
Jazmin Vidal
In this paper, we describe the methodology for collecting and annotating a new database designed for conducting research and development on pronunciation assessment. We created EpaDB (English Pronunciation by Argentinians Database), which is composed of English phrases read by native Spanish speakers with different levels of English proficiency. The recordings are annotated with ratings of pronunciation quality at phrase-level and detailed phonetic alignments and transcriptions indicating which phones were actually pronounced by the speakers. We present inter-rater agreement, the effect of each phone on overall perceived non-nativeness, and the frequency of specific pronunciation errors.
Pronunciation Scoring, Databases, Phone-level
3 - 24
Exploring Double Cross Cyclic Interpolation in Unpaired Image-to-Image Translation
Jorge López, Antoni Mauricio, Guillermo Cámara
Jorge Roberto López Cáceres
Unpaired image-to-image translation consists of transferring a sample a in domain A to an analogous sample b in domain B without intensive pixel-to-pixel supervision. Current approaches focus on learning a generative function that maps between the two domains but ignore the latent information, even though exploring it does not require explicit supervision. This paper proposes a cross-domain GAN-based model that achieves bi-directional translation guided by latent-space supervision. The proposed architecture uses a double-loop cyclic reconstruction loss with an exchangeable training scheme, adopted to reduce mode collapse and enhance local details. Our proposal achieves outstanding results in visual quality, stability, and pixel-level segmentation metrics on different public datasets.
unpaired image-to-image translation, generative adversarial networks, latent space
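A minimal sketch of the cycle-reconstruction term that a double-loop cyclic loss builds on, using two toy generators; the adversarial and latent-space terms of the full objective are not shown:

```python
import torch
import torch.nn as nn

# toy generators mapping between domains A and B (stand-ins for the real networks)
G_AB = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
G_BA = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())

def cycle_loss(real_a, real_b):
    """L1 reconstruction after a full A->B->A and B->A->B loop."""
    rec_a = G_BA(G_AB(real_a))
    rec_b = G_AB(G_BA(real_b))
    l1 = nn.functional.l1_loss
    return l1(rec_a, real_a) + l1(rec_b, real_b)

real_a = torch.rand(2, 3, 64, 64)   # batch of images from domain A
real_b = torch.rand(2, 3, 64, 64)   # batch of images from domain B
print(float(cycle_loss(real_a, real_b)))
```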
3 - 25
FastDVDnet: Towards Real-Time Video Denoising Without Explicit Motion Estimation
Matias Tassano, Julie Delon, Thomas Veit
Matias Tassano
We propose FastDVDnet, a state-of-the-art video denoising algorithm based on a convolutional neural network architecture. Until recently, video denoising with neural networks had been a largely underexplored domain, and existing methods could not compete with the performance of the best patch-based methods. Our approach shows similar or better performance than other state-of-the-art competitors with significantly lower computing times. In contrast to other existing neural network denoisers, our algorithm exhibits several desirable properties such as fast runtimes, and the ability to handle a wide range of noise levels with a single network model. The characteristics of its architecture make it possible to avoid using a costly motion compensation stage while achieving excellent performance. The combination between its denoising performance and lower computational load makes this algorithm attractive for practical denoising applications.
video denoising, cnn, residual learning
3 - 26
From medical records to research papers: A literature analysis pipeline for supporting medical genomic diagnosis processes
Fernando López Bello, Hugo Naya, Víctor Raggio, Aiala Rosá
Fernando López Bello
In this paper, we introduce a framework for processing genetics and genomics literature, based on ontologies and lexical resources from the biomedical domain. The main objective is to support the diagnosis process that is done by medical geneticists who extract knowledge from published works. We constructed a pipeline that gathers several genetics- and genomics-related resources and applies natural language processing techniques, which include named entity recognition and relation extraction. Working on a corpus created from PubMed abstracts, we built a knowledge database that can be used for processing medical records written in Spanish. Given a medical record from Uruguayan healthcare patients, we show how we can map it to the database and perform graph queries for relevant knowledge paths. The framework is not an end user application, but an extensible processing structure to be leveraged by external applications, enabling software developers to streamline incorporation of the extracted knowledge.
health records, natural language processing, medical terminology
3 - 27
Generative Adversarial Networks for Image Synthesis and Semantic Segmentation in Brain Stroke Images
Israel Chaparro, Javier Montoya
Israel Nazareth Chaparro Cruz
Brain stroke was classified as the second leading cause of death in 2016; automated methods that can locate and segment strokes could aid clinicians' decisions about acute stroke treatment. Most medical image datasets are limited, small, and severely class-imbalanced, which limits the development of medical diagnostic systems. Generative Adversarial Networks (GANs) are one of the hottest topics in artificial intelligence and can learn how to produce data. This work presents conditional image synthesis with GANs for brain stroke image analysis and class balancing; furthermore, it presents a novel training framework for segmentation with GANs.
generative adversarial networks, image synthesis, image segmentation
3 - 28
How Important is Motion in Sign Language Translation?
Jefferson Rodríguez and Fabio Martínez
Jefferson Rodríguez
More than 70 million people use at least one Sign Language (SL) as their main channel of communication. Nevertheless, the absence of effective mechanisms to translate massive amounts of information among sign, written, and spoken languages is the main cause of the exclusion of deaf people from society. Thanks to recent advances, sign recognition has moved from a naive isolated sign recognition problem to structured end-to-end translation. Today, continuous SL recognition remains an open research problem because of multiple spatio-temporal variations, challenging visual sign characterization, and the non-linear correlation between signs. This work introduces a compact sign-to-text approach that explores motion as an alternative to support SL translation. In contrast to appearance-based features, the proposed representation allows focused attention on the main spatio-temporal regions relevant to a corresponding word. First, a 3D-CNN encodes optical flow volumes to highlight sign features. Then, an encoder-decoder architecture couples visual motion-sign information with the respective texts. On a challenging dataset with more than 4000 video clips, the motion-based representation outperforms the appearance-based representation, achieving 47.51 WER and 56.55 BLEU-4.
sign language translation, motion patterns, encoder-decoder architecture
3 - 30
Language-Agnostic Visual-Semantic Embeddings
Jônatas Wehrmann and Rodrigo C. Barros
Jônatas Wehrmann
This paper proposes a framework for training language-invariant cross-modal retrieval models. We also introduce a novel character-based word-embedding approach, allowing the model to project similar words across languages into the same word-embedding space. In addition, by performing cross-modal retrieval at the character level, the storage requirements for a text encoder decrease substantially, allowing for lighter and more scalable retrieval architectures. The proposed language-invariant textual encoder based on characters is virtually unaffected in terms of storage requirements when novel languages are added to the system. Our contributions include new methods for building character-level-based word-embeddings, an improved loss function, and a novel cross-language alignment module that not only makes the architecture language-invariant, but also presents better predictive performance. We show that our models outperform the current state-of-the-art in both single and multi-language scenarios. This work can be seen as the basis of a new path on retrieval research, now allowing for the effective use of captions in multiple-language scenarios.
multimodal learning, deep neural networks, language agnostic learning
3 - 31
Mining Opinions in the Electoral Domain based on social media
Jéssica Soares dos Santos, Aline Paes, Flavia Bernardini
Jéssica Soares dos Santos
Election polls are the de facto mechanisms to predict political outcomes. Traditionally, these polls are conducted based on a process that includes personal interviews and questionnaires. Taking into account that such a process is costly and time-demanding, many alternative approaches have been proposed to the traditional way of conducting election polls. In this research, we focus on the methods that use social media data to infer citizens’ votes. As the main contribution, this research presents the state-of-the-art of this area by comparing social media-based mechanisms to predict political outcomes taking into account the quantity of collected data, the specific social media used, the collection period, the algorithms adopted, among others. This comparison allows us to identify the main factors that should be considered when forecasting elections based on social media content and the main open issues and limitations of the strategies found in the literature. In brief, the main challenges that we have found include (but are not limited to): labeling data reliably during the short period of campaigns, absence of a robust methodology to collect and analyze data, and a lack of a pattern to evaluate the obtained results.
sentiment analysis, opinion mining, election outcomes
3 - 32
Object removal from complex videos using a few annotations
Thuc Trinh Le, Andrés Almansa, Yann Gousseau and Simon Masnou
Andres Almansa
"We present a system for the removal of objects from videos. As input, the system only needs a user to draw a few strokes on the first frame, roughly delimiting the objects to be removed. To the best of our knowledge, this is the first system allowing the semi-automatic removal of objects from videos with complex backgrounds. The key steps of our system are the following: after initialization, segmentation masks are first refined and then automatically propagated through the video. Missing regions are then synthesized using video inpainting techniques. Our system can deal with multiple, possibly crossing objects, with complex motions, and with dynamic textures. This results in a computational tool that can alleviate tedious manual operations for editing high-quality videos. More information here https://object-removal.telecom-paristech.fr/"
video inpainting, object removal, semantic segmentation
3 - 33
Persona-oriented approach to building a paraphrase corpus
Rossana Cunha, Adriana Pagano, Fabio Alves
Rossana Cunha
Paraphrasing involves intertwined perspectives from Linguistics and Natural Language Processing (NLP). In a broad sense, paraphrases are expressions that share approximately the same meaning. Several NLP tasks involve paraphrasing, such as paraphrase identification, text simplification, textual entailment, and semantic textual similarity. In this study, we explain how the use of personas benefited the compilation and alignment of a Brazilian Portuguese paraphrase corpus in the diabetes self-management education domain. Our main objective is to construct a paraphrase corpus that meets the needs of healthcare professionals, patients, and families. The corpus consists of pairs drawn from three groups of real users: (i) doctors/expert readers, (ii) nurses and healthcare assistants, and (iii) patients/lay readers. We combine Systemic Functional Theory (Halliday and Matthiessen, 2014) with semantics-based NLP approaches in order to recognize paraphrase relationships. Finally, a committee of domain experts (linguists and health professionals) evaluates these pairs of sentences in order to validate our approach. Our experiments show preliminary results on a monolingual corpus aligning expert, specialist, and lay discourse in the diabetes mellitus self-care domain.
natural language processing, persona, paraphrase corpus
3 - 34
PTb-Entropy: Leveraging Phase Transition of Topics for Event Detection in Social Media
Pedro H. Barros, Isadora Cardoso-Pereira, Hector Allende-Cid, Osvaldo A. Rosso and Heitor S. Ramos
Pedro Henrique Silva Souza Barros
Social media has gained increasing attention in recent years. It allows users to create and share information in an unprecedented way. Event detection in social media, such as Twitter, is related to the identification of the first story on a topic of interest. In this work, we propose a novel approach based on the observation that tweets are subject to a continuous phase transition when an event takes place, i.e., their underlying model changes. We propose a new method to detect events on Twitter based on calculating the entropy of the keywords extracted from the content of tweets in order to classify the most shared topic as an event or not. We present a theoretical rationale for the existence of phase transitions, as well as a characterization of phase transitions with synthetic models and with real data. We evaluated the performance of our approach using seven datasets and outperformed nine different techniques from the literature.
event detection, social media analysis, phase transition
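A worked example of the kind of entropy computation the abstract builds on: Shannon entropy of the keyword distribution within a time window, which drops when tweets concentrate on a single topic; the windows and any thresholding rule are purely illustrative:

```python
import math
from collections import Counter

def keyword_entropy(keywords):
    """Shannon entropy (in bits) of the empirical keyword distribution."""
    counts = Counter(keywords)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

calm_window = ["futbol", "lluvia", "musica", "trabajo", "cafe", "cine", "futbol"]
event_window = ["terremoto", "terremoto", "temblor", "terremoto", "ayuda",
                "terremoto", "terremoto"]

h_calm, h_event = keyword_entropy(calm_window), keyword_entropy(event_window)
print(f"calm: {h_calm:.2f} bits, event: {h_event:.2f} bits")
# a sharp entropy drop in consecutive windows could then flag a candidate event
```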
3 - 35
Relation extraction in Spanish radiology reports
Viviana Cotik, Javier Minces Müller
Viviana Cotik
"The number of digitized texts in the clinical domain has been growing steadily in the last years, due to the adoption of clinical information systems. Information extraction from clinical texts is essential to support clinical decisions and is important to improving health care. The scarcity of lexical resources and corpora, the informality of texts, the polysemy of terms and the abundance of non-standard abbreviations and acronyms, among others, difficult the task. For languages other than English, the challenges are usually more important. In this work, we present three different methods developed to perform relation extraction among clinical findings and anatomical entities in Spanish clinical reports: a baseline method based on co-occurrence of entities, a rule-based method and a work in progress based on convolutional neural networks. As data, we use a set of Spanish radiology reports, previously annotated by us. "
relation extraction, spanish radiology reports, bionlp
3 - 36
Revisiting Latent Semantic Analysis word-knowledge acquisition
Edgar Altszyler, Diego Fernandez-Slezak
Edgar Altszyler
"Latent Semantic Analysis (LSA) is one of the most widely used corpus-based methods for word meaning representation (word-embeddings). Landauer and Dumais published in 1997 the foundational work ``A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge''. In this paper, they compare the word-knowledge acquisition between LSA and that of children’s, and claimed that most of the knowledge acquired comes from indirect associations (high-order co-occurrences). To this day LSA continues to be intensively used in the computational psychology community as a plausible model of vocabulary acquisition. In this work, we revisit Landauer and Dumais (1997) experiments and discuss about some technical elements that call into question the presence of indirect learning processes in LSA word-knowledge acquisition. We support our discussion with new experiments that shed light on the matter"
latent semantic analysis, vocabulary acquisition, higher-order co-occurrence
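A minimal LSA sketch for readers less familiar with the method under discussion: a term-document count matrix is factorized with truncated SVD and word similarity is read off the low-rank term vectors; the tiny corpus is a placeholder, far too small to exhibit the indirect-association effects the abstract examines:

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the doctor treats the patient in the hospital",
    "the nurse helps the doctor at the hospital",
    "the teacher explains the lesson at the school",
    "the student studies the lesson for the school",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)        # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0)
term_vectors = svd.fit_transform(X.T)     # terms x latent dimensions

vocab = vectorizer.vocabulary_
def sim(w1, w2):
    return cosine_similarity(term_vectors[[vocab[w1]]],
                             term_vectors[[vocab[w2]]])[0, 0]

print("doctor ~ nurse:  ", round(sim("doctor", "nurse"), 3))
print("doctor ~ student:", round(sim("doctor", "student"), 3))
```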
3 - 37
Single image deblurring with convolutional neural networks
Guillermo Carbajal, Mauricio Delbracio, José Lezama, Pablo Musé
Guillermo Carbajal
Single image deblurring is a well-studied task in computer vision and image processing. Blur may be caused by camera shake, object motion, or defocus. In general, deblurring is a challenging, severely ill-posed inverse problem. When blur can be considered uniform across the image, traditional methods produce satisfactory results. However, in the more general non-uniform case, state-of-the-art deblurring methods are end-to-end trainable convolutional neural networks. These networks learn a nonlinear mapping from pairs of low-quality and high-quality images. Long-exposure blurry frames are generated by averaging consecutive short-exposure frames from videos captured by high-speed cameras, e.g. a GoPro Hero 4 Black. These generated frames are quite realistic, since they can simulate the complex camera shake and object motion that are common in real photographs. Although these methods produce impressive results in some cases, their performance remains irregular and case-dependent. In this project we investigate whether it is possible to improve network performance by incorporating prior knowledge into the training process.
image deblurring, restoration, cnn
3 - 38
Syntactic Analysis and Semantic Role Labeling for Spanish using Neural Networks
Luis Chiruzzo and Dina Wonsever
Luis Chiruzzo
We developed a neural network architecture for parsing Spanish sentences using a feature-based grammar. The architecture consists of several LSTM neural network models that produce syntactic analysis and semantic role labeling: they first determine how to split a sentence into a binary tree, then assign the rule to be applied to each pair of branches, and finally determine the argument structure of the resulting segments. We analyze two variants of this architecture and conclude that merging the split and rule-identification models yields better results than training them separately. We train and evaluate these models on two Spanish corpora: the AnCora corpus and the IULA treebank.
parsing, spanish, lstm
3 - 39
The Encoder-Decoder Model Applied to Brazilian-Portuguese Verbal Irregularities
Beatriz Albiero
BEATRIZ ALBIERO
"Inspired by the controversial debate about the acquisition of irregular verbs in the English language, this research aims to study the inflection process of irregular verbs in the Brazilian Portuguese language through the use of the Encoder-Decoder model. To do this, we propose the task of predicting an inflected verbal form given a primary form (Stem + Thematic Vowel). To do this, we built a corpus that consisted of 423 verbs that were marked as belonging to either regular (51%) or irregular (49%) groups. Moreover, within the scope of irregular verbs, it was possible to identify 15 subgroups through the identification of inflection patterns. We also built a specific phonetic notation so that verbs could be associated with new representations that included information related to the phonetic features present. Thus, the proposed model attempts to predict inflected forms by identifying the phonetic relationships involved during the inflection process. The model was submitted to multiple trainings and tests and presented an average accuracy of 13.55%. Considering the segmentation between regular and irregular verbs, the model performed better among the regular class (17.88% vs 9.23%). "
computational linguistics, phonetics, connectionism
3 - 40
Towards goal-oriented dialog systems that consider their visual context
Luciana Benotti and Mauricio Mazuecos
Mauricio Diego Mazuecos Perez
Research in deep neural networks has made great progress in the area of computer vision in the last decade. There are preliminary works that make use of these advances to allow a dialogue system to talk about what is "observed" in an image. So far these systems are usually limited to answering questions about the image. In this project we investigate the generation of goal-directed questions that refer to the visual context of an image. We analyze how the visual context contributes to disambiguating the use of situated language. Finally, we model how the goal of the task influences the “salience” of the visual context and how the visual context restricts the range of possible clarification requests in a dialog.
natural language question generation, visually grounded dialog, reward shaping for reinforcement learning
3 - 41
Unraveling Antonym’s Word Vectors through a Siamese-like Network
Mathias Etcheverry and Dina Wonsever
Mathias Etcheverry
Discriminating antonyms from synonyms is an important NLP task, made difficult by the fact that antonyms and synonyms carry similar distributional information. Consequently, pairs of antonyms and synonyms may have similar word vectors. We present an approach to unravel antonymy and synonymy from word vectors based on a siamese-network-inspired model. The model consists of a two-phase training of the same base network: a pre-training phase under a siamese model supervised by synonyms, and a training phase on antonyms through a siamese-like model that supports the antitransitivity present in antonymy. The approach makes use of the claim that the antonyms of a given word tend to be synonyms of each other. We show that our approach outperforms distributional and pattern-based approaches, relying on a simple feedforward network as the base network of both training phases.
word embeddings, synonym/antonym detection, siamese/parasiamese neural network
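A minimal sketch of one possible reading of the two-phase idea above: a shared base network is pre-trained siamese-style so synonym pairs score high, and antonym pairs are then scored by applying the base network twice on one branch; the dimensions, losses, single combined training loop, and the double-application detail are assumptions, not necessarily the authors' exact model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BaseNet(nn.Module):
    """Shared transformation applied on top of pre-trained word vectors."""
    def __init__(self, dim=50):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return self.net(x)

def siamese_score(f, u, v):        # synonym phase: plain siamese similarity
    return F.cosine_similarity(f(u), f(v), dim=-1)

def parasiamese_score(f, u, v):    # antonym phase: apply f twice on one branch
    return F.cosine_similarity(f(f(u)), f(v), dim=-1)

f = BaseNet()
optimizer = torch.optim.Adam(f.parameters(), lr=1e-3)

# placeholder "word vectors" for a synonym pair and an antonym pair
syn_u, syn_v = torch.randn(1, 50), torch.randn(1, 50)
ant_u, ant_v = torch.randn(1, 50), torch.randn(1, 50)

for _ in range(100):
    loss = (1 - siamese_score(f, syn_u, syn_v)).mean() \
         + (1 - parasiamese_score(f, ant_u, ant_v)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("synonym score:", float(siamese_score(f, syn_u, syn_v)))
print("antonym score:", float(parasiamese_score(f, ant_u, ant_v)))
```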
3 - 42
Unsupervised anomaly detection in 2D radiographs using generative models
"Laura Estacio-Cerquin, Moritz Ehlke, Alexander Tack, Stefan Zachow, Hans Lamecker, Rensso Mora-Colque"
Laura Jovani Estacio Cerquin
"Anomaly detection in medical images plays an important role in the development of biomedical applications. One interesting example is computed-aided diagnosis tools which are used in clinical routines process for detecting and diagnosis pathologies, disease evaluations, and treatment planning. This kind of application requires manual identification of anomalies by experts which is tedious, prone to error and time-consuming task. Therefore, this research field still poses highly challenging identification problems, which will be addressed in this research. The fundamental hypothesis is that manual identification problems could be solved using unsupervised methods in order to require minimal interaction by medical experts. We focus on the prosthesis and foreign objects identification located in the pelvic bone using X-ray images. The main idea is to use generative models such as convolutional autoencoders and variational autoencoders to reproduce X-rays without anomalies. Thereby, if a new X-ray image has an anomaly by subtraction between the input image and the reconstructed image we will be able to identify it. Preliminary results show good performance in the anomaly detection process."
anomaly detection, generative models, medical images.
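A minimal sketch of the reconstruction-based detection described above: a small convolutional autoencoder is trained on anomaly-free images, and the pixel-wise difference between an input and its reconstruction serves as the anomaly map; the architecture and random "images" are placeholders:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder for reconstructing 'normal' radiographs."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

normal_xrays = torch.rand(8, 1, 64, 64)   # placeholder anomaly-free images
for _ in range(5):                        # train to reconstruct normal anatomy
    recon = model(normal_xrays)
    loss = nn.functional.mse_loss(recon, normal_xrays)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

test_image = torch.rand(1, 1, 64, 64)     # image that may contain an implant
anomaly_map = (test_image - model(test_image)).abs()  # large where reconstruction fails
print("max anomaly score:", float(anomaly_map.max()))
```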
3 - 43
Using Contextualized Word Embeddings to detect Hate Speech in Social Media
Juan Manuel Pérez, Franco Luque, Agustín Gravano
Juan Manuel Pérez
" Hate speech (also known as cyber bullying) is a pervasive phenomenon on the Internet. Racist and sexist discourse are a constant in Social Media, with peaks documented after “trigger” events, such as murders with religious or political reasons, or other events related to the affected groups. Interventions against this phenomenon (such as Reddit's ban on 2015) have been proved effective to restrain its proliferation. Due to the amount of content generated in Social Media, automatic tools are crucial to reduce human effort in the detection of abusive speech. In this work we present a classifier of hate speech based on recurrent neural networks and contextualized word-embeddings. We use data from a recent competition (HatEval @ SemEval 2019) achieving slightly better results in Spanish. Moreover, we analyze the behaviour of our neural model trying to understand where it is failing to detect hate speech. "
nlp, hate speech, contextualized embeddings
3 - 44
Video Segmentation with Complex Networks
Josimar Chire Saire
Josimar Chire
"Nowadays, the quantity of multimedia files(images, videos, audio, etc.) is increasing everyday. Then, It is necessary to analyze if there is one issue related to find. Focus on video camera surveillance, the quantity of cameras in the cities has exploded, after of some crime usually people goes to video records to find some evidence related to the crime. Complex Networks is an approach to analyze phenomenons considering the inner relationships represented by graphs. The objective of this work is combine techniques from Image Processing with Complex Networks and Machine Learning(K-Means) to analyze surveillance video and perform automatic segmentation to a posterior analysis. The initial performed experiments shows the capacity of automatic segmentation using Complex Network representation."
complex networks, machine learning, video segmentation
3 - 45
Winograd Schemas in Portuguese
Gabriela S. de Melo, Vinicius A. Imaizumi, Fabio G. Cozman
Gabriela Souza de Melo
"The Winograd Schema Challenge has become a common benchmark for question answering and natural language processing. The original set of Winograd Schemas was created in English; in order to stimulate the development of Natural Language Processing in Portuguese, we have developed a set of Winograd Schemas in Portuguese. We have also adapted solutions proposed for the English-based version of the challenge so as to have an initial baseline for its Portuguese-based version; to do so, we created a language model for Portuguese based on a set of Wikipedia documents."
winograd schema challenge, natural language processing, deep learning
3 - 46
Learning the operation of energy storage systems from real trajectories of demand and renewables
Agustin Castellano, Juan Andrés Bazerque
Agustin Castellano
Storage systems at the grid level have the potential to increase power system performance in many aspects, including energy arbitrage, frequency stabilization, and stable island operation. When grid operators plan investments for expansion, they must compare these benefits to the cost of installing massive storage. We take on this question by analyzing the potential savings from energy arbitrage, focusing on a single-bus model of the grid that includes storage, fuel- and hydro-based generators, and renewables. A storage dispatch policy is optimized via the Q-learning algorithm under a cyclostationary model of the random variables. Our algorithm starts with no prior knowledge of the system and progressively learns to take actions that account for the expected future state of the system. The learning agent is trained with real trajectories of demand and renewables collected from the Uruguayan power system over three years, and with a fitted cost that accounts for the actual aggregated price of energy in the Uruguayan generation market. The learned policy operates the storage system at lower operational cost compared to a grid with no batteries.
q-learning, reinforcement learning, energy storage systems
3 - 47
The use of computer vision for soil analysis from pfeiffer chromatographs
Nathália Ferreira de Figueiredo, Wallinson Deives Batista Lima, Liomar Renner de Araújo Rabelo, Oderlan Freire de Sousa, João Vitor de Araújo Rocha
Nathália Ferreira de Figueiredo
The SharinAgro project was created to provide assistance to CSAs. CSA stands for Community Supported Agriculture, which is a worldwide social way of producing food that puts farmers and consumers in direct contact. In a CSA, consumers become supporters of farmers, as if owning a piece of the farm, contributing a monthly or weekly fee to help with the cost of food production. The CSA then produces baskets of organic produce, with fruits, vegetables, honey, and other foods of natural origin at a cheaper price, and also establishes a trusting relationship with producers. With this in mind, we want to use technology to create software that helps provide financial and administrative control to CSAs. Through SharinAgro's application, users will have access to functionality such as the organization of human resources and the control of operational expenses. In this way we help CSAs gain more financial security against harvest losses caused by pests, harsh weather, and other factors. On top of that, we propose a feature for soil health analysis using Pfeiffer's chromatography. Using machine learning, we aim to give the user correct predictions for agricultural management purposes.
computer vision, pfeiffer chromatography, soil analysis
3 - 48
MedGenie
Quanam Data & Analytics
Quanam
A collaborative tool for physicians to streamline access to the latest medical science findings. Based on the paper: https://www.sciencedirect.com/science/article/pii/S2352914819300309?via%3Dihub
health records, natural language processing, medical terminology

Information for authors

Each poster has an ID: [poster session] – [poster #]. Each panel in the poster area has a number that should match [poster #]. You can start setting up your poster anytime in the morning.
There will be poster putty available for free at the reception to stick the posters to the panels.
If you need to print your poster, there are some printing shops close to the venue:
  • MVD Soluciones Gráficas – J. Herrera y Reissig 584 (opposite the venue).
  • Grupo D3 – Br. España 2294.
  • Copiplan – 21 de setiembre 2699.