Poster Sessions 2023

The Best Poster Contest is now open.

Each participant may choose up to three posters from both Poster Session 1 and Poster Session 2.

To vote, access the Khipu 2023 Voting Form and follow the instructions.

1 - 1
A deep learning approach to brain vessel segmentation in 3DRA with arteriovenous malformations
Camila García, Yibin Fang, Jianmin Liu, Ana Paula Narata, José Ignacio Orlando and Ignacio Larrabide
Camila García
Segmentation of brain arteriovenous malformations (bAVMs) in 3D rotational angiographies (3DRA) is still an open problem in the literature, with high relevance for clinical practice. While deep learning models have been applied for segmenting the brain vasculature in these images, they have never been used in cases with bAVMs. This is likely due to the difficulty of obtaining sufficient annotated data to train these approaches. Our work introduces a first deep learning model for blood vessel segmentation in 3DRA images of patients with bAVMs. We densely annotated five 3DRA volumes to train two alternative 3DUNet-based architectures with different segmentation objectives. Our results show that the networks reach a comprehensive coverage of relevant structures for bAVM analysis, much better than what is obtained using standard methods. This is promising for achieving a better topological and morphological characterisation of the bAVM structures of interest. Furthermore, the models are able to segment venous structures even when these are missing in the ground truth labelling, which is relevant for planning interventional treatments. Ultimately, these results could be used as more reliable initial guesses, alleviating the cumbersome task of creating manual labels. (Based on a published paper.)
3dunet, brain vessel segmentation, biomedical imaging
1 - 2
Feminist AI: automation tools towards a feminist judicial reform in Latin America
Ivana Feldfeber, Yasmín Quiroga, Marianela Ciolfi Felice
Ivana Feldfeber
AymurAI is a software tool based on Artificial Intelligence that aims to help criminal courts in Latin America collect gender-based violence data and make it available to the general public. AymurAI was specifically developed to identify important information in court rulings and then build an open dataset to aid transparency in the judiciary. This program will help the construction of data and statistics on gender-based violence.
Gender data, data collection, nlp
1 - 3
Adversarial perturbations on ultrasound deep beamformers
Itamar Salazar; Roberto Lavarello
Itamar Franco Salazar Reque
As deep neural network-based ultrasound beamformers become increasingly common, evaluating their robustness is becoming increasingly important. Robustness can be determined by the sensitivity of the beamformer to discrepancies in its underlying assumptions. However, it is challenging to examine the deep beamformer's worst-case performance using this method. In this study, we examine the robustness of a deep beamformer by analyzing the impact of adversarial perturbations on its reconstructions. We use a deep beamformer from previous literature that generates both B-mode and segmented images from raw channel data and compute adversarial perturbations. Our results show that in the worst-case scenario, adversarial perturbations severely distort the deep beamformer reconstructions, whereas the delay-and-sum beamformer remains relatively unchanged. The PSNR, contrast, and dice similarity score of the deep beamformer outputs changed from +20 dB to +5 dB, from -40 dB to +10 dB, and from about 1.0 to almost 0.0, respectively.
adversarial perturbations; ultrasound beamforming; dnn
1 - 4
Analysis of fairness metrics for anonymization methods in the context of Electronic Health Records
Mariela Rajngewerc, Laura Acion, Laura Alonso Alemany
Mariela Rajngewerc
Classical metrics to evaluate machine learning models are usually aggregates that provide no insight into the differential behavior of the model with respect to certain subgroups, which is usually known as bias. When working with models that will affect human beings, the impact of bias must be assessed to detect, mitigate, or even prevent possible harm. Several fairness metrics have been defined in the literature. In some cases, if a metric adequately represents a relevant aspect of the behavior of the model, some other metrics may be irrelevant. Different problems may require different perspectives and different definitions of bias. In this work, we show the strengths and limitations of different metrics, illustrating them as applied to the bias analysis of anonymization algorithms for Electronic Health Records (EHR). These algorithms take a set of sentences and eliminate any sensitive data they may contain (names, surnames, identification numbers, etc.). If these algorithms make systematic errors over a specific group of society, that group may be exposed and their privacy may be violated. We show how different fairness metrics highlight certain aspects of the behavior of these algorithms while obscuring others.
fairness metrics, bias, electronic health records
1 - 5
Artificial Intelligence and Autism Spectrum Disorder: the discriminatory potential
Isadora Valadares Assunção
Isadora Valadares Assunção
The present article is an integrative literature review aimed at evidencing the discriminatory potential of AI diagnostic and therapeutic applications for people with disabilities, especially people with Autism Spectrum Disorder (ASD). Apart from the discriminatory risks, these systems also contribute to the dehumanization of people with ASD by removing their autonomy and agency, which makes AI ethics insufficient for mitigating the risks of algorithmic discrimination. Technical approaches to risk mitigation are also unsatisfactory because of the diversity of ASD presentations, as it is an intrinsically non-observable and non-measurable characteristic. Thus, increasing the representation of people with ASD during the development of these tools is necessary, from a perspective of designing with people with autism, not for them.
autism spectrum disorder; social discrimination; artificial intelligence
1 - 6
Automatic multi-modal processing of language and vision to assist people with visual impairments.
Hernán Javier Maina and Luciana Benotti
Hernán Javier Maina
In recent years, the study of visual question answering (VQA) models has gained significant appeal due to its great potential in assistive applications for people with visual disabilities. Despite this, to date, many of the existing VQA models are not applicable to this goal for at least three reasons. First, they are designed to respond to a single question; that is, they are not able to give feedback on incomplete or incremental questions. Second, they only consider a single image which is neither blurred, nor poorly focused, nor poorly framed. Finally, these users frequently need to read text captured in the images, and most current VQA systems fall short in this task. This proposal presents four lines of research to adapt and extend existing VQA systems so they can assist these users in their daily tasks. We propose the integration of dialogue history, the analysis of more than one input image, and the inclusion of text recognition capabilities in the models.
assistive technologies, human-computer interaction, multimodal processing
1 - 7
Comparison of Self Supervised Representations for Mispronunciation Detection
Jazmín Vidal, Pablo Riera, Luciana Ferrer
Jazmin Vidal
1 - 8
Digital Holographic Microscopy based on Physics-guided Deep Learning
Juan Llaguno, Federico Lecumberry, Julia Alonso
Juan Manuel Llaguno Jaime
Holography was invented by Gabor in 1948 and consists of recording the interference pattern, called a hologram, generated by light coming from an object and a reference beam. Traditionally the hologram was recorded in a physical medium, but as technology advanced it became possible to record it with a camera sensor (Digital Holography, DH). The reconstruction of the digitally captured hologram is done computationally using light wave theory and signal processing. Nevertheless, these calculations have some common problems that need to be addressed: hologram generation and diffraction have high computational complexity, and image quality is limited by speckle noise and optical aberrations. It is well known that Deep Learning (DL) has achieved impressive results in image processing, microscopy, and many other fields. DL has been applied to DH in several different ways, such as depth estimation, phase unwrapping, and direct reconstruction (both supervised and unsupervised learning). Physics-guided DL consists of conditioning the loss function of the DL model so that some physical property is satisfied, thus achieving results consistent with the underlying physical laws. In this work I propose to apply physics-guided DL to Digital Holographic Microscopy (DHM).
image processing, digital holography, optics
1 - 9
Demographically-Informed Prediction Discrepancy Index (DIPDI): Early Warnings for Biases in Unlabeled Populations
Lucas Mansilla, Estanislao Claucich, Rodrigo Echeveste, Diego H. Milone, Enzo Ferrante
Lucas Mansilla
An ever-growing body of work has shown that machine learning systems can be systematically biased against certain sub-populations. Data imbalance and under-representation in the training datasets have been identified as potential causes behind this phenomenon. However, understanding whether data imbalance may result in biases for a given task and model class is not simple. A typical approach to answering this question is to perform counterfactual experiments in a controlled scenario, where several models are trained with different imbalance ratios and then evaluated on the target population. However, in the absence of ground-truth annotations at deployment for a new target population, most fairness metrics cannot be computed. In this work, we explore an alternative method based on the output discrepancy of pools of models trained on different demographic groups. Our hypothesis is that the output consistency between models may serve as a proxy to anticipate biases. We formulate the Demographically-Informed Prediction Discrepancy Index (DIPDI) and validate our hypothesis using both synthetic and real-world datasets. Our work sheds light on the relationship between model output discrepancy and demographic biases, and provides a means to anticipate fairness issues in the absence of ground-truth annotations.
fairness, discrepancy, biases
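The exact DIPDI formulation is in the paper; as an illustration of the underlying idea only, here is a minimal sketch (the function name and the toy threshold "models" are hypothetical, not the authors' API) that uses the disagreement rate between models trained on different demographic groups as a proxy signal on unlabeled data:

```python
# Sketch of the core idea: train one model per demographic group, then
# use the rate at which those models disagree on unlabeled data as an
# early warning for potential bias (no ground-truth labels needed).

def prediction_discrepancy(models, unlabeled_data):
    """Fraction of samples on which the group-specific models disagree."""
    disagreements = 0
    for x in unlabeled_data:
        preds = {m(x) for m in models}  # each model maps a sample to a label
        if len(preds) > 1:
            disagreements += 1
    return disagreements / len(unlabeled_data)

# Toy demo with two hand-made threshold "classifiers":
model_a = lambda x: int(x > 0.5)   # stands in for a model trained on group A
model_b = lambda x: int(x > 0.7)   # stands in for a model trained on group B
data = [0.1, 0.4, 0.6, 0.65, 0.8, 0.95]

print(prediction_discrepancy([model_a, model_b], data))  # 2 of 6 disagree
```

High discrepancy flags regions of the input space where group membership changes the prediction, which is the consistency signal the poster proposes as a bias proxy.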
1 - 10
Exploring concepts for building a Spanish-speaking automated Game Master🎲
Santiago Góngora, Luis Chiruzzo, Gonzalo Méndez & Pablo Gervás
Santiago Góngora
Tabletop role-playing games bring two or more players together to collaboratively create a story. One of these players is the Game Master (GM), who is in charge of creating the narrative background, the non-playable characters the human players meet, and the challenges they face. A GM is not only a storyteller, but also a judge and a guide for the rest of the human players. Automating a GM is one of the next challenges for NLP and AI, due to its complexity in dialogue, language, and creativity. In this poster we present, as a work in progress (MSc thesis), a set of concepts to be explored in order to obtain a Spanish-speaking GM.
computational creativity, natural language processing, role playing games
1 - 11
Exploring the potential of soundscape deep embeddings in identifying tropical dry forest transformation level
Andrés Eduardo Castro-Ospina (Instituto Tecnológico Metropolitano - ITM), Paula Andrea Rodríguez-Marin (Instituto Tecnológico Metropolitano - ITM), Susana Rodríguez-Buriticá (Instituto de Investigación de Recursos Biológicos Alexander von Humboldt), Juan David Martínez-Vargas (Universidad EAFIT)
Andrés Eduardo Castro Ospina
Passive Acoustic Monitoring has emerged as a cost-effective and non-invasive method for evaluating ecosystems. This study explores the potential of deep embeddings generated by pre-trained convolutional neural networks for determining the transformation level of tropical dry forests based on their soundscape. Three sets of data were recorded between 2015 and 2017 and expertly labeled according to the level of transformation. We characterized each soundscape using traditional acoustic indices and high-level features extracted by the pre-trained neural network. We then propose 24-hour temporal blocks processed with 1D convolutions, which aggregate dynamic information from each feature. We found that these temporal blocks exploit acoustic dynamics to accurately identify the transformation level of tropical dry forests from the high-level features extracted by the convolutional neural network. In conclusion, our findings demonstrate the usefulness of high-level features extracted by convolutional neural networks as an alternative to traditional acoustic indices in evaluating ecosystems.
passive acoustic monitoring, soundscape, convolutional neural networks
1 - 12
Graph Coloring: A Reinforcement Learning Approach
David Corredor Montenegro, Nicolás A. Castro P, Juan Manuel Perez, Mauricio Velasco
David Corredor
The Graph Coloring Problem is one of the most relevant combinatorial optimization problems. It consists of finding the minimum number of colors that can be assigned to the vertices of a graph in such a way that no edge connects two vertices of the same color. This problem has a large number of applications: it can be used to solve planning problems, for example the design of final exam schedules at a university or the design of sports tournaments. Finding the optimal solution can be very difficult, since the problem is NP-hard. In practice, suboptimal solutions are constructed using different heuristics, each with its advantages and disadvantages. In this work we use reinforcement learning techniques to build a new rollout-based graph coloring algorithm. Our main contribution is the use of lower bounds for the chromatic number of a quotient graph to quickly reduce the size of the explored tree. Using these ideas we have built a quite efficient algorithm (which we call ROGC) capable of coloring graphs of significant size.
graph coloring, reinforcement learning, rollout
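Rollout-based coloring algorithms like the one in this poster improve on simple base heuristics. As background, here is a minimal sketch of the classic greedy heuristic such methods typically start from (names are illustrative; this is not the authors' ROGC algorithm):

```python
# Greedy graph coloring: assign each vertex the smallest color not
# already used by its neighbors. Fast, but generally suboptimal, which
# is the gap rollout-based methods try to close.

def greedy_coloring(adjacency):
    """adjacency: dict vertex -> set of neighbors.
    Returns dict vertex -> color (colors are 0, 1, 2, ...)."""
    colors = {}
    for v in adjacency:                       # fixed vertex order
        taken = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in taken:                     # smallest free color
            c += 1
        colors[v] = c
    return colors

# A 4-cycle is 2-colorable, and greedy finds that here:
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(greedy_coloring(cycle4))  # {0: 0, 1: 1, 2: 0, 3: 1}
```

The number of colors greedy uses depends heavily on the vertex order, which is one reason lookahead schemes such as rollout can do better.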
1 - 13
Graph Coloring: A Reinforcement Learning Approach
Nicolás Castro Pulido; David Corredor M; Mauricio Velasco
Nicolas Andres Castro Pulido
Several scheduling problems, as well as the solving of Sudokus, can be modelled as a weighted shortest path problem, which can be solved using Bellman's dynamic programming principle. Recall that this principle defines a value function J, which is very difficult to calculate when the tree associated with the dynamic system is large. To address this problem, we propose a reinforcement learning approach. More concretely, we construct a method named RALB (Rollout Algorithm with Lower Bound). This method approximates the value function J by a family of heuristics, constructed using the spectral bounds of partially colored graphs as well as Hermitian semidefinite programming.
reinforcement learning, graph coloring, optimization
1 - 14
How Do Reddit Users React to Articles From Partisan Media and Misinformation?: A Causal Perspective
Federico Albanese; Siqi Wu; Lu Cheng
Federico Albanese
This work studies, from a causal perspective, the collective user engagement of different political communities toward news articles with different political positions and levels of factualness. By conducting a causal analysis on a curated Reddit dataset, we find that an opposite political position of the article tends to cause a significant decrease in community engagement (e.g., upvotes) compared to what the article would have received if it had the same political position. This phenomenon is known as the echo chamber effect, and we find that it is more prominent in right-leaning subreddits than in left-leaning ones, suggesting an ideological asymmetry. Moreover, we investigate the causal relation between media credibility and community engagement. To our surprise, we find that users tend to upvote articles published by low-credibility media sources more than those by factual sources.
social media, misinformation, causal analysis
1 - 15
HybridGNet: Leveraging graph-based representations of organs for anatomically plausible medical image segmentation
Nicolás Gaggion, Lucas Mansilla, Candelaria Mosquera, Diego H. Milone, Enzo Ferrante
Rafael Nicolás Gaggion Zulpo
This poster presents HybridGNet, a novel encoder-decoder neural architecture that utilizes standard convolutions for image feature encoding and graph convolutional neural networks (GCNNs) to decode plausible representations of anatomical structures. Unlike standard fully convolutional neural networks trained with pixel-level losses, which assume pixels to be independent of each other and thus ignore topological errors and anatomical inconsistencies, HybridGNet incorporates anatomical constraints into the model by construction. We show that this approach produces anatomically plausible segmentation results in challenging scenarios, such as chest X-ray segmentation in the context of multi-center datasets with heterogeneous labels, and 3D cardiac mesh reconstruction from MRI.
graph neural networks, medical image segmentation, generative models
1 - 16
Ischemic stroke infarct segmentation from non-contrast CT using self-supervised learning and convolutional neural networks.
Seia, Joaquin Oscar; De La Rosa, Ezequiel; Sima, Diana; Robben, David
Joaquin Oscar Seia
Acute ischemic stroke is a medical emergency caused by the obstruction of a brain artery that impedes proper irrigation of the brain tissue, ultimately causing its death. The longer the time since the occlusion, the higher the probability of permanent neurological damage and even death of the patient. In clinical practice, the extent of the irreversible injury (the infarct core) is used to decide on the patient's treatment. Magnetic resonance and perfusion images are the gold-standard modalities for segmenting the lesion core and determining its volume. However, in recent years several attempts have been made to use non-contrast CT (NCCT), a much noisier but also faster and more accessible modality, for the same purpose. State-of-the-art solutions for this problem use convolutional neural networks (CNNs) that leverage not only the NCCT images but also other contextual priors, such as asymmetry maps or general anatomical information, to provide a better segmentation. Here we present preliminary results on the use of self-supervised learning techniques to condition the CNN segmentation models to focus on this relevant information instead of providing it as an additional input to the model.
segmentation, stroke, self-supervised learning
1 - 17
Learning to cluster urban areas: two competitive approaches and an empirical validation
Camila Vera, Francesca Lucchini, Naim Bro, Marcelo Mendoza, Hans Lobel, Felipe Gutierrez, Jan Dimter, Gabriel Cuchacovic, Axel Reyes, Hernán Valdivieso, Nicolás Alvarado
Camila Fernanda Vera Villa
Urban clustering detects geographical units that are internally homogeneous and distinct from their surroundings. It has applications in urban planning, but few studies compare the effectiveness of different methods. We study two techniques that represent two families of urban clustering algorithms: Gaussian Mixture Models (GMMs), which operate on spatially distributed data, and Deep Modularity Networks (DMONs), which work on attributed graphs of proximal nodes. To explore the strengths and limitations of these techniques, we studied their parametric sensitivity under different conditions, considering the spatial resolution, granularity of representation, and the number of descriptive attributes, among other relevant factors. To validate the methods, we asked residents of Santiago, Chile, to respond to a survey comparing city clustering solutions produced using the different methods. Our study shows that DMON is slightly preferred over GMM and that social features seem to be the most important ones to cluster urban areas.
urban clustering, graph neural networks, gaussian mixture models
1 - 18
Machine Learning and Bioinformatics Research in the Bioinformatics and Machine Learning Group (BioMal/DC/UFSCar)
Ricardo Cerri
Ricardo Cerri
This poster presents some of the research topics and projects of the Machine Learning and Bioinformatics Group at the Federal University of São Carlos.
bioinformatics, machine learning, multi-output learning
1 - 19
Measuring the complexity in semantic matching: a new dataset in news
Carlos Muñoz-Castro; Maria Apolo; Maximiliano Ojeda; Hans Löbel; Marcelo Mendoza
Carlos José Muñoz Castro
Accelerated technological growth and the rapid adoption of digitization have made it easy to recover a large volume of data in a short time. This has enabled the development of several areas of Artificial Intelligence (AI), including Natural Language Processing (NLP). Despite this, the flexibility of language brings a set of difficulties: ambiguity, irony, sarcasm, etc. This leads to the following hypothesis for one of the classic NLP tasks: a new dataset that explores levels of polysemy and lexical similarity at the sentence level in the semantic matching task will make it possible to evaluate and identify potential difficulties of state-of-the-art models. To verify this premise, this work addresses two stages: the creation of a new dataset and the evaluation of state-of-the-art models. The dataset is built from short texts of news headlines extracted from Twitter and Facebook-CrowdTangle, bringing together the main news media in the USA from 2019 to 2022. The evaluation and analysis of results are based on state-of-the-art models obtained by fine-tuning BERT and SentenceBERT.
natural language processing; semantic matching; sentence similarity; polysemy; deep learning; lexical relationship
1 - 20
Modeling global plankton communities via multinomics & AI approaches
Luis Valenzuela, Luis Martí, Nayat Sanchez-Pi
Luis Valenzuela Villa
Climate change is damaging both environmental and human health, so it is urgent to adopt strategies to mitigate its harmful consequences. The oceans play a key role against climate change, acting as a reservoir of CO2 through the biological carbon pump, a suite of processes that sequester carbon from the surface layer to the deep ocean. This is mainly driven by plankton communities due to their vast metabolic diversity, which influences global biogeochemical cycles, food webs, and climate regulation. In the context of the Inria challenge project OcéanIA, which aims to develop AI modeling tools to better understand the ocean's capacity to mitigate climate change and how to protect it, we employed the planktonic metagenomes and metatranscriptomes sequenced over the last decade by the Tara Oceans expeditions to characterize the structure and functioning of global plankton communities and their relationship with the environment. By using topological approaches such as persistent homology and UMAP models combined with neural networks and tree ensembles, i) we characterized subpopulations of global plankton communities at different depths, ii) we trained models to infer plankton genomic composition from environmental features, and iii) we simulated changes in the global plankton communities under different future global warming scenarios.
deep learning, plankton, climate change
1 - 21
Noisesniffer: image forgery detection based on noise inspection
Marina Gardella, Pablo Musé, Miguel Colom, Jean-Michel Morel
Marina Gardella
image forensics, automatic forgery detection, noise residual
1 - 22
Phones vs Speakers: Self-supervised Speech Representation Analysis
Pablo Riera, Manuela Cerdeiro, Leonardo Pepino, Luciana Ferrer
Pablo Riera
Self-supervised representations of speech are currently being widely used for a large number of applications. Recently, some efforts have been made in trying to analyze the type of information present in each of these representations. Most such work uses downstream models to test whether the representations can be successfully used for a specific task. The downstream models, though, might distort the structure of the representation, extracting information that may not have been readily available in the original representation. In this work, we analyze several state-of-the-art speech representations using methods that do not require a downstream model. We use representation similarity analysis and statistical techniques to measure how the representations organize the information in the signal. We measure how different layers encode basic acoustic parameters such as formants and pitch. Further, we study the extent to which each representation clusters the speech samples by phone or speaker classes. Overall, our results indicate that models tend to represent speech attributes differently depending on the target task used during pretraining.
self-supervised, speech representation, explainability
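Representation similarity analysis, one of the downstream-model-free methods mentioned in the abstract, can be sketched in a toy form (this is the generic technique, not the authors' pipeline): build a pairwise dissimilarity structure for each representation, then correlate the two.

```python
# Toy representation similarity analysis (RSA): two representations are
# similar if the samples they consider close/far match, regardless of
# the coordinate systems the representations live in.

import math

def dissimilarity_matrix(reps):
    """Flattened upper triangle of pairwise Euclidean distances."""
    out = []
    for i in range(len(reps)):
        for j in range(i + 1, len(reps)):
            out.append(math.dist(reps[i], reps[j]))
    return out

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Two toy "layers": the second is a scaled copy of the first, so their
# distance structure is identical and the RSA score is 1.0.
layer1 = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
layer2 = [[0.0, 0.0], [3.0, 0.0], [0.0, 6.0]]
rsa = pearson(dissimilarity_matrix(layer1), dissimilarity_matrix(layer2))
print(round(rsa, 6))  # 1.0
```

Because only distance structure is compared, no downstream probe is trained, which is exactly the property the abstract highlights.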
1 - 23
Physics-Informed Dynamic Graph Neural Networks for Ocean Modeling
Caio F. D. Netto, Marcel R. de Barros, Jefferson F. Coelho, Felipe M. Moreno, Lucas P. de Freitas, Marlon S. Mathias, Fábio G. Cozman, Marcelo Dottori, Edson S. Gomi, Eduardo A. Tannuri, Anna H. R. Costa
Caio Fabricio Deberaldini Netto
Forecasts of ocean dynamic variables are essential to ensure safe operations at sea and in coastal regions. However, such forecasts must handle multiple scales and repetitions in the data, as well as noise caused by sensor malfunction. We describe a data-driven approach to predict oceanic variables under those circumstances; as a case study, we take the prediction of water current velocity and sea surface height in an estuarine system on the southeastern coast of Brazil. We propose a generic method that can be applied to various practical cases with little to no adaptation, using a Graph Neural Network to model the system dynamics. We provide evidence that our method produces robust forecasts. It employs forecast data from the state-of-the-art physics-based model “Santos Operational Forecasting System” (SOFS). The approach has lower computational costs and requires almost no domain-specific knowledge. We compare our model with SOFS and ARIMA-like forecast models in experiments.
dynamic graph neural networks, physics-informed machine learning, ocean modeling
1 - 24
PredGenIA: Transformers for genomic prediction
Graciana Castro, Romina Hoffman, Mateo Musitelli
Graciana Maria Castro Olmedo
To this day, Transformers are most commonly used for Natural Language Processing. But what if they could work with data that has nothing to do with language? What if our genotype could speak to attention models the same way language does? Would it be possible to predict a certain phenotype?
transformers, genomic prediction, self-attention
1 - 25
Predicting target–ligand interactions with graph convolutional networks for interpretable pharmaceutical discovery
Paola Ruiz Puentes, Laura Rueda-Gensini, Natalia Valderrama, Isabela Hernández, Cristina González, Laura Daza, Carolina Muñoz-Camargo, Juan C Cruz, Pablo Arbeláez
Paola Ruiz Puentes
Drug discovery is an active research area that demands great investment and generates low returns due to its inherent complexity and high costs. To identify potential therapeutic candidates more effectively, we propose the protein–ligand with adversarial augmentations network (PLA‐Net), a deep learning‐based approach to predict target–ligand interactions. PLA‐Net consists of a two‐module deep graph convolutional network that considers ligands’ and targets’ most relevant chemical information, successfully combining them to find their binding capability. Moreover, we generate adversarial data augmentations that preserve relevant biological backgrounds and improve the interpretability of our model, highlighting the relevant substructures of the ligands reported to interact with the protein targets. Our experiments demonstrate that the joint ligand–target information and the adversarial augmentations significantly increase interaction prediction performance. PLA‐Net achieves 86.52% mean average precision for 102 target proteins, with perfect performance for 30 of them, on a curated version of the actives-as-decoys dataset. Lastly, we accurately predict pharmacologically relevant molecules when screening the ligands of the ChEMBL and Drug Repurposing Hub datasets with the perfect-scoring targets.
graph neural networks, pharmaceutical discovery, target-ligand affinity
1 - 26
Probabilistic Intersection-over-Union for Training and Evaluation of Oriented Object Detectors
Jeffri Murrugarra-Llerena, Lucas N. Kirsten, Luis Felipe Zeni, and Claudio R. Jung
Jeffri Erwin Murrugarra Llerena
The vast majority of object detectors explore Horizontal Bounding Boxes (HBBs) as the shape representation, and the current state-of-the-art is achieved by deep learning methods. The Intersection-over-Union (IoU) is the standard metric for evaluating object detectors and has also been explored in the localization loss term for training HBB detectors. However, HBBs are not suitable representations when elongated oriented objects are present, and detectors based on Oriented Bounding Boxes (OBBs) are becoming more popular. Although the IoU has also been used to evaluate OBB detectors, the problem of ambiguous representations for irregular or roughly circular objects has been overlooked by the community. Furthermore, extending the idea of using IoU-like loss functions for OBBs is challenging due to complex formulations and differentiability issues. In this work, we propose a Probabilistic IoU (ProbIoU) measure that considers fuzzy object representations as probability density functions. When Gaussian distributions are used, ProbIoU reduces to a differentiable closed-form expression that can be directly used as a localization loss term to train OBB object detectors.
oriented object detectors, object representation, computer vision
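The closed-form Gaussian case mentioned in the abstract builds on the standard Bhattacharyya distance between Gaussians. As an illustration only (the paper's exact ProbIoU definition may differ in its details), here is a sketch for axis-aligned, diagonal-covariance Gaussians:

```python
# Sketch of a Hellinger-based similarity between two diagonal 2D
# Gaussians, using the standard Bhattacharyya distance. The point is
# that "box overlap" becomes a smooth, differentiable function of the
# distribution parameters.

import math

def probiou_diag(mu1, var1, mu2, var2):
    """1 - Hellinger distance between two diagonal 2D Gaussians.

    mu*: (x, y) center; var*: (vx, vy) per-axis variances
    (for a w x h box, one common choice is vx = w**2 / 12, vy = h**2 / 12,
    the variance of a uniform distribution over the box)."""
    bd = 0.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        v = (v1 + v2) / 2
        bd += (m1 - m2) ** 2 / (8 * v)                 # mean term
        bd += 0.5 * math.log(v / math.sqrt(v1 * v2))   # covariance term
    bc = math.exp(-bd)                 # Bhattacharyya coefficient
    return 1 - math.sqrt(1 - bc)       # 1 - Hellinger distance

# Identical Gaussians overlap perfectly:
print(probiou_diag((0, 0), (1, 1), (0, 0), (1, 1)))  # 1.0
# Moving the second box away smoothly decreases the score:
print(probiou_diag((0, 0), (1, 1), (3, 0), (1, 1)) < 0.5)  # True
```

Unlike the polygon-intersection IoU, every term here is differentiable in the centers and variances, which is what makes it usable directly as a localization loss.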
1 - 27
Reinforcement Learning with Almost Sure Constraints
Agustin Castellano, Hancheng Min, Juan Bazerque, Enrique Mallada
Agustin Castellano
We propose Reinforcement Learning (RL) with almost sure constraints, wherein constraints must hold with probability one. We develop a "separation principle" that decouples the problems of feasibility and optimality. As such, we focus on learning feasibility (which can be done independently). Feasibility is encoded in a hard-barrier function that summarizes all the safety information about state-action pairs. Our algorithm learns the set of feasible policies in expected finite time, thereby bounding the number of violations during training.
reinforcement learning, safe learning, constrained mdps
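To illustrate the hard-barrier idea, here is a toy sketch (not the authors' learning algorithm, which works from experience): on a known chain MDP, the feasible state-action set can be computed by backward propagation, where a pair is infeasible if it steps into an unsafe state or into a state with no feasible action.

```python
# Compute the set of (state, action) pairs from which the unsafe set
# can be avoided forever, by iteratively pruning pairs that lead to
# dead ends. This is the set a hard-barrier function would encode.

def feasible_pairs(states, actions, step, unsafe):
    feasible = {(s, a)
                for s in states if s not in unsafe
                for a in actions
                if step(s, a) not in unsafe}
    changed = True
    while changed:
        changed = False
        for s, a in list(feasible):
            nxt = step(s, a)
            if not any((nxt, b) in feasible for b in actions):
                feasible.discard((s, a))   # next state is a dead end
                changed = True
    return feasible

# Chain 0-1-2-3 where 3 is unsafe; actions move left/right (clamped).
states, actions = [0, 1, 2, 3], ["L", "R"]
step = lambda s, a: max(0, s - 1) if a == "L" else min(3, s + 1)
safe = feasible_pairs(states, actions, step, unsafe={3})
print(sorted(safe))  # [(0, 'L'), (0, 'R'), (1, 'L'), (1, 'R'), (2, 'L')]
```

Note that in state 2 only "L" survives: moving right would enter the unsafe state, exactly the kind of constraint a feasible policy must respect with probability one.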
1 - 28
Sampling-based inference under hierarchical probabilistic models to study perception and neural dynamics in the visual cortex
Josefina Catoni, Rodrigo S. Echeveste
Josefina Catoni
The Bayesian theory of visual perception in neuroscience assumes the brain performs probabilistic inference to estimate probability distributions over variables of interest given an observed stimulus. To understand how this process could take place, we train neural networks to perform Bayesian inference under hierarchical generative models of perception. Inference in these networks is done by sampling, employing the dynamics of recurrent modules. These types of networks constitute useful models of cortical circuits of perception, being able to capture not only mean responses and neural variability but also cortical dynamics, including oscillations, transient responses, and temporal cross-correlations. We focus on visual inference from natural images, and aim to learn both a generative model for these images and an inference model. As a first step we are training Variational Autoencoders whose modules will serve as silver ground truth for our recurrent ones. These models will be applied to the study of neurotypical visual perception, and also to improve the understanding of the link between physiology and sensory perception in individuals with autism.
probabilistic inference, visual perception, neural networks
1 - 29
Studying latent representations of Disorder of Consciousness using deep learning on EEG recordings
Laouen Belloli, Dragana Manasova, Melanie Valente, Emilia Flo Rama, Basak Turker, Esteban Munoz, Aude Sangare, Martin Rosenfelder, Lina Willacker, Theresa Raiser, Benjamin Rohaut, Andreas Bender, Lionel Naccache, Jacobo Sitt
Laouen Mayal Louan Belloli
"Patients with disorder of consciousness (DoC) present a challenge for clinical diagnosis due to their inability to communicate. Currently, diagnosis is obtained using the coma recovery scale - revised (CRS-R). Such scale is sensitive to subjectivity of the physician and relies on the patient's behavioral responses. Different studies have shown evidence that the brain is a large-scale complex system where collective behavior emerges from the neuron's nonlinear dynamics. This activity self-organizes into a much lower number of states, suggesting that a low-dimensional manifold could explain the states of consciousness. In this work, we propose a novel variational autoencoder (VAE) to obtain a low-dimensional latent representation of raw EEG data where the decoder learns to reconstruct existing biomarkers instead of the input. This modification constrains the model to prioritize emergent general behaviors away from individualities. Our preliminary results show that we are able to obtain a 3-dimensional latent space that reconstructs almost all the biomarkers. This latent space separates the diagnoses even though this information is never given to the model. Also, the gradient obtained in the latent space shows linearity between the states of consciousness and generalizes to other cohorts."
disorder of consciousness, variational autoencoders, latent representations
1 - 30
Teacher Student Curriculum Learning applied to OCR
Rodrigo Laguna
Rodrigo Jorgeluis Laguna Queirolo
"The field of machine learning has seen an increase in the use of curriculum learning methods as a way to improve performance in supervised problems. Our work proposes the application of Teacher Student Curriculum Learning (TSCL), a reinforcement learning-based curriculum learning method, in an optical character recognition (OCR) task. The task is part of the LUISA (Leyendo Unidos para Interpretar loS Archivos) project, which aims to develop tools for extracting information from digital images of historical documents stored in the Archivo Berruti, a collection of documents generated by the Armed Forces during the last dictatorship in the period 1968-1985. This work uses a previously developed seq2seq model as the student in the Teacher-Student Curriculum Learning (TSCL) framework. Essentially, the model was trained with the same data, but with modifications to the training method. To date, the results have been encouraging and show potential for further improvement. It is imperative to carry out further research to compare the differences between traditional training and TSCL, carry out error analysis, and comprehend the variations in the training procedures."
curriculum learning, reinforcement learning, ocr
1 - 31
Tempo vs. Pitch: understanding self-supervised tempo estimation
Giovana Morais, Matthew EP Davies, Marcelo Queiroz, Magdalena Fuentes
Giovana Vieira de Morais
Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels, alleviating the need for time-consuming annotations. These methods have been applied in computer vision, natural language processing, environmental sound analysis, and recently in music information retrieval, e.g. for pitch estimation. Particularly in the context of music, there are few insights into how fragile these models are to different data distributions, and how such fragility could be mitigated. In this paper, we explore these questions by dissecting a self-supervised model originally designed for pitch estimation and adapted for tempo estimation, via rigorous experimentation with synthetic data. Specifically, we study the relationship between the input representation and the data distribution for self-supervised tempo estimation.
tempo estimation, self-supervised learning, music information retrieval
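As background for the tempo-estimation task, a beat period can be recovered from a synthetic onset signal via autocorrelation. The sketch below is a minimal illustration of this kind of controlled synthetic-data experiment (the signal, sampling rate, and search range are illustrative assumptions, not the poster's actual model or input representation):

```python
import numpy as np

sr = 100           # frames per second of a toy onset-strength signal
true_bpm = 120
period = int(sr * 60 / true_bpm)   # 50 frames between beats

# Synthetic impulse train with one onset per beat, 10 seconds long
onsets = np.zeros(sr * 10)
onsets[::period] = 1.0

# Autocorrelation peaks at multiples of the beat period;
# keep non-negative lags and search the range corresponding to 240-60 BPM
ac = np.correlate(onsets, onsets, mode="full")[len(onsets) - 1:]
lo, hi = int(60 * sr / 240), int(60 * sr / 60)
lag = lo + int(np.argmax(ac[lo:hi]))
estimated_bpm = 60 * sr / lag      # recovers 120 BPM for this signal
```

Real onset signals are noisy and quasi-periodic, which is where learned self-supervised representations become useful.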
1 - 32
ToMoDL: Model-based neural networks for tomographic reconstruction
Obando, M., Mato, G. and Correia, T.
Marcos Antonio Obando
"Linear inverse problems have greatly benefited from deep learning techniques in the paramount goal of recovering a signal from a small number of measurements. In the field of image reconstruction, recent approaches involve the augmentation of traditional inverse problem solvers with neural networks as sparsifying functions. We propose a deep unrolled framework for tackling the problem of accelerating the acquisition of optical projection tomography (OPT), a mesoscopic technique for imaging biological translucid samples. Using twelve volumes of zebrafish (Danio Rerio) angular projections from four longitudinal sections and different days post-fertilisation, our approach deals with an extremely variable dataset in terms of intensity and structure, as well as an often overlooked problem in model-based deep learning: tomography reconstruction. By maximizing structural similarity in a fixed-iteration unrolled mapping, our cross-validated results show a reasonably high quality reconstruction with even a 5% of the acquired projections, achieving a considerably better performance than compressed sensing methods such as Two-Step Iterative Shrinkage/Thresholding (TwIST) and U-Net architectures. We also analyze problematic reconstruction issues regarding the anatomical structure of zebrafish, where analytical solutions may be chosen instead."
optical projection tomography, lineal inverse problems, deep learning
1 - 33
Towards a NLI Dataset Annotated with Complexity Levels in Brazilian Portuguese
Felipe Ribas Serras and Marcelo Finger
Felipe Ribas Serras
In this work, we present the first steps towards the construction of CinCoPE (Corpus de Inferência com Complexidades para o Português), a dataset for Natural Language Inference (NLI) in Brazilian Portuguese annotated with complexity information. Our goals with CinCoPE are twofold: (i) to better evaluate the capabilities of the models already available for NLI in Portuguese, understanding their performance over different levels of semantic and syntactic complexity, and (ii) to use the produced metadata to explore curriculum learning techniques using attention-based language models (e.g. BERT and RoBERTa) for natural language inference in Portuguese. In this document, we present the problem and the state of the resources for NLI in Portuguese, as well as our first attempts at developing a system for annotating the semantic and syntactic complexity of NLI pairs.
natural language inference, attention, curriculum learning
1 - 34
Towards Detecting the Level of Trust in the Skills of a Virtual Assistant from the User's Speech
Lara Gauder
Lara Gauder
We explore the feasibility of automatically detecting the technical competence or ability of a virtual assistant (VA) from the user's speech. We developed a novel protocol for collecting speech data from subjects interacting with VAs of different skill levels, resulting in a new speech corpus in Argentine Spanish that is publicly available for research use. Using the collected data, we developed a system that detects, from the subject's speech patterns, the skill level of the VA with which the subject interacted during a session.
speech processing, human-computer interaction, speech resources
1 - 35
Towards Efficient Active Learning of PDFA
Franz Mayr, Sergio Yovine, Federico Pan, Nicolas Basset, Thao Dang
Franz Mayr Ojeda
We propose a new active learning algorithm for PDFA based on three main aspects: a congruence over states which takes into account next-symbol probability distributions, a quantization that copes with differences in distributions, and an efficient tree-based data structure. Experiments showed significant performance gains with respect to reference implementations.
active learning, probabilistic deterministic finite automata, quantization
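The role of quantization can be illustrated with a minimal sketch (the bin-width parameter `kappa` and the `quantize` helper are illustrative assumptions, not the paper's exact construction): next-symbol distributions that map to the same bin indices are treated as equivalent when comparing states, which copes with small differences in estimated probabilities.

```python
def quantize(dist, kappa):
    # Map each probability to the index of a bin of width 1/kappa
    return tuple(int(p * kappa) for p in dist)

# Close distributions collapse to the same quantized signature...
d1, d2 = (0.70, 0.30), (0.72, 0.28)
# ...while clearly different ones do not
d3 = (0.40, 0.60)
```

Coarser bins (smaller `kappa`) merge more states, trading off automaton size against fidelity to the sampled distributions.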
1 - 36
Towards unraveling calibration biases in medical image analysis
María Agustina Ricci Lara, Candelaria Mosquera, Enzo Ferrante and Rodrigo Echeveste
María Agustina Ricci Lara
In recent years the development of AI systems for automated medical image analysis has gained enormous momentum. At the same time, a large body of work has shown that AI systems can systematically and unfairly discriminate against certain populations in various application scenarios. These two facts have motivated the emergence of algorithmic fairness studies in this field. Most research on healthcare algorithmic fairness to date has focused on the assessment of biases in terms of classical discrimination metrics such as AUC and accuracy. Potential biases in terms of model calibration, however, have only recently begun to be evaluated. This is especially important when working with clinical decision support systems, as predictive uncertainty is key for health professionals to optimally evaluate and combine multiple sources of information. In this work we study discrimination and calibration biases in models trained for automatic detection of malignant dermatological conditions from skin lesion images. Importantly, we show that a wide range of typically employed calibration metrics is highly sensitive to sample size. This is of particular relevance to fairness studies, where data imbalance results in drastic sample size differences between demographic subgroups, which, if not taken into account, can act as confounders.
fairness, calibration, skin lesion analysis
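The sample-size sensitivity highlighted above can be demonstrated with a generic binned expected calibration error (this ECE variant and the toy simulation are illustrative assumptions, not necessarily the exact metrics studied in the poster):

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    # Binned expected calibration error:
    # weighted average of |accuracy - mean confidence| over confidence bins
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            conf = probs[mask].mean()   # mean predicted probability in bin
            acc = labels[mask].mean()   # empirical accuracy in bin
            err += mask.mean() * abs(acc - conf)
    return err

rng = np.random.default_rng(0)
# Simulate a perfectly calibrated predictor: label ~ Bernoulli(prob)
probs_large = rng.uniform(size=100_000)
labels_large = (rng.uniform(size=100_000) < probs_large).astype(float)
probs_small, labels_small = probs_large[:50], labels_large[:50]
```

Even though the simulated predictor is perfectly calibrated by construction, the 50-sample subgroup shows a much larger ECE than the 100,000-sample one, purely due to estimation noise — the confounding effect the authors warn about in imbalanced demographic subgroups.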
1 - 37
Two-stage Conditional Chest X-ray Radiology Report Generation
Pablo Messina, José Cañete, Denis Parra, Álvaro Soto, Cecilia Besa, and Jocelyn Dunstan
Pablo Alfredo Messina Gallardo
"A radiology report typically comprises multiple sentences covering different aspects of an imaging examination. With some preprocessing effort, these sentences can be regrouped according to a predefined set of topics, allowing us to implement a straightforward two-stage model for chest X-ray radiology report generation. Firstly, a topic classifier detects relevant findings or abnormalities in an image. Secondly, a conditional report generator outputs sentences from an image conditioned on a given topic. We present experimental results on the test split of the MIMIC-CXR dataset for each stage separately and the system as a whole. Most notably, the proposed model outperforms previous works on several medical correctness metrics based on the CheXpert labeler, establishing a new state-of-the-art. The source code is available at"
computer vision, natural language generation, medical imaging
1 - 38
Unit Testing in Computer Vision Models
"- Nury Yuleny Arosquipa Yanque - CromAI"
Nury Yuleny Arosquipa Yanque
"To make use of artificial intelligence in real-world systems, i.e. agriculture, models have not only achieved good performance but also interact with the system in a safe and reliable manner. The difficulty arises when the model is deployed in production in a real-world environment, i.e. sugarcane plantation, when often doesn't perform as well as it did during the training and evaluation phase because of the huge variety of scenarios of a real-world environment affecting the model’s capability to accurately predict the output. We propose an ML model test at pos-training to inspect behaviors for a variety of critical scenarios that are selected according to more serious errors seen at model execution in production detected by experts and monitoring model performance. Each scenery is named a unitary test. A data acquisition pipeline is defined to get data from the field for each scenery and generate a dataset to represent each scenery named as a data unit, the model performance is evaluated in each unit test using specific metrics. These unit tests are used along different model versions to know the model evolution over releases to understand its strengths and weaknesses in each scenery."
unit test, real world environment, computer vision
1 - 39
Updating anomaly detection models in the face of variations in normal data behavior or domain changes
Gastón García González, Alicia Fernández, Pedro Casas.
Gastón García González
"One of the most popular approaches for anomaly detection using machine learning is to model the distribution of normal data in an unsupervised way and detect anomalies from detecting deviations from the model. With the great popularity that generative models (VAE, GAN) have taken in recent years, due to their ability to learn complex distributions and train in a self-supervised manner, many works have emerged that use this tool to learn how to model this distribution. However, this approach fails when the distribution of the normal data ceases to be seasonal in the future, or if the data domain changes due to system needs, since a deep model would have to be retrained, which entails a cost of resources and the properties would be lost previously learned. That is why in this work, we propose to study the ways to deal with the automatic updating of generative models, specifically for time series anomaly detection. One of the most interesting approaches is, the application of continuous learning, a recent technique that has shown good results, and a variety of application forms that are constantly being updated. "
anomaly detections, generative models, continual learning.
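The baseline approach described above can be sketched in a few lines (a toy mean-projection "model" stands in for the VAE/GAN, and the 99th-percentile threshold is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained generative model of normal data:
# here "reconstruction" is simply the mean of the normal training set.
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))
mean = train.mean(axis=0)

def reconstruction_error(x):
    # Deviation of a sample from the learned model of normality
    return np.linalg.norm(x - mean, axis=-1)

# Flag as anomalous anything above a high quantile of normal-data errors
threshold = np.quantile(reconstruction_error(train), 0.99)

def is_anomaly(x):
    return reconstruction_error(x) > threshold

normal_point = np.zeros(8)
anomalous_point = np.full(8, 10.0)
```

If the normal distribution drifts, `mean` and `threshold` go stale, which is precisely the updating problem the poster proposes to address with continual learning.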
1 - 40
Water quality and visual-based behavior-related variables in a fish-farming environment for AI applications
"Juan D. Medina, Alejandro Arias, Luis F. Giraldo, Andres Gonzalez-Mancera, Fredy Segura, Yeferzon Ardila, Veronica Akle"
Juan David Medina Tobon
"It is known that water quality and fish behavior are highly correlated in fish farming, and understanding their relationship is key to making inferences about the state of the fish farming process, particularly when real-time automatic monitoring is involved. Novel machine learning techniques including deep learning and TinyML can be used to analyze the fish behavior and obtain valuable insights to the process. In this work, a database containing both water quality time series and raw video footage was obtained for two separate days with the objective of studying possible image processing methods and their applications. "
fish farming, tinyml, deep learning
1 - 41
An Introduction to Hyperbolic Neural Networks and Hyperbolic Machine Learning
Nicolás Alvarado
Nicolás Alvarado
The study of hyperbolic neural networks has experienced constant growth and has shown potentially better results than Euclidean neural networks. In this study, we aim to understand the root cause of this potential improvement. To that end, we show some basic examples of machine learning using hyperbolic geometry tools and their connection with hyperbolic neural networks.
machine learning, pac-learning, hyperbolic neural networks
1 - 42
AymurAI: A prototype for an open and gender-sensitive justice in Latin America
Ivana Feldfeber, Yasmín Quiroga, Marianela Ciolfi Felice
Ivana Feldfeber
The lack of transparency in the judicial treatment of gender-based violence (GBV) against women and LGBTIQ+ people in Latin America results in low report levels, mistrust in the justice system, and thus, reduced access to justice. To address this pressing issue before GBV cases become feminicides, we propose to open the data from legal rulings as a step towards a feminist judiciary reform. We identify the potential of artificial intelligence (AI) models to generate and maintain anonymised datasets for understanding GBV, supporting policy making, and further fueling feminist collectives' campaigns. In this paper, we describe our plan to create AymurAI, a semi-automated prototype that will collaborate with criminal court officials in Argentina and Mexico. From an intersectional feminist, anti-solutionist stance, this project seeks to set a precedent for the feminist design, implementation, and deployment of AI technologies from the Global South.
feminism, judiciary, nlp
1 - 43
Audio tagging for anthropogenic impact assessment on antarctic soundscape
Emiliano Acevedo, María Noel Espinosa, Ilana Stolovas
Emiliano Acevedo / María Noel Espinosa / Ilana Stolovas
The impact of human activities in Antarctica affects the ecosystem for the species that inhabit the region. To study this phenomenon, recordings of the sound environment are extracted for later analysis. This work seeks to automate the process of identifying sound sources of interest by applying machine learning techniques. Several audio tagging techniques were applied, from classical models such as convolutional networks to the most advanced state-of-the-art models such as audio spectrogram transformers.
sound tagging, machine learning, audio tagging, Antarctica
1 - 44
End-to-end sequence to structure: Learning to fold non-coding RNA
Leandro A. Bugnon, L. Di Persia, M. Gerard, J. Raad, E. Fenoy, S. Prochetto, A. Edera, G. Stegmayer and D. H. Milone
Leandro A. Bugnon
There are several challenges in learning sequence representations, especially in the context of few labeled data, high class imbalances and domain variability. In bioinformatics, the determination of secondary structures from biological sequences (such as RNA) is a very costly process, which cannot be scaled up efficiently, limiting our ability to functionally characterize such molecules. Non-coding RNAs are relevant for numerous biological processes in medical and agroindustrial applications. Computational methods are promising for the prediction of RNA structures, but show limited capacity for modeling their wide structural diversity. We present new end-to-end approaches for secondary structure prediction from the sequence alone. To harness larger datasets with unlabeled sequences, a self-supervised encoder is proposed. Then, an architecture based on ResNet is used to encode information about each sequence position and its neighbors, and element-pair information is then used to predict a connection matrix. We have compared several recent methods for secondary structure prediction, with special focus on how to measure generalization capabilities, using benchmark datasets and experimentally validated sequences. By using biophysical constraints to guide the learning, results are improved for different types of RNAs.
sequence representation learning, structure prediction, bioinformatics
1 - 45
The short-long history of a Mexican Speech Recogniser
Carlos Daniel Hernández-Mena, Ivan Vladimir Meza Ruiz
Ivan Vladimir Meza Ruiz
This work presents the current state of Mexican Spanish speech recognition. It tells the story of collecting enough data to build a competitive speech recognizer and of the use of different speech platforms. We show there are benefits to focusing on specific variants of Spanish.
speech recognition, mexican spanish
1 - 46
ErizoJSApp: a collaboratively designed mediating agent to support informal settlements in water-access-related problems
Jean Marco Rojas Umaña, Jaime Gutiérrez Alfaro
Jean Marco Rojas Umaña
One face of the marginalization of the Erizo Juan Santamaría informal settlement is the absence of a public drinking water service provided to all the inhabitants of the community. Only a few people have access to a water connection located on the outskirts of the neighborhood; from these connections, the rest of the people are supplied through internal pipe systems, each managed with different policies. Mishandling of failure situations in the internal piping system can lead to an impact on the social coexistence of the neighborhood. As a result of a horizontal and dialogical process of reflection, in which people from Erizo Juan Santamaría and the Laboratorio Experimental participated, the ErizoJSApp software tool was designed with the aim of being a mediating agent between people in the community to solve problems. As a result of the mediation, data will be obtained to create an AI model that offers support in decision-making.
informal settlements, drinking water, collaborative design.
1 - 47
Detection of cardiac lesions in Cardiac MRI Images
Pablo Jimenez, Ariel H. Curiale, German Mato, Matías Calandrelli, Jorgelina Medus
German Mato
"Artificial Intelligence (AI) and specially automatic learning has been used to detect patterns beyond human sight. Specifically. AI is considered a powerful tool in the Medical Imaging field, since it can collect information from different textures and use them to identify and quantify tissue damage. We apply these techniques in the Cardiology field. More specifically, we are interested in the detection of fibrosis in myocardial tissue of the left ventricle (LV) through Cardiac Magnetic Resonance (CMR) images without the use of a contrast agent. These agents facilitate the detection of lesions but are contraindicated in some cases. In our approach we create deep features based on an encoder-decoder Convolutional Neural Network architectures, which are much more complex than radiomic features and are created with the unique goal of detecting cardiac fibrosis. We do not analyze an entire CMR image but patches of the LV, including endocardium. In a second stage we use a fingerprinting technique to obtain a prediction for each patient. Our method results in a competitive patient classification accuracy (about 80%) and results to be applicable for fibrosis segmentation and quantification. "
cardiac magnetic resonance images, fibrosis, deep learning
1 - 48
Knowledge Management and Information Retrieval Research Group at ICIC (CONICET UNS)
Ana Gabriela Maguitman
Ana Gabriela Maguitman
We outline the research lines of the Knowledge Management and Information Retrieval Research Group from the Institute for Computer Science and Engineering (ICIC CONICET UNS). Our main projects include learning causal models from digital media, building stance and opinion trees from social media, and the design and application of topic-based search methods in different domains.
causal modeling, stance detection, topic-based search
1 - 51
Persistent Homology and Machine Learning for Detecting Climate Change
Peguy Kem-Meka Tiotsop Kadzue, Armandine Sorel Kouyim Meli
"Climate change is one of the biggest threats we are facing today. Some of its impacts include floods, droughts, and storms which are the main sources of agricultural risk. In this study, we propose an automated method for detecting climate change early using persistent homology and machine learning. The proposed method is designed to provide vital information about the topological features of climate data. It is coordinate-free (i.e. it does not depend on a particular coordinate system) and threshold-free (i.e. it does not need any threshold criteria). We evaluate the performance of the method on unseen data outside Africa and obtain a very good score. Furthermore, the developed model is also tested on precipitation data and still performs well. We recommend the method for mitigating change in the growing season in the agricultural sector. Moreover, this model also offers the possibility to develop an effective early system for dangerous phenomena (droughts, floods, volcanoes, etc) based on our approach. "
Climate data, Climate change, Persistent homology, Machine learning, Topological features, Africa
1 - 52
Leveraging Pre-trained Language Model for Speech Sentiment Analysis
Pablo Brusco
1 - 53
Custom-Built AI Innovation
Digital Sense
1 - 54
Tu Perro Me Suena
Universidad de Montevideo
2 - 1
#PraCegoVer: A large dataset for image captioning in Portuguese
Gabriel Oliveira dos Santos, Esther Luna Colombini, Sandra Avila
Gabriel Oliveira dos Santos
Automatically describing images using natural sentences is essential to help include people with vision impairment or low vision on the Internet, making it more inclusive and democratic. This task is known as image captioning and it is still an open challenge that requires understanding the semantic relations among objects in the image and transforming them into descriptions. Significant progress has been made in image captioning in recent years thanks to the availability of a large amount of labeled data in relevant datasets. However, most datasets only contain images annotated with English descriptions, whereas datasets with captions in other languages are scarce. In particular, Portuguese is a low-resource language; hence it has few publicly available datasets. Thus, to contribute to the community of Portuguese speakers, we have proposed the first large dataset for image captioning with descriptions in Portuguese, #PraCegoVer. Our dataset relies on public data published by followers of the PraCegoVer movement. It comprises 533,523 images with captions in Portuguese created explicitly for the audience with vision disabilities.
image-text dataset in portuguese, image captioning, multimodal dataset
2 - 2
ADRAS: Airborne Disease Risk Assessment System for Closed Environments
Wilber Rojas, Edwin Salcedo, and Guillermo Sahonero
Edwin Rene Salcedo Aliaga
Airborne diseases spread easily in any population. The advent of COVID-19 showed us that we are not prepared to control them. The pandemic has drastically posed challenges to the daily functioning of public and private establishments. In general, while there have been several approaches to reduce the potential risk of spreading the virus, many of them rely on people's commitment, which, unfortunately, cannot be constant, for example, wearing a facemask in closed environments at all times or social distancing. In this work, we propose a stereo vision system to determine the risk of airborne disease spread in closed environments. We modify and implement the Wells-Riley epidemiological equation. The data generated by several devices is visible in a web platform for monitoring multiple areas and locations. Finally, an OAK-D camera and a Jetson device are embedded in an end device meant to monitor a closed environment and continually send spread-risk data to the web platform.
airborne disease, risk assessment, stereo vision
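For reference, the classical Wells-Riley equation that the poster modifies expresses the probability of airborne infection in a closed environment as (standard form from the epidemiology literature; the poster's modified variant may differ):

```latex
P = \frac{C}{S} = 1 - \exp\!\left(-\frac{I q p t}{Q}\right)
```

where $C$ is the number of new infection cases, $S$ the number of susceptible people, $I$ the number of infectors, $q$ the quanta generation rate, $p$ the pulmonary ventilation rate of a susceptible person, $t$ the exposure time, and $Q$ the room ventilation rate. Quantities such as occupancy and mask use, observable with a stereo vision system, feed directly into these parameters.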
2 - 3
Affective and Computational Perspectives of Automatic Emotion Recognition using Electrodermal Activity: A Systematic Review
Maldonado, Emmanuel Alesandro; Galán, Lorenzo Ariel; Diaz Barquinero, Agustin Ariel; D'Amelio, Tomas Ariel
Tomás Ariel D'Amelio
Affective computing has emerged as a new discipline that seeks to incorporate emotions into the realm of artificial intelligence. It is a growing field of research that combines computer science and engineering methods to automatically recognize and interpret human emotions. The use of physiological signals, such as electrodermal activity, has been a focus of recent research. Most of this work focuses on how to improve signal extraction or which machine learning models are best at recognizing emotions from this signal. However, the emotional models underlying such systems have received little attention in the literature. In response, the authors conducted a systematic review of the existing literature on automatic emotion recognition systems using electrodermal activity. The review considered both the machine learning models used and the affective and physiological components of each model. The PRISMA protocol was followed and the review was pre-registered in OSF. The results showed an increase in the use of dimensional models of emotion over time, but a corresponding lack of regression models. The authors conclude that it is essential to consider both affective and computational perspectives in the study of emotion in order to achieve a more comprehensive understanding of affective states.
affective computing, electrodermal activity, automatic emotion recognition
2 - 4
Analyzing Formality Biases across Languages
Asım Ersoy, Gerson Vizcarra, Tasmiah Tahsin Mayeesha, Benjamin Muller
Gerson Waldyr Vizcarra Aguilar
"Multilingual generative language models are increasingly fluent in a large variety of languages. They are trained on huge corpora of multiple languages that enable powerful transfer from high-resource languages to low-resource ones. However, it is still unknown the sort of cultural biases that are induced in the predictions of these models. In this work, we analize the formality level of two multilingual language models: XGLM (Lin et al., 2021) and BLOOM (Scao et al., 2022), across five languages, namely Arabic, Bengali, English, French, and Spanish."
formality analysis, multilingual language models, biases
2 - 5
AnuraSet: A large-scale acoustic multi-label dataset for tropical anuran call classification in passive acoustic monitoring
Juan Sebastián Cañas, Maria Paula Toro, Diego Llusia, Larissa Sayuri Moreira Sugai, Juan Sebastián Ulloa
Juan Sebastián Cañas
The timing and intensity of calling activity in anuran amphibians, which play a central role in sexual selection and reproduction, are largely controlled by climatic conditions such as environmental temperature and humidity. Therefore, climate change is predicted to induce shifts in calling behavior and breeding phenology, species traits that can be tracked using passive acoustic monitoring (PAM). To construct robust algorithms that can classify species calls in a long-term monitoring program, it is fundamental to design adequate datasets and benchmarks in the wild. We present a large-scale multi-species dataset of acoustic recordings of anuran amphibians from PAM recordings. The dataset comprises 27 hours of herpetologist annotations of 42 different species in different regions of Brazil. The classification task is unique and challenging due to the high species diversity, the long-tailed distribution, and frequent overlapping calls. We present a characterization of the challenges and a baseline model for the goals of the monitoring program. The dataset, including raw recordings, preprocessing code, and baseline code, is made available to promote collaboration between machine learning researchers and ecologists in solving the classification challenges toward understanding the effects of global change on biodiversity.
fine-grained categorization, domain shift, biodiversity monitoring
2 - 6
AutoMeLi – Mercado Libre's AutoML solution in practice
Guilherme Folego, Hernan Ceferino Vazquez, Lautaro Gesuelli Pinto
Guilherme Folego
"Automated Machine Learning (AutoML) has become increasingly popular in recent years due to its ability to reduce the amount of time and expertise required to design and develop machine learning systems. This is very important for the practice of machine learning, as it allows building strong baselines quickly, improving the efficiency of data scientists, and reducing the time to production. However, despite the advantages of AutoML, it faces several challenges when it comes to real-world applications. In AutoMeLi, our internal AutoML tool, there are a number of predefined steps consisting of data preprocessing, feature engineering, and an estimator. Whenever a new step is added, the number of possible pipelines grows exponentially. In order to overcome this situation, we developed GramML, our solution for defining and exploring a grammar-based search space. This is a model-free reinforcement learning approach based on an adaptation of the Monte Carlo tree search (MCTS) algorithm and context-free grammars, which has shown state-of-the-art performance on the OpenML-CC18 benchmark."
automl, applied machine learning, industry use-case
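The grammar-based search space described above can be illustrated with a minimal sketch: a context-free grammar whose productions enumerate pipeline steps, expanded recursively into concrete pipelines. The grammar and step names below are invented for illustration and are not GramML's actual search space.

```python
import random

# A toy context-free grammar over ML pipeline steps (hypothetical names).
GRAMMAR = {
    "<pipeline>": [["<preprocess>", "<features>", "<estimator>"]],
    "<preprocess>": [["impute"], ["impute", "scale"], ["scale"]],
    "<features>": [["pca"], ["select_k_best"], []],  # empty production: no feature step
    "<estimator>": [["random_forest"], ["logistic_regression"], ["gradient_boosting"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a concrete pipeline."""
    if symbol not in GRAMMAR:          # terminal: an actual pipeline step
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    steps = []
    for sym in production:
        steps.extend(expand(sym, rng))
    return steps

rng = random.Random(0)
print(expand("<pipeline>", rng))
```

An MCTS-based search, as in GramML, would replace the random `rng.choice` over productions with a tree policy that balances exploration and exploitation of promising expansions.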
2 - 7
Automated signal intelligence for electronic warfare
Victor Manuel Hidalgo
Victor Manuel Hidalgo Ibarra
Electronic support measures involve the collection and characterization of radiated electromagnetic energy sources for situational awareness and decision-making, and thus they are an essential element of electronic warfare capabilities. In highly congested and contested environments, it is fundamental to characterize pulse-Doppler radar emitters for friend-or-foe classification. Deep learning applications for radar signal processing are promising for their potential to process large amounts of data from complex scenarios in parallel. Unfortunately, the development of discriminative models for pulse-Doppler radar data is hindered by the scarcity of relevant recordings, as military emitters are aware of the electronic intelligence activities of other agents and limit their emissions. We use Variational Autoencoders (VAE) for learning latent variable models of the emitters-of-interest, leveraging them as data augmentation tools of these emitters' pulse descriptor words to power our deep learning radar processing applications. VAEs crystallize the information provided by newly gathered data and become a powerful tool for machine-learning-based, automated electronic intelligence.
electronic warfare, variational autoencoders, radar signal processing
2 - 8
Automatic Brain White Matter Hyperintensities Segmentation with Swin U-Net
José A Viteri, Bryan V Piguave, Enrique Pelaez, Francis R Loayza
José Viteri
This work proposes an automatic segmentation approach to detect White Matter Hyperintensities (WMH) using Fluid Attenuated Inversion Recovery (FLAIR) and the corresponding T1-weighted MRI images. We used the Swin U-Net architecture, based on Transformers, and compared its performance with two previously reported CNN U-Net architectures. Sixty pairs of images and their corresponding ground-truth labels were used from the MICCAI challenge. The following metrics were obtained: a Dice similarity score of 0.80, a lesion F1 score of 0.63, a Hausdorff distance of 3.16 mm, and an average volume difference of 16.93, placing the Swin U-Net second in the ranking of the tested algorithms. However, the computational resources needed to process the imaging data were the lowest among the tested U-Nets. Therefore, the Swin U-Net architecture shows potential to become a promising and fast tool for segmenting medical images.
image segmentation, transformers, white matter hyperintensities (wmh)
2 - 9
Building vocal repertoires of Southern Cone endemic birds using unsupervised learning: applying A.I. to field Biology
Tomás Salas, Marcela Osorio & Máximo Fernández
Marcela Osorio Thomas
Classically, it has been considered that the only groups of birds exhibiting vocal learning are the oscines ("traditional" songbirds, suborder Passeri, order Passeriformes), parrots (order Psittaciformes), and hummingbirds (order Apodiformes, family Trochilidae), which are postulated to have evolved these traits independently. On the other hand, the sister suborder of the oscines, predominantly present in South America, the suboscines (order Passeriformes, suborder Tyranni), comprises birds considered to be innate vocalizers without plasticity, with vocal organs that have little musculature and innervation compared to the previous groups. However, recent studies in suboscines have shown cases of birds with variability in the use and form of their vocalizations, and even, in specific cases, plasticity and vocal learning. While there is still very little information on the vocal ecology of suboscines, in this work we investigated the vocal behavior of three Chilean suboscines using directional field recordings to reveal their vocal repertoires. Using unsupervised machine learning techniques, recordings of the suboscine species Turca (Pteroptochos megapodius) and Fiofio (Elaenia chilensis) were analyzed in search of characteristic vocal patterns, to distinguish the main vocalizations and the qualities that allow them to be classified.
vocal repertoire, unsupervised learning, clustering.
2 - 10
CanDLE: Illuminating Biases in Transcriptomic Pan-Cancer Diagnosis
Gabriel Mejía, Natasha Bloch, and Pablo Arbeláez
Gabriel Mateo Mejia Sepulveda
Automatic cancer diagnosis based on RNA-Seq profiles is at the intersection of transcriptome analysis and machine learning. Methods developed for this task could be a valuable support in clinical practice and provide insights into the causal mechanisms of cancer. To correctly approach this problem, the largest existing resource (The Cancer Genome Atlas) must be complemented with healthy tissue samples from the Genotype-Tissue Expression project. In this work, we empirically prove that previous approaches to joining these databases suffer from translation biases and correct them using batch z-score normalization. Moreover, we propose CanDLE, a multinomial logistic regression model that achieves state-of-the-art performance in multilabel cancer/healthy tissue type classification (94.1% balanced accuracy) and all-vs-one cancer type detection (78.0% average max F1).
cancer classification, cancer detection, machine learning
2 - 11
CB-AQM: A protocol agnostic approach to packet prioritization in Disaster Area Networks
A. Tcach, R. Castro, E. Mocskos
Alexis Guido Tcach Lufrano
We present Class-Based Active Queue Management (CB-AQM), a novel approach for QoS in MANETs, addressing the problem of guaranteeing a minimum operational infrastructure for critical communications over Disaster Area Networks while allowing non-critical clients (i.e., people living in the disaster area) to use the communications infrastructure under a best-effort scheme. Non-prioritized traffic is expected to absorb the drawbacks of poor connectivity during eventual periods of traffic overload.
active queue management, disaster area network, manet
2 - 12
Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints
Jose Gallego-Posada, Juan Ramirez, Akram Erraqabi, Yoshua Bengio and Simon Lacoste-Julien
Jose Gallego-Posada
In this paper we study the training of sparse neural networks and show that constrained formulations can deliver improved tunability and hyperparameter interpretability compared to the commonly used L0-penalty regularization. Our proposal reliably achieves arbitrary sparsity targets while retaining high accuracy, and scales successfully to large residual models -- and all this with just a negligible computational overhead!
sparsity, neural networks, constrained optimization
2 - 13
Cooper: a general-purpose library for constrained optimization in Pytorch
Jose Gallego-Posada, Juan Ramirez
Juan Camilo Ramirez De Los Rios
Cooper is a general-purpose, deep learning-first constrained optimization library in Pytorch. Cooper is (almost!) seamlessly integrated with Pytorch and preserves the usual loss ➡️ backward ➡️ step workflow. If you are already familiar with Pytorch, using Cooper will be a breeze! This library aims to encourage and facilitate the study of constrained optimization problems in deep learning. Cooper focuses on non-convex constrained optimization problems for which the loss or constraints are not necessarily “nicely behaved” or “theoretically tractable”. Moreover, Cooper has been designed to play nicely with mini-batched/stochastic estimates for the objective and constraint functions. Cooper implements several popular constrained optimization protocols so you can focus on your project, while we handle the nitty-gritty behind the scenes.
constrained optimization, pytorch, library
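The kind of problem such a library targets can be illustrated, without Pytorch, by a minimal gradient descent-ascent sketch on a Lagrangian. This is only an illustration of the underlying principle, not Cooper's actual API: minimize f(x) = x² subject to g(x) = 1 - x ≤ 0.

```python
# Toy simultaneous gradient descent-ascent on the Lagrangian
# L(x, lam) = x^2 + lam * (1 - x), with the multiplier kept non-negative.

def solve(lr=0.05, steps=2000):
    x, lam = 0.0, 0.0                        # primal variable, dual multiplier
    for _ in range(steps):
        grad_x = 2 * x - lam                 # d/dx of the Lagrangian
        grad_lam = 1 - x                     # d/dlam, i.e. the constraint value
        x -= lr * grad_x                     # descent step on the primal
        lam = max(0.0, lam + lr * grad_lam)  # ascent step, projected to lam >= 0
    return x, lam

x, lam = solve()
print(round(x, 3), round(lam, 3))
```

The iterates settle at the constrained optimum x = 1 with multiplier λ = 2; in a library like Cooper, the same primal-descent / dual-ascent structure is driven by stochastic gradients of a neural network's loss and constraints.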
2 - 14
Cross-Lingual and Cross-Domain Crisis Classification for Low-Resource Scenarios
Cinthia Sánchez, Hernan Sarmiento, Andres Abeliuk, Jorge Pérez, Barbara Poblete
Cinthia Mabel Sánchez Macías
Social media data has emerged as a useful source of timely information about real-world crisis events. Several studies have addressed the automatic detection of crisis-related messages to contribute to disaster management and humanitarian assistance. However, most of them have focused on a particular language (usually English) or domain (type of event), which limits their applicability to other contexts. In this work, we study the task of automatically classifying messages that are related to crisis events by leveraging cross-language and cross-domain labeled data. Our goal is to make use of labeled data from high-resource languages to classify messages from other (low-resource) languages and/or of new (previously unseen) types of crises. We proposed an experimental framework based on combinations of multilingual data representations and knowledge transfer scenarios. Our experimental results showed that it is possible to leverage English data to classify the same domain in other languages, such as Spanish and Italian (80.0% F1-score), as well as to classify messages from new domains (80.0% F1-score) in a multilingual scenario. Overall, our work contributes to mitigating cold-start situations in emergency events, when time is of the essence.
crisis informatics, text classification, transfer learning
2 - 15
De-identification of Spanish healthcare free-text: not fully reliable but far better than nothing
Sabrina L. López, Luciano Silvi, Laura Alonso Alemany and Laura Ación
Sabrina Laura López
In Argentina, Electronic Health Records (EHR) have been continuously implemented, increasing the amount of this type of data. Ethical considerations arise for their reuse to address secondary research, public health, management, and policy-making questions. Health data are sensitive data according to national and international regulations (HIPAA, GDPR, etc.) because they can significantly impact people’s lives. Thus, having tools for effectively eliminating protected personal information (PPI) that could allow patient identification is a must. However, anonymization of free text in EHR is a challenging problem because such text is full of peculiarities (words outside common vocabulary, ambiguity, etc.). We present our experience in developing a de-identification algorithm for free text in the EHR of a province in Argentina. We found that it is not clear to humans what information is PPI during a manual anonymization task. As expected, automatic processes also miss cases of PPI, even more than humans do. However, a simple, rule-based approach can do a good job of removing most of the PPI, outperforming a more sophisticated, machine-learning approach in low-resource contexts. Although no process can guarantee anonymization, our method can mitigate the impact of possible data breaches of highly sensitive information.
health data, anonymization, natural language processing
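A minimal sketch of the rule-based style of de-identification described in this abstract might look as follows. The patterns and placeholders are hypothetical examples, not the authors' actual rules.

```python
import re

# Illustrative substitution rules: each regex maps a PPI pattern to a placeholder.
RULES = [
    (re.compile(r"\b\d{7,8}\b"), "<ID>"),                       # national ID-like numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),    # e-mail addresses
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "<DATE>"),           # dd/mm/yyyy dates
    (re.compile(r"\b(?:Dr|Dra|Sr|Sra)\.\s+[A-ZÁÉÍÓÚÑ]\w+"), "<NAME>"),  # honorific + name
]

def deidentify(text):
    """Apply each rule in order, replacing matched spans with placeholders."""
    for pattern, placeholder in RULES:
        text = pattern.sub(placeholder, text)
    return text

note = "Paciente Sr. Gomez, DNI 30123456, control el 05/03/2021."
print(deidentify(note))
```

As the abstract notes, no fixed rule set guarantees anonymization: out-of-vocabulary names, misspellings, and ambiguous tokens slip through, which is why human review and error measurement remain essential.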
2 - 16
DRL for 5G Inter-Slice Scheduling
Lucas Inglés
Lucas Inglés
The development of 5G networks has opened new opportunities for the integration of a wide range of applications, from the Internet of Things (IoT) to autonomous vehicles, requiring high-speed and reliable communication. However, these new use cases also impose new challenges for network operators, particularly in terms of scheduling, which is critical to ensuring optimal performance and quality of service. Traditional scheduling algorithms based on heuristics have limitations when it comes to addressing the complexity and dynamicity of 5G networks, particularly in terms of the massive number of devices and the heterogeneity of their requirements. To address these challenges, researchers have been exploring the potential of reinforcement learning (RL) as a promising approach for scheduling in 5G networks. RL is a subfield of machine learning that involves training agents to learn optimal actions through trial-and-error interactions with their environment. By leveraging RL, scheduling in 5G networks can be optimized in a data-driven manner that adapts to the changing network conditions and user demands. In this poster, we not only present a review of modern scheduling algorithms for 5G networks, but also demonstrate their performance through implementation in a 5G simulator.
5g, scheduling, reinforcement learning
2 - 17
Edge adaptive schemes and machine learning for image super-resolution
Albert Cohen, Olga Mula, Agustín Somacal
Agustín Somacal
Edge-adapted methods have been introduced in the context of image processing to reconstruct high-resolution images from coarser cell averages. In particular, when images consist of piecewise smooth functions, the interfaces can be approximated by a pre-specified functional class (lines, circle arcs, etc.) through optimization (LVIRA) or specific preprocessing (ENO-EA). In this talk, we will first explore some theoretical aspects of these nonlinear approximation spaces that are useful in the context of inverse problems. Secondly, we will show an extension of the ENO-EA approach to polynomials of degree higher than 1, and compare this algebraic approach to the one introduced in LVIRA as well as to learning-based methods [B. Després, H. Jourdren 2020] in which an artificial neural network (or, in principle, any other sufficiently rich nonlinear function family) is used to attain the same goal.
super-resolution, image-reconstruction, curve-learning
2 - 18
eXplainable Artificial Intelligence for Skin Lesion Classification
Rosa Y. G. Paccotacya-Yanque, Sandra Avila
Rosa Yuliana Gabriela Paccotacya Yanque
Deep Learning has shown outstanding results in computer vision tasks, and healthcare is no exception. Deep Learning (DL) can assist dermatologists in early skin cancer diagnosis, saving many lives. However, there is no straightforward way to map out the decision-making process of DL models. For skin cancer predictions, it is not enough to have good accuracy. Understanding the model's behavior is needed to implement it clinically and get reliable predictions. We propose desiderata for explanations in skin-lesion models and present a study of how eXplainable Artificial Intelligence (XAI) is currently used for skin lesions. We analyzed seven methods (four based on pixel attribution and three on high-level concepts): Grad-CAM, Score-CAM, LIME, SHAP, ACE, ICE, and CME, for two deep neural networks, Inception-v4 and ResNet-50, trained on the International Skin Imaging Collaboration Archive (ISIC). Our findings indicate that while these techniques effectively show what the model is looking at to make its prediction, the obtained explanations are not complete enough to provide transparency into skin-lesion models.
explainability, interpretability, medical imaging
2 - 19
Exploring the impact of chain-of-thought prompting on the commonsense reasoning performance of medium-sized LLMs
Alberto Mario Ceballos-Arroyo, Hamza Tahboub, Byron Wallace, Huaizu Jiang
Alberto Mario Ceballos Arroyo
Chain-of-thought (CoT) prompting has been purported to improve the performance of large language models on a variety of tasks. While initially being limited to massive language models (60B+ parameters), recent work has shown that this capability can be unlocked in smaller models via strategies such as instruction fine-tuning including explanations for ground-truth answers. However, it is not immediately clear whether chain-of-thought prompting has a positive impact on the performance of smaller models on tasks such as commonsense reasoning. As part of this work, we evaluate six medium-sized language models on five commonsense question answering datasets. Our results suggest that chain-of-thought prompting often does not improve the performance of such models when compared to direct answering.
large language models, natural language processing, prompting
2 - 20
Feminism and Artificial Intelligence: A Research Agenda for Latin America
Mariel Zasso, Paola Ricaurte, Jaime Alfaro Gutiérrez
Mariel Rosauro Zasso
The social effects of the development and deployment of artificial intelligence require multidisciplinary approaches to analyze AI as a socio-technical artifact embedded in social relations of power. Our effort aims to propose the need to promote multidisciplinary research from a decolonial, feminist, and anti-capitalist perspective in Latin America and the Caribbean to provide critical views on technologies in general and intelligent systems in particular, necessary to promote technological development alternatives that respond to the realities and needs of communities in the region, considering the inequality gaps, cultural, linguistic, gender, racial, social and educational differences in political contexts of corruption and impunity. Based on the experience of the Feminist Artificial Intelligence Research Network in Latin America and the Caribbean, we present a research agenda for the region, concerns, and preliminary reflections.
feminism, artificial intelligence, equity
2 - 21
It's Not Pornography, It's Abuse! Objective Visual Cues for Child Sexual Abuse Material (CSAM) Detection
Camila Laranjeira, Sandra Avila, Jefersson A. dos Santos
Camila Laranjeira da Silva
The online sharing and viewing of Child Sexual Abuse Material (CSAM) are growing fast, such that human experts can no longer handle manual inspection. However, the automatic classification of CSAM is a challenging field of research, due to the inaccessibility of target data that is — and should forever be — private and in sole possession of law enforcement agencies. In this work, we inspect a broad set of visual features from the literature on CSAM detection and from reports by law enforcement institutions. First, we provide a structured overview of content-level information from a real database, to aid researchers in drawing insights from unseen data and to safely provide further understanding of CSAM images. Later in this project, we intend to assess the predictive power of the inspected dimensions by producing several machine learning models from diverse combinations of this wide set of features. We demonstrate our proposal on the Region-based annotated Child Pornography Dataset (RCPD), one of the few CSAM benchmarks in the literature, produced in partnership with Brazil's Federal Police. Although limited in several senses, we argue that automatic signals can highlight important aspects of such datasets, which is valuable when data cannot be disclosed.
sensitive media, computer vision, child sexual abuse material
2 - 22
Kernel Latent Regularization for graph classification
Martin Palazzo
Martin Palazzo
Kernel methods have been widely used in pattern recognition for supervised learning tasks such as sufficient dimension reduction, supervised feature selection and graph classification. These methods are useful in statistical learning applications affected by the curse of dimensionality, such as biomedical data. In this work I introduce a novel approach to incorporate distribution structure as training labels in order to improve the supervised task.
kernel methods, dimension reduction, graph kernels
2 - 23
Learning Globally Smooth Functions on Manifolds
Juan Cervino, Luiz Chamon, Benjamin D Haeffele, Rene Vidal, Alejandro Ribeiro
Juan Cervino
Smoothness and low dimensional structures play central roles in improving generalization and stability in learning and statistics. The combination of these properties has led to many advances in semi-supervised learning, generative modeling, and control of dynamical systems. However, learning smooth functions is generally challenging, except in simple cases such as learning linear or kernel models. Typical methods are either too conservative, relying on crude upper bounds such as spectral normalization, too lax, penalizing smoothness on average, or too computationally intensive, requiring the solution of large-scale semi-definite programs. These issues are only exacerbated when trying to simultaneously exploit low dimensionality using, e.g., manifolds. This work proposes to overcome these obstacles by combining techniques from semi-infinite constrained learning and manifold regularization. To do so, it shows that, under typical conditions, the problem of learning a Lipschitz continuous function on a manifold is equivalent to a dynamically weighted manifold regularization problem. This observation leads to a practical algorithm based on a weighted Laplacian penalty whose weights are adapted using stochastic gradient techniques. We prove that, under mild conditions, this method estimates the Lipschitz constant of the solution, learning a globally smooth solution as a byproduct.
lipschitz, optimization, machine learning
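The weighted Laplacian penalty at the core of the proposed algorithm can be sketched in a few lines. Here the weights are fixed for illustration; in the method described they would be adapted with stochastic gradient techniques.

```python
def laplacian_penalty(values, edges, weights):
    """Weighted graph-Laplacian smoothness penalty:
    sum over edges (i, j) of w_ij * (f(x_i) - f(x_j))**2,
    where `values[k]` holds the learned function's output at node k."""
    return sum(w * (values[i] - values[j]) ** 2
               for (i, j), w in zip(edges, weights))

# Three nodes on a path graph; a constant (perfectly smooth) function pays nothing.
print(laplacian_penalty([1.0, 1.0, 1.0], [(0, 1), (1, 2)], [0.5, 0.5]))  # → 0.0
```

Adding this term to the training loss discourages large differences between the function's values at neighboring points of the manifold's proximity graph, and reweighting the edges dynamically is what lets the method target the Lipschitz constant rather than only average smoothness.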
2 - 24
Onna Diego, Fiorini Guillermo, Tobías V. Aprea, Bilmes Sara A. y Ación Laura
Diego Onna
SiO2 micro- and nanoparticles have diverse applications. A calibrated, monodisperse size is a requirement for the use of these particles in nanomedicine and for the design of new composite materials. Stöber synthesis provides such particles and is robust and scalable. Its main disadvantage is the difficulty of reaching a size that has not been synthesized before, owing to the complexity of the synthesis process. Machine learning methods are beginning to be used to assist in the synthesis of materials and can be a very useful tool to predict the outcome of Stöber synthesis. These methods require data, ideally abundant and of good quality, in order to learn and make good predictions. The aim of this work is to explore the limitations of working with secondary data reported in the literature.
nanomaterials, secondary data, machine learning
2 - 25
Machine Learning to personalize cognitive training interventions: a proof of concept
Melina Vladisauskas (1,3)*, Laouen Belloli (2,3), Diego Fernández Slezak (2,3), Andrea P. Goldin (1,3). (1) Laboratorio de Neurociencia, Universidad Torcuato Di Tella; (2) Laboratorio de Inteligencia Artificial Aplicada, Depto. de Computación, FCEyN, UBA-CONICET; (3) Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Ministry of Science, Technology and Innovation, Buenos Aires, Argentina. *[email protected]
Melina Vladisauskas
Executive functions (EF) are a class of processes critical for purposeful, goal-directed behavior. Cognitive training has been studied and applied for more than 25 years in hugely diverse settings. In spite of the accumulated evidence of its positive impact on cognition, there are still reports in the literature claiming that the potential benefits of training are not generalizable. Recently, research has come to regard individual differences as one possible cause of these inconsistencies. We consider it is time to start using individual differences as information to guide us towards finding better strategies for each person, instead of treating them as inevitable experimental noise. In a previous study, we trained a classifier that successfully predicted whether a subject would benefit, or not, from a fixed training approach, based on their performance in previous cognitive tests (accuracy = 0.67, AUC = 0.707). In this study we dissect our previous result by training an algorithm to predict specific improvements in working memory abilities, which resulted in even better performance (accuracy = 0.79). Results indicate that in order to improve working memory, children need solid prior attentional resources. We believe that understanding the implications of these results is the key to understanding how to personalize cognitive training.
cognitive training; executive functions; machine learning
2 - 26
Mini-batch sample selection techniques for data augmentation in deep learning-based motor imagery decoding
Catalina M. Galván, Rubén D. Spies, Diego H. Milone and Victoria Peterson
Catalina María Galván
Typically, in data augmentation for deep learning (DL), new training samples are generated by suitable modifications of the original data, and the model is then trained via mini-batch stochastic gradient descent. In the context of DL for brain-computer interfaces (BCIs), using simulated electroencephalography (EEG) can be a better approach for data augmentation. However, random sampling might not be appropriate for BCI applications due to data scarcity. Moreover, synthesized data can be thought of as samples from a different domain than real data, and this fact must also be taken into account when sampling for the construction of mini-batches. Here, data augmentation samples consist of subject-specific simulated MI-EEG trials, and different mini-batch sample selection techniques are evaluated: 1) random sampling; 2) domain-aware sampling (DAS), in which all mini-batches have the same ratio of real/augmented samples; 3) stratified sampling (SS), in which mini-batches are stratified by output class; and 4) stratified domain-aware sampling (SDAS), which combines DAS and SS. Results demonstrated that data augmentation strategies with mini-batch sample selection techniques that preserve class balance and the ratio of real/augmented data (SDAS) improve DL-based MI decoding, significantly outperforming the baseline.
brain-computer interface, deep learning, sample selection strategy
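The stratified domain-aware sampling (SDAS) idea can be sketched as follows: group samples into (class, domain) strata and fill each mini-batch with a fixed quota from every stratum, so both the class balance and the real/augmented ratio of the full dataset are preserved. This is an illustrative sketch, not the authors' code.

```python
import random

def sdas_batches(samples, batch_size, rng):
    """Yield mini-batches that preserve the overall proportions of
    (class, domain) strata, where the domain flag distinguishes real
    from augmented trials. Each sample is (features, label, is_augmented)."""
    strata = {}
    for s in samples:
        strata.setdefault((s[1], s[2]), []).append(s)
    n = len(samples)
    # Per-batch quota for each stratum, proportional to its share of the data.
    quotas = {k: max(1, round(batch_size * len(v) / n)) for k, v in strata.items()}
    for pool in strata.values():
        rng.shuffle(pool)
    for b in range(n // batch_size):
        batch = []
        for key, quota in quotas.items():
            batch.extend(strata[key][b * quota:(b + 1) * quota])
        yield batch

# Toy balanced dataset: 80 samples, 2 classes, half augmented.
samples = [(i, i % 2, i % 4 < 2) for i in range(80)]
batches = list(sdas_batches(samples, 8, random.Random(0)))
print(len(batches), len(batches[0]))
```

Dropping the domain key from the strata gives plain stratified sampling (SS), and keeping only the domain key gives domain-aware sampling (DAS).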
2 - 27
Multiple signal recordings in neuroscience: modeling strategies
Melisa Maidana Capitan, Adrian Aleman, Ricardo Pizarro, Arie Kim, Jeroen Bos, Lisa Gensel, Francesco Battaglia
Melisa Maidana Capitan
The development of in vivo recording technologies has driven the growth of the systems neuroscience field by allowing the simultaneous recording of multiple data modalities in complex experimental setups. These modalities are a set of time-series signals from brain activity (from silicon probes, Neuropixels, or calcium imaging) in combination with behavioral readouts (facial expression or navigation videos) and usually categorical external stimuli (external sounds, a set of somatosensory stimuli). Here we present two examples, for both single and combined data modality analysis, where standard machine learning tools are used to extract relevant features that help to describe brain activity. At the same time, we discuss why and how the development of specific models for these datasets could help to better understand the underlying processes generating complex behavior and complex brain signals. The first example consists of the classification of electrophysiological recordings into different oscillatory types (one data modality). The second example is related to the prediction of behavior/stimuli based on calcium imaging signals (a combination of data modalities).
neuroscience, time series, applied machine learning
2 - 28
Neural Networks with Quantization Constraints
Ignacio Hounie, Juan Elenter, Alejandro Ribeiro
Ignacio Hounie
Enabling low precision implementations of deep learning models, without considerable performance degradation, is necessary in resource and latency constrained settings. Moreover, exploiting the differences in sensitivity to quantization across layers can allow mixed precision implementations to achieve a considerably better computation performance trade-off. However, backpropagating through the quantization operation requires introducing gradient approximations, and choosing which layers to quantize is challenging for modern architectures due to the large search space. In this work, we present a constrained learning approach to quantization aware training. We formulate low precision supervised learning as a constrained optimization problem, and show that despite its non-convexity, the resulting problem is strongly dual and does away with gradient estimations. Furthermore, we show that dual variables indicate the sensitivity of the objective with respect to constraint perturbations. We demonstrate that the proposed approach exhibits competitive performance in image classification tasks, and leverage the sensitivity result to apply layer selective quantization based on the value of dual variables, leading to considerable performance improvements.
constraints, quantization, neural networks
2 - 29
Non-Intrusive Load Monitoring (NILM) for Very-Low-Frequency Electrical Consumption Data: A Case Study in Uruguay
Camilo Mariño
Camilo Mariño
Non-Intrusive Load Monitoring (NILM) is a technique for disaggregating the energy consumption of individual appliances from a single aggregate signal. This study presents a novel approach for NILM at very low frequencies, focusing on electric water heaters and electric vehicles in Uruguay. The study uses a dataset provided by the Uruguayan electricity company. The data is divided into daily segments, allowing a Neural Network to learn the patterns of daily energy consumption for each appliance. Our approach is one of the first to work with electric water heaters and electric vehicles at this frequency. In addition, the study also involves a classification task for identifying different appliances. The method achieves a mean absolute error of 100 W, which is a reasonable performance at very low frequencies. The proposed approach can be applied to a wide range of NILM applications in Uruguay and similar countries.
nilm, neural networks, load disaggregation
2 - 30
On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research
Luiza Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker
Luiza Amador Pozzobon
Perception of toxicity evolves over time and often differs between geographies and cultural backgrounds. Similarly, black-box APIs for detecting toxicity, such as the PerspectiveAPI, are not static but frequently retrained to change and improve on their unattended weaknesses and biases. Although prone to errors, these tools are widely used in safety research to benchmark reductions in harm for large-scale evaluations and baseline definitions. Their evolving nature, however, poses a challenge given their widespread adoption. The RealToxicityPrompts (RTP) dataset, developed to study language models' degeneration into toxicity, is composed of prompts, continuations, and the toxicity score for each, obtained from the PerspectiveAPI. Our work shows that toxicity score distributions for prompts and continuations have changed considerably from when the dataset was released to this day. This implies that previous work that either inherited automatic toxicity results from baseline papers or used the original RTP scores to stratify freshly scored sequences may not be accurate under the API's definition of toxicity at the time. Our findings suggest caution when making apples-to-apples comparisons between works, and call for more structured approaches to evaluating toxicity over time.
toxicity, evaluation, language models
2 - 31
On-device crime detection
Luis Leal
Luis Fernando Leal Hernandez
Armed robbery and other weapon-related crimes are among the most important social problems in Central America. Using a custom, manually collected dataset from news and social media, we created a deep learning computer vision algorithm to identify potential robberies, crimes, guns, and dangerous situations. The model is optimized for Google Coral boards to run on-device/edge inference in real time; it can be adapted to house cameras and car dashcams, and can potentially send alerts to nearby users.
computer-vision, edge-computing, crime-detection
2 - 32
Ontology Based Privacy Management in Health tracking Systems using Differential Privacy
Erika Guetti Suca, Flávio Soarês Corrêa da Silva
Erika Guetti Suca
The Semantic Web has the potential for serious impact on how personal data is collected and used. Automated and autonomous data collection has grown exponentially over the past few years. More personal and highly detailed data can be collected and analyzed than at any other time in history. Differential privacy offers strong and robust privacy guarantees. It is based on randomized responses to queries, which enable good estimates of accurate population statistics while preserving the privacy of the individual respondents. Our purpose is to prevent the risk of associating sensitive information with the identities of data owners, and to measure how effective differential privacy techniques are at protecting privacy within larger ontologies. To this end, in this Ph.D. project, we aim to propose a differential privacy model for ontologies. Our main contributions are: (1) development of a privacy-preserving model via aggregation of synthetic data that satisfies differential privacy; (2) analysis of the feasibility, reliability, and performance of our proposal based on case studies.
differential privacy, ontologies, health tracking
2 - 33
Portuguese CLIP: Contrastive Vision-Language Pretraining in Portuguese
Diego Moreira, Gabriel Oliveira, Alef Ferreira, Sandra Ávila and Hélio Pedrini
Diego Alysson Braga Moreira
With the advance of multimodal training, strategies such as text-guided training of image models have become very popular. One of the most influential works in this field is CLIP (Contrastive Language-Image Pre-Training), which proposes a pre-training objective that pairs captions with their most suitable images. The model is trained on images and their captions extracted from the internet, avoiding the need for human supervision and annotation of the specific concepts represented in the images. This pre-training scheme yields strong results even in zero-shot scenarios. In this work, we built a large dataset entirely in Portuguese by merging multiple sets originally in Portuguese with various translated datasets widely used in English. On this dataset, we train and evaluate a CLIP model entirely in Portuguese, evaluating aspects such as different encoders, the impact of translations, and duplicate captions. The results are compared with the original CLIP, the multilingual CLIP, and language-specific versions such as Chinese CLIP and Italian CLIP.
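The CLIP pre-training objective can be sketched as a symmetric cross-entropy over a batch's image-text similarity matrix (a NumPy illustration of the published objective, not the training code used in this work):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: row i of each matrix is a matched
    image-caption pair; every other batch element is a negative."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature        # scaled cosine similarities
    n = logits.shape[0]

    def cross_entropy(l):
        # log-softmax per row; the correct "class" for row i is column i
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Correctly paired batches score a lower loss than batches whose captions have been shuffled, which is the signal the encoders are trained on.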
multimodal, text-guided, portuguese
2 - 34
Protein EM map segmentation: from region optimization to an interactive deep learning approach
Manuel Zumbado-Corrales, Juan Esquivel-Rodríguez
Manuel Zumbado Corrales
Protein Electron Microscopy (EM) maps are critical in determining the three-dimensional structures of bio-molecules, including proteins and their interactions. The task of identifying regions that correspond to specific proteins is challenging, but crucial for gaining insight into their function and designing drugs to enhance or suppress their processes. Conventional methods of protein EM map segmentation use algorithms that assign each voxel to a region, but these can make it difficult to obtain a segmentation that maps each region to a single protein unit. Our approach extends the region optimization technique by incorporating an interactive mechanism that allows for user guidance of the segmentation process. Our deep learning model is trained on a dataset of protein EM maps and uses a convolutional neural network architecture. Results show that our approach has the potential to provide efficient and accurate segmentation compared to traditional methods, and is comparable to state-of-the-art approaches.
protein electron microscopy (em) maps, segmentation, deep learning
2 - 35
Protein Cavities Bestiario: an atlas of protein cavities
Ana Julia Velez Rueda, Franco Leonardo Bulgarelli, Gustavo Parisi, Leandro Bugnon
Ana Julia Velez Rueda
On their surface, proteins are shaped into numerous cavities and protrusions that create unique microenvironments for ligand binding, catalysis, and other biologically relevant processes. Despite their biological and biotechnological relevance and their potential impact on various research areas in medicine, drug design, and evolutionary biology, there is still no exhaustive and comprehensive research on protein cavities combining sequential, structural, and evolutionary perspectives. We propose a representative set of features that enables machine learning tasks such as clustering and classifying the different types of protein cavities into functional categories. To this end, we combined classical bioinformatics techniques for molecular characterization with unsupervised learning techniques. We created and curated a dataset containing all proteins with known structures and their features, which is currently available in CaviDB. In this dataset, we computationally predicted the cavities present in each protein and characterized them structurally and sequentially using a set of 47 features. We explored the data using nonlinear projections with UMAP and Self-Organizing Maps. Using biological information such as volume, drug content, and polarity, we were able to cluster representative cavities and use them to reconstruct protein families.
bioinformatics, clustering, drug discovery
2 - 36
Tractable Classification with Non-Ignorable Missing Data Using Generative Random Forests
Julissa Villanueva, Denis Mauá
Julissa Villanueva Llerena
Missing data is abundant in predictive tasks. Typical approaches assume that the missingness process is ignorable or non-informative and handle missing data either by marginalization or heuristically. Yet data is often missing in a non-ignorable way, which introduces bias in prediction if not treated properly. In this work, we develop a new method to perform tractable predictive inference under non-ignorable missing data using probabilistic circuits derived from Decision Tree Classifiers and a partially specified response model of missingness. We show empirically that our method delivers less biased (probabilistic) classifications than approaches that assume data are missing at random, and more determinate ones than similar existing overcautious approaches.
non-ignorable missing data, probabilistic models, tractable classification
2 - 37
Security games along trails
Mauricio Velasco, Nicolás Betancourt
Nicolas Betancourt
An essential resource in the preservation of Earth's biodiversity is keeping large natural areas protected. Unfortunately, sites of ecological interest are constantly threatened by illegal actors, and the manpower allocated to monitor and safeguard these spaces is often insufficient. The people in charge of these areas must optimally allocate patrolling resources across extensive tracts of land, and are often at a great disadvantage against the attackers. This problem has been previously studied in the literature, and most of the work either focuses on open spaces like the African savannah or requires a discretization of the protected area. These approaches do not capture the reality of South American parks where, due to the density of the vegetation and the ruggedness of the terrain, traveling is done only over a limited collection of available trails (a graph). The problem addressed in this work is the design of near-optimal patrol schedules for rangers in such graphs. We illustrate our results on the trail map of Jamacoaque, a natural reserve in Ecuador.
online learning, decision making under uncertainty, natural resource management
2 - 38
Trainable quaternion convolutional layer in Fourier domain
E. Ulises Moya-Sánchez, Abraham Sanchez, Sebastia Xambó and Ulises Cortes
Eduardo Ulises Moya Sanchez
Conventional ConvNets are easily fooled even by human-imperceptible corruptions. In this poster, we present a new trainable entry layer, the monogenic layer, which, used together with different ConvNets, outperforms regular ConvNets and adversarial training under large illumination changes.
robust learning, convolutional neural networks, feature representation
2 - 39
Ultrasound Imaging for the Detection of Thyroid Nodules: A Deep Learning Approach
Bermejo Danitza, Castañeda Benjamin
Danitza Bermejo
Accurate diagnosis of thyroid nodules is crucial in clinical practice, as it can help in the early detection of thyroid cancer. In this study, a machine learning approach for the diagnosis of thyroid nodules in ultrasound images is proposed. The method uses two modules, a Convolutional Neural Network (CNN) and a Decision Tree, to perform the classification task. Transfer learning is applied to overcome the challenge of small datasets. The dataset consists of 124 patients, 18 with thyroid nodules and 106 without, and comparative experiments were performed to evaluate the performance of the proposed method. The results show that the proposed method has a significantly higher thyroid nodule detection rate than other existing methods.
thyroid nodules, ultrasound images, deep learning
2 - 40
Benchmarking Visual Search in Natural Scenes: Extension to New Models
Travi, F.*, Ruarte, G.*, Bujia, G., Kamienkowski, J. E.
Gaston Bujia
This work represents an expansion of the recently proposed benchmark framework for evaluating visual search models in natural scenes: ViSioNS. Our previous study provided a unified format and criteria for evaluating the efficiency and similarity of different state-of-the-art visual search models to human behavior. In this expanded work, we introduce two additional models: EccNet and an adaptation of a more classical model, Entropy-Limit Minimization (ELM). By incorporating these models, we are able to provide a more comprehensive evaluation of the performance of different state-of-the-art visual search models. Our results highlight the limitations of current models and demonstrate the value of integrating different approaches with unified criteria to develop more accurate and efficient visual search algorithms. Furthermore, our work contributes to addressing the urgent need for benchmarking data and metrics to support the development of more general human visual search computational models.
visual search, eye movements, computer vision, human behavior, ideal bayesian observer, benchmark
2 - 41
Machine learning to analyze multiplexed immunofluorescence images
Paúl Hernández Herrera, Saraí G. De León Rodríguez, Cristina Aguilar Flores, Vadim Pérez Koldenkova, Adán Guerrero, Alejandra Mantilla, Ezequiel M. Fuentes-Pananá, Christopher Wood and Laura C Bonifaz
Paul Hernandez Herrera
Melanoma is the deadliest form of skin cancer. Multiplexed immunofluorescence is a technique for analyzing several markers at once; it can be used to study the dendritic cells in charge of activating the cytotoxic CD8+ T lymphocytes responsible for killing tumor cells. Analyzing multiplexed images can be time-consuming, requiring a large amount of manual labor. Here, we evaluated several machine learning approaches to automatically analyze multiplexed images in a reproducible and accessible manner.
deep learning, multiplexed images
2 - 42
A Highly Accurate Deep Learning Model for a Novel Olive Leaf Disease Classification Dataset
Erbert F. Osco-Mamani and Israel N. Chaparro-Cruz (Universidad Nacional Jorge Basadre Grohmann de Tacna, Perú)
Israel N. Chaparro-Cruz
Plant diseases can cause significant yield and quality losses, and some can even produce toxins that are hazardous to human health. Management measures against these diseases depend on adequate and fast classification. Deep learning applied to computer vision is a state-of-the-art approach with many successes in agriculture and medicine, where disease classification is a very important task. Olive leaf diseases threaten the quality of producers' harvests, so the objective of this work is to classify olive leaf diseases with deep learning in the olive harvests of La Yarada-Los Palos in the Tacna region, Peru. For this purpose, a novel, public RGB image dataset was created with leaves of olive plants affected by the most common diseases: virosis, fumagina, and nutritional deficiencies. Furthermore, we conducted extensive comparative experiments using all possible configurations of data augmentation, transfer learning, and fine-tuning to train a modified VGG16 architecture for olive leaf disease classification, achieving 100%, 98.67%, and 100% accuracy on the training, validation, and test sets, respectively. This demonstrates experimentally good compatibility with transfer learning, a very good ability of the pre-trained model to adapt its feature extraction to our dataset via fine-tuning, and very good support from data augmentation in avoiding model overfitting. We also found experimentally that fine-tuning without data augmentation is not effective. The reproducibility of the experiments is ensured by making the novel dataset and final model public on our GitHub.
olive, leaf diseases, disease classification, deep learning, transfer learning, fine-tuning, VGG16
2 - 43
Knowledge Management and Information Retrieval Research Group at ICIC (CONICET UNS)
Ana Gabriela Maguitman
Ana Gabriela Maguitman
We outline the research lines of the Knowledge Management and Information Retrieval Research Group from the Institute for Computer Science and Engineering (ICIC CONICET UNS). Our main projects include learning causal models from digital media, building stance and opinion trees from social media, and the design and application of topic-based search methods in different domains.
causal modeling, stance detection, topic-based search
2 - 44
Design and development of a remote eye tracker
Leonardo Martínez Hornak, Álvaro Gómez, Germán Capdehourat
Leonardo Martínez Hornak
Laptops are used every day for work, leisure, and other activities. People with motor disabilities, such as paralysis, are not able to fully use them. Digital solutions have been developed to help in these situations: eye tracking systems allow users to command their devices using only their eye movements. This technology is expensive for the general public and consists of specialized hardware and software. The main objective of this work is to understand how these systems work and to develop a prototype that could allow more people to use a laptop in a more affordable way.
eye tracker
2 - 45
Text Representation through Multimodal Variational Autoencoder for One-Class Learning
Marcos Paulo Silva Gôlo and Ricardo Marcondes Marcacini
Marcos Paulo Silva Gôlo
Multi-class learning (MCL) methods perform Automatic Text Classification (ATC), which requires labeling for all classes. MCL fails when there is no well-defined information about the classes, and it requires a great effort to label instances. One-Class Learning (OCL) can mitigate these limitations, since training only uses instances from one class, reducing the labeling effort and making ATC more appropriate for open-domain applications. However, OCL is more challenging due to the lack of counterexamples for model training, which requires more robust representations. Moreover, most studies use unimodal representations, even though different domains contain other information that can be used as modalities. Thus, this study proposes the Multimodal Variational Autoencoder (MVAE). MVAE is a multimodal method that learns a new representation from more than one modality, adequately capturing the characteristics of the class of interest. MVAE explores semantic, density, linguistic, and spatial information modalities. We evaluate MVAE with a few labeled instances on fake news, relevant review, and event-of-interest detection. MVAE outperforms other baselines commonly used in the one-class text learning literature, with statistically significant differences.
One-Class Classification, Text Multimodality, Text Classification, Neural Networks
2 - 46
Using NLP models for modeling brain processes during reading
Bianchi Bruno, Umfurer Alfredo, Cócaro Cesar, Fernández Slezak Diego, Kamienkowski Juan
Bruno Bianchi
The advancement of the Natural Language Processing field has allowed for the development of causal language models with a high capacity for generating text. For several years, the field of Cognitive Neuroscience has been using these models to better understand cognitive processes. In previous works, we found that models such as Ngrams and those based on LSTM networks are capable of partially modeling the predictability of words (cloze-Pred), particularly when used as a covariable to explain the eye movements of people reading natural texts. In this work, we deepen this line of research using models based on GPT2. The results show that this architecture achieves better modeling of cloze-Pred than its predecessors. We also found that fine-tuning the model on a domain-specific corpus makes its predictions less dependent on lexical frequency. We also discuss how to build a multi-domain text corpus in "Español Rioplatense".
NLP, Cognitive Neuroscience, Reading
2 - 47
GROWS - Deep Reinforcement Learning and Graph Neural Networks for Efficient Resource Allocation in 5G Networks
Martín Randall, Federico La Rocca, Pedro Casas, Pablo Belzarena
Martín Randall
The increased sophistication of next-generation mobile networks such as 5G and beyond, or Flying Ad-hoc NETworks, and the plethora of devices and novel use cases they must support, turn the already complex problem of resource allocation in wireless networks into a paramount challenge. We address the specific problem of user association, a largely explored yet open resource-allocation problem in wireless systems. We introduce GROWS, a deep reinforcement learning (DRL) approach to efficiently assign mobile users to base stations, which combines a well-known extension of Deep Q Networks (DQNs) with Graph Neural Networks (GNNs) to better model the expected-reward function. By leveraging the benefits of combining reinforcement learning and graph representation learning, GROWS learns a user-association policy that improves over currently applied assignment heuristics, both in throughput utility and in reducing user rejections.
resource allocation, wireless systems, DRL+GNN
2 - 48
A methodology to characterize bias and harmful stereotypes in natural language processing in Latin America
Laura Alonso Alemany, Luciana Benotti, Lucía González, Hernán Maina, Lautaro Martínez, Mariela Rajngewerc, Amanda Mata Rojo, Jorge Sánchez, Mauro Schilman, Alexia Halvorsen, Matías Bordone, and Beatriz Busaniche
Lucia Gonzalez
Automated decision-making systems, especially those based on natural language processing, are pervasive in our lives. They are not only behind the internet search engines we use daily, but also take on more critical roles: selecting candidates for a job, determining suspects of a crime, diagnosing autism, and more. Such automated systems make errors, which may be harmful in many ways, be it because of the severity of the consequences (as in health issues) or because of the sheer number of people they affect. When the errors made by an automated system affect one population more than another, we call the system biased. Most modern natural language technologies are based on artifacts obtained from enormous volumes of text using machine learning, namely language models and word embeddings. Since they are created by applying subsymbolic machine learning, mostly artificial neural networks, they are opaque and practically uninterpretable by direct inspection, making them very difficult to audit. In this poster we present a methodology that spells out how social scientists, domain experts, and machine learning experts can collaboratively explore biases and harmful stereotypes in word embeddings and large language models.
nlp, fairness, ethics
2 - 49
End-to-end sequence to structure: Learning to fold non-coding RNA
Leandro A. Bugnon, L. Di Persia, M. Gerard, J. Raad, E. Fenoy, S. Prochetto, A. Edera, G. Stegmayer and D. H. Milone
Leandro A. Bugnon
There are several challenges in learning sequence representations, especially with few labeled data, high class imbalance, and domain variability. In bioinformatics, determining secondary structures from biological sequences (such as RNA) is a very costly process that cannot be scaled up efficiently, limiting our ability to functionally characterize such molecules. Non-coding RNAs are relevant to numerous biological processes in medical and agro-industrial applications. Computational methods are promising for the prediction of RNA structures, but show limited capacity for modeling their wide structural diversity. We present new end-to-end approaches for secondary structure prediction from the sequence alone. To harness larger datasets of unlabeled sequences, a self-supervised encoder is proposed. An architecture based on ResNet then encodes information about each sequence position and its neighbors, and element-pair information is used to predict a connection matrix. We compared several recent methods for secondary structure prediction, with special focus on how to measure generalization capabilities, using benchmark datasets and experimentally validated sequences. By using biophysical constraints to guide learning, results improve for different types of RNAs.
sequence representation learning, structure prediction, bioinformatics
2 - 50
Word embedding similarity depends on word frequency
Francisco Valentini, Diego Fernandez Slezak, Edgar Altszyler
Edgar Altszyler
Recent research suggests that embeddings such as word2vec can encode information about word frequency. In this work, we study this under-explored phenomenon more exhaustively. We find that Skip-gram with negative sampling (SGNS), GloVe, and FastText tend to produce higher semantic similarity between high-frequency words (A). The association between frequency and similarity also appears when words are randomly permuted before training, which shows that the patterns we find are an artifact produced by the embeddings, which can encode word frequency in addition to semantics (B). Finally, we run an experiment illustrating that word frequency can strongly influence the measurement of gender bias with embeddings (C).
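The core measurement can be sketched as follows (a toy illustration with synthetic vectors, not the paper's experimental setup): compare the mean pairwise cosine similarity among the most frequent words against that among the least frequent ones.

```python
import numpy as np

def mean_similarity_by_frequency(emb, freqs, n_top=40):
    """Mean pairwise cosine similarity among the n_top most frequent
    words vs. the n_top least frequent ones."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    order = np.argsort(freqs)[::-1]             # most frequent first

    def mean_pairwise(idx):
        sims = emb[idx] @ emb[idx].T
        upper = np.triu_indices(len(idx), k=1)  # unique pairs only
        return sims[upper].mean()

    return mean_pairwise(order[:n_top]), mean_pairwise(order[-n_top:])
```

On real SGNS, GloVe, or FastText vectors, the paper's finding corresponds to the first value being systematically larger than the second.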
word embeddings, frequency, biases
2 - 51
Developing a Controllable Long Document Summarization Model / Improving LLMs with Community-Supported Data Acquisition using a Discord Bot
David Cairuz, Luisa Moura
David Cairuz, Luisa Moura
This poster is divided into two parts. The first is about Cohere's latest summarization model, currently available in public beta. This model overcomes LLM input-token limitations and gives users control over the generated summaries' length, abstraction level, and output format. The model can effectively summarize longer texts, such as academic papers, and is applicable to various use cases, including news and articles. The second part is about Cohere's Discord bot, which lets users provide feedback on the quality of texts generated by our latest models and generates data that will be used to fine-tune LLMs. Users who engage will have early access to Cohere's latest converse models and will play an important part in improving them.
LLMs, summarization, data acquisition
2 - 53
Reinforcement Learning applied to the optimal energy dispatch of the national interconnected system
Vanina Camacho, Ruben Chaer
Vanina Camacho
Reinforcement learning applied to the optimal energy dispatch of the national interconnected system, using SimSEE.
Reinforcement Learning, energy dispatch, optimization
2 - 54
Learning from Label Proportions
Andrés Muñoz Medina, Travis Dick, Róbert Busa-Fekete, Claudio Gentile
Andres Munoz Medina
Event-level data powers all current machine learning models. However, as users become more privacy-conscious, the availability of event-level data becomes sparser. Instead, to protect data privacy, models can be trained on aggregate data. In this poster, we describe techniques for training models when labels can only be observed in aggregate. We demonstrate that, with a simple trick, one can adapt pipelines that consume event-level data to instead train using aggregate label information.
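One simple trick of this kind (a sketch of a standard learning-from-label-proportions baseline, not necessarily the method on the poster): give every example its bag's positive-label proportion as a soft target and train an ordinary classifier against it, so the pipeline never sees an individual label.

```python
import numpy as np

def train_from_proportions(X, bag_ids, bag_props, lr=0.5, epochs=400):
    """Logistic regression trained with only aggregate labels: each
    example inherits its bag's label proportion as a soft target, and
    we minimise cross-entropy against those soft targets."""
    soft_y = np.array([bag_props[b] for b in bag_ids])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - soft_y) / len(soft_y)  # cross-entropy gradient
    return w
```

When bag proportions vary across bags, the soft targets carry enough signal to recover a decision boundary close to the one learned from event-level labels.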
Learning from aggregates, learning theory, privacy
2 - 55
Artificial Intelligence Strategy (Estrategia de IA)
Maximiliano Maneiro Vaz
The AI Strategy emerges as a tool for digital transformation, developed with the purpose of promoting and strengthening the responsible use of AI in the public administration. To this end, it defines 9 general principles, as well as pillars, objectives, and lines of action that facilitate and encourage the responsible use of this technology in the public sphere.
co-creation, co-participation, transparency
2 - 56
Data exploitation from customer's smart meters using AI
Sebastián Alpuy, Marcelo Rey, Fernando Santomauro, Santiago Garabedian, Ignacio Cáceres
Sebastián Alpuy
We use telemetering data, together with static data on clients and the electric grid, to improve and validate the topology of the electrical grid and to determine how our clients use air conditioning.
Energy, telemetering, smartgrid
2 - 57
Navigating Constraints: Dual Formulation RL for Autonomous Monitoring
Leopoldo Agorio, Juan Bazerque
Leopoldo Agorio
The field of Reinforcement Learning (RL) has gained significant attention in recent years due to its ability to optimize control using reward signals. In this study, we focus on a monitoring problem: monitoring regions with occupation restrictions using one autonomous agent. The problem is modeled as an RL problem in which the dual variables act as a reward signal. To solve it, we propose a novel parameterization through a neural network that processes both primal and dual variables in parallel. With this structural innovation, the network learns to select navigation policies based on the degree of constraint satisfaction; moreover, this satisfaction level is observed in real time through the dual variables. To evaluate the proposed approach, we conduct experiments with a single agent monitoring multiple regions. The results indicate that the approach can efficiently optimize control for monitoring the restricted regions with a high degree of constraint satisfaction. Our approach has significant implications for monitoring applications, where the dual variables can be used as a measure of satisfaction and the optimization is driven by RL reward signals. Our results also highlight the potential of RL techniques with dual variables for monitoring problems.
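The primal-dual mechanism can be illustrated on a toy problem (a sketch of the general technique, not the neural parameterization on the poster): an agent occupies a restricted but higher-reward region with probability p, subject to an occupation cap, while the dual variable lam penalizes violations and effectively corrects the reward.

```python
def primal_dual(r_safe=1.0, r_restricted=2.0, cap=0.3, lr=0.02, steps=20000):
    """Projected gradient ascent on the policy p, dual ascent on lam.
    Lagrangian: r_safe*(1-p) + r_restricted*p - lam*(p - cap)."""
    p, lam = 0.5, 0.0
    occupancy = []
    for _ in range(steps):
        grad_p = (r_restricted - r_safe) - lam   # reward corrected by lam
        p = min(1.0, max(0.0, p + lr * grad_p))
        lam = max(0.0, lam + lr * (p - cap))     # rises while p violates cap
        occupancy.append(p)
    tail = occupancy[steps // 2:]                # average out oscillations
    return sum(tail) / len(tail), lam
```

Although the unconstrained optimum would be p = 1 (the restricted region pays more), the time-averaged occupancy settles at the cap, and lam tracks how binding the constraint is, which is the satisfaction signal the poster reads off the dual variables.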
Reinforcement Learning, Dual Variables, Autonomous Monitoring