
Towards Uncovering the True Use of Unlabeled Data in Machine Learning

Sansone, Emanuele January 2018 (has links)
Knowing how to exploit unlabeled data is a fundamental problem in machine learning. This dissertation provides contributions in different contexts, including semi-supervised learning, positive-unlabeled learning, and representation learning. In particular, we ask (i) whether it is possible to learn a classifier in the context of limited data, (ii) whether it is possible to scale existing models for positive-unlabeled learning, and (iii) whether it is possible to train a deep generative model with a single minimization problem.

A phylogenetic framework for large-scale analysis of microbial communities

Asnicar, Francesco January 2019 (has links)
The human microbiome represents the community of archaea, bacteria, micro-eukaryotes, and viruses present in and on the human body. Metagenomics is the most recent and advanced tool that allows the study of the microbiome at high resolution by sequencing the whole genetic content of a biological sample. The computational side of the metagenomic pipeline is recognized as the most challenging one, as it needs to process large amounts of data coming from next-generation sequencing technologies to obtain accurate profiles of the microbiomes. Among all the analyses that can be performed, phylogenetics allows researchers to study microbial evolution, resolve strain-level relationships between microbes, and taxonomically place and characterize novel and unknown microbial genomes. This thesis presents a novel computational phylogenetic approach implemented during my doctoral studies. The aims of the work range from the high-quality visualization of large phylogenies to the reconstruction of phylogenetic trees at unprecedented scale and resolution. Large-scale and accurate phylogeny reconstruction is crucial in tracking species at strain-level resolution across samples and in phylogenetically characterizing unknown microbes by placing their genomes, reconstructed via metagenomic assembly, into a large reference phylogeny. The proposed computational phylogenetic framework has been used in several different metagenomic analyses, improving our understanding of the complexity of microbial communities. It proved, for example, to be crucial in the detection of vertical transmission events from mothers to infants and in the placement of thousands of unknown metagenome-reconstructed genomes, leading to the definition of many new candidate species. This lays the foundation for larger-scale and more accurate analyses of the microbiome.

The Dao of Wikipedia: Extracting Knowledge from the Structure of Wikilinks

Consonni, Cristian 24 October 2019 (has links)
Wikipedia is a multilingual encyclopedia written collaboratively by volunteers online, and it is now the largest, most visited encyclopedia in existence. Wikipedia has arisen through the self-organized collaboration of contributors, and since its launch in January 2001, its potential as a research resource has become apparent to scientists. Its appeal lies in the fact that it strikes a middle ground between accurate, manually created, limited-coverage resources and noisy knowledge mined from the web. For this reason, Wikipedia's content has been exploited for a variety of applications: to build knowledge bases, to study interactions between users on the Internet, and to investigate social and cultural issues such as gender bias in history or the spreading of information. Similarly to what happened for the Web at large, a structure has emerged from the collaborative creation of Wikipedia: its articles contain hundreds of millions of links. In Wikipedia parlance, these internal links are called wikilinks. These connections indicate the topics covered in articles and provide a way to navigate between different subjects, contextualizing the information and making additional information available. In this thesis, we argue that the information contained in the link structure of Wikipedia can be harnessed to gain useful insights by extracting it with dedicated algorithms. More prosaically, in this thesis we explore the link structure of Wikipedia with new methods. In the first part, we discuss in depth the characteristics of Wikipedia, and we describe the process and challenges we have faced to extract the network of links. Since Wikipedia is available in several language editions and its entire edit history is publicly available, we have extracted the wikilink network at various points in time, and we have performed data integration to improve its quality. In the second part, we show that the wikilink network can be effectively used to find the pages most relevant to an article provided by the user. We introduce a novel algorithm, called CycleRank, that takes advantage of the link structure of Wikipedia by considering cycles of links, thus giving weight to both incoming and outgoing connections, to produce a ranking of articles with respect to an article chosen by the user. In the last part, we explore applications of CycleRank. First, we describe the Engineroom EU project, where we faced the challenge of finding the most relevant Wikipedia pages connected to the Wikipedia article about the Internet. Finally, we present another contribution that uses Wikipedia article accesses to estimate how information about diseases propagates. In conclusion, with this thesis we want to show that browsing Wikipedia's wikilinks is not only fascinating and serendipitous, but also an effective way to extract useful information that is latent in the user-generated encyclopedia.
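As an illustration of the idea behind CycleRank, the sketch below scores the nodes of a toy directed wikilink graph by enumerating bounded-length cycles that pass through a chosen reference article. This is a minimal sketch, not the thesis's implementation: the toy graph, the length bound, and the 1/length weighting are assumptions for illustration only.

```python
# Minimal CycleRank-style sketch: enumerate cycles of bounded length that pass
# through a reference node in a directed graph, then score every other node by
# how often it appears in those cycles (shorter cycles contribute more).
from collections import defaultdict

def cyclerank_sketch(adj, reference, max_len=3):
    """adj: dict node -> set of successor nodes (directed wikilink graph)."""
    scores = defaultdict(float)

    def dfs(node, path):
        if len(path) > max_len:
            return
        for nxt in adj.get(node, ()):
            if nxt == reference and len(path) > 1:
                # Found a cycle through the reference node: reward its members.
                for member in path[1:]:
                    scores[member] += 1.0 / len(path)
            elif nxt not in path:
                dfs(nxt, path + [nxt])

    dfs(reference, [reference])
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy wikilink graph: edges point from a page to the pages it links to.
graph = {
    "Internet": {"Web", "Protocol"},
    "Web": {"Internet", "Browser"},
    "Browser": {"Web"},
    "Protocol": {"Internet"},
}
print(cyclerank_sketch(graph, "Internet"))
```

Nodes that sit on many short cycles with the reference article rank highest, which captures the intuition of weighting both incoming and outgoing connections at once.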

Advanced methods for simulation-based performance assessment and analysis of radar sounder data

Donini, Elena 06 May 2021 (has links)
Radar Sounders (RSs) are active sensors that transmit nadir-looking electromagnetic (EM) waves at low frequencies, in the High-Frequency and Very-High-Frequency ranges, with a relatively wide bandwidth. Such a signal penetrates the surface and propagates in the subsurface, interacting with dielectric interfaces. This interaction yields backscattered echoes detectable by the antenna, which are coherently summed and stored in radargrams. RSs are used for planetary exploration and Earth observation for their value in investigating subsurface geological structures and processes, which reveal the past geomorphological history and possible future evolution. RS instruments have several parameter configurations that have to be designed to achieve the mission science goals. On Mars, visual analyses of radargrams revealed icy layered deposits and evidence of liquid water at the poles. On Earth, RSs revealed relevant structures and processes in the cryosphere and in arid areas that help monitor subsurface geological evolution, which is critical for climate-change studies. Despite these valuable results, visual analysis is subjective and not feasible for processing large amounts of data. Therefore, a need emerges for automatic methods that extract fast and reliable information from radargrams. The thesis addresses two main open issues in the radar-sounding literature: i) assessing target detectability in simulated orbiting radargrams to guide the design of RS instruments, and ii) designing automatic methods for information extraction from RS data. The RS design is based on assessing the performance of a given instrument parameter configuration in achieving the mission science goals and detecting critical targets. The assessment guides the parameter selection by determining the appropriate trade-off between the achievable performance and technical limitations. We propose assessing the detectability of subsurface targets (e.g., englacial layering and the basal interface) from satellite radar sounders with novel performance metrics. This performance assessment strategy can be applied to guide the design of the SNR budget at the surface, which can further support the selection of the main EORS instrument parameters. The second contribution is the design of automatic methods for analyzing radargrams based on fuzzy logic and deep learning. The first method aims at identifying buried cavities, such as lava tubes, by exploiting their geometric and EM models. A fuzzy system is built on the model that detects candidate reflections from the surface and the lava tube boundary. The second and third proposed methods are based on deep learning, which has shown groundbreaking results in several applications. We contribute an automatic technique for analyzing radargrams acquired in icy areas to investigate the basal layer. To this end, radargrams are segmented with a deep learning network into classes from the literature, including englacial layers, bedrock, echo-free zone (EFZ) and thermal noise, as well as the new classes of basal ice and signal perturbation. The third method proposes an unsupervised segmentation of radargrams with deep learning for detecting subsurface features. Qualitative and quantitative experimental results obtained on planetary and terrestrial radargrams confirm the effectiveness of the proposed methods, which investigate new subsurface targets and improve accuracy compared with state-of-the-art methods.
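To make the fuzzy-logic idea concrete, here is a hypothetical sketch of how candidate lava-tube reflections could be scored by combining membership functions with a fuzzy AND. The features (echo power, depth, parallelism), membership shapes, and thresholds are invented; the thesis builds its fuzzy system on the actual geometric and EM models of lava tubes.

```python
# Hypothetical fuzzy scoring of candidate lava-tube reflections.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises on [a, b], flat on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def lava_tube_score(echo_power_db, depth_m, parallelism):
    # Strong echo, plausible roof depth, and roof/floor reflections roughly
    # parallel to the surface echo all raise the score; fuzzy AND = min.
    strong = trapezoid(echo_power_db, -25, -15, 0, 5)
    plausible_depth = trapezoid(depth_m, 5, 20, 150, 300)
    parallel = trapezoid(parallelism, 0.5, 0.8, 1.0, 1.01)
    return min(strong, plausible_depth, parallel)

print(lava_tube_score(echo_power_db=-10, depth_m=60, parallelism=0.9))  # strong candidate
print(lava_tube_score(echo_power_db=-30, depth_m=60, parallelism=0.9))  # rejected
```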

Cognitively Guided Modeling of Visual Perception in Intelligent Vehicles

Plebe, Alice 20 April 2021 (has links)
This work proposes a strategy for visual perception in the context of autonomous driving. Despite the growing research aiming to implement self-driving cars, no artificial system can yet claim to have reached the driving performance of a human. Humans---when not distracted or drunk---are still the best drivers you can currently find. Hence, theories about the human mind and its neural organization could reveal precious insights into how to design a better autonomous driving agent. This dissertation focuses specifically on the perceptual aspect of driving, and it takes inspiration from four key theories on how the human brain achieves the cognitive capabilities required by the activity of driving. The first idea lies at the foundation of current cognitive science, and it argues that thinking nearly always involves some sort of mental simulation, which takes the form of imagery when dealing with visual perception. The second theory explains how this perceptual simulation takes place in neural circuits called convergence-divergence zones, which expand and compress information to extract abstract concepts from visual experience and code them into compact representations. The third theory highlights that perception---when specialized for a complex task such as driving---is refined by experience in a process called perceptual learning. The fourth theory, namely the free-energy principle of predictive brains, corroborates the role of visual imagination as a fundamental mechanism of inference. In order to implement these theoretical principles, it is necessary to identify the most appropriate computational tools currently available. Within the consolidated and successful field of deep learning, I select the artificial architectures and strategies that bear the strongest resemblance to their cognitive counterparts. Specifically, convolutional autoencoders have a strong correspondence with the architecture of convergence-divergence zones and the process of perceptual abstraction. The free-energy principle of predictive brains is related to variational Bayesian inference and the use of recurrent neural networks. In fact, this principle can be translated into a training procedure that learns abstract representations predisposed to predicting how the current road scenario will change in the future. The main contribution of this dissertation is a method to learn conceptual representations of the driving scenario from visual information. This approach forces a semantic internal organization, in the sense that distinct parts of the representation are explicitly associated with specific concepts useful in the context of driving. Specifically, the model uses as few as 16 neurons for each of the two basic concepts considered here: vehicles and lanes. At the same time, the approach biases the internal representations towards the ability to predict the dynamics of objects in the scene. This property of temporal coherence allows the representations to be exploited to predict plausible future scenarios and to perform a simplified form of mental imagery. In addition, this work includes a proposal to tackle the problem of opaqueness affecting deep neural networks. I present a method that aims to mitigate this issue in the context of longitudinal control for automated vehicles. A further contribution of this dissertation experiments with higher-level spaces of prediction, such as occupancy grids, which could reconcile direct application to motor control with biological plausibility.
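A minimal sketch of the kind of architecture described above: a convolutional autoencoder whose latent code is split into named 16-dimensional slices, one per driving concept (vehicles and lanes). Only the 16-units-per-concept partition comes from the abstract; the use of PyTorch, the input size, the layer widths, and the omission of the temporal-prediction losses are assumptions.

```python
# Convolutional autoencoder with a latent code partitioned into named concept slices.
import torch
import torch.nn as nn

class ConceptAutoencoder(nn.Module):
    def __init__(self, concepts=("vehicles", "lanes"), dims_per_concept=16):
        super().__init__()
        self.concepts = concepts
        self.dims = dims_per_concept
        latent = dims_per_concept * len(concepts)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        # Expose the latent code as named per-concept slices so each group of
        # 16 units can be supervised toward its own concept (vehicles, lanes, ...).
        slices = {c: z[:, i * self.dims:(i + 1) * self.dims]
                  for i, c in enumerate(self.concepts)}
        return self.decoder(z), slices

model = ConceptAutoencoder()
recon, parts = model(torch.rand(1, 3, 64, 64))
print(recon.shape, {k: v.shape for k, v in parts.items()})
```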

Semantic Image Interpretation - Integration of Numerical Data and Logical Knowledge for Cognitive Vision

Donadello, Ivan January 2018 (has links)
Semantic Image Interpretation (SII) is the process of generating a structured description of the content of an input image. This description is encoded as a labelled directed graph where nodes correspond to objects in the image and edges to semantic relations between objects. Such a detailed structure allows more accurate searching and retrieval of images. In this thesis, we propose two well-founded methods for SII. Both methods exploit background knowledge about the domain of the images, in the form of logical constraints of a knowledge base. The first method formalizes SII as the extraction of a partial model of a knowledge base. Partial models are built with a clustering and reasoning algorithm that considers both low-level and semantic features of images. The second method uses the Logic Tensor Networks framework to build the labelled directed graph of an image. This framework is able to learn from data in the presence of the logical constraints of the knowledge base. Therefore, the graph construction is performed by predicting the labels of the nodes and the relations according to the logical constraints and the features of the objects in the image. These methods improve the state of the art by introducing two well-founded methodologies that integrate low-level and semantic features of images with logical knowledge. Indeed, other methods do not deal with low-level features or use only statistical knowledge coming from training sets or corpora. Moreover, the second method outperforms the state of the art on the standard task of visual relationship detection.
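For concreteness, a small sketch of the output structure described above: a labelled directed graph whose nodes are objects detected in the image and whose edges are semantic relations between them. The example objects, bounding boxes, and the "rides" relation are invented for illustration.

```python
# Labelled directed graph for Semantic Image Interpretation (illustrative only).
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)   # node id -> (class label, bounding box)
    edges: list = field(default_factory=list)   # (subject id, relation label, object id)

    def add_object(self, oid, label, bbox):
        self.nodes[oid] = (label, bbox)

    def add_relation(self, subj, rel, obj):
        self.edges.append((subj, rel, obj))

g = SceneGraph()
g.add_object("o1", "person", (12, 30, 80, 200))
g.add_object("o2", "horse", (70, 60, 220, 210))
g.add_relation("o1", "rides", "o2")     # directed edge labelled with a semantic relation
print(g.nodes, g.edges)
```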

A New Design For the Support of Collaborative Care Work in Nursing Homes

Ceschel, Francesco January 2018 (has links)
Nursing homes are complex healthcare settings that take care of older adults with severe cognitive and physical impairments. Given the conditions of the patients, nursing homes can be considered end-of-life contexts. There, the care work that aims to mitigate and treat the patients' conditions is the result of collaboration between care professionals and the patients' relatives. Indeed, when the patients are very old adults in an end-of-life situation, the provision of care often involves a family caregiver as the main point of contact for the healthcare service. However, caring for institutionalized older adults is known to be a complex issue both for the families of the older adults and for the care professionals. Over the last few years, there has been an increasing interest in this topic, primarily due to a growing older population and, hence, a heightened need for research contributions in this area. Previous studies on caregiving for older adults living in nursing homes recognize the necessity of supporting professionals' work practices to ameliorate their working conditions and decrease the risk of burnout and job dissatisfaction, as well as to relieve the families of the patients from the burden of caring for their loved ones. Yet, the literature shows a lack of technological solutions for this kind of environment. In this thesis we report an extensive study and analysis performed within a network of six nursing homes located in northern Italy. We investigated the practice of caregiving within the nursing homes. In particular, we focused on the work practices of care professionals and on the relational issues between the care professionals and the families of the patients. We first conducted an exploratory study to understand the nature of our research context. Afterwards, we carried out a series of participatory design sessions and validation workshops to elicit the requirements for the development of a new technology platform to support the collaboration between care professionals and relatives of older patients. The outcomes of this work shed new light on the opportunities of using ICT solutions to improve relations and information sharing among caregivers. Indeed, our findings show that, given the organizational and relational complexity of nursing homes, poor communication practices hinder collaboration and mutual understanding between the relatives of the patients and the care professionals. As a result, we deliver a series of functional requirements for the development of a technology platform that aims to support relationships, communication, and coordination among care professionals, and between care professionals and the families of the patients.

Real-time adaptation of stimulation protocols for neuroimaging studies

Kalinina, Elena January 2018 (has links)
Neuroimaging techniques allow us to acquire images of the brain engaged in cognitive tasks. In traditional neuroimaging studies, the brain response to external stimulation is investigated. The stimulation categories, the order in which they are presented to the subject, and the presentation duration are defined in the stimulation protocol. The protocol is fixed before the beginning of the study and does not change in the course of the experiment. Recently, there has been a major rise in the number of real-time neuroscientific experiments where the incoming brain data is analysed in an online mode. Real-time neuroimaging studies open an avenue for approaching a whole new range of questions, such as how the outcome of a cognitive task depends on the current brain state. Real-time experiments need a different type of protocol that can be flexibly and interactively adjusted in line with the experimental scope, e.g. hypothesis testing or optimising the design for an individual subject's parameters. A plethora of methods is currently deployed for protocol adaptation: information theory, optimisation algorithms, genetic algorithms. What is lacking, however, is a paradigm for interacting with the subject's state, the brain state in particular. I address this problem in my research. I have concentrated on two types of real-time experiments: closed-loop stimulation experiments and brain-state-dependent stimulation (BSDS). As the first contribution, I put forward a method for closed-loop stimulation adaptation and apply it in a real-time Galvanic Skin Response (GSR) experimental setting. The second contribution is an unsupervised method for brain state detection and a real-time functional Magnetic Resonance Imaging (rtfMRI) setup making use of this method. In a neurofeedback setting the goal is for the subject to achieve a target state. Ideally, the stimulation protocol should be adapted to the subject to better guide them towards that state. One way to do this would be to model the subject's activity so that we can evaluate the effect of various stimulation options and choose the optimal ones, maximising the reward or minimising the error. However, developing such models for neuroimaging neurofeedback experiments currently presents a number of challenges, namely the complex dynamics of a very noisy neural signal and the non-trivial mapping between neural and cognitive processes. We designed a simpler experiment as a proof of concept using the GSR signal. We showed that if it is possible to model the subject's state and the dynamics of the system, it is also possible to steer the subject towards the desired state. In BSDS, there is no target state, but the challenge lies in the most accurate identification of the subject's state at any given moment. The reference, state-of-the-art method for determining the current brain state is the use of machine learning classifiers, or multivariate decoding. However, running supervised machine learning classifiers on neuroimaging data has a number of issues that might seriously limit their application, especially in real-time scenarios. For BSDS, we show how an unsupervised machine learning algorithm (clustering in real time) can be employed with fMRI data to determine the onset of an activated brain state. We also developed a real-time fMRI setup for BSDS that uses this method. In an initial attempt to base BSDS on brain decoding, we encountered a set of issues related to classifier use. These issues prompted us to develop a new set of methods based on statistical inference that help address fundamental neuroscientific questions. These methods are presented as a secondary contribution of the thesis.
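The sketch below illustrates the unsupervised, real-time flavour of the BSDS contribution: incoming (simulated) fMRI volumes are clustered incrementally, and a change of cluster label over time is taken as a candidate state onset. The use of MiniBatchKMeans, the window size, and the simulated data are assumptions, not the thesis's actual rtfMRI setup.

```python
# Incremental clustering of incoming fMRI volumes to flag a candidate brain-state onset.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

n_voxels = 5000
window = 10                      # number of recent volumes used for clustering
clusterer = MiniBatchKMeans(n_clusters=2, random_state=0)
history = []

def on_new_volume(volume):
    """Called once per acquired volume (a flattened voxel vector)."""
    history.append(volume)
    if len(history) < window:    # wait for a few volumes before clustering
        return None
    clusterer.partial_fit(np.vstack(history[-window:]))
    # A change of the predicted label over time marks a candidate state onset.
    return int(clusterer.predict(volume.reshape(1, -1))[0])

rng = np.random.default_rng(0)
for t in range(30):
    vol = rng.normal(size=n_voxels) + (3.0 if t >= 15 else 0.0)  # simulated activation from t=15
    print(t, on_new_volume(vol))
```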

Automatic Design Space Exploration of Fault-tolerant Embedded Systems Architectures

Tierno, Antonio 26 January 2023 (has links)
Embedded systems may have competing design objectives, such as maximizing reliability, increasing functional safety, minimizing product cost, and minimizing energy consumption. Architectures must therefore be configured to meet varied requirements and multiple design objectives. In particular, reliability and safety are receiving increasing attention. Consequently, the configuration of fault-tolerance mechanisms is a critical design decision. This work proposes a method for the automatic selection of appropriate fault-tolerant design patterns that optimizes multiple objective functions simultaneously. Firstly, we present an exact method that leverages the power of Satisfiability Modulo Theories to encode the problem with a symbolic technique. It is based on a novel assessment of reliability that is part of the evaluation of alternative designs. Afterwards, we empirically evaluate the performance of a near-optimal approximation variant that allows us to solve the problem even when the instance size makes it intractable in terms of computing resources. The efficiency and scalability of the method are validated with a series of experiments of different sizes and characteristics, and by comparing it with existing methods on a test problem that is widely used in the reliability optimization literature.
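As a hedged illustration of an SMT-based encoding, the sketch below uses the Z3 optimizer to pick one fault-tolerance pattern per component so that a reliability target is met at minimum cost. The components, patterns, costs, reliabilities, and the reduction to a single objective under a reliability constraint are all assumptions; the thesis optimizes multiple objectives simultaneously with its own reliability assessment.

```python
# Hypothetical SMT-based selection of fault-tolerance patterns with z3-solver.
import math
from z3 import Optimize, Bool, If, Sum, RealVal, is_true, sat

patterns = {  # pattern -> (cost, reliability); invented values
    "none":           (0.0, 0.90),
    "duplex":         (2.0, 0.99),
    "triple_modular": (3.0, 0.999),
}
components = ["sensor", "controller", "actuator"]
target_reliability = 0.97

opt = Optimize()
select = {(c, p): Bool(f"{c}_{p}") for c in components for p in patterns}

# Exactly one fault-tolerance pattern per component.
for c in components:
    opt.add(Sum([If(select[c, p], 1, 0) for p in patterns]) == 1)

# Work in log-space so the product of reliabilities becomes a linear sum.
log_rel = Sum([If(select[c, p], RealVal(math.log(r)), RealVal(0))
               for c in components for p, (_, r) in patterns.items()])
opt.add(log_rel >= RealVal(math.log(target_reliability)))

# Minimize the total cost of the selected patterns.
cost = Sum([If(select[c, p], RealVal(k), RealVal(0))
            for c in components for p, (k, _) in patterns.items()])
opt.minimize(cost)

if opt.check() == sat:
    m = opt.model()
    print({c: p for (c, p), v in select.items() if is_true(m.evaluate(v))})
```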

Designing Video Games to Crowdsource Linguistic Annotations

Bonetti, Federico 19 May 2022 (has links)
This PhD thesis explores gamification strategies concerning video games for crowdsourcing, in particular for linguistic annotation. First, a categorization of the current approaches is proposed. In doing so, a new framework is provided to analyse and understand different game design strategies and their impact on linguistic annotation tasks. Two artefacts are developed to test and validate the framework: Spacewords, a 2D space-shooter game, and High School Superhero, a 3D role-playing game. In particular, research questions and hypotheses concerning so-called orthogonal mechanics are tested; previous research defines these as game mechanics that, although similar to those found in commercial games, can hinder the annotation process by adding a layer of challenge and unpredictability. The artefacts are employed for three tasks: synonymy, linguistic acceptability, and abusive language annotation. It is found that some challenging game-like features slightly improve the precision measure in certain circumstances. Experiments also suggest that motivation to play may be improved by in-game resources or collectible elements. Finally, in High School Superhero a mismatch is observed between the judgements given by linguists and by players. The approach adopted in this work is intended to pave the way to a more insightful use of gaming elements inspired by entertainment games in the context of games with a purpose for linguistic annotation.
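As a small illustration of the precision measure mentioned above, the sketch below computes the fraction of player-flagged items that expert linguists also flagged; the data are invented and the thesis's evaluation protocol may differ.

```python
# Precision of player annotations against expert (linguist) judgements.
def precision(player_flags, expert_flags):
    flagged = [i for i, f in enumerate(player_flags) if f]
    if not flagged:
        return 0.0
    agreed = sum(1 for i in flagged if expert_flags[i])
    return agreed / len(flagged)

# 1 = "abusive" / "unacceptable", 0 = otherwise, per annotated sentence.
players = [1, 0, 1, 1, 0, 1]
experts = [1, 0, 0, 1, 0, 1]
print(precision(players, experts))  # 0.75
```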
