121

Pushing Forward Distributed Positioning Systems: Unleashing the Potential of Ultrawide-Band Networks

Santoro, Luca 19 April 2024 (has links)
This doctoral thesis presents a comprehensive exploration of ultra-wideband (UWB) technology in addressing diverse challenges within localization systems. Beginning with the development of an innovative, cost-effective, and anonymous contact-tracing solution for industrial environments during the COVID-19 pandemic, the research integrates UWB positioning, Bluetooth Low Energy, and inertial measurement units. The subsequent sections delve into relative positioning systems, device-free localization, UWB bistatic radar sensors, and UAV-based tracking, showcasing novel methodologies and hardware implementations with promising outcomes. The work extends to groundbreaking approaches in deploying UWB infrastructure through self-deployable robots and cooperative positioning schemes using a UAV swarm. The contributions highlight versatility, cost-effectiveness, and scalability, opening new possibilities for applications in security, logistics, IoT services, and space exploration. In summary, this thesis represents a significant advancement in localization systems, offering practical solutions and paving the way for future research and applications.
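As a rough, hedged illustration of the time-of-flight ranging principle that UWB positioning builds on (not code from the thesis), the sketch below turns single-sided two-way-ranging timestamps into a distance estimate; the timing values and names are invented for the example.

```python
# Single-sided two-way ranging (SS-TWR): a hedged sketch of the basic
# UWB ranging principle, not the thesis implementation.
C = 299_792_458.0  # speed of light, m/s

def ss_twr_distance(t_round: float, t_reply: float) -> float:
    """Estimate distance from one poll/response exchange.

    t_round: time from poll TX to response RX at the initiator (s)
    t_reply: processing delay at the responder (s)
    """
    tof = (t_round - t_reply) / 2.0  # one-way time of flight
    return C * tof

# Hypothetical timestamps: ~10 m separation plus a 200-microsecond reply delay
print(ss_twr_distance(t_round=200.0667e-6, t_reply=200.0e-6))  # ~10 m
```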
122

A phylogenetic framework for large-scale analysis of microbial communities

Asnicar, Francesco January 2019 (has links)
The human microbiome represents the community of archaea, bacteria, micro-eukaryotes, and viruses present in and on the human body. Metagenomics is the most recent and advanced tool that allows the study of the microbiome at high resolution by sequencing the whole genetic content of a biological sample. The computational side of the metagenomic pipeline is recognized as the most challenging one, as it needs to process large amounts of data coming from next-generation sequencing technologies to obtain accurate profiles of the microbiomes. Among all the analyses that can be performed, phylogenetics allows researchers to study microbial evolution, resolve strain-level relationships between microbes, and taxonomically place and characterize novel and unknown microbial genomes. This thesis presents a novel computational phylogenetic approach implemented during my doctoral studies. The aims of the work range from the high-quality visualization of large phylogenies to the reconstruction of phylogenetic trees at unprecedented scale and resolution. Large-scale and accurate phylogeny reconstruction is crucial in tracking species at strain-level resolution across samples and in phylogenetically characterizing unknown microbes by placing their genomes, reconstructed via metagenomic assembly, into a large reference phylogeny. The proposed computational phylogenetic framework has been used in several different metagenomic analyses, improving our understanding of the complexity of microbial communities. It proved, for example, to be crucial in the detection of vertical transmission events from mothers to infants and for the placement of thousands of unknown metagenome-reconstructed genomes, leading to the definition of many new candidate species. This lays the foundation for larger-scale and more accurate analyses of the microbiome.
123

The Dao of Wikipedia: Extracting Knowledge from the Structure of Wikilinks

Consonni, Cristian 24 October 2019 (has links)
Wikipedia is a multilingual encyclopedia written collaboratively by volunteers online, and it is now the largest, most visited encyclopedia in existence. Wikipedia has arisen through the self-organized collaboration of contributors, and since its launch in January 2001, its potential as a research resource has become apparent to scientists. Its appeal lies in the fact that it strikes a middle ground between accurate, manually created, limited-coverage resources and noisy knowledge mined from the web. For this reason, Wikipedia's content has been exploited for a variety of applications: to build knowledge bases, to study interactions between users on the Internet, and to investigate social and cultural issues such as gender bias in history or the spreading of information. Similarly to what happened for the Web at large, a structure has emerged from the collaborative creation of Wikipedia: its articles contain hundreds of millions of links. In Wikipedia parlance, these internal links are called wikilinks. These connections reflect the topics covered in the articles and provide a way to navigate between different subjects, contextualizing the information and making additional information available. In this thesis, we argue that the information contained in the link structure of Wikipedia can be harnessed to gain useful insights by extracting it with dedicated algorithms. More prosaically, in this thesis, we explore the link structure of Wikipedia with new methods. In the first part, we discuss in depth the characteristics of Wikipedia, and we describe the process and challenges we have faced to extract the network of links. Since Wikipedia is available in several language editions and its entire edit history is publicly available, we have extracted the wikilink network at various points in time, and we have performed data integration to improve its quality. In the second part, we show that the wikilink network can be effectively used to find the most relevant pages related to an article provided by the user. We introduce a novel algorithm, called CycleRank, that takes advantage of the link structure of Wikipedia by considering cycles of links, thus giving weight to both incoming and outgoing connections, to produce a ranking of articles with respect to an article chosen by the user. In the last part, we explore applications of CycleRank. First, we describe the Engineroom EU project, where we faced the challenge of finding the most relevant Wikipedia pages connected to the Wikipedia article about the Internet. Finally, we present another contribution that uses Wikipedia article accesses to estimate how information about diseases propagates. In conclusion, with this thesis we wanted to show that browsing Wikipedia's wikilinks is not only fascinating and serendipitous, but also an effective way to extract useful information that is latent in the user-generated encyclopedia.
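To give a flavour of the cycle-based ranking idea described above, here is a small, simplified sketch: it enumerates simple cycles through a reference article up to a length bound and scores the articles appearing in them, weighting shorter cycles more. The weighting, the length bound, and the toy wikilink graph are illustrative assumptions, not the published CycleRank definition.

```python
# Simplified sketch of cycle-based ranking: enumerate simple cycles through a
# reference node (up to a length bound) and score the other nodes appearing in
# them, weighting short cycles more. Weighting and bound are illustrative.
from collections import defaultdict

def cycle_rank(graph: dict[str, set[str]], ref: str, max_len: int = 4) -> dict[str, float]:
    scores: dict[str, float] = defaultdict(float)

    def dfs(node: str, path: list[str]) -> None:
        if len(path) > max_len:
            return
        for nxt in graph.get(node, ()):
            if nxt == ref and len(path) > 1:          # closed a cycle through ref
                for member in path[1:]:               # skip ref itself
                    scores[member] += 1.0 / len(path)  # shorter cycles weigh more
            elif nxt not in path:
                dfs(nxt, path + [nxt])

    dfs(ref, [ref])
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

# Toy wikilink graph with hypothetical article names
wikilinks = {
    "Internet": {"Web", "Protocol"},
    "Web": {"Internet", "Browser"},
    "Browser": {"Web"},
    "Protocol": {"Internet"},
}
print(cycle_rank(wikilinks, "Internet"))
```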
124

Advanced methods for simulation-based performance assessment and analysis of radar sounder data

Donini, Elena 06 May 2021 (has links)
Radar Sounders (RSs) are active sensors that transmit electromagnetic (EM) waves toward the nadir at low frequencies, in the high-frequency and very-high-frequency ranges, with a relatively wide bandwidth. Such a signal penetrates the surface and propagates in the subsurface, interacting with dielectric interfaces. This interaction yields backscattered echoes, detectable by the antenna, that are coherently summed and stored in radargrams. RSs are used for planetary exploration and Earth observation for their value in investigating subsurface geological structures and processes, which reveal the past geomorphological history and possible future evolution. RS instruments have several parameter configurations that have to be designed to achieve the mission science goals. On Mars, visual analyses of radargrams revealed icy layered deposits and evidence of liquid water at the poles. On Earth, RSs revealed relevant structures and processes in the cryosphere and in arid areas that help monitor the subsurface geological evolution, which is critical in the context of climate change. Despite these valuable results, visual analysis is subjective and not feasible for processing large amounts of data. Therefore, a need emerges for automatic methods that extract fast and reliable information from radargrams. The thesis addresses two main open issues of the radar-sounding literature: i) assessing target detectability in simulated orbiting radargrams to guide the design of RS instruments, and ii) designing automatic methods for information extraction from RS data. The RS design is based on assessing the performance of a given instrument parameter configuration in achieving the mission science goals and detecting critical targets. The assessment guides the parameter selection by determining the appropriate trade-off between the achievable performance and technical limitations. We propose assessing the detectability of subsurface targets (e.g., englacial layering and the basal interface) from satellite radar sounders with novel performance metrics. This performance assessment strategy can be applied to guide the design of the SNR budget at the surface, which can further support the selection of the main EORS instrument parameters. The second contribution is the design of automatic methods for analyzing radargrams based on fuzzy logic and deep learning. The first method aims at identifying buried cavities, such as lava tubes, by exploiting their geometric and EM models. A fuzzy system is built on the model that detects candidate reflections from the surface and the lava tube boundary. The second and third proposed methods are based on deep learning, as it has shown groundbreaking results in several applications. We contribute an automatic technique for analyzing radargrams acquired in icy areas to investigate the basal layer. To this end, radargrams are segmented with a deep learning network into the literature classes, including englacial layers, bedrock, echo-free zone (EFZ), and thermal noise, as well as the new classes of basal ice and signal perturbation. The third method proposes an unsupervised segmentation of radargrams with deep learning for detecting subsurface features. Qualitative and quantitative experimental results obtained on planetary and terrestrial radargrams confirm the effectiveness of the proposed methods, which investigate new subsurface targets and improve accuracy compared to other state-of-the-art methods.
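As a back-of-the-envelope aid to reading radargrams (not a method from the thesis), the sketch below converts a subsurface echo's two-way travel time into an approximate depth, assuming propagation through homogeneous ice with a textbook relative permittivity of about 3.15.

```python
# Hedged illustration: converting a radar sounder two-way travel time to depth.
# Assumes a single homogeneous medium; the permittivity value for ice (~3.15)
# is a textbook figure, not a parameter taken from the thesis.
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def echo_depth(two_way_time_s: float, rel_permittivity: float = 3.15) -> float:
    v = C / math.sqrt(rel_permittivity)   # wave speed in the medium
    return v * two_way_time_s / 2.0       # one-way distance to the interface

# A 20-microsecond echo delay in ice corresponds to roughly 1.7 km depth
print(f"{echo_depth(20e-6):.0f} m")
```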
125

Incremental Linearization for Satisfiability and Verification Modulo Nonlinear Arithmetic and Transcendental Functions

Irfan, Ahmed January 2018 (has links)
Satisfiability Modulo Theories (SMT) is the problem of deciding the satisfiability of a first-order formula with respect to some theory or combination of theories; Verification Modulo Theories (VMT) is the problem of analyzing the reachability for transition systems represented in terms of SMT formulae. In this thesis, we tackle the problems of SMT and VMT over the theories of polynomials over the reals (NRA), over the integers (NIA), and of NRA augmented with transcendental functions (NTA). We propose a new abstraction-refinement approach called Incremental Linearization. The idea is to abstract nonlinear multiplication and transcendental functions as uninterpreted functions in an abstract domain limited to linear arithmetic with uninterpreted functions. The uninterpreted functions are incrementally axiomatized by means of upper- and lower-bounding piecewise-linear constraints. In the case of transcendental functions, particular care is required to ensure the soundness of the abstraction. The method has been implemented in the MathSAT SMT solver, and in the nuXmv VMT model checker. An extensive experimental evaluation on a wide set of benchmarks from verification and mathematics demonstrates the generality and the effectiveness of our approach. Moreover, the proposed technique is an enabler for the (nonlinear) VMT problems arising in practical scenarios with design environments such as Simulink. This capability has been achieved by integrating nuXmv with Simulink using a compilation-based approach and is evaluated on an industrial-level case study.
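To make the abstraction-refinement idea concrete, here is a toy sketch of the incremental-linearization loop written against the Z3 Python API purely for illustration; the thesis implements the technique in MathSAT and nuXmv, and the toy constraints, iteration budget, and the particular partial-evaluation and tangent-plane lemmas shown are a simplified reading of the approach, not its full definition.

```python
# Toy sketch of incremental linearization (illustration only): nonlinear
# multiplication is abstracted as an uninterpreted function and refined with
# linear lemmas that are valid for real multiplication, until the abstract
# model is consistent with real multiplication or the abstraction is UNSAT.
from z3 import (And, Function, Implies, Or, Real, RealSort, Solver, is_true, sat)

x, y = Real("x"), Real("y")
mul = Function("mul", RealSort(), RealSort(), RealSort())   # abstracts x*y

s = Solver()
# Toy problem: x*y > 4 and x + y < 3 with x, y > 0 (unsatisfiable over the reals)
s.add(mul(x, y) > 4, x + y < 3, x > 0, y > 0)

for _ in range(50):                              # refinement budget (an assumption)
    if s.check() != sat:
        print("unsat")
        break
    m = s.model()
    a, b = m.eval(x, True), m.eval(y, True)
    if is_true(m.eval(mul(x, y) == a * b)):      # abstraction agrees with real product
        print("sat, model:", m)
        break
    # Linear refinement lemmas, all sound for true multiplication at (a, b):
    s.add(mul(a, y) == a * y, mul(x, b) == b * x)               # partial evaluation
    same_sign = Or(And(x > a, y > b), And(x < a, y < b))
    s.add(Implies(same_sign, mul(x, y) > b * x + a * y - a * b))  # tangent plane
else:
    print("refinement budget exhausted")
```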
126

Energy-efficient, Large-scale Ultra-wideband Communication and Localization

Vecchia, Davide 08 July 2022 (has links)
Among the low-power wireless technologies that have emerged in recent years, ultra-wideband (UWB) has successfully established itself as the reference for accurate ranging and localization, both outdoors and indoors. Due to its unprecedented performance, paired with relatively low energy consumption, UWB is going to play a central role in the next wave of location-based applications. As the trend of integration in smartphones continues, UWB is also expected to reach ordinary users, revolutionizing our lives the same way GPS and similar technologies have done. But the impact of UWB may not be limited to ranging and localization. Because of its considerable data rate, and its robustness to obstacles and interference, UWB communication may hold untapped potential for sensing and control applications. Nevertheless, several research questions still need to be answered to assess whether UWB can be adopted widely in the communication and localization landscapes. On one hand, the rapid evolution of UWB radios and the release of ever more efficient chips is a clear indication of the growing market for this technology. However, for it to become pervasive, full-fledged communication and localization systems must be developed and evaluated, tackling the shortcomings affecting current prototypes. UWB systems are typically single-hop networks designed for small areas, making them impractical for large-scale coverage. This limitation is found in communication and localization systems alike. Specifically for communication systems, energy-efficient multi-hop protocols are hitherto unexplored. As for localization systems, they rely on mains-powered anchors to circumvent the issue of energy consumption, in addition to only supporting small areas. Very few options are available for light, easy to deploy infrastructures using battery-powered anchors. Nonetheless, large-scale systems are required in common settings like industrial facilities and agricultural fields, but also office spaces and museums. The general goal of enabling UWB in spaces like these entails a number of issues. Large multi-hop infrastructures exacerbate the known limitations of small, single-hop, networks; notably, reliability and latency requirements clash with the need to reduce energy consumption. Finally, when device mobility is a factor, continuity of operations across the covered area is a challenge in itself. In this thesis, we design energy-efficient UWB systems for large-scale areas, supporting device mobility across multi-hop infrastructures. As our opening contribution, we study the unique interference rejection properties of the radio to inform our design. This analysis yields a number of findings on the impact of interference in communication and distance estimation, that are directly usable by developers to improve UWB solutions. These findings also suggest that concurrent transmissions in the same frequency channel are a practical option in UWB. While the overlapping of frames is typically avoided to prevent collisions, concurrent transmissions have counter-intuitively been used to provide highly reliable communication primitives for a variety of traffic patterns in narrowband radios. In our first effort to use concurrent transmissions in a full system, we introduce the UWB version of Glossy, a renowned protocol for efficient network-wide synchronization and data dissemination. 
Inspired by the success of concurrency-based protocols in narrowband, we then apply the same principles to define a novel data collection protocol, Weaver. Instead of relying on independent Glossy floods like state-of-the-art systems, we weave multiple data flows together to make our collection engine faster, more reliable, and more energy-efficient. With Glossy and Weaver supporting the communication aspect in large-scale networks, we then propose techniques for large-scale localization systems. We introduce TALLA, a TDoA solution for continuous position estimation based on wireless synchronization. We evaluate TALLA in a UWB testbed and in simulations, for which we accurately replicate the behavior of the clocks in our real-world platforms. We then offer a glimpse of what TALLA can be employed for by deploying an infrastructure in a science museum to track visitors. The collected movement traces allow us to analyze fine-grained stop-move mobility patterns and infer the sequence of visited exhibits, which is only possible because of the high spatio-temporal granularity offered by TALLA. Finally, with SONAR, we tackle the issue of large-scale ranging and localization when the infrastructure cannot be mains-powered. By blending synchronization and scheduling operations into neighbor discovery and ranging, we drastically reduce energy consumption and ensure a years-long system lifetime. Overall, this thesis enhances UWB applicability in scenarios that were previously out of reach for the technology, by providing the missing communication and localization support for large areas and battery-powered devices. Throughout the thesis, we follow an experiment-driven approach to validate our protocol models and simulations. Based on the evidence collected during this research endeavor, we develop full systems that operate in a large testbed at our premises, showing that our solutions are immediately applicable in real settings.
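As a hedged illustration of the TDoA positioning principle that an infrastructure like TALLA relies on (not the thesis implementation), the following sketch recovers a tag position from simulated arrival-time differences measured by synchronized anchors; the anchor layout, noise level, and solver choice are assumptions made for the example.

```python
# Hedged sketch of TDoA positioning: estimate a tag position from
# time-difference-of-arrival measurements taken by synchronized anchors.
# Anchor layout and noise level are invented for this illustration.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0                          # m/s
rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
true_pos = np.array([12.0, 7.0])

# Simulated TDoA: arrival-time differences w.r.t. anchor 0, with ~0.1 ns noise
dists = np.linalg.norm(anchors - true_pos, axis=1)
tdoa = (dists[1:] - dists[0]) / C + rng.normal(0, 0.1e-9, 3)

def residuals(p):
    d = np.linalg.norm(anchors - p, axis=1)
    return (d[1:] - d[0]) - C * tdoa       # range differences vs. measurements

est = least_squares(residuals, x0=np.array([15.0, 15.0])).x
print("estimated position:", est)          # should be close to (12, 7)
```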
127

Cognitively Guided Modeling of Visual Perception in Intelligent Vehicles

Plebe, Alice 20 April 2021 (has links)
This work proposes a strategy for visual perception in the context of autonomous driving. Despite the growing research aiming to implement self-driving cars, no artificial system can yet claim to have reached the driving performance of a human. Humans, when not distracted or drunk, are still the best drivers you can currently find. Hence, theories about the human mind and its neural organization could reveal valuable insights on how to design a better autonomous driving agent. This dissertation focuses specifically on the perceptual aspect of driving, and it takes inspiration from four key theories on how the human brain achieves the cognitive capabilities required by the activity of driving. The first idea lies at the foundation of current cognitive science, and it argues that thinking nearly always involves some sort of mental simulation, which takes the form of imagery when dealing with visual perception. The second theory explains how the perceptual simulation takes place in neural circuits called convergence-divergence zones, which expand and compress information to extract abstract concepts from visual experience and code them into compact representations. The third theory highlights that perception, when specialized for a complex task such as driving, is refined by experience in a process called perceptual learning. The fourth theory, namely the free-energy principle of predictive brains, corroborates the role of visual imagination as a fundamental mechanism of inference. In order to implement these theoretical principles, it is necessary to identify the most appropriate computational tools currently available. Within the consolidated and successful field of deep learning, I select the artificial architectures and strategies that bear a strong resemblance to their cognitive counterparts. Specifically, convolutional autoencoders have a strong correspondence with the architecture of convergence-divergence zones and the process of perceptual abstraction. The free-energy principle of predictive brains is related to variational Bayesian inference and the use of recurrent neural networks. In fact, this principle can be translated into a training procedure that learns abstract representations predisposed to predicting how the current road scenario will change in the future. The main contribution of this dissertation is a method to learn conceptual representations of the driving scenario from visual information. This approach forces a semantic internal organization, in the sense that distinct parts of the representation are explicitly associated with specific concepts useful in the context of driving. Specifically, the model uses as few as 16 neurons for each of the two basic concepts considered here: vehicles and lanes. At the same time, the approach biases the internal representations towards the ability to predict the dynamics of objects in the scene. This property of temporal coherence allows the representations to be exploited to predict plausible future scenarios and to perform a simplified form of mental imagery. In addition, this work includes a proposal to tackle the problem of opaqueness affecting deep neural networks. I present a method that aims to mitigate this issue in the context of longitudinal control for automated vehicles. A further contribution of this dissertation experiments with higher-level spaces of prediction, such as occupancy grids, which could reconcile direct application to motor control with biological plausibility.
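The following is a minimal sketch, under assumed layer sizes and image resolution, of the kind of semantically partitioned latent space described above: a convolutional encoder whose 32-unit code is split into two 16-unit blocks, one nominally tied to vehicles and one to lanes, each decoded into its own mask. It is illustrative only and is not the architecture used in the dissertation.

```python
# Minimal sketch (not the dissertation's architecture) of a convolutional
# autoencoder whose latent code is explicitly split into two 16-unit blocks,
# one intended for "vehicles" and one for "lanes". Shapes are assumptions.
import torch
import torch.nn as nn

class PartitionedAutoencoder(nn.Module):
    def __init__(self, concept_dims=(16, 16)):
        super().__init__()
        self.concept_dims = concept_dims
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, sum(concept_dims)),
        )
        # One small decoder per concept, producing a per-concept mask
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 64 * 64), nn.Sigmoid()) for d in concept_dims
        )

    def forward(self, x):
        z = self.encoder(x)                                        # (B, 32)
        parts = torch.split(z, list(self.concept_dims), dim=1)     # (B,16) + (B,16)
        masks = [dec(p).view(-1, 1, 64, 64) for dec, p in zip(self.decoders, parts)]
        return z, masks

model = PartitionedAutoencoder()
dummy = torch.rand(2, 3, 64, 64)                 # two fake 64x64 RGB frames
z, (vehicle_mask, lane_mask) = model(dummy)
print(z.shape, vehicle_mask.shape, lane_mask.shape)
```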
128

Semantic Image Interpretation - Integration of Numerical Data and Logical Knowledge for Cognitive Vision

Donadello, Ivan January 2018 (has links)
Semantic Image Interpretation (SII) is the process of generating a structured description of the content of an input image. This description is encoded as a labelled directed graph where nodes correspond to objects in the image and edges to semantic relations between objects. Such a detailed structure allows more accurate searching and retrieval of images. In this thesis, we propose two well-founded methods for SII. Both methods exploit background knowledge about the domain of the images, in the form of logical constraints of a knowledge base. The first method formalizes SII as the extraction of a partial model of a knowledge base. Partial models are built with a clustering and reasoning algorithm that considers both low-level and semantic features of images. The second method uses the Logic Tensor Networks framework to build the labelled directed graph of an image. This framework is able to learn from data in the presence of the logical constraints of the knowledge base. Therefore, the graph construction is performed by predicting the labels of the nodes and the relations according to the logical constraints and the features of the objects in the image. These methods improve the state of the art by introducing two well-founded methodologies that integrate low-level and semantic features of images with logical knowledge. Indeed, other methods do not deal with low-level features or use only statistical knowledge coming from training sets or corpora. Moreover, the second method surpasses the state-of-the-art performance on the standard task of visual relationship detection.
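To illustrate the core mechanism that Logic Tensor Networks bring to this setting, here is a toy sketch (not the LTN library API) that turns a universally quantified logical constraint into a differentiable truth degree, whose complement can be minimized as a loss alongside the data fit; the predicates, aggregator, and truth values are invented for the example.

```python
# Toy illustration (not the LTN library) of turning a logical constraint into a
# differentiable truth degree, the core mechanism behind Logic Tensor Networks.
# Constraint: forall object pairs (x, y): partOf(x, y) -> insideBox(x, y)
import numpy as np

def implies(a, b):            # Reichenbach/product-style fuzzy implication
    return 1.0 - a + a * b

def forall(truths):           # aggregate with the mean (one common choice)
    return float(np.mean(truths))

# Hypothetical predicate outputs for four candidate (object, object) pairs,
# e.g. produced by small neural predicates on visual features.
part_of    = np.array([0.9, 0.8, 0.1, 0.95])
inside_box = np.array([0.85, 0.9, 0.2, 0.3])

constraint_truth = forall(implies(part_of, inside_box))
loss = 1.0 - constraint_truth      # minimized jointly with the data-fit loss
print(f"constraint satisfaction: {constraint_truth:.3f}, loss: {loss:.3f}")
```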
129

A New Design For the Support of Collaborative Care Work in Nursing Homes

Ceschel, Francesco January 2018 (has links)
Nursing homes are complex healthcare settings that take care of older adults with severe cognitive and physical impairments. Given the conditions of the patients, nursing homes can be considered end-of-life contexts. There, the care work that aims to mitigate and treat the conditions of the patients is the result of the collaboration between the care professionals and the relatives of the patients. Indeed, when the patients are very old adults in an end-of-life situation, the provision of care often involves a family caregiver as the main point of contact for the healthcare service. However, caring for institutionalized older adults is known to be a complex issue both for the families of the older adults and for the care professionals. Over the last few years, there has been an increasing interest in this topic, primarily due to a growing older population and, hence, a heightened need for research contributions in this area. Previous studies on caregiving for older adults living in nursing homes recognize the necessity to support professionals' work practices in order to ameliorate their working conditions and decrease the risk of burnout and job dissatisfaction, as well as to relieve the families of the patients from the burden of caring for their loved ones. Yet, the literature shows a lack of technological solutions for this kind of environment. In this thesis we report an extensive study and analysis performed within a network of six nursing homes located in northern Italy. We investigated the practice of caregiving within the nursing homes. In particular, we focused on the work practices of care professionals and on the relational issues between the care professionals and the families of the patients. We first conducted an exploratory study to understand the nature of our research context. Afterwards, we carried out a series of participatory design sessions and validation workshops to elicit the requirements for the development of a new technology platform to support the collaboration between care professionals and relatives of older patients. The outcomes of this work shed new light on the opportunities of using ICT solutions to improve relations and information sharing among caregivers. Indeed, our findings show that, given the organizational and relational complexity of nursing homes, poor communication practices hinder the collaboration and the mutual understanding between the relatives of the patients and the care professionals. As a result, we deliver a series of functional requirements for the development of a technology platform that aims to support relationships, communication, and coordination among care professionals, and between care professionals and families of the patients.
130

Real-time adaptation of stimulation protocols for neuroimaging studies

Kalinina, Elena January 2018 (has links)
Neuroimaging techniques allow the acquisition of images of the brain involved in cognitive tasks. In traditional neuroimaging studies, the brain response to external stimulation is investigated. The stimulation categories, the order in which they are presented to the subject, and the presentation duration are defined in the stimulation protocol. The protocol is fixed before the beginning of the study and does not change in the course of the experiment. Recently, there has been a major rise in the number of real-time neuroscientific experiments where the incoming brain data is analysed in an online mode. Real-time neuroimaging studies open an avenue for approaching a whole new range of questions, like, for instance, how the outcome of a cognitive task depends on the current brain state. Real-time experiments need a different protocol type that can be flexibly and interactively adjusted in line with the experimental scope, e.g. hypothesis testing or optimising the design for an individual subject's parameters. A plethora of methods is currently deployed for protocol adaptation: information theory, optimisation algorithms, genetic algorithms. What is lacking, however, is a paradigm for interacting with the subject's state, the brain state in particular. I address this problem in my research. I have concentrated on two types of real-time experiments: closed-loop stimulation experiments and brain-state-dependent stimulation (BSDS). As the first contribution, I put forward a method for closed-loop stimulation adaptation and apply it in a real-time Galvanic Skin Response (GSR) experimental setting. The second contribution is an unsupervised method for brain state detection and a real-time functional Magnetic Resonance Imaging (rtfMRI) setup making use of this method. In a neurofeedback setting, the goal is for the subject to achieve a target state. Ideally, the stimulation protocol should be adapted to the subject to better guide them towards that state. One way to do this would be to model the subject's activity in a way that lets us evaluate the effect of various stimulation options and choose the optimal ones, maximising the reward or minimising the error. However, developing such models for neuroimaging neurofeedback experiments currently presents a number of challenges, namely the complex dynamics of a very noisy neural signal and the non-trivial mapping between neural and cognitive processes. We designed a simpler experiment as a proof of concept using the GSR signal. We showed that if it is possible to model the subject's state and the dynamics of the system, it is also possible to steer the subject towards the desired state. In BSDS, there is no target state, but the challenge lies in the most accurate identification of the subject's state at any given moment. The reference, state-of-the-art method for determining the current brain state is the use of machine learning classifiers, or multivariate decoding. However, running supervised machine learning classifiers on neuroimaging data has a number of issues that might seriously limit their application, especially in real-time scenarios. For BSDS, we show how an unsupervised machine learning algorithm (clustering in real time) can be employed with fMRI data to determine the onset of the activated brain state. We also developed a real-time fMRI setup for BSDS that uses this method. In an initial attempt to base BSDS on brain decoding, we encountered a set of issues related to classifier use. These issues prompted us to develop a new set of methods based on statistical inference that help address fundamental neuroscientific questions. These methods are presented as a secondary contribution of the thesis.
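As a rough sketch of the unsupervised, online idea behind the BSDS contribution (not the thesis implementation), the code below runs a generic online k-means over simulated "volumes", random vectors standing in for preprocessed fMRI scans, and tracks the estimated brain state scan by scan.

```python
# Hedged sketch of online (streaming) clustering for brain-state detection.
# Random vectors stand in for preprocessed fMRI volumes; this is a generic
# online k-means, not the thesis setup.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_states = 500, 2

def next_volume(t: int) -> np.ndarray:
    """Simulate one preprocessed fMRI volume: baseline vs. 'activated' pattern."""
    center = 0.0 if (t // 20) % 2 == 0 else 1.0      # state switches every 20 scans
    return rng.normal(center, 1.0, size=n_voxels)

centers = np.stack([next_volume(0), next_volume(20)])   # crude initialization
counts = np.ones(n_states)
labels = []

for t in range(200):                                 # one scan per repetition time
    vol = next_volume(t)
    k = int(np.argmin(np.linalg.norm(centers - vol, axis=1)))   # nearest state
    counts[k] += 1
    centers[k] += (vol - centers[k]) / counts[k]     # incremental mean update
    labels.append(k)                                 # current brain-state estimate

print(labels[-20:])                                  # recent estimated state sequence
```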
