101

Study and Development of Novel Techniques for PHY-Layer Optimization of Smart Terminals in the Context of Next-Generation Mobile Communications

D'Orazio, Leandro January 2008 (has links)
Future mobile broadband communications working over wireless channels are required to provide high-performance services in terms of speed, capacity, and quality. A key issue is the design of multi-standard, multi-modal ad-hoc network architectures capable of self-configuring adaptively and optimally with respect to channel conditions and traffic load. In the context of 4G wireless communications, the implementation of efficient baseband receivers with affordable computational load is a crucial point in the development of transmission systems exploiting diversity in different domains. This thesis proposes novel multi-user detection techniques based on different criteria (i.e., MMSE, ML, and MBER) particularly suited for multi-carrier CDMA systems, in both the single- and multi-antenna cases. Moreover, it considers the use of evolutionary strategies (such as GA and PSO) for channel estimation in MIMO multi-carrier scenarios. Simulation results show that the proposed PHY-layer optimization techniques consistently outperform state-of-the-art schemes at an affordable computational cost. Particular attention has been paid to the software implementation of the formulated algorithms, in order to obtain a modular software architecture that can be used in an adaptive, optimized reconfigurable scenario.
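The abstract does not reproduce the detector equations; as a hedged sketch of the kind of linear MMSE multi-user detection it refers to (the symbol names, dimensions and toy spreading codes below are illustrative assumptions, not taken from the thesis), the classic filter bank W = (S Sᴴ + σ²I)⁻¹ S can be written as:

```python
import numpy as np

def mmse_detector(S, y, noise_var):
    """Linear MMSE multi-user detection for a synchronous MC-CDMA uplink.

    S         : (N, K) matrix of effective user signatures (spreading codes
                already multiplied by per-subcarrier channel gains)
    y         : (N,) received subcarrier vector
    noise_var : noise variance sigma^2
    Returns soft estimates of the K user symbols.
    """
    N, K = S.shape
    # W = (S S^H + sigma^2 I)^-1 S  -- classic MMSE filter bank
    R = S @ S.conj().T + noise_var * np.eye(N)
    W = np.linalg.solve(R, S)          # (N, K)
    return W.conj().T @ y              # (K,) soft symbol estimates

# toy example: 2 users, 4 subcarriers, orthogonal Walsh-style codes, BPSK
S = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [1.0, 1.0],
              [1.0, -1.0]]) / 2.0
b = np.array([1.0, -1.0])                        # transmitted symbols
rng = np.random.default_rng(0)
y = S @ b + 0.01 * rng.standard_normal(4)        # received vector
b_hat = np.sign(mmse_detector(S, y, 0.01).real)  # hard decisions
```

With orthogonal codes and low noise, a sign decision on the soft estimates recovers the transmitted BPSK symbols.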
102

Event Detection and Classification for the Digital Humanities

Sprugnoli, Rachele January 2018 (has links)
In recent years, event processing has become an active area of research in the Natural Language Processing community, but the resources and automatic systems developed so far have mainly addressed contemporary texts. However, the recognition and elaboration of events is a crucial step when dealing with historical texts: research in this domain can lead to the development of methodologies and tools that can assist historians in enhancing their work, and can have an impact in the fields of both Natural Language Processing and Digital Humanities. Our work aims at shedding light on the complex concept of events by adopting an interdisciplinary perspective. More specifically, theoretical and practical investigations are carried out on the specific topic of event detection and classification in historical texts by developing and releasing new annotation guidelines, new resources and new models for automatic annotation.
103

Adaptive Personality Recognition from Text

Celli, Fabio January 2012 (has links)
We address the issue of domain adaptation for automatic Personality Recognition from Text (PRT). The PRT task consists in classifying the personality traits of authors, given pieces of text they wrote. The purpose of our work is to improve current approaches to PRT in order to extract personality information from social network sites, which is a particularly challenging task. We argue that current approaches, based on supervised learning, have several limitations for adaptation to the social network domain, mainly due to 1) difficulties in data annotation, 2) overfitting, 3) lack of domain adaptability, and 4) multilinguality issues. We propose and test a new approach to PRT, which we call Adaptive Personality Recognition (APR). We argue that this new approach solves domain adaptability problems and is suitable for application to social network sites. We start from an introduction that covers all the background knowledge required for understanding PRT. It includes topics such as personality, the Big5 factor model, the sets of correlations between language features and personality traits, and a brief survey of learning approaches, which also covers feature selection and domain adaptation. We also provide an overview of the state-of-the-art in PRT and outline the problems we see in the application of PRT to the social network domain. Basically, our APR approach is based on 1) an external model: a set of features/correlations between language and Big5 personality traits (taken from the literature); 2) an adaptive strategy, which makes the model fit the distribution of the features in the dataset at hand before generating personality hypotheses; 3) an evaluation strategy, which compares all the hypotheses generated for each single text of each author, computing confidence scores.
This allows domain adaptation, semi-supervised learning and the automatic extraction of patterns associated with personality traits, which can be added to the initial correlation set, thus combining top-down and bottom-up approaches. The main contributions of our approach to research in the field of PRT are: 1) the possibility to run top-down PRT from models taken from the literature, adapting them to new datasets; 2) the definition of a small, language-independent and resource-free feature/correlation set, tested on Italian and English; 3) the possibility to integrate top-down and bottom-up PRT strategies, allowing the enrichment of the initial feature/correlation set from the dataset at hand; 4) the development of a system for APR that does not require large labeled datasets for training, but just a small one for testing, minimizing the data annotation problem. Finally, we describe some applications of APR to the analysis of personality in online social network sites, reporting results and findings. We argue that the APR approach is very useful for social network analysis, social marketing, opinion mining, sentiment analysis, mood detection and related fields.
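The three APR components can be sketched in miniature. Everything below is a hypothetical toy (the two features, their correlation signs and the threshold rule are illustrative, not the thesis' actual feature/correlation set): an external correlation model, an adaptive step that sets thresholds from the dataset at hand, and per-text hypotheses aggregated into per-author confidence scores.

```python
from statistics import mean

# hypothetical correlation set: feature -> {trait: correlation sign}
# (illustrative only; the thesis derives its set from the literature)
CORRELATIONS = {
    "exclam_rate": {"extraversion": +1, "neuroticism": +1},
    "avg_word_len": {"openness": +1, "extraversion": -1},
}

def features(text):
    words = text.split()
    return {
        "exclam_rate": text.count("!") / max(len(words), 1),
        "avg_word_len": mean(len(w) for w in words) if words else 0.0,
    }

def apr(author_texts):
    """Adaptive Personality Recognition sketch.
    author_texts: {author: [text, ...]} for the whole dataset."""
    all_texts = [t for ts in author_texts.values() for t in ts]
    # adaptive step: thresholds are the dataset means, not fixed constants
    thresholds = {f: mean(features(t)[f] for t in all_texts)
                  for f in CORRELATIONS}
    results = {}
    for author, texts in author_texts.items():
        votes = {}  # trait -> list of +1/-1 hypotheses, one per text
        for t in texts:
            fv = features(t)
            for f, traits in CORRELATIONS.items():
                polarity = 1 if fv[f] >= thresholds[f] else -1
                for trait, sign in traits.items():
                    votes.setdefault(trait, []).append(polarity * sign)
        # confidence: how strongly the per-text hypotheses agree
        results[author] = {
            trait: (("high" if mean(v) > 0 else "low"), abs(mean(v)))
            for trait, v in votes.items()
        }
    return results

# toy dataset: an exuberant author vs. a formal one
res = apr({"a": ["Wow!! Great!!", "So fun!!"],
           "b": ["Extended deliberation concerning infrastructure."]})
```

Because the thresholds come from the dataset itself, the same correlation set adapts to corpora with very different feature distributions, which is the point of the approach.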
104

Remote Sensing-based Channel Modeling and Deployment Planning for Low-power Wireless Networks

Demetri, Silvia January 2018 (has links)
The deployment of low-power wireless networks is notoriously effort-demanding, as costly in-field campaigns are required to assess the connectivity properties of the target location and understand where to place the wireless nodes. The characteristics of the environment, both static (e.g., obstacles obstructing the link line of sight) and dynamic (e.g., changes in weather conditions), cause variability in the communication performance, thus affecting the network operation quality and reliability. This translates into difficulties in effectively deploying, planning and managing these networks in real-world scenarios, especially outdoors. Despite the large literature on node placement, existing approaches make over-simplifying assumptions that neglect the complexity of the radio environment. Airborne and satellite Remote Sensing (RS) systems acquire data and images over wide areas, thus enabling one to derive information about these areas at large scale. In this dissertation, we propose to leverage RS systems and related data processing techniques to i) automatically derive the static characteristics of the deployment environment that affect low-power wireless communication; ii) model the relation between such characteristics and the communication quality; and iii) exploit this knowledge to support deployment planning. We focus on two main scenarios: a) the deployment of Wireless Sensor Networks (WSNs) in forests; and b) the communication performance of Internet of Things (IoT) networks based on Long Range (LoRa) wireless technology in the presence of mixed environments. As a first major contribution, we propose a novel WSN node placement approach (LaPS) that integrates remote sensing data acquired by airborne Light Detection and Ranging (LiDAR) instruments, a specialized path loss model and evolutionary computation to identify (near-)optimal node positions in forests, automatically and prior to the actual deployment.
When low-power WSNs operating at 2.4 GHz are deployed in forests, the presence of trees greatly affects communication. We define a processing architecture that automatically derives local forest attributes (e.g., tree density) from LiDAR data acquired over the target forest. This information is incorporated into a specialized path loss model, which is validated in deployments in a real forest, enabling fine-grained, per-link estimates of the radio signal attenuation induced by trees. Combining the forest attributes derived from LiDAR data with the specialized path loss model and a genetic algorithm, LaPS provides node placement solutions of higher quality than approaches based on a regular placement or on a standard path loss model, while satisfying the spatial and network requirements provided by the user. In addition, LaPS enables the exploration of the impact of changes in the user requirements on the resulting topologies in advance, thus reducing the in-field deployment effort. Moreover, to explore a different low-power wireless technology with starkly different trade-offs, we consider a LoRa-based IoT network operating in i) a free-space-like communication environment, i.e., the LoRa signal is transmitted from a high-altitude weather balloon, traverses an obstacle-free space and is received by gateways on the ground; and ii) a mixed environment containing built-up areas, farming fields and groups of trees, with both LoRa transmitters and receiving gateways close to the ground. These scenarios show a huge gap in terms of communication range, thus revealing to what extent the presence of objects affects the coverage that LoRa gateways can provide. To characterize the mixed environment we exploit detailed land cover maps (i.e., with a spatial grain of 10×10 m²) derived by automatically classifying multispectral remote sensing satellite images.
The land cover information is jointly analyzed with LoRa connectivity traces, enabling us to observe a correlation between the land cover types involved in LoRa links and the trend of the signal attenuation with distance. This analysis opens interesting research avenues aimed at defining LoRa connectivity models that quantitatively account for the type of environment involved in the communication by leveraging RS data.
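Neither the specialized path loss model nor its coefficients appear in this summary; as a hedged illustration of the general shape such a model might take, here is a log-distance path loss with an additive vegetation term driven by a LiDAR-derived tree density (all coefficients, units and thresholds below are assumptions for illustration, not the thesis' fitted values):

```python
import math

def path_loss_db(d_m, tree_density, *, pl0=40.0, d0=1.0, n=2.7, k_veg=0.3):
    """Log-distance path loss with an additive vegetation term.

    d_m          : link distance in metres
    tree_density : LiDAR-derived tree density along the link (assumed unit)
    pl0          : path loss at reference distance d0 (dB)
    n            : path loss exponent
    k_veg        : extra attenuation per unit tree density (dB), illustrative
    """
    return pl0 + 10.0 * n * math.log10(d_m / d0) + k_veg * tree_density

def link_ok(d_m, tree_density, tx_power_dbm=0.0, sensitivity_dbm=-95.0):
    """A link is feasible if the received power stays above the radio's
    sensitivity; a placement tool would evaluate this per candidate link."""
    return tx_power_dbm - path_loss_db(d_m, tree_density) >= sensitivity_dbm
```

A placement search such as LaPS can then treat `link_ok` as the feasibility predicate when scoring candidate topologies: the same 50 m link that works in the open may fail through dense canopy.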
105

Advanced methods for tree species classification and biophysical parameter estimation using crown geometric information in high density LiDAR data

Harikumar, Aravind January 2019 (has links)
The ecological, climatic and economic influence of forests makes them an essential natural resource to be studied, preserved, and managed. Forest inventorying using single-sensor data has a huge economic advantage over multi-sensor data. Remote sensing of forests using high-density multi-return small-footprint Light Detection and Ranging (LiDAR) data is becoming a cost-effective method for the automatic estimation of forest parameters at the Individual Tree Crown (ITC) level. Individual tree detection and delineation techniques form the basis for ITC-level parameter estimation. However, state-of-the-art (SoA) techniques often fail to exploit the huge amount of three-dimensional (3D) structural information in high-density LiDAR data to achieve accurate detection and delineation of the 3D crown in dense forests; thus, the first contribution of the thesis is a technique that detects and delineates both dominant and subdominant trees in dense multilayered forests. The proposed method uses novel two-dimensional (2D) and 3D features to achieve this goal. Species knowledge at the individual tree level is relevant for accurate forest parameter estimation. Most state-of-the-art techniques use features that represent the distribution of data points within the crown to achieve species classification. However, the performance of such methods is low when the trees belong to the same taxonomic class (e.g., the conifer class). High-density LiDAR data contain a huge amount of fine structural information on individual tree crowns. Thus, the second contribution of the thesis consists of novel methods for classifying conifer species using both branch-level and crown-level geometric characteristics. Accurate localization of trees is fundamental to calibrate individual tree-level inventory data, as it allows one to match reference data to LiDAR data. An important biophysical parameter for precision forestry applications is the Diameter at Breast Height (DBH).
SoA methods locate the stem directly below the tree top, and indirectly estimate DBH using species-specific allometric models. Both approaches tend to be inaccurate and depend on the forest type. Thus, in this thesis, a method for accurate stem localization and DBH measurement is proposed. This is the third contribution of the thesis. Qualitative and quantitative results of the experiments confirm the effectiveness of the proposed methods over the SoA ones.
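The thesis' stem localization and DBH method is not detailed in this abstract; as a hedged contrast between the two routes it mentions, here is a toy indirect estimate via a species-specific power-law allometric model and a naive direct estimate from LiDAR returns around the stem at breast height (the coefficients and the simple circle-fit rule are illustrative assumptions):

```python
import math

def dbh_allometric(height_m, a=1.2, b=1.1):
    """Indirect DBH estimate from tree height via a species-specific
    power-law allometric model DBH = a * H^b; a and b are illustrative."""
    return a * height_m ** b

def dbh_from_stem_points(points_xy):
    """Direct DBH estimate (same unit as the points) from returns sampled
    around the stem at breast height (~1.3 m): twice the mean distance of
    the points from their centroid, i.e. a very simple circle fit."""
    n = len(points_xy)
    cx = sum(x for x, _ in points_xy) / n
    cy = sum(y for _, y in points_xy) / n
    r = sum(math.hypot(x - cx, y - cy) for x, y in points_xy) / n
    return 2.0 * r

# synthetic returns on a stem cross-section of radius 0.15 m (DBH = 0.30 m)
pts = [(0.15 * math.cos(2 * math.pi * k / 8),
        0.15 * math.sin(2 * math.pi * k / 8)) for k in range(8)]
```

The direct route recovers the diameter from the data itself, while the allometric route inherits whatever error the species-specific model carries, which is the gap the thesis' third contribution targets.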
106

Learning the Meaning of Quantifiers from Language and Vision

Pezzelle, Sandro January 2018 (has links)
Defining the meaning of vague quantifiers (‘few’, ‘most’, ‘all’) has been, and still is, the Holy Grail of a mare magnum of studies in philosophy, logic, and linguistics. The way they are learned by children has been largely investigated in the realm of language acquisition, and the mechanisms underlying their comprehension and processing have received attention from experimental pragmatics, cognitive psychology, and neuroscience. Very often their meaning has been tied to that of numbers, amounts, and proportions, and many attempts have been made to place them on ordered scales. In this thesis, I study quantifiers from a novel, cognitively-inspired computational perspective. By carrying out several behavioral studies with human speakers, I seek to answer several questions concerning their meaning and use: Is the choice of quantifiers modulated by the linguistic context? Do quantifiers lie on a mental, semantically-ordered scale? What are the features of such a scale? By exploiting recent advances in computational linguistics and computer vision, I test the performance of state-of-the-art neural networks in performing the same tasks and propose novel architectures to model speakers’ use of quantifiers in grounded contexts. In particular, I ask the following questions: Can the meaning of quantifiers be learned from visual scenes? How does this mechanism compare with that underlying comparatives, numbers, and proportions? The contribution of this work is two-fold: On the cognitive level, it sheds new light on various issues concerning the meaning and use of such expressions, and provides experimental evidence supporting the validity of the foundational theories. On the computational level, it proposes a novel, theoretically-informed approach to the modeling of vague and context-dependent expressions from both linguistic and visual data.
By carefully analyzing the performance and errors of the models, I show the effectiveness of neural networks in performing challenging, high-level tasks. At the same time, I highlight commonalities and differences with human behavior.
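The proportion-to-quantifier mapping studied in the thesis is learned from data; as a purely illustrative stand-in, a fixed ordered scale with hand-picked proportion anchors (not the thesis' learned values) makes the idea of a semantically-ordered quantifier scale concrete:

```python
# ordered scale of vague quantifiers with illustrative proportion anchors
SCALE = [
    (0.0, "none"),
    (0.15, "few"),
    (0.45, "some"),
    (0.65, "most"),
    (1.0, "all"),
]

def quantify(n_targets, n_total):
    """Map the observed proportion of targets in a scene to the quantifier
    with the nearest anchor on the ordered scale -- a toy stand-in for the
    model's grounded proportion-to-quantifier step."""
    p = n_targets / n_total
    return min(SCALE, key=lambda anchor: abs(anchor[0] - p))[1]
```

A grounded model would estimate the proportion from the visual scene first; here that estimate is given, so only the scale itself is on display.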
107

Towards Uncovering the True Use of Unlabeled Data in Machine Learning

Sansone, Emanuele January 2018 (has links)
Knowing how to exploit unlabeled data is a fundamental problem in machine learning. This dissertation provides contributions in different contexts, including semi-supervised learning, positive-unlabeled learning and representation learning. In particular, we ask (i) whether it is possible to learn a classifier in the context of limited data, (ii) whether it is possible to scale existing models for positive-unlabeled learning, and (iii) whether it is possible to train a deep generative model with a single minimization problem.
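Point (ii) concerns positive-unlabeled (PU) learning; as a hedged sketch of the classic calibration idea behind many PU methods (in the style of the Elkan–Noto estimator, with a deliberately simple centroid-based scorer standing in for the classifier; none of this is the dissertation's actual model):

```python
import math

def centroid(rows):
    n = len(rows)
    return [sum(c) / n for c in zip(*rows)]

def score(x, pos_c, unl_c):
    """Probability-like score that x resembles the positive centroid: a
    logistic over the distance difference, playing the role of the
    'positive vs. unlabeled' classifier g(x)."""
    return 1.0 / (1.0 + math.exp(math.dist(x, pos_c) - math.dist(x, unl_c)))

def pu_scores(positives, unlabeled):
    """Elkan-Noto style PU sketch: train g(x) = P(labeled | x) treating
    unlabeled as negative, estimate c = E[g(x) | x positive], and return
    calibrated scores f(x) = g(x) / c for the unlabeled points."""
    pos_c, unl_c = centroid(positives), centroid(unlabeled)
    c = sum(score(x, pos_c, unl_c) for x in positives) / len(positives)
    return [min(1.0, score(x, pos_c, unl_c) / c) for x in unlabeled]

# positives cluster near the origin; the unlabeled set mixes a hidden
# positive with two far-away negatives
positives = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
unlabeled = [(0.2, 0.2), (5.0, 5.0), (5.0, 4.5)]
s = pu_scores(positives, unlabeled)
```

The hidden positive in the unlabeled set receives a much higher calibrated score than the two distant points, even though no negative labels were ever provided.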
108

Verification of Hybrid Systems using Satisfiability Modulo Theories

Mover, Sergio January 2014 (has links)
Embedded systems are formed by hardware and software components that interact with the physical environment and thus may be modeled as hybrid systems. Due to the complexity of these systems, there is an increasing need for automatic techniques to support the design phase, ensuring that a system behaves as expected in all possible operating conditions. In this thesis, we propose novel techniques for the verification and validation of hybrid systems using Satisfiability Modulo Theories (SMT). SMT is an established technique that has been used successfully in many verification approaches, targeting both hardware and software systems. The use of SMT to verify hybrid systems has been limited, due to the restricted support for complex continuous dynamics and the lack of scalability. The contribution of the thesis is twofold. First, we propose novel encoding techniques, which widen the applicability and improve the effectiveness of SMT-based approaches. Second, we propose novel SMT-based algorithms that improve the performance of existing state-of-the-art approaches. In particular, we show algorithms that solve problems such as invariant verification, scenario verification and parameter synthesis. The algorithms fully exploit the underlying structure of a network of hybrid systems and the functionalities of modern SMT solvers. We show and discuss the effectiveness of the proposed techniques when applied to benchmarks from the hybrid systems domain.
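The SMT encodings themselves are beyond this abstract; as a hedged illustration of the bounded model checking idea such encodings build on, here is a tiny explicit-state unrolling of a transition system (an SMT-based tool would instead check the unrolled formula I(s0) ∧ T(s0,s1) ∧ … ∧ Bad(sk) symbolically; the toy counter below is an assumption for illustration):

```python
def bmc_reach(init, trans, bad, k):
    """Bounded reachability: does a state satisfying `bad` appear within
    k transition steps?  `init` is a set of initial states, `trans(s)`
    yields successors, `bad(s)` is the property-violation predicate.
    Returns the first step at which a bad state is reached, else None."""
    frontier, seen = set(init), set(init)
    for step in range(k + 1):
        if any(bad(s) for s in frontier):
            return step
        nxt = {t for s in frontier for t in trans(s)} - seen
        if not nxt:            # fixpoint: no new states, safe up to any bound
            return None
        seen |= nxt
        frontier = nxt
    return None

# toy counter that wraps at 4; "bad" = reaching the value 3
hit = bmc_reach({0}, lambda s: {(s + 1) % 4}, lambda s: s == 3, k=10)
```

The same unrolling, written as one SMT formula per bound, is what lets a solver reason about continuous dynamics and networks of components without enumerating states.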
109

A Lexi-ontological Resource for Consumer Healthcare: The Italian Consumer Medical Vocabulary

Cardillo, Elena January 2011 (has links)
In the era of Consumer Health Informatics, healthcare consumers and patients play an active role because they increasingly explore health related information sources on their own, and they become more responsible for their personal healthcare, trying to find information on the web, consulting decision-support healthcare systems, trying to interpret clinical notes or test results provided by their physician, or filling in parts of their own Personal Health Record (PHR). In spite of the advances in Healthcare Informatics for answering consumer needs, it is still difficult for laypersons who do not have a good level of healthcare literacy to find, understand, and act on health information, due to the communication gap which still persists between consumer and professional language (in terms of lexicon, semantics, and explanation). Significant effort has been devoted to promote access to and the integration of medical information, and many standard terminologies have been developed for this aim, some of which have been formalized into ontologies. Many of these terminological resources are used in healthcare information systems, but one of the most important problems is that these types of terminologies have been developed according to the physicians' perspective, and thus cannot provide sufficient support when integrated into consumer-oriented applications, such as Electronic Health Records, Personal Health Records, etc. This highlights the need for intermediate consumer-understandable terminologies or ontologies being integrated with more technical ones in order to support communication between patient-applications and those designed for experts.
The aim of this thesis is to develop a lexical-ontological resource for consumer-oriented healthcare applications. The resource is based on the construction of a Consumer-oriented Medical Vocabulary for Italian, able to reflect the different ways consumers and patients express and think about health topics, helping to bridge the vocabulary gap. By means of Semantic Web technologies, this vocabulary is integrated with the standard medical terminologies/ontologies used by professionals in general practice for representing the process of care, yielding a coherent semantic medical resource useful both for professionals and for consumers. The feasibility of this consumer-oriented resource and of the integration framework has been tested by applying it to an Italian Personal Health Record, helping consumers and patients query healthcare information and easily describe their problems, complaints, and clinical history.
110

Modern Anomaly Detection: Benchmarking, Scalability and a Novel Approach

Pasupathipillai, Sivam 27 November 2020 (has links)
Anomaly detection consists in automatically detecting the most unusual elements in a data set. Anomaly detection applications emerge in domains such as computer security, system monitoring, fault detection, and wireless sensor networks. The strategic importance of detecting anomalies in these domains makes anomaly detection a critical data analysis task. Moreover, the contextual nature of anomalies, among other issues, makes anomaly detection a particularly challenging problem. Anomaly detection has received significant research attention in the last two decades. Much effort has been invested in the development of novel algorithms for anomaly detection. However, several open challenges still exist in the field. This thesis presents our contributions toward solving these challenges. These contributions include: a methodological survey of the recent literature, a novel benchmarking framework for anomaly detection algorithms, an approach for scaling anomaly detection techniques to massive data sets, and a novel anomaly detection algorithm inspired by the law of universal gravitation. Our methodological survey highlights open challenges in the field, and it provides some motivation for our other contributions. Our benchmarking framework, named BAD, tackles the problem of reliably assessing the accuracy of unsupervised anomaly detection algorithms. BAD leverages parallel and distributed computing to enable massive comparison studies and hyperparameter tuning tasks. The challenge of scaling unsupervised anomaly detection techniques to massive data sets is well known in the literature. In this context, our contributions are twofold: we investigate the trade-offs between a single-threaded implementation and a distributed approach considering price-performance metrics, and we propose an approach for scaling anomaly detection algorithms to arbitrary data volumes.
Our results show that, when high scalability is required, our approach can handle arbitrarily large data sets without significantly compromising detection accuracy. We conclude our contributions by proposing a novel algorithm for anomaly detection, named Gravity. Gravity identifies anomalies by considering the attraction forces among massive data elements. Our evaluation shows that Gravity is competitive with other popular anomaly detection techniques on several benchmark data sets. Additionally, the properties of Gravity make it preferable in cases where hyperparameter tuning is challenging or unfeasible.
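The abstract describes Gravity only at a high level; purely as an illustration of a gravitation-inspired scorer (unit masses, inverse-square attraction; this sketch is not the thesis' actual algorithm), one could score each point by the inverse of the total attraction it feels from the rest of the data:

```python
import math

def gravity_scores(points, eps=1e-9):
    """Anomaly score per point: the inverse of the total gravitational
    attraction F = sum(1 / d^2) exerted by all other (unit-mass) points.
    Isolated points feel weak attraction and thus score high."""
    scores = []
    for i, p in enumerate(points):
        force = sum(1.0 / (math.dist(p, q) ** 2 + eps)
                    for j, q in enumerate(points) if i != j)
        scores.append(1.0 / force)
    return scores

# a dense cluster plus one far-away point: the outlier gets the top score
data = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
s = gravity_scores(data)
```

Note the appeal hinted at in the abstract: this scorer has essentially no hyperparameters to tune beyond the numerical guard `eps`.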
