101. Adaptive Personality Recognition from Text. Celli, Fabio. January 2012.
We address the issue of domain adaptation for automatic Personality Recognition from Text (PRT).
The PRT task consists in classifying the personality traits of authors, given pieces of text they wrote.
The purpose of our work is to improve current approaches to PRT in order to extract personality information from social network sites, which is a particularly challenging task.
We argue that current approaches, based on supervised learning, have several limitations for adaptation to the social network domain, mainly due to 1) difficulties in data annotation, 2) overfitting, 3) lack of domain adaptability, and 4) multilinguality issues.
We propose and test a new approach to PRT, which we call Adaptive Personality Recognition (APR).
We argue that this new approach solves the domain adaptability problems and is suitable for application in social network sites.
We start from an introduction that covers all the background knowledge required for understanding PRT.
It covers topics such as personality, the Big5 factor model, the sets of correlations between language features and personality traits, and a brief survey of learning approaches, including feature selection and domain adaptation.
We also provide an overview of the state of the art in PRT and outline the problems we see in applying PRT to the social network domain.
Basically, our APR approach is based on 1) an external model: a set of features/correlations between language and Big5 personality traits (taken from the literature); 2) an adaptive strategy that makes the model fit the distribution of the features in the dataset at hand, before generating personality hypotheses; 3) an evaluation strategy that compares all the hypotheses generated for each single text of each author, computing confidence scores.
This allows domain adaptation, semi-supervised learning and the automatic extraction of patterns associated with personality traits, which can be added to the initial correlation set, thus combining top-down and bottom-up approaches.
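A minimal sketch of how such an adaptive, correlation-based pipeline can be organized is shown below; the surface features, correlation signs, thresholds and scoring rule are simplified assumptions for illustration, not the exact model of the thesis:

```python
# Illustrative sketch of an Adaptive Personality Recognition (APR) pipeline.
# Features, correlation signs and the scoring rule are simplified assumptions.

# 1) External model: correlations between surface features and one Big5 trait
#    (here: extraversion). +1 = positive correlation, -1 = negative correlation.
CORRELATIONS = {"exclamation_marks": +1, "first_person_singular": +1, "long_words": -1}

def extract_features(text):
    words = text.split()
    n = max(len(words), 1)
    return {
        "exclamation_marks": text.count("!") / n,
        "first_person_singular": sum(w.lower() in ("i", "me", "my") for w in words) / n,
        "long_words": sum(len(w) > 8 for w in words) / n,
    }

def apr(posts_by_author):
    # 2) Adaptive step: thresholds are the feature means over the dataset at hand.
    all_feats = [extract_features(t) for texts in posts_by_author.values() for t in texts]
    mean = {f: sum(x[f] for x in all_feats) / len(all_feats) for f in CORRELATIONS}

    results = {}
    for author, texts in posts_by_author.items():
        votes = []
        for t in texts:
            feats = extract_features(t)
            # 3) Per-text hypothesis: agreement of correlation signs with the
            #    features being above or below the dataset mean.
            score = sum(sign if feats[f] > mean[f] else -sign
                        for f, sign in CORRELATIONS.items())
            votes.append(1 if score > 0 else -1)
        # Confidence: agreement among the hypotheses generated for each text.
        label = 1 if sum(votes) > 0 else -1
        confidence = votes.count(label) / len(votes)
        results[author] = ("extravert" if label > 0 else "introvert", confidence)
    return results

if __name__ == "__main__":
    posts = {"alice": ["I love this!!!", "My day was great, I went out with friends!"],
             "bob": ["Comprehensive documentation accompanies the experimental procedure."]}
    print(apr(posts))
```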
The main contributions of our approach to research in the field of PRT are: 1) the possibility to run top-down PRT from models taken from the literature, adapting them to new datasets; 2) the definition of a small, language-independent and resource-free feature/correlation set, tested on Italian and English; 3) the possibility to integrate top-down and bottom-up PRT strategies, allowing the enrichment of the initial feature/correlation set from the dataset at hand; 4) the development of a system for APR that does not require large labeled datasets for training, but just a small one for testing, minimizing the data annotation problem.
Finally, we describe some applications of APR to the analysis of personality in online social network sites, reporting results and findings.
We argue that the APR approach is very useful for Social Network Analysis, social marketing, opinion mining, sentiment analysis, mood detection and related fields.

102. Remote Sensing-based Channel Modeling and Deployment Planning for Low-power Wireless Networks. Demetri, Silvia. January 2018.
The deployment of low-power wireless networks is notoriously effort-demanding, as costly in-field campaigns are required to assess the connectivity properties of the target location and understand where to place the wireless nodes. The characteristics of the environment, both static (e.g., obstacles obstructing the link line of sight) and dynamic (e.g., changes in weather conditions), cause variability in the communication performance, thus affecting the network operation quality and reliability. This translates into difficulties in effectively deploying, planning and managing these networks in real-world scenarios, especially outdoors. Despite the large literature on node placement, existing approaches make over-simplifying assumptions that neglect the complexity of the radio environment.
Airborne and satellite Remote Sensing (RS) systems acquire data and images over wide areas, thus enabling one to derive information about these areas at large scale. In this dissertation, we propose to leverage RS systems and related data processing techniques to i) automatically derive the static characteristics of the deployment environment that affect low power wireless communication; ii) model the relation between such characteristics and the communication quality; and iii) exploit this knowledge to support the deployment planning. We focus on two main scenarios: a) the deployment of Wireless Sensor Networks (WSNs) in forests; and b) the communication performance of Internet of Things (IoT) networks based on Long Range (LoRa) wireless technology in the presence of mixed environments.
As a first major contribution, we propose a novel WSN node placement approach (LaPS) that integrates remote sensing data acquired by airborne Light Detection and Ranging (LiDAR) instruments, a specialized path loss model and evolutionary computation to identify (near-)optimal node positions in forests, automatically and prior to the actual deployment. When low-power WSNs operating at 2.4 GHz are deployed in forests, the presence of trees greatly affects communication. We define a processing architecture that automatically derives local forest attributes (e.g., tree density) from LiDAR data acquired over the target forest. This information is incorporated into a specialized path loss model, which is validated in deployments in a real forest, enabling fine-grained, per-link estimates of the radio signal attenuation induced by trees. Combining the forest attributes derived from LiDAR data with the specialized path loss model and a genetic algorithm, LaPS provides node placement solutions with higher quality than approaches based on a regular placement or on a standard path loss model, while satisfying the spatial and network requirements provided by the user. In addition, LaPS enables the exploration of the impact of changes in the user requirements on the resulting topologies in advance, thus reducing the in-field deployment effort.
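A toy sketch of such a pipeline is given below: a vegetation-aware path loss model (a simplified stand-in for the specialized, LiDAR-calibrated model of the thesis) drives a small genetic search for node positions; all constants, the attenuation term and the evolutionary loop are illustrative assumptions:

```python
# Toy LaPS-like pipeline: vegetation-aware path loss + tiny genetic search.
# Constants and the attenuation term are illustrative, not the thesis' model.
import math, random

GRID = 100.0            # deployment area side (m)
TREE_DENSITY = {}       # (cell_x, cell_y) -> relative tree density per 10x10 m cell
random.seed(0)
for i in range(10):
    for j in range(10):
        TREE_DENSITY[(i, j)] = random.uniform(0.0, 1.0)   # e.g. derived from LiDAR

def path_loss_db(p, q):
    """Rough free-space loss at 2.4 GHz plus an excess term that grows with the
    tree density sampled at the link midpoint (simplified vegetation model)."""
    d = max(math.dist(p, q), 1.0)
    fspl = 40.0 + 20.0 * math.log10(d)
    mid = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    cell = (min(int(mid[0] // 10), 9), min(int(mid[1] // 10), 9))
    return fspl + 0.3 * TREE_DENSITY[cell] * d

def fitness(nodes, sink=(0.0, 0.0), budget_db=95.0):
    """Fraction of nodes whose best link (to the sink or another node) stays
    within the link budget; higher is better."""
    pts = [sink] + nodes
    ok = sum(1 for n in nodes
             if min(path_loss_db(n, o) for o in pts if o != n) <= budget_db)
    return ok / len(nodes)

def evolve(n_nodes=8, pop=30, gens=50):
    def rand_layout():
        return [(random.uniform(0, GRID), random.uniform(0, GRID)) for _ in range(n_nodes)]
    population = [rand_layout() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]
        children = []
        for _ in range(pop - len(parents)):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # crossover
            k = random.randrange(n_nodes)                         # mutation
            child[k] = (random.uniform(0, GRID), random.uniform(0, GRID))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("coverage score of best layout:", fitness(best))
```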
Moreover, to explore a different low-power wireless technology with starkly different trade-offs, we consider a LoRa-based IoT network operating in i) a free-space-like communication environment, i.e., the LoRa signal is transmitted from a high-altitude weather balloon, traverses an obstacle-free space and is received by gateways on the ground; and ii) a mixed environment that contains built-up areas, farming fields and groups of trees, with both LoRa transmitters and receiving gateways close to the ground. These scenarios show a huge gap in terms of communication range, thus revealing to what extent the presence of objects affects the coverage that LoRa gateways can provide. To characterize the mixed environment we exploit detailed land cover maps (i.e., with a spatial grain of 10x10 m) derived by automatically classifying multispectral remote sensing satellite images. The land cover information is jointly analyzed with LoRa connectivity traces, enabling us to observe a correlation between the land cover types involved in LoRa links and the trend of the signal attenuation with distance. This analysis opens interesting research avenues aimed at defining LoRa connectivity models that quantitatively account for the type of environment involved in the communication by leveraging RS data.
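As an illustration of the kind of joint analysis this enables, the sketch below fits a log-distance path loss model separately for links labelled with different land-cover classes; the traces are synthetic and the fitting procedure is a simplified assumption, not the analysis performed in the dissertation:

```python
# Fit PL(d) = PL0 + 10*n*log10(d) per land-cover class from (distance, loss) traces.
# The traces are synthetic; real ones would come from LoRa links and land cover maps.
import numpy as np

rng = np.random.default_rng(42)

def synth_traces(n, path_loss_exponent, pl0=40.0, sigma=3.0):
    """Synthetic (distance [m], measured path loss [dB]) pairs for one class."""
    d = rng.uniform(50, 5000, n)
    pl = pl0 + 10.0 * path_loss_exponent * np.log10(d) + rng.normal(0, sigma, n)
    return d, pl

traces = {
    "open fields": synth_traces(200, 2.1),   # close to free space
    "built-up":    synth_traces(200, 3.4),   # stronger attenuation with distance
    "trees":       synth_traces(200, 3.9),
}

for cover, (d, pl) in traces.items():
    # Least-squares fit in log-distance: the slope divided by 10 is the exponent n.
    slope, intercept = np.polyfit(np.log10(d), pl, 1)
    print(f"{cover:12s}  exponent n = {slope / 10:.2f}  PL0 = {intercept:.1f} dB")
```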

103. Advanced methods for tree species classification and biophysical parameter estimation using crown geometric information in high density LiDAR data. Harikumar, Aravind. January 2019.
The ecological, climatic and economic influence of forests makes them an essential natural resource to be studied, preserved, and managed. Forest inventorying using single-sensor data has a huge economic advantage over multi-sensor data. Remote sensing of forests using high density multi-return small footprint Light Detection and Ranging (LiDAR) data is becoming a cost-effective method for the automatic estimation of forest parameters at the Individual Tree Crown (ITC) level.
Individual tree detection and delineation techniques form the basis for ITC-level parameter estimation. However, state-of-the-art (SoA) techniques often fail to exploit the huge amount of three-dimensional (3D) structural information in high density LiDAR data to achieve accurate detection and delineation of the 3D crown in dense forests. Thus, the first contribution of the thesis is a technique that detects and delineates both dominant and subdominant trees in dense multilayered forests. The proposed method uses novel two-dimensional (2D) and 3D features to achieve this goal.
Species knowledge at the individual tree level is relevant for accurate forest parameter estimation. Most state-of-the-art techniques use features that represent the distribution of data points within the crown to achieve species classification. However, the performance of such methods is low when the trees belong to the same taxonomic class (e.g., the conifer class). High density LiDAR data contain a huge amount of fine structural information about individual tree crowns. Thus, the second contribution of the thesis consists of novel methods for classifying conifer species using both branch-level and crown-level geometric characteristics.
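As a simple illustration of how crown geometry can drive species classification, the sketch below derives two geometric descriptors from synthetic crown point clouds and trains an off-the-shelf classifier; the descriptors and the synthetic crowns are illustrative stand-ins for the branch- and crown-level features developed in the thesis:

```python
# Crown-level geometric features (taper, crown depth) feeding a species classifier.
# Synthetic crowns and descriptors are illustrative, not the thesis' features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth_crown(conical, n=400):
    """Synthetic LiDAR returns for one tree crown (x, y, z in metres)."""
    z = rng.uniform(0, 1, n)
    radius = (1 - z) * 2.5 if conical else np.full(n, 1.8) * rng.uniform(0.7, 1.0, n)
    theta = rng.uniform(0, 2 * np.pi, n)
    r = radius * np.sqrt(rng.uniform(0, 1, n))
    return np.c_[r * np.cos(theta), r * np.sin(theta), 10 + 8 * z]

def crown_features(pts):
    z = pts[:, 2]
    zn = (z - z.min()) / (z.max() - z.min() + 1e-9)
    width_low = np.ptp(pts[zn < 0.3, 0])        # crown width near the base
    width_high = np.ptp(pts[zn > 0.7, 0])       # crown width near the top
    taper = width_high / (width_low + 1e-9)     # ~0 for cone-shaped, ~1 for cylindrical
    depth = z.max() - z.min()
    return [taper, depth]

X, y = [], []
for label, conical in (("spruce-like", True), ("pine-like", False)):
    for _ in range(60):
        X.append(crown_features(synth_crown(conical)))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```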
Accurate localization of trees is fundamental for calibrating individual tree-level inventory data, as it allows reference data to be matched to LiDAR data. An important biophysical parameter for precision forestry applications is the Diameter at Breast Height (DBH). SoA methods locate the stem directly below the tree top and indirectly estimate DBH using species-specific allometric models. Both approaches tend to be inaccurate and depend on the forest type. Thus, in this thesis, a method for accurate stem localization and DBH measurement is proposed. This is the third contribution of the thesis.
Qualitative and quantitative results of the experiments confirm the effectiveness of the proposed methods over the SoA ones.

104. Learning the Meaning of Quantifiers from Language and Vision. Pezzelle, Sandro. January 2018.
Defining the meaning of vague quantifiers (‘few’, ‘most’, ‘all’) has been, and still is, the Holy Grail of a mare magnum of studies in philosophy, logic, and linguistics. The way they are learned by children has been largely investigated in the realm of language acquisition, and the mechanisms underlying their comprehension and processing have received attention from experimental pragmatics, cognitive psychology, and neuroscience. Very often their meaning has been tied to that of numbers, amounts, and proportions, and many attempts have been made to place them on ordered scales. In this thesis, I study quantifiers from a novel, cognitively-inspired computational perspective. By carrying out several behavioral studies with human speakers, I seek to answer several questions concerning their meaning and use: Is the choice of quantifiers modulated by the linguistic context? Do quantifiers lie on a mental, semantically-ordered scale? What are the features of such a scale? By exploiting recent advances in computational linguistics and computer vision, I test the performance of state-of-the-art neural networks in performing the same tasks and propose novel architectures to model speakers’ use of quantifiers in grounded contexts. In particular, I ask the following questions: Can the meaning of quantifiers be learned from visual scenes? How does this mechanism compare with that subtending comparatives, numbers, and proportions? The contribution of this work is two-fold: On the cognitive level, it sheds new light on various issues concerning the meaning and use of such expressions, and provides experimental evidence supporting the validity of the foundational theories. On the computational level, it proposes a novel, theoretically-informed approach to the modeling of vague and context-dependent expressions from both linguistic and visual data. By carefully analyzing the performance and errors of the models, I show the effectiveness of neural networks in performing challenging, high-level tasks. At the same time, I highlight commonalities and differences with human behavior.
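As a toy illustration of grounded quantifier learning, the sketch below trains a classifier to map the composition of a synthetic scene to the quantifier produced by a simulated speaker; the quantifier inventory, thresholds and noise model are assumptions for illustration only, not the experimental setup of the thesis:

```python
# Learn to map scene composition (targets vs. distractors) to a vague quantifier.
# The simulated speaker and its thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def speaker_choice(proportion):
    """Simulated speaker: picks a quantifier from the target/total proportion,
    with a little noise to mimic inter-speaker variability."""
    p = float(np.clip(proportion + rng.normal(0, 0.05), 0, 1))
    if p == 0.0:
        return "none"
    if p < 0.4:
        return "few"
    if p < 1.0:
        return "most"
    return "all"

X, y = [], []
while len(X) < 2000:
    targets, distractors = int(rng.integers(0, 10)), int(rng.integers(0, 10))
    total = targets + distractors
    if total == 0:
        continue
    X.append([targets, distractors, targets / total])
    y.append(speaker_choice(targets / total))

clf = LogisticRegression(max_iter=2000).fit(X[:1500], y[:1500])
print("held-out accuracy:", round(clf.score(X[1500:], y[1500:]), 3))
print("prediction for 3 targets out of 12 objects:", clf.predict([[3, 9, 3 / 12]])[0])
```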

105. Efficient Motion Planning for Wheeled Mobile Robotics. Bevilacqua, Paolo. January 2019.
Nowadays, the field of wheeled robotics is undergoing impressive growth and development. Different hardware and software components are being developed and applied in various contexts, including assistive robotics, industrial robotics, and automotive applications. Motion planning is a fundamental aspect of the development of autonomous wheeled mobile robots. The capability of planning safe, smooth trajectories, and of locally adjusting them in real time to deal with contingent situations and avoid collisions, is an essential requirement to allow robots to work and perform activities in public spaces shared with humans. Moreover, efficiency is in general a key constraint for this kind of application, given the limited computational power usually available on robotic platforms. In this thesis, we focus on the development of efficient algorithms to solve different kinds of motion planning problems. Specifically, in the first part of the thesis, we propose a complete planning system for an assistive robot supporting the navigation of older users. The developed planner generates paths connecting different locations on the map that are smooth and specifically tailored to optimize the comfort perceived by the human users. During navigation, the system applies an efficient model to predict the behaviours of the surrounding pedestrians and to locally adapt the reference path to minimize the probability of collisions. Finally, the motion planner is integrated with a "high-level" reasoning component to generate and propose complete activities, such as a visit to a museum or a shopping mall, specifically tailored to the preferences, needs and requirements of each user. In the second part of the thesis, we show how the efficient solutions and building blocks developed for the assistive robot can be adapted and applied to a completely different context, such as the generation of optimal trajectories for an autonomous racing vehicle.
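As a generic illustration of one building block of such planners, the sketch below smooths a sequence of waypoints with a parametric cubic spline and checks the resulting curvature; this is a textbook smoothing step under simplifying assumptions, not the specific trajectory generation method developed in the thesis:

```python
# Smooth a waypoint path with a parametric cubic spline and inspect its curvature,
# a proxy for how comfortably a wheeled robot could follow it.
import numpy as np
from scipy.interpolate import CubicSpline

waypoints = np.array([[0.0, 0.0], [2.0, 0.5], [4.0, 2.5], [5.0, 5.0], [7.0, 6.0]])

# Parametrize by cumulative chord length so the spline is well conditioned.
chords = np.r_[0, np.cumsum(np.linalg.norm(np.diff(waypoints, axis=0), axis=1))]
sx = CubicSpline(chords, waypoints[:, 0])
sy = CubicSpline(chords, waypoints[:, 1])

s = np.linspace(0, chords[-1], 200)
dx, dy = sx(s, 1), sy(s, 1)          # first derivatives along the parameter
ddx, ddy = sx(s, 2), sy(s, 2)        # second derivatives
curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

print("max curvature along the smoothed path: %.3f 1/m" % curvature.max())
print("=> minimum feasible turning radius: %.2f m" % (1.0 / curvature.max()))
```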

106. Agon: a Gamification-Based Framework for Acceptance Requirements. Piras, Luca. January 2018.
We live in the days of social software, where social interactions, from simple notifications to complex business processes, are supported by software platforms such as Facebook and Twitter. But for any social software to be successful, it must be used by a sizeable portion of its intended user community. This is fundamental for social software, but also a crucial point for most software systems in general, and the fulfillment of such (Usage) Acceptance Requirements critically depends on psychological, behavioral and social factors which may influence the intrinsic and extrinsic motivations of the user. Operationalization techniques for Acceptance Requirements largely consist of making a game out of software usage, where users are rewarded depending on the degree of their participation. The game, for instance, may be competitive or non-competitive, depending on the anticipated personality traits of intended users. Making a game out of usage is often referred to as Gamification. It has attracted significant attention in the literature for the past few years because it offers a novel approach to software usage. Gamification is a powerful paradigm and a set of best practices used to motivate people to carry out a variety of ICT-mediated tasks. Designing gamification solutions and applying them to an ICT system is a complex and expensive process (in terms of time, competences and money), as software engineers have to cope with heterogeneous stakeholder requirements on one hand, and Acceptance Requirements on the other, that together ensure effective user participation and a high level of system utilization. As such, gamification solutions require significant analysis and design as well as suitable supporting tools and techniques. In this thesis, we describe Agon, an Acceptance Requirements Framework based on Gamification, for supporting the requirements engineer in the analysis and design of engaging software systems. The framework adopts concepts and design techniques from Requirements Engineering, Human Behavior and Gamification. Agon encompasses both a method and a meta-model capturing acceptance and gamification knowledge. In particular, the framework consists of a generic acceptance goal meta-model that characterizes the problem space by capturing possible refinements for acceptance requirements, and a generic gamification meta-model that captures possible gamified operationalizations for acceptance requirements. The framework is illustrated with the Meeting Scheduler Exemplar and different heterogeneous case studies. In particular, we describe Agon through a real case study concerning the gamification of a system for collaborative decision-making, within the Participatory Architectural Change MAnagement in ATM Systems (PACAS) European Project. We also describe Agon-Tool, a tool that supports the requirements engineer in carrying out the systematic acceptance requirements analysis of the Agon framework in a semi-automatic way.
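A hypothetical, much simplified rendering of the kind of knowledge such meta-models capture is sketched below: an acceptance goal refined by psychological factors and operationalized by candidate game elements matched to anticipated user traits. All names and mappings are invented for illustration and do not reflect the actual Agon meta-models:

```python
# Simplified, hypothetical data structures mimicking acceptance/gamification knowledge.
from dataclasses import dataclass, field

@dataclass
class AcceptanceGoal:
    name: str
    refinements: list = field(default_factory=list)   # psychological/behavioral factors

@dataclass
class GameElement:
    name: str
    suits: set = field(default_factory=set)           # anticipated traits it fits

ENGAGE_USERS = AcceptanceGoal(
    "Encourage participation in collaborative decision-making",
    refinements=["competence", "social relatedness", "rewards"],
)

CATALOGUE = [
    GameElement("points and levels", {"achievement-oriented"}),
    GameElement("leaderboard", {"competitive"}),
    GameElement("team challenges", {"cooperative"}),
    GameElement("badges", {"achievement-oriented", "competitive"}),
]

def operationalize(goal, audience_traits):
    """Pick candidate game elements matching the anticipated traits of the users."""
    picks = [g.name for g in CATALOGUE if g.suits & audience_traits]
    return {"goal": goal.name, "factors": goal.refinements, "game elements": picks}

print(operationalize(ENGAGE_USERS, {"cooperative", "achievement-oriented"}))
```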

107. Verification of Hybrid Systems using Satisfiability Modulo Theories. Mover, Sergio. January 2014.
Embedded systems are formed by hardware and software components that interact with the physical environment and thus may be modeled as hybrid systems. Due to the complexity of such systems, there is an increasing need for automatic techniques to support the design phase, ensuring that a system behaves as expected in all the possible operating conditions. In this thesis, we propose novel techniques for the verification and validation of hybrid systems using Satisfiability Modulo Theories (SMT). SMT is an established technique that has been used successfully in many verification approaches, targeted at both hardware and software systems. The use of SMT to verify hybrid systems has been limited, due to the restricted support for complex continuous dynamics and the lack of scalability. The contribution of the thesis is twofold. First, we propose novel encoding techniques, which widen the applicability and improve the effectiveness of SMT-based approaches. Second, we propose novel SMT-based algorithms that improve the performance of the existing state-of-the-art approaches. In particular, we show algorithms to solve problems such as invariant verification, scenario verification and parameter synthesis. The algorithms fully exploit the underlying structure of a network of hybrid systems and the functionalities of modern SMT solvers. We show and discuss the effectiveness of the proposed techniques when applied to benchmarks from the hybrid systems domain.
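The following minimal example, written with the z3 Python bindings, conveys the flavor of such SMT encodings: a discretized two-mode tank controller is unrolled for a bounded number of steps and a reachability question is posed to the solver. The system, the time discretization and all constants are illustrative; the thesis addresses much richer continuous dynamics, networks of hybrid automata and more advanced algorithms:

```python
# Bounded reachability of a bad state for a discretized two-mode tank, encoded in SMT.
from z3 import Real, Bool, Solver, Or, Not, If, sat

K = 10                # unrolling depth (number of discrete steps)
DT = 1.0              # time step of the discretization
s = Solver()

level = [Real(f"level_{i}") for i in range(K + 1)]
mode = [Bool(f"filling_{i}") for i in range(K + 1)]

# Initial condition: tank half full, draining.
s.add(level[0] == 5.0, Not(mode[0]))

for i in range(K):
    # Mode-dependent dynamics: +1.5 per step when filling, -1.0 when draining.
    rate = If(mode[i], 1.5, -1.0)
    s.add(level[i + 1] == level[i] + rate * DT)
    # Controller: start filling below 2, stop filling above 8, otherwise keep the mode.
    s.add(mode[i + 1] == If(level[i + 1] < 2.0, True,
                            If(level[i + 1] > 8.0, False, mode[i])))

# Bad state: overflow within K steps.
s.add(Or([level[i] > 10.0 for i in range(K + 1)]))

print("overflow reachable within", K, "steps:", s.check() == sat)
```

If the solver answers sat, its model is a concrete trace violating the property; an unsat answer proves safety within the chosen bound.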

108. A Lexi-ontological Resource for Consumer Healthcare: The Italian Consumer Medical Vocabulary. Cardillo, Elena. January 2011.
In the era of Consumer Health Informatics, healthcare consumers and patients play an active role because they increasingly explore health-related information sources on their own, and become more responsible for their personal healthcare, trying to find information on the web, consulting decision-support healthcare systems, trying to interpret clinical notes or test results provided by their physician, or filling in parts of their own Personal Health Record (PHR).
In spite of the advances in Healthcare Informatics for answering consumer needs, it is still difficult for laypersons who do not have a good level of healthcare literacy to find, understand, and act on health information, due to the communication gap which still persists between consumer and professional language (in terms of lexicon, semantics, and explanation). Significant effort has been devoted to promoting access to and the integration of medical information, and many standard terminologies have been developed for this aim, some of which have been formalized into ontologies.
Many of these terminological resources are used in healthcare information systems, but one of the most important problems is that these types of terminologies have been developed according to the physicians' perspective, and thus cannot provide sufficient support when integrated into consumer-oriented applications, such as Electronic Health Records, Personal Health Records, etc. This highlights the need for intermediate consumer-understandable terminologies or ontologies to be integrated with more technical ones in order to support communication between patient-oriented applications and those designed for experts. The aim of this thesis is to develop a lexical-ontological resource for consumer-oriented healthcare applications, based on the construction of a Consumer-oriented Medical Vocabulary for Italian that reflects the different ways consumers and patients express and think about health topics, helping to bridge the vocabulary gap. By means of Semantic Web technologies, this vocabulary is integrated with the standard medical terminologies/ontologies used by professionals in general practice for representing the process of care, in order to obtain a coherent semantic medical resource useful both for professionals and for consumers.
The feasibility of this consumer-oriented resource and of the Integration Framework has been tested by its application to an Italian Personal Health Record in order to help consumers and patients in the process of querying healthcare information, and easily describe their problems, complaints and clinical history.

109. Modern Anomaly Detection: Benchmarking, Scalability and a Novel Approach. Pasupathipillai, Sivam. 27 November 2020.
Anomaly detection consists in automatically detecting the most unusual elements in a data set. Anomaly detection applications emerge in domains such as computer security, system monitoring, fault detection, and wireless sensor networks. The strategic importance of detecting anomalies in these domains makes anomaly detection a critical data analysis task. Moreover, the contextual nature of anomalies, among other issues, makes anomaly detection a particularly challenging problem. Anomaly detection has received significant research attention in the last two decades. Much effort has been invested in the development of novel algorithms for anomaly detection. However, several open challenges still exist in the field. This thesis presents our contributions toward solving these challenges. These contributions include: a methodological survey of the recent literature, a novel benchmarking framework for anomaly detection algorithms, an approach for scaling anomaly detection techniques to massive data sets, and a novel anomaly detection algorithm inspired by the law of universal gravitation. Our methodological survey highlights open challenges in the field, and it provides some motivation for our other contributions. Our benchmarking framework, named BAD, tackles the problem of reliably assessing the accuracy of unsupervised anomaly detection algorithms. BAD leverages parallel and distributed computing to enable massive comparison studies and hyperparameter tuning tasks. The challenge of scaling unsupervised anomaly detection techniques to massive data sets is well-known in the literature. In this context, our contributions are twofold: we investigate the trade-offs between a single-threaded implementation and a distributed approach considering price-performance metrics, and we propose a scalable approach for applying anomaly detection algorithms to arbitrary data volumes. Our results show that, when high scalability is required, our approach can handle arbitrarily large data sets without significantly compromising detection accuracy. We conclude our contributions by proposing a novel algorithm for anomaly detection, named Gravity. Gravity identifies anomalies by considering the attraction forces among massive data elements. Our evaluation shows that Gravity is competitive with other popular anomaly detection techniques on several benchmark data sets. Additionally, the properties of Gravity make it preferable in cases where hyperparameter tuning is challenging or unfeasible.
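The abstract does not detail the algorithm, but the gravitational intuition can be conveyed with a small sketch in which each point is scored by the inverse of the "attraction" exerted by its nearest neighbours; this is only an illustration of the idea, not the actual Gravity algorithm:

```python
# Gravity-style intuition: weakly attracted points receive high anomaly scores.
import numpy as np

rng = np.random.default_rng(7)
cluster = rng.normal(loc=0.0, scale=1.0, size=(300, 2))     # dense "massive" region
outliers = rng.uniform(low=6.0, high=10.0, size=(5, 2))     # far, isolated points
X = np.vstack([cluster, outliers])

def gravity_scores(X, k=10, eps=1e-9):
    """Anomaly score = inverse of the total 'attraction' exerted by the k nearest
    neighbours, assuming unit masses and an inverse-square law."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                              # ignore self-attraction
    nearest = np.sort(d, axis=1)[:, :k]
    attraction = np.sum(1.0 / (nearest**2 + eps), axis=1)
    return 1.0 / attraction

scores = gravity_scores(X)
top = np.argsort(scores)[-5:]
print("highest-scoring points (indices):", sorted(top.tolist()))
print("true outlier indices:            ", list(range(300, 305)))
```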

110. Toward the "Deep Learning" of Brain White Matter Structures. Astolfi, Pietro. 08 April 2022.
In the brain, neuronal cells located in different functional regions communicate through a dense structural network of axons known as the white matter (WM) tissue. Bundles of axons that share similar pathways characterize the WM anatomy, which can be investigated in-vivo thanks to the recent advances in magnetic resonance (MR) techniques. Diffusion MR imaging combined with tractography pipelines allows for a virtual reconstruction of the whole WM anatomy of in-vivo brains, namely the tractogram. It consists of millions of WM fibers represented as 3D polylines, each approximating thousands of axons. From the analysis of a tractogram, neuroanatomists can characterize well-known white matter structures and detect anatomically non-plausible fibers, which are artifacts of the tractography and often constitute a large portion of it. The accurate characterization of tractograms is pivotal for several clinical and neuroscientific applications. However, such characterization is a complex and time-consuming process that is difficult to automate, as it requires properly encoding well-known anatomical priors. In this thesis, we propose to investigate the encoding of anatomical priors with a supervised deep learning framework. The ultimate goal is to reduce the presence of artifactual fibers to enable a more accurate automatic process of WM characterization. We frame the problem by distinguishing between volumetric and non-volumetric representations of white matter structures. In the first case, we learn the segmentation of the WM regions that represent relevant anatomical waypoints not yet classified by WM atlases. We investigate using Convolutional Neural Networks (CNNs) to exploit the volumetric representation of such priors. In the second case, the goal is to learn from the 3D polyline representation of fibers, where the typical CNN models are not suitable. We introduce the novelty of using Geometric Deep Learning (GDL) models designed to process data having an irregular representation. The working assumption is that the geometrical properties of fibers are informative for the detection of tractogram artifacts. As a first contribution, we present StemSeg, which extends the use of CNNs to detect the WM portion representing the waypoints of all the fibers for a specific bundle. This anatomical landmark, called stem, can be critical for extracting that bundle. We provide the results of an empirical analysis focused on the Inferior Fronto-Occipital Fasciculus (IFOF). The effective segmentation of the stem improves the final segmentation of the IFOF, outperforming the reference state of the art by a significant margin. As a second and major contribution, we present Verifyber, a supervised tractogram filtering approach based on GDL, distinguishing between anatomically plausible and non-plausible fibers. The proposed model is designed to learn anatomical features directly from the fiber, represented as a sequence of 3D points. The extended empirical analysis on healthy and clinical subjects reveals multiple benefits of Verifyber: high filtering accuracy, low inference time, flexibility to different plausibility definitions, and good generalization. Overall, this thesis constitutes a step toward characterizing white matter using deep learning. It provides effective ways of encoding anatomical priors and an original deep learning model designed for fibers.
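As a rough illustration of the task, the sketch below builds a PointNet-style classifier that takes a fiber resampled to a fixed number of 3D points and outputs a plausible/non-plausible score; it is a simplified stand-in for intuition only, not the actual Verifyber geometric deep learning architecture:

```python
# A fiber as a sequence of 3D points -> plausible / non-plausible logits.
import torch
import torch.nn as nn

class FiberClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared per-point MLP implemented as 1x1 convolutions over the sequence.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, fibers):                 # fibers: (batch, n_points, 3)
        x = fibers.transpose(1, 2)             # -> (batch, 3, n_points)
        x = self.point_mlp(x)                  # -> (batch, 128, n_points)
        x = x.max(dim=2).values                # global max pool over the points
        return self.head(x)                    # logits: plausible vs non-plausible

# Smoke test on random fibers resampled to 32 points each.
model = FiberClassifier()
batch = torch.randn(8, 32, 3)
logits = model(batch)
print(logits.shape)                            # torch.Size([8, 2])
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()
print("loss:", float(loss))
```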