81

Specification-based Predictive Continuous Monitoring for Cyber Physical Systems with Unobservables

Coletta, Alessio January 2018 (has links)
Cyber Physical Systems (CPS), like IoT and industrial control systems, are typically vulnerable to cyber threats due to a lack of cyber security measures and difficult change management. Security monitoring aims to improve situational awareness and resilience to cyber attacks. Solutions tailored to CPS are required for greater effectiveness. This PhD work proposes a monitoring framework that leverages knowledge of the CPS in order to specify, check, and predict known critical conditions. This approach is particularly suitable for CPS, which are designed for a precise purpose, well documented, and predictable to a good extent. The framework uses a formal logical language to specify quantitative critical conditions and an optimisation engine based on linear programming and SMT. The framework computes a quantitative measure of the criticality of the current state of the CPS: checking how criticality changes over time makes it possible to predict whether the system is approaching a critical condition or returning to a licit state. An important novelty of the approach is the ability to deal with unobservable variables, which improves feasibility in real cases, and the possibility to use observation times in critical specifications. This work presents the formal framework, a prototype, a testbed, and first experimental results that validate the feasibility of the approach.
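As an illustration of the kind of quantitative criticality check described above, the following sketch uses the Z3 Optimize API (not the thesis's MathSAT-based prototype) to compute a worst-case distance from a critical condition in the presence of an unobservable variable; the variable names, bounds and threshold are hypothetical.

```python
# A minimal sketch (not the thesis prototype) of specification-based criticality
# checking with an unobservable variable, using the Z3 Optimize API as a stand-in
# for the thesis's linear-programming/SMT engine. Names and constraints are
# hypothetical.
from z3 import Optimize, Real, And, sat

tank_level = Real("tank_level")      # observable: last sensor reading
valve_flow = Real("valve_flow")      # unobservable: not directly measured
margin = Real("margin")              # distance from the critical threshold

opt = Optimize()

# Plant knowledge: physical bounds that constrain the unobservable flow.
opt.add(And(valve_flow >= 0, valve_flow <= 5))

# Current observation (would come from the monitoring stream in practice).
opt.add(tank_level == 92)

# Critical condition: level plus worst-case inflow over the next step must stay
# below 100; `margin` measures how close the system is to violating it.
opt.add(margin == 100 - (tank_level + valve_flow))

# Worst case over the unobservable variable: minimise the margin.
opt.minimize(margin)

if opt.check() == sat:
    print("worst-case margin:", opt.model().eval(margin))
    # A shrinking margin across successive checks signals that the system
    # is approaching the critical condition.
```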
82

Advanced Techniques based on Mathematical Morphology for the Analysis of Remote Sensing Images

Dalla Mura, Mauro January 2011 (has links)
Remote sensing optical images of very high geometrical resolution can provide a precise and detailed representation of the surveyed scene. Thus, the spatial information contained in these images is fundamental for any application requiring the analysis of the image. However, modeling the spatial information is not a trivial task. We addressed this problem by using operators defined in the mathematical morphology framework in order to extract spatial features from the image. In this thesis novel techniques based on mathematical morphology are presented and investigated for the analysis of remote sensing optical images addressing different applications. Attribute Profiles (APs) are proposed as a novel generalization of the Morphological Profile operator based on attribute filters. Attribute filters are connected operators which can process an image by removing flat zones according to a given criterion. They are flexible operators since they can transform an image according to many different attributes (e.g., geometrical, textural and spectral). Furthermore, Extended Attribute Profiles (EAPs), a generalization of APs, are presented for the analysis of hyperspectral images. The EAPs are employed for including spatial features in the thematic classification of hyperspectral images. Two techniques dealing with EAPs and dimensionality reduction transformations are proposed and applied in image classification. In greater detail, one of the techniques is based on Independent Component Analysis and the other one deals with feature extraction techniques. Moreover, a technique based on APs for extracting features for the detection of buildings in a scene is investigated. Approaches that process an image by considering both the bright and dark components of a scene are investigated. In particular, the effect of applying attribute filters in an alternating sequential setting is investigated. Furthermore, the concept of Self-Dual Attribute Profile (SDAP) is introduced. SDAPs are APs built on an inclusion tree instead of a min- and max-tree, providing an operator that performs a multilevel filtering of both the bright and dark components of an image. Techniques developed for applications other than image classification are also considered. In greater detail, a general approach for image simplification based on attribute filters is proposed. Finally, two change detection techniques are developed. The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in terms of accuracy in different fields of application when compared to other state-of-the-art methods.
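A minimal sketch of how an Attribute Profile can be assembled from area attribute filters is given below; it relies on scikit-image's area opening and closing as the connected operators, and the sample image and threshold values are illustrative rather than taken from the thesis experiments.

```python
# A minimal sketch of building an Attribute Profile (AP) with area attribute
# filters, using scikit-image's area opening/closing as the connected operators.
import numpy as np
from skimage import data
from skimage.morphology import area_opening, area_closing

image = data.camera()                 # stand-in for a panchromatic remote sensing band
area_thresholds = [64, 256, 1024]     # increasing attribute (area) criteria

# Thickening profile: area closings remove dark flat zones smaller than the threshold.
closings = [area_closing(image, area_threshold=t) for t in area_thresholds]
# Thinning profile: area openings remove bright flat zones smaller than the threshold.
openings = [area_opening(image, area_threshold=t) for t in area_thresholds]

# The AP stacks the filtered images around the original one (coarse-to-fine closings,
# original, fine-to-coarse openings), yielding a per-pixel spatial feature vector.
profile = np.stack(closings[::-1] + [image] + openings, axis=-1)
print(profile.shape)                  # (rows, cols, 2 * len(area_thresholds) + 1)
```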
83

Social interaction analysis in videos, from wide to close perspective

Rota, Paolo January 2015 (has links)
In today’s digital age, the enhancement of hardware technology has set new horizons in the computer science universe, asking new questions, proposing new solutions and re-opening some branches that had been temporarily closed due to overwhelming computational complexity. In this sense many algorithms have been proposed, but they have never been successfully applied in practice up to now. In this work we will tackle the issues related to the detection and localization of an interaction conducted by humans. We will begin by analysing group interactions, then move to dyadic interactions, and finally extend our considerations to real-world scenarios. We will propose new challenging datasets, introducing new important tasks and suggesting some possible solutions.
84

Agon: a Gamification-Based Framework for Acceptance Requirements

Piras, Luca January 2018 (has links)
We live in the days of social software, where social interactions, from simple notifications to complex business processes, are supported by software platforms such as Facebook and Twitter. But for any social software to be successful, it must be used by a sizeable portion of its intended user community. This is fundamental for social software, but also a crucial point for most software systems in general, and the fulfillment of such (Usage) Acceptance Requirements critically depends on psychological, behavioral and social factors which may influence the intrinsic and extrinsic motivations of the user. Operationalization techniques for Acceptance Requirements largely consist of making a game out of software usage, where users are rewarded depending on the degree of their participation. The game, for instance, may be competitive or non-competitive, depending on the anticipated personality traits of intended users. Making a game out of usage is often referred to as Gamification. It has attracted significant attention in the literature over the past few years because it offers a novel approach to software usage. Gamification is a powerful paradigm and a set of best practices used to motivate people to carry out a variety of ICT-mediated tasks. Designing gamification solutions and applying them to an ICT system is a complex and expensive process (in time, competences and money), as software engineers have to cope with heterogeneous stakeholder requirements on one hand, and Acceptance Requirements on the other, that together ensure effective user participation and a high level of system utilization. As such, gamification solutions require significant analysis and design as well as suitable supporting tools and techniques. In this thesis, we describe Agon, an Acceptance Requirements Framework based on Gamification, for supporting the requirements engineer in the analysis and design of engaging software systems. The framework adopts concepts and design techniques from Requirements Engineering, Human Behavior and Gamification. Agon encompasses both a method and a meta-model capturing acceptance and gamification knowledge. In particular, the framework consists of a generic acceptance goal meta-model that characterizes the problem space by capturing possible refinements for acceptance requirements, and a generic gamification meta-model that captures possible gamified operationalizations for acceptance requirements. The framework is illustrated with the Meeting Scheduler Exemplar and different heterogeneous case studies. In particular, we describe Agon through a real case study concerning the gamification of a system for collaborative decision-making, within the Participatory Architectural Change MAnagement in ATM Systems (PACAS) European Project. We also describe the Agon-Tool, a tool that supports the requirements engineer in carrying out the systematic acceptance requirements analysis of the Agon framework in a semi-automatic way.
85

Optimization Modulo Theories with OptiMathSAT

Trentin, Patrick January 2019 (has links)
In the contexts of Formal Verification (FV) and Automated Reasoning (AR), Satisfiability Modulo Theories (SMT) is an important discipline that allows for dealing with industrial-level decision problems. Optimization Modulo Theories (OMT) extends Satisfiability Modulo Theories with the ability to express, and optimize, objective functions. Recently, there has been a growing interest towards OMT, as witnessed by an increasing number of applications using, at their core, some OMT solver as main workhorse engine. However, at present only a few OMT solvers exist, and the development of OMT technology is still at an early stage, with large margins of improvement. We identify two major advancement directions in particular. First, there is a general need for closing the expressiveness gap with respect to SMT, and for providing optimization procedures that can deal with the wider range of theories supported by SMT solvers. Second, there is an urgent need for more efficient techniques that can improve on the performance of state-of-the-art OMT solvers, because solving an OMT problem is inherently more expensive than dealing with its SMT counterpart, often by at least one order of magnitude. In this dissertation, we present a variety of techniques that deal with the identified issues and advance both the expressiveness and the efficiency of OMT. We describe our implementation of these techniques inside OptiMathSAT, a state-of-the-art OMT solver based on MathSAT5, along with its high-level architecture, Input/Output interfaces and configurable options. Thanks to our novel contributions, OptiMathSAT can now deal with the single- and the multi-objective incremental optimization of goals defined over multiple domains (the Boolean, the mixed Linear Integer and Rational Arithmetic, the Bit-Vector and the Floating Point domains), including (Partial Weighted) MaxSMT. We validate our theoretical contributions experimentally, by comparing the performance of OptiMathSAT against other, competing, OMT solvers. Finally, we investigate the effectiveness of OMT beyond the scope of Formal Verification, and describe an experimental evaluation comparing OptiMathSAT with Finite Domain Constraint Programming tools on benchmark sets coming from their respective domains.
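The sketch below illustrates the flavour of an OMT query combining hard constraints, weighted soft constraints (Partial Weighted MaxSMT) and an arithmetic objective. It uses Z3's Optimize API purely for illustration; OptiMathSAT exposes analogous functionality through its own interfaces and extended SMT-LIB commands, and the constraints shown are arbitrary.

```python
# A minimal sketch of an OMT query with weighted soft constraints and an
# explicit arithmetic objective, expressed with Z3's Optimize API as an
# illustration of the OMT problems discussed above.
from z3 import Optimize, Int, sat

x, y = Int("x"), Int("y")
opt = Optimize()

# Hard constraints over Linear Integer Arithmetic.
opt.add(x + y <= 10, x >= 0, y >= 0)

# Weighted soft constraints: the solver maximises the total weight of the
# satisfied ones (Partial Weighted MaxSMT).
opt.add_soft(x >= 7, weight=3)
opt.add_soft(y >= 7, weight=1)

# An explicit arithmetic objective on top of the soft constraints.
opt.maximize(x + 2 * y)

if opt.check() == sat:
    print(opt.model())
```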
86

Greedy Feature Selection in Tree Kernel Spaces

Pighin, Daniele January 2010 (has links)
Tree Kernel functions are powerful tools for solving different classes of problems requiring large amounts of structured information. Combined with accurate learning algorithms, such as Support Vector Machines, they allow us to directly encode rich syntactic data in our learning problems without requiring an explicit feature mapping function or deep domain-specific knowledge. However, like other very high dimensional kernel families, they come with two major drawbacks: first, the computational complexity induced by the dual representation makes them impractical for very large datasets or for situations where very fast classifiers are necessary, e.g. real time systems or web applications; second, their implicit nature somehow limits their scientific appeal, as the implicit models that we learn cannot cast new light on the studied problems. As a possible solution to these two problems, this Thesis presents an approach to feature selection for tree kernel functions in the context of Support Vector learning, based on a greedy exploration of the fragment space. Features are selected according to a gradient norm preservation criterion, i.e. we select the heaviest features, which account for a large percentage of the gradient norm, and these features are explicitly modeled and represented. The result of the feature extraction process is a data structure that can be used to decode the input structured data, i.e. to explicitly describe a tree in terms of its most relevant fragments. We present theoretical insights that justify the adopted strategy and detail the algorithms and data structures used to explore the feature space and store the most relevant features. Experiments on three different multi-class NLP tasks and data sets, namely question classification, relation extraction and semantic role labeling, confirm the theoretical findings and show that the decoding process can produce very fast and accurate linear classifiers, along with an explicit representation of the most relevant structured features identified for each class.
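The gradient-norm-preservation criterion can be illustrated on an explicitly enumerated feature space, as in the sketch below: after training a linear SVM, the heaviest features accounting for a chosen fraction of the squared norm of the weight vector are retained. The data is synthetic and the 90% threshold is arbitrary; in the thesis the fragment space is implicit and explored greedily rather than enumerated up front.

```python
# A minimal sketch of the gradient-norm-preservation idea: keep the heaviest
# features that account for a chosen fraction of the squared norm of the
# SVM weight (gradient) vector. Synthetic data, illustrative threshold.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 50))                  # rows: examples, cols: candidate fragments
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # only a few fragments are truly relevant

svm = SVC(kernel="linear").fit(X, y)
w = svm.coef_.ravel()                      # weight vector in the explicit fragment space

# Rank fragments by their squared contribution to ||w||^2 and keep the smallest
# set that preserves 90% of the squared norm.
order = np.argsort(w ** 2)[::-1]
cumulative = np.cumsum(w[order] ** 2) / np.sum(w ** 2)
selected = order[: np.searchsorted(cumulative, 0.90) + 1]
print("selected fragments:", selected)
```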
87

Information Quality Requirements Engineering: a Goal-based Modeling and Reasoning Approach

Gharib, Mohamad January 2015 (has links)
Information Quality (IQ) has always been a growing concern for most organizations, since they depend on information for managing their daily tasks, delivering services to their customers, making important decisions, etc., and relying on low-quality information may negatively influence their overall performance, or even lead to disasters in the case of critical systems (e.g., air traffic management systems, healthcare systems, etc.). Although several techniques for dealing with IQ-related problems exist in the literature (e.g., checksums, integrity constraints, etc.), most of them propose solutions that address the technical aspects of IQ and seem to be limited in addressing social and organizational aspects. In other words, these techniques do not satisfy the needs of current complex systems, such as socio-technical systems, where humans and their interactions are considered an integral part of the system along with the technical elements (e.g., healthcare systems, smart cities, etc.). This introduces the need to analyze the social and organizational context where the system will eventually operate, since IQ-related problems might manifest themselves in the actors' interactions and dependencies. Moreover, considering IQ requirements from the early phase of system development (the requirements phase) can prevent having to revise the system to accommodate such needs after deployment, which might be too costly. Despite this, most Requirements Engineering (RE) frameworks and approaches either loosely define or simply ignore IQ requirements. To this end, we propose a goal-oriented framework for modeling and reasoning about IQ requirements from the early phases of system development. The proposed framework consists of (i) a modeling language that provides concepts and constructs for modeling IQ requirements; (ii) a set of analysis techniques that support system designers while performing the required analysis to verify the correctness and consistency of the IQ requirements model; (iii) an engineering methodology to assist designers in using the framework for capturing IQ requirements; and (iv) automated tool support, namely the ST-IQ Tool. In addition, we empirically evaluated the framework to demonstrate its applicability, usefulness, and the scalability of its reasoning techniques by successfully applying it to a case study concerning a stock market system.
88

Bringing Probabilistic Real-Time Guarantees to the Real World

Villalba Frias, Bernardo January 2018 (has links)
Stochastic analysis of real-time systems has received remarkable attention in the past few years. In general, this analysis has mainly focused on sets of applications competing for a shared CPU, assuming independence in the computation and inter-arrival times of the jobs composing the tasks. However, for a large class of modern real-time applications, this assumption cannot be considered realistic. Indeed, these applications exhibit important variations in computation time, making the stochastic analysis not accurate enough to provide precise and tight probabilistic guarantees. Fortunately, for such applications we have verified that the computation time is more faithfully described by a Markov model. Hence, we propose a procedure based on the theory of hidden Markov models to extract the structure of the model from the observation of a number of execution traces of the application. Additionally, we show how to adapt probabilistic guarantees to a Markovian computation time. Our experimental results, obtained over a large set of both synthetic and real robotic applications, reveal a very good match between the theoretical findings and the ones obtained experimentally. Finally, the estimation procedure and the stochastic analysis method are integrated into the PRObabilistic deSign of Real-Time Systems (PROSIT) framework.
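A minimal sketch of the estimation step, fitting a hidden Markov model to an execution-time trace, is shown below. It uses hmmlearn's GaussianHMM on a synthetic bimodal trace; the PROSIT framework implements its own estimation procedure, and the trace parameters are invented.

```python
# A minimal sketch of fitting a hidden Markov model to a synthetic
# execution-time trace, in the spirit of the estimation step described above.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Synthetic trace: the task alternates between a "light" and a "heavy" mode.
light = rng.normal(2.0, 0.2, size=300)     # ms
heavy = rng.normal(5.0, 0.5, size=300)     # ms
trace = np.concatenate([light, heavy, light]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
hmm.fit(trace)

print("transition matrix:\n", hmm.transmat_)
print("per-mode mean computation time:", hmm.means_.ravel())
# The per-mode distributions and the transition matrix feed the probabilistic
# schedulability analysis, instead of a single i.i.d. computation-time model.
```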
89

Learning to Learn Concept Descriptions

Petrucci, Giulio January 2018 (has links)
The goal of automatically encoding natural language text into some formal representation has been pursued in the field of Knowledge Engineering to support the construction of Formal Ontologies. Many state-of-the-art methods have been proposed for the automatic extraction of lightweight Ontologies and to populate them. Only a few have tackled the challenge of extracting expressive axioms that formalize the possibly complex semantics of ontological concepts. In this thesis, we address the problem of encoding a natural language sentence expressing the description of a concept into a corresponding Description Logic axiom. In our approach, the encoding happens through a syntactic transformation, so that all the extralogical symbols in the formula are words actually occurring in the input sentence. We followed the recent advances in the field of Deep Learning in order to design suitable Neural Network architectures capable of learning from examples how to perform this transformation. Since no pre-existing dataset was available to adequately train Neural Networks for this task, we designed a data generation pipeline to produce datasets to train and evaluate the architectures proposed in this thesis. These datasets therefore provide a first reference corpus for the task of learning concept description axioms from text via Machine Learning techniques, and are now available to the Knowledge Engineering community to fill the pre-existing lack of data. During our evaluation, we assessed some key characteristics of the proposed approach. First, we evaluated the capability of the trained models to generalize over the syntactic structures used in the expression of concept descriptions, together with their tolerance to unknown words. The importance of these characteristics is due to the fact that Machine Learning systems are trained on a statistical sample of the problem space, and they have to learn to generalize over this sample in order to process new inputs. In particular, in our scenario, even an extremely large training set is not able to include all the possible ways a human can express the definition of a concept. At the same time, part of the human vocabulary is likely to fall outside the training set. Thus, testing these generalization capabilities and the tolerance to unknown words is crucial to evaluate the effectiveness of the model. Second, we evaluated the improvement in the performance of the model when it is incrementally trained with additional training examples. This is also a pivotal characteristic of our approach, since Machine Learning-based systems are typically supposed to continuously evolve and improve, in the long term, through iterative repetitions of training set enlargements and training process runs. Therefore, a valuable model has to show the ability to improve its performance when new training examples are added to the training set. To the best of our knowledge, this work represents the first assessment of an approach to the problem of encoding expressive concept descriptions from text that is entirely Machine Learning-based and is trained in an end-to-end fashion starting from raw text. In detail, this thesis proposes the first two Neural Network architectures in the literature to solve the problem, together with their evaluation with respect to the above pivotal characteristics, and a first dataset generation pipeline together with concrete datasets.
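As an illustration of the sentence-to-axiom transduction task, the sketch below wires together a generic GRU encoder-decoder in PyTorch. It is not one of the two architectures proposed in the thesis; the vocabularies, token ids and the toy sentence/axiom pair are hypothetical.

```python
# A minimal, generic sequence-to-sequence sketch mapping a tokenised concept
# description to a tokenised Description Logic axiom. Illustration only.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        _, h = self.encoder(self.src_emb(src))        # encode the sentence
        dec, _ = self.decoder(self.tgt_emb(tgt), h)   # teacher-forced decoding
        return self.out(dec)                          # logits over axiom tokens

# Toy pair: "a father is a man that has a child" -> "Father EQUIV Man AND EXISTS hasChild"
src = torch.tensor([[2, 5, 9, 4, 7, 3]])   # token ids of the input sentence (hypothetical)
tgt = torch.tensor([[1, 6, 8, 2, 4, 5]])   # token ids of the target axiom (hypothetical)

model = Seq2Seq(src_vocab=20, tgt_vocab=20)
logits = model(src, tgt[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, 20), tgt[:, 1:].reshape(-1))
loss.backward()
print("toy loss:", float(loss))
```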
90

Learning from noisy data through robust feature selection, ensembles and simulation-based optimization

Mariello, Andrea January 2019 (has links)
The presence of noise and uncertainty in real scenarios makes machine learning a challenging task. Acquisition errors or missing values can lead to models that do not generalize well on new data. Under-fitting and over-fitting can occur because of feature redundancy in high-dimensional problems as well as data scarcity. In these contexts, the learning task can struggle to extract relevant and stable information from noisy features or from a limited set of samples with high variance. In some extreme cases, the availability of only aggregated data instead of individual samples prevents the use of instance-based learning. In such cases, parametric models can be learned through simulations to take into account the inherent stochastic nature of the processes involved. This dissertation includes contributions to different learning problems characterized by noise and uncertainty. In particular, we propose i) a novel approach for robust feature selection based on the neighborhood entropy, ii) an approach based on ensembles for robust salary prediction in the IT job market, and iii) a parametric simulation-based approach for dynamic pricing and what-if analyses in hotel revenue management when only aggregated data are available.
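A minimal sketch of noise-robust feature ranking with a nearest-neighbour information estimator is given below, as a stand-in for the neighborhood-entropy criterion proposed in the thesis; it uses scikit-learn's k-NN based mutual information estimator on synthetic noisy data, and the sizes and noise level are illustrative.

```python
# A minimal sketch of ranking features on noisy data with a k-NN based
# information estimator, as a stand-in for the thesis's neighborhood-entropy
# criterion. Synthetic data; sizes and noise level are illustrative.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)
n, d = 500, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only the first two features matter
X += rng.normal(scale=0.3, size=X.shape)        # acquisition noise on every feature

# k-NN based mutual information between each feature and the class label.
scores = mutual_info_classif(X, y, n_neighbors=5, random_state=0)
ranking = np.argsort(scores)[::-1]
print("top features:", ranking[:5])
```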
