81

Advanced Techniques based on Mathematical Morphology for the Analysis of Remote Sensing Images

Dalla Mura, Mauro January 2011 (has links)
Remote sensing optical images of very high geometrical resolution can provide a precise and detailed representation of the surveyed scene. Thus, the spatial information contained in these images is fundamental for any application requiring the analysis of the image. However, modeling the spatial information is not a trivial task. We addressed this problem by using operators defined in the mathematical morphology framework in order to extract spatial features from the image. In this thesis, novel techniques based on mathematical morphology are presented and investigated for the analysis of remote sensing optical images, addressing different applications. Attribute Profiles (APs) are proposed as a novel generalization of the Morphological Profile operator based on attribute filters. Attribute filters are connected operators which can process an image by removing flat zones according to a given criterion. They are flexible operators since they can transform an image according to many different attributes (e.g., geometrical, textural and spectral). Furthermore, Extended Attribute Profiles (EAPs), a generalization of APs, are presented for the analysis of hyperspectral images. The EAPs are employed for including spatial features in the thematic classification of hyperspectral images. Two techniques dealing with EAPs and dimensionality reduction transformations are proposed and applied to image classification. In greater detail, one of the techniques is based on Independent Component Analysis and the other one deals with feature extraction techniques. Moreover, a technique based on APs for extracting features for the detection of buildings in a scene is investigated. Approaches that process an image by considering both bright and dark components of a scene are investigated. In particular, the effect of applying attribute filters in an alternating sequential setting is investigated. Furthermore, the concept of Self-Dual Attribute Profile (SDAP) is introduced. SDAPs are APs built on an inclusion tree instead of a min- and max-tree, providing an operator that performs a multilevel filtering of both the bright and dark components of an image. Techniques developed for applications other than image classification are also considered. In greater detail, a general approach for image simplification based on attribute filters is proposed. Finally, two change detection techniques are developed. The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in accuracy, in different fields of application, when compared to other state-of-the-art methods.
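To make the notion of an Attribute Profile more concrete, the minimal sketch below (not taken from the thesis) builds one by stacking attribute thinnings and thickenings of a grayscale image for increasing area thresholds. It assumes scikit-image's area_opening/area_closing as the attribute filters; the sample image and the threshold values are arbitrary.

```python
import numpy as np
from skimage import data
from skimage.morphology import area_opening, area_closing

def attribute_profile(image, thresholds):
    """Stack attribute thinnings (area openings) and thickenings (area
    closings) of `image` for increasing area thresholds, together with
    the original image, as in a basic Attribute Profile."""
    thinnings = [area_opening(image, area_threshold=t) for t in thresholds]
    thickenings = [area_closing(image, area_threshold=t) for t in thresholds]
    # Conventional ordering: coarsest thickening ... original ... coarsest thinning.
    layers = thickenings[::-1] + [image] + thinnings
    return np.stack(layers, axis=0)

img = data.camera()                            # sample grayscale image
ap = attribute_profile(img, [100, 500, 2500])  # arbitrary area thresholds
print(ap.shape)                                # (7, 512, 512): 3 closings + original + 3 openings
```

Each layer of the stack can then be fed, together with the spectral bands, to a classifier, which is the role APs and EAPs play in the thematic classification described above.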
82

Social interaction analysis in videos, from wide to close perspective

Rota, Paolo January 2015 (has links)
In today’s digital age, advances in hardware technology have set new horizons for the computer science universe, asking new questions, proposing new solutions and re-opening branches that had been temporarily closed due to overwhelming computational complexity. In this sense, many algorithms have been proposed but never successfully applied in practice until now. In this work we tackle the issues related to the detection and localization of interactions conducted by humans. We begin by analysing group interactions, then move to dyadic interactions, and finally elevate our considerations to real-world scenarios. We propose new challenging datasets, introduce important new tasks and suggest some possible solutions.
83

Optimization Modulo Theories with OptiMathSAT

Trentin, Patrick January 2019 (has links)
In the contexts of Formal Verification (FV) and Automated Reasoning (AR), Satisfiability Modulo Theories (SMT) is an important discipline that allows for dealing with industrial-level decision problems. Optimization Modulo Theories (OMT) extends Satisfiability Modulo Theories with the ability to express, and optimize, objective functions. Recently, there has been a growing interest in OMT, as witnessed by an increasing number of applications using, at their core, some OMT solver as their main workhorse engine. However, at present few OMT solvers exist, and the development of OMT technology is still at an early stage, with large margins of improvement. We identify two major advancement directions in particular. First, there is a general need to close the expressiveness gap with respect to SMT, and to provide optimization procedures that can deal with the wider range of theories supported by SMT solvers. Second, there is an urgent need for more efficient techniques that can improve on the performance of state-of-the-art OMT solvers, because solving an OMT problem is inherently more expensive than dealing with its SMT counterpart, often by at least one order of magnitude. In this dissertation, we present a variety of techniques that deal with the identified issues and advance both the expressiveness and the efficiency of OMT. We describe our implementation of these techniques inside OptiMathSAT, a state-of-the-art OMT solver based on MathSAT5, along with its high-level architecture, input/output interfaces and configurable options. Thanks to our novel contributions, OptiMathSAT can now deal with the single- and multi-objective incremental optimization of goals defined over multiple domains (the Boolean, the mixed Linear Integer and Rational Arithmetic, the Bit-Vector and the Floating-Point domains), including (Partial Weighted) MaxSMT. We validate our theoretical contributions experimentally, by comparing the performance of OptiMathSAT against other, competing, OMT solvers. Finally, we investigate the effectiveness of OMT beyond the scope of Formal Verification, and describe an experimental evaluation comparing OptiMathSAT with Finite Domain Constraint Programming tools on benchmark sets coming from their respective domains.
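For readers unfamiliar with OMT, the toy sketch below shows the flavour of an optimization-modulo-theories query. It uses Z3's Optimize engine rather than OptiMathSAT itself (OptiMathSAT accepts analogous problems through its extended SMT-LIB 2 interface), so it illustrates only the problem class, not the solver described in the thesis.

```python
from z3 import Int, Optimize, sat

x, y = Int('x'), Int('y')
opt = Optimize()
# Theory constraints over linear integer arithmetic ...
opt.add(x + y <= 10, x >= 0, y >= 2)
# ... plus an objective function to maximize.
h = opt.maximize(3 * x + 2 * y)
if opt.check() == sat:
    print(opt.model())   # e.g. [y = 2, x = 8]
    print(h.value())     # optimal objective value: 28
```

An SMT solver would only report that the constraints are satisfiable; the OMT layer additionally returns the model that optimizes the objective, which is exactly the extension the abstract refers to.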
84

Greedy Feature Selection in Tree Kernel Spaces

Pighin, Daniele January 2010 (has links)
Tree Kernel functions are powerful tools for solving different classes of problems requiring large amounts of structured information. Combined with accurate learning algorithms, such as Support Vector Machines, they allow us to directly encode rich syntactic data in our learning problems without requiring an explicit feature mapping function or deep domain-specific knowledge. However, like other very high-dimensional kernel families, they come with two major drawbacks: first, the computational complexity induced by the dual representation makes them impractical for very large datasets or for situations where very fast classifiers are necessary, e.g. real-time systems or web applications; second, their implicit nature somewhat limits their scientific appeal, as the implicit models that we learn cannot cast new light on the studied problems. As a possible solution to these two problems, this thesis presents an approach to feature selection for tree kernel functions in the context of Support Vector learning, based on a greedy exploration of the fragment space. Features are selected according to a gradient norm preservation criterion, i.e. we select the heaviest features, which account for a large percentage of the gradient norm, and model and represent them explicitly. The result of the feature extraction process is a data structure that can be used to decode the input structured data, i.e. to explicitly describe a tree in terms of its most relevant fragments. We present theoretical insights that justify the adopted strategy and detail the algorithms and data structures used to explore the feature space and store the most relevant features. Experiments on three different multi-class NLP tasks and data sets, namely question classification, relation extraction and semantic role labeling, confirm the theoretical findings and show that the decoding process can produce very fast and accurate linear classifiers, along with an explicit representation of the most relevant structured features identified for each class.
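As background for the fragment space the abstract refers to, the self-contained sketch below computes the classic Collins and Duffy subtree kernel, which counts the tree fragments shared by two parse trees. It is a simplified illustration of the kernel family, not of the thesis's greedy feature-selection algorithm; trees are encoded as nested tuples and the decay factor is arbitrary.

```python
def nodes(tree):
    """Yield every internal node of a tree encoded as nested tuples
    (label, child1, child2, ...); leaves are plain strings."""
    if isinstance(tree, tuple):
        yield tree
        for child in tree[1:]:
            yield from nodes(child)

def production(node):
    """The grammar production rooted at a node: label plus child labels."""
    return (node[0],) + tuple(c[0] if isinstance(c, tuple) else c for c in node[1:])

def delta(n1, n2, lam=0.4):
    """Number of common fragments rooted at n1 and n2 (Collins and Duffy),
    with decay factor lam to downweight large fragments."""
    if production(n1) != production(n2):
        return 0.0
    score = lam
    for c1, c2 in zip(n1[1:], n2[1:]):
        if isinstance(c1, tuple) and isinstance(c2, tuple):
            score *= 1.0 + delta(c1, c2, lam)
    return score

def tree_kernel(t1, t2, lam=0.4):
    """Sum the fragment matches over all pairs of nodes of the two trees."""
    return sum(delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))

t1 = ('S', ('NP', ('D', 'the'), ('N', 'dog')), ('VP', ('V', 'runs')))
t2 = ('S', ('NP', ('D', 'the'), ('N', 'cat')), ('VP', ('V', 'runs')))
print(tree_kernel(t1, t2))
```

The implicit feature space of this kernel is the set of all tree fragments; the thesis's contribution is a greedy procedure that makes the heaviest of those fragments explicit so that a fast linear classifier can replace the dual representation.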
85

Information Quality Requirements Engineering: a Goal-based Modeling and Reasoning Approach

Gharib, Mohamad January 2015 (has links)
Information Quality (IQ) has always been a growing concern for most organizations, since they depend on information for managing their daily tasks, delivering their services to their customers, making important decisions, etc., and relying on low-quality information may negatively influence their overall performance, or even lead to disasters in the case of critical systems (e.g., air traffic management systems, healthcare systems, etc.). Although several techniques exist in the literature for dealing with IQ-related problems (e.g., checksums, integrity constraints, etc.), most of them propose solutions that address the technical aspects of IQ and seem limited in addressing its social and organizational aspects. In other words, these techniques do not satisfy the needs of current complex systems, such as socio-technical systems, where humans and their interactions are considered an integral part of the system along with the technical elements (e.g., healthcare systems, smart cities, etc.). This introduces the need to analyze the social and organizational context where the system will eventually operate, since IQ-related problems might manifest themselves in the actors' interactions and dependencies. Moreover, considering IQ requirements from the early phase of system development (the requirements phase) can prevent having to revise the system to accommodate such needs after deployment, which might be too costly. Despite this, most Requirements Engineering (RE) frameworks and approaches either loosely define, or simply ignore, IQ requirements. To this end, we propose a goal-oriented framework for modeling and reasoning about IQ requirements from the early phases of system development. The proposed framework consists of (i) a modeling language that provides concepts and constructs for modeling IQ requirements; (ii) a set of analysis techniques that support system designers while performing the required analysis to verify the correctness and consistency of the IQ requirements model; (iii) an engineering methodology to assist designers in using the framework for capturing IQ requirements; and (iv) automated tool support, namely the ST-IQ Tool. In addition, we empirically evaluated the framework to demonstrate its applicability, usefulness, and the scalability of its reasoning techniques by successfully applying it to a case study concerning a stock market system.
86

Bringing Probabilistic Real-Time Guarantees to the Real World

Villalba Frias, Bernardo January 2018 (has links)
Stochastic analysis of real-time systems has received remarkable attention in the past few years. In general, this analysis has been mainly focused on sets of applications competing for a shared CPU and assuming independence in the computation and inter-arrival times of the jobs composing the tasks. However, for a large class of modern real-time applications, this assumption cannot be considered realistic. Indeed, these applications exhibit important variations in the computation time, making such a stochastic analysis not accurate enough to provide precise and tight probabilistic guarantees. Fortunately, for such applications we have verified that the computation time is more faithfully described by a Markov model. Hence, we propose a procedure based on the theory of hidden Markov models to extract the structure of the model from the observation of a number of execution traces of the application. Additionally, we show how to adapt probabilistic guarantees to a Markovian computation time. Our experimental results, obtained over a large set of both synthetic and real robotic applications, reveal a very good match between the theoretical findings and those obtained experimentally. Finally, the estimation procedure and the stochastic analysis method are integrated into the PRObabilistic deSign of Real-Time Systems (PROSIT) framework.
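As a rough illustration of what a Markovian computation-time model looks like (not the thesis's hidden-Markov-model estimation procedure), the sketch below discretizes an execution-time trace into computation modes and estimates the mode-to-mode transition probabilities by frequency counting; the trace, the number of modes and the quantile-based discretization are all assumptions made for the example.

```python
import numpy as np

def fit_markov_model(exec_times, n_modes=3):
    """Quantize an execution-time trace into `n_modes` computation modes
    and estimate the mode-transition matrix by frequency counting."""
    edges = np.quantile(exec_times, np.linspace(0, 1, n_modes + 1)[1:-1])
    states = np.digitize(exec_times, edges)          # mode index per job
    counts = np.zeros((n_modes, n_modes))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    transition = counts / counts.sum(axis=1, keepdims=True)
    return edges, transition

# Toy trace: jobs move between a "short" and a "long" computation regime.
rng = np.random.default_rng(0)
trace = np.concatenate([rng.normal(2.0, 0.1, 200), rng.normal(5.0, 0.3, 200)])
edges, P = fit_markov_model(trace, n_modes=2)
print(P)   # transition probabilities between computation-time modes
```

A nearly diagonal transition matrix, as produced here, is exactly the kind of mode persistence that an i.i.d. computation-time model cannot capture, which is why the Markovian analysis yields tighter probabilistic guarantees.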
87

Learning to Learn Concept Descriptions

Petrucci, Giulio January 2018 (has links)
The goal of automatically encoding natural language text into some formal representation has been pursued in the field of Knowledge Engineering to support the construction of Formal Ontologies. Many state-of-the-art methods have been proposed for the automatic extraction of lightweight Ontologies and for populating them. Only a few have tackled the challenge of extracting expressive axioms that formalize the possibly complex semantics of ontological concepts. In this thesis, we address the problem of encoding a natural language sentence expressing the description of a concept into a corresponding Description Logic axiom. In our approach, the encoding happens through a syntactic transformation, so that all the extralogical symbols in the formula are words actually occurring in the input sentence. We followed the recent advances in the field of Deep Learning in order to design suitable Neural Network architectures capable of learning from examples how to perform this transformation. Since no pre-existing dataset was available to adequately train Neural Networks for this task, we designed a data generation pipeline to produce datasets to train and evaluate the architectures proposed in this thesis. These datasets therefore provide a first reference corpus for the task of learning concept description axioms from text via Machine Learning techniques, and are now available to the Knowledge Engineering community to fill the pre-existing lack of data. During our evaluation, we assessed some key characteristics of the approach we propose. First, we evaluated the capability of the trained models to generalize over the syntactic structures used in the expression of concept descriptions, together with their tolerance to unknown words. The importance of these characteristics is due to the fact that Machine Learning systems are trained on a statistical sample of the problem space, and they have to learn to generalize over this sample in order to process new inputs. In particular, in our scenario, even an extremely large training set is not able to include all the possible ways a human can express the definition of a concept. At the same time, part of the human vocabulary is likely to fall out of the training set. Thus, testing these generalization capabilities and the tolerance to unknown words is crucial to evaluate the effectiveness of the model. Second, we evaluated the improvement in the performance of the model when it is incrementally trained with additional training examples. This is also a pivotal characteristic of our approach, since Machine Learning-based systems are typically supposed to continuously evolve and improve, in the long term, through iterative repetitions of training set enlargements and training process runs. Therefore, a valuable model has to show the ability to improve its performance when new training examples are added to the training set. To the best of our knowledge, this work represents the first assessment of an approach to the problem of encoding expressive concept descriptions from text that is entirely Machine Learning-based and is trained in an end-to-end fashion starting from raw text. In detail, this thesis proposes the first two Neural Network architectures in the literature to solve the problem, together with their evaluation with respect to the above pivotal characteristics, and a first dataset generation pipeline together with concrete datasets.
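To illustrate the general sentence-to-axiom transduction setting (not the specific architectures proposed in the thesis), the sketch below defines a minimal GRU encoder-decoder in PyTorch: the encoder reads the natural-language sentence and the decoder emits the token sequence of the target axiom. Vocabulary sizes, hidden dimensions and the toy batch are arbitrary assumptions for the example.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal GRU encoder-decoder, illustrative only."""
    def __init__(self, src_vocab, tgt_vocab, hidden=128, emb=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the natural-language sentence into a fixed-size state.
        _, state = self.encoder(self.src_emb(src))
        # Decode the axiom token sequence conditioned on that state
        # (teacher forcing: the gold prefix is fed as decoder input).
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        return self.out(dec_out)

model = Seq2Seq(src_vocab=5000, tgt_vocab=200)
src = torch.randint(0, 5000, (2, 12))   # toy batch: 2 sentences, 12 tokens each
tgt = torch.randint(0, 200, (2, 8))     # toy batch: 2 axiom sequences, 8 tokens each
logits = model(src, tgt)                # shape (2, 8, 200)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 200), tgt.reshape(-1))
```

In the thesis's setting the decoder's extralogical symbols are constrained to be words copied from the input sentence, which is the key difference from a generic sequence-to-sequence translator like the one sketched here.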
88

Learning from noisy data through robust feature selection, ensembles and simulation-based optimization

Mariello, Andrea January 2019 (has links)
The presence of noise and uncertainty in real scenarios makes machine learning a challenging task. Acquisition errors or missing values can lead to models that do not generalize well on new data. Under-fitting and over-fitting can occur because of feature redundancy in high-dimensional problems as well as data scarcity. In these contexts, the learning task can have difficulty extracting relevant and stable information from noisy features or from a limited set of samples with high variance. In some extreme cases, the availability of only aggregated data instead of individual samples prevents the use of instance-based learning. In such cases, parametric models can be learned through simulations to take into account the inherent stochastic nature of the processes involved. This dissertation includes contributions to different learning problems characterized by noise and uncertainty. In particular, we propose i) a novel approach to robust feature selection based on the neighborhood entropy, ii) an approach based on ensembles for robust salary prediction in the IT job market, and iii) a parametric simulation-based approach for dynamic pricing and what-if analyses in hotel revenue management when only aggregated data are available.
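As a hedged illustration of information-theoretic feature ranking on noisy data, the snippet below scores features with mutual information; note that this uses scikit-learn's mutual_info_classif as the relevance criterion, not the neighborhood-entropy measure proposed in the thesis, and the synthetic dataset is an assumption made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic noisy data: 5 informative features hidden among redundant/noisy ones,
# with 5% label noise (flip_y) to mimic acquisition errors.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_redundant=5, flip_y=0.05, random_state=0)
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]
print("Top 5 features by mutual information:", ranking[:5])
```

Robust feature-selection methods of the kind described in the abstract aim to keep such a ranking stable under resampling and label noise, which a plain single-shot ranking like this one does not guarantee.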
89

Architecting Evolving Internet of Things Application

Sasidharan, Swaytha January 2019 (has links)
The Internet of Things paradigm has witnessed unprecedented growth, pervading every sector of the societal fabric, from homes and cars to health, industry, and business. Given the advances witnessed in all the enabling technologies of IoT, it is now possible to connect an increasing number of devices to the internet. The data value chain has slowly risen in prominence. Moving from vertical solutions to a horizontal architecture has resulted in the development of services which are the culmination of multiple sources of data. The complexity of handling the growing number of connected devices has resulted in active research on architectures and platforms which enable service providers and data providers to navigate the maze of technologies. We also look at the data generated by the real world, which is dynamic and non-stationary in nature. The always-connected virtual representations of the devices allow applications to proactively perceive, comprehend and adapt to real-world situations. Paving the way to integrate learning algorithms, this thesis presents a modular architecture with elements to detect, respond and adapt to changing data. Given the scope of IoT in different applications, we explore the implementation challenges, both the advantages and the limitations, in two different domains. These include: (i) a Smart Asset Management Framework, to provide real-time localization of movable medical objects in a hospital. Additionally, the movement patterns of the objects are studied and modeled to facilitate predictions. This helps to improve the energy savings of the localization technology and helps the hospital authorities to understand the usage of the objects for efficient resource planning. (ii) Transitioning to an Industry 4.0 application, to facilitate the digital transformation of a solar cell research center. With the same concepts of virtualization (digital twins) and real-world knowledge generation, the digital factory vision is conceptualized and implemented in phases. The supporting work that led to the development of the architectural components, including prototypes for smart home environment control, people activity detection and presence detection using Bluetooth beacons, is presented. The implementation details, along with results and observations for each of the sections, are presented.
90

Multi-Resolution Techniques Based on Shape-Optimization for the Solution of Inverse-Scattering Problems

Benedetti, Manuel January 2008 (has links)
In the framework of inverse electromagnetic scattering techniques, the thesis focuses on the development and the analysis of the integration between a multi-resolution imaging procedure and a shape-optimization-based technique. The resulting methodology allows, on the one hand, fully exploiting the limited amount of information collectable from scattering measurements by means of the iterative multi-scaling approach (IMSA), which enables a detailed reconstruction only where needed without increasing the number of unknowns. On the other hand, the use of shape optimization, such as level-set-based minimization, provides an effective description of the class of targets to be retrieved by using a-priori information about the homogeneity of the scatterers. In order to assess the strong points and drawbacks of such a hybrid approach when dealing with one or multiple scatterers, a numerical validation of the proposed implementations is carried out by processing both synthetic and laboratory-controlled scattering data.
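To give a flavour of the level-set shape representation underlying the approach, the purely illustrative sketch below performs a single explicit evolution step of a level-set function whose zero contour describes the scatterer; the velocity field here is a placeholder constant, whereas in the actual inversion it would be derived from the scattering-data misfit.

```python
import numpy as np

def level_set_step(phi, velocity, dt=0.1):
    """One explicit update of the level-set function phi, moving its zero
    contour along the normal direction with the given speed field:
    phi_{k+1} = phi_k - dt * velocity * |grad phi|."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2)
    return phi - dt * velocity * grad_norm

# Initial shape: a circle of radius 10 described by a signed-distance function.
n = 64
yy, xx = np.mgrid[0:n, 0:n]
phi = np.sqrt((xx - n / 2) ** 2 + (yy - n / 2) ** 2) - 10.0
velocity = np.ones_like(phi)  # placeholder speed (would come from the data misfit)
phi = level_set_step(phi, velocity)
print((phi < 0).sum(), "pixels inside the evolved contour")
```

Representing the target implicitly in this way is what lets the hybrid approach handle homogeneous scatterers whose number and topology are not known in advance.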
