
An Evolutionary Approach to Adaptive Image Analysis for Retrieving and Long-term Monitoring Historical Land Use from Spatiotemporally Heterogeneous Map Sources

Herold, Hendrik 23 March 2015 (has links)
Land use changes have become a major contributor to anthropogenic global change. The ongoing dispersion and concentration of the human species, unprecedented in their extent, have indisputably altered Earth’s surface and atmosphere. The effects are so salient and irreversible that a new geological epoch, following the interglacial Holocene, has been announced: the Anthropocene. While some scholars date its onset back to the Neolithic revolution, it is commonly placed in the late 18th century. The rapid development since the industrial revolution and its implications gave rise to an increasing awareness of extensive anthropogenic land change and led to an urgent need for sustainable strategies for land use and land management. By preserving landscape and settlement patterns at discrete points in time, archival geospatial data sources such as remote sensing imagery and, in particular, historical geotopographic maps can give evidence of the dynamic land use change during this crucial period. In this context, this thesis sets out to explore the potential of retrospective geoinformation for monitoring, communicating, modeling, and eventually understanding the complex and gradually evolving processes of land cover and land use change. Currently, large amounts of geospatial data sources such as archival maps are being made accessible online worldwide by libraries and national mapping agencies. Despite their abundance and relevance, the use of historical land use and land cover information in research is still often hindered by laborious visual interpretation, which limits the temporal and spatial coverage of studies. Thus, the core of the thesis is dedicated to the computational acquisition of geoinformation from archival map sources by means of digital image analysis. Based on a comprehensive review of the literature as well as the data and proposed algorithms, two major challenges for long-term retrospective information acquisition and change detection were identified: first, the diversity of geographical entity representations over space and time, and second, the uncertainty inherent to both the data source itself and its utilization for land change detection. To address the former challenge, image segmentation is treated as a global non-linear optimization problem: the segmentation methods and parameters are adjusted using a metaheuristic, evolutionary approach. For preserving adaptability in high-level image analysis, a hybrid model- and data-driven strategy, combining a knowledge-based and a neural net classifier, is recommended. To address the second challenge, a probabilistic object- and field-based change detection approach is developed that models the positional, thematic, and temporal uncertainty adherent to both data and processing. Experimental results indicate the suitability of the methodology in support of land change monitoring. In conclusion, potential applications and directions for further research are given.
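To make the metaheuristic idea concrete, the following is a minimal sketch, in Python, of evolutionary tuning of segmentation parameters. The parameter names (scale, compactness) and the fitness surrogate are illustrative assumptions, not the thesis's actual objective; in practice the fitness would run the segmentation with the candidate parameters and compare the result against reference map objects.

```python
# Minimal sketch of evolutionary parameter tuning for a segmentation step.
# The parameters and the fitness function are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)

def segment_quality(params):
    """Hypothetical fitness: higher is better. In practice this would run the
    segmentation with `params` and score it against reference objects."""
    scale, compactness = params
    # Placeholder surrogate with an optimum near scale=30, compactness=0.5.
    return -((scale - 30.0) ** 2 / 100.0 + (compactness - 0.5) ** 2)

def evolve(pop_size=20, generations=50, mutation_scale=(2.0, 0.05)):
    """(mu + lambda)-style evolutionary search over two segmentation parameters."""
    # Random initial population: (scale in [5, 100], compactness in [0, 1]).
    pop = np.column_stack([rng.uniform(5, 100, pop_size),
                           rng.uniform(0, 1, pop_size)])
    for _ in range(generations):
        fitness = np.array([segment_quality(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]               # selection
        children = parents + rng.normal(0, mutation_scale, parents.shape)  # mutation
        children[:, 0] = np.clip(children[:, 0], 5, 100)
        children[:, 1] = np.clip(children[:, 1], 0, 1)
        pop = np.vstack([parents, children])
    return pop[np.argmax([segment_quality(ind) for ind in pop])]

print("best (scale, compactness):", evolve())
```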

Algorithmic and Graph-Theoretic Approaches for Optimal Sensor Selection in Large-Scale Systems

Lintao Ye (9741149) 15 December 2020 (has links)
Using sensor measurements to estimate the states and parameters of a system is a fundamental task in understanding the behavior of the system. Moreover, as modern systems grow rapidly in scale and complexity, it is not always possible to deploy sensors to measure all of the states and parameters of the system, due to cost and physical constraints. Therefore, selecting an optimal subset of all the candidate sensors to deploy and gather measurements of the system is an important and challenging problem. In addition, the systems may be targeted by external attackers who attempt to remove or destroy the deployed sensors. This further motivates the formulation of resilient sensor selection strategies. In this thesis, we address the sensor selection problem under different settings as follows.

First, we consider the optimal sensor selection problem for linear dynamical systems with stochastic inputs, where the Kalman filter is applied based on the sensor measurements to give an estimate of the system states. The goal is to select a subset of sensors under certain budget constraints such that the trace of the steady-state error covariance of the Kalman filter with the selected sensors is minimized. We characterize the complexity of this problem by showing that the Kalman filtering sensor selection problem is NP-hard and cannot be approximated within any constant factor in polynomial time for general systems. We then consider the optimal sensor attack problem for Kalman filtering: to attack a subset of the selected sensors under certain budget constraints in order to maximize the trace of the steady-state error covariance of the Kalman filter with the remaining sensors after the attack. We show that the same results as for the Kalman filtering sensor selection problem also hold for the Kalman filtering sensor attack problem. Having shown that the general sensor selection and sensor attack problems for Kalman filtering are hard to solve, our next step is to consider special classes of the general problems. Specifically, we consider the directed network underlying a linear dynamical system and investigate the case when there is a single node of the network that is affected by a stochastic input. In this setting, we show that the corresponding sensor selection and sensor attack problems for Kalman filtering can be solved in polynomial time. We further study the resilient sensor selection problem for Kalman filtering, where the problem is to find a sensor selection strategy under sensor selection budget constraints such that the trace of the steady-state error covariance of the Kalman filter is minimized after an adversary removes some of the deployed sensors. We show that the resilient sensor selection problem for Kalman filtering is NP-hard, and provide a pseudo-polynomial-time algorithm to solve it optimally.

Next, we consider the sensor selection problem for binary hypothesis testing. The problem is to select a subset of sensors under certain budget constraints such that a certain metric of the Neyman-Pearson (resp., Bayesian) detector corresponding to the selected sensors is optimized. We show that this problem is NP-hard if the objective is to minimize the miss probability (resp., error probability) of the Neyman-Pearson (resp., Bayesian) detector. We then consider three optimization objectives based on the Kullback-Leibler distance, the J-divergence, and the Bhattacharyya distance, respectively, in the hypothesis testing sensor selection problem, and provide performance bounds on greedy algorithms when applied to the sensor selection problem associated with these optimization objectives.

Moving beyond the binary hypothesis setting, we also consider the setting where the true state of the world comes from a set that can have cardinality greater than two. A Bayesian approach is then used to learn the true state of the world based on the data streams provided by the data sources. We formulate the Bayesian learning data source selection problem under this setting, where the goal is to minimize the cost spent on the data sources such that the learning error is within a certain range. We show that the Bayesian learning data source selection problem is also NP-hard, and provide greedy algorithms with performance guarantees.

Finally, in light of the COVID-19 pandemic, we study the parameter estimation measurement selection problem for epidemics spreading in networks. Here, the measurements (with certain costs) are collected by conducting virus and antibody tests on the individuals in the epidemic spread network. The goal of the problem is then to optimally estimate the parameters (i.e., the infection rate and the recovery rate of the virus) in the epidemic spread network, while satisfying the budget constraint on collecting the measurements. Again, we show that the measurement selection problem is NP-hard, and provide approximation algorithms with performance guarantees.
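As a concrete illustration of the Kalman-filtering sensor selection objective, the sketch below greedily adds sensors under a budget so as to reduce the trace of the steady-state (a priori) error covariance, obtained from the filtering Riccati equation. This is a generic greedy heuristic shown only for illustration, not the thesis's algorithms (the thesis in fact shows the general problem is NP-hard and hard to approximate); the system matrices, sensor noise levels, and costs are made-up examples.

```python
# Greedy sketch: add the sensor that most reduces trace(P_ss) until the budget is spent.
# All numbers below are illustrative assumptions, not from the thesis.
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

A = np.array([[0.9, 0.2], [0.0, 0.8]])      # stable example dynamics
W = 0.1 * np.eye(2)                          # process noise covariance
# Candidate sensors: measurement row, measurement noise variance, and deployment cost.
C_rows = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]]), np.array([[1.0, 1.0]])]
noise  = [0.05, 0.2, 0.1]
cost   = [2, 1, 1]
budget = 2

def steady_state_trace(selected):
    """Trace of the steady-state prediction error covariance for a sensor subset."""
    if not selected:                          # no measurements: open-loop Lyapunov equation
        return np.trace(solve_discrete_lyapunov(A, W))
    C = np.vstack([C_rows[i] for i in selected])
    V = np.diag([noise[i] for i in selected])
    P = solve_discrete_are(A.T, C.T, W, V)    # filtering Riccati equation
    return np.trace(P)

selected, spent = [], 0
while True:
    candidates = [i for i in range(len(C_rows))
                  if i not in selected and spent + cost[i] <= budget]
    if not candidates:
        break
    best = min(candidates, key=lambda i: steady_state_trace(selected + [i]))
    selected.append(best)
    spent += cost[best]

print("selected sensors:", selected, "trace:", steady_state_trace(selected))
```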

Accelerated In-situ Workflow of Memory-aware Lattice Boltzmann Simulation and Analysis

Yuankun Fu (10223831) 29 April 2021 (has links)
As high performance computing systems advance from petascale to exascale, scientific workflows that integrate simulation with visualization and analysis are a key factor influencing scientific campaigns. As one family of such campaigns, computational fluid dynamics (CFD) simulations of fluid behavior have progressed rapidly in the past several decades and have revolutionized many fields. The lattice Boltzmann method (LBM) is an evolving CFD approach that significantly reduces the complexity of conventional CFD methods and can simulate complex fluid flow phenomena at lower computational cost. This research focuses on accelerating the workflow of LBM simulation and data analysis.

I start by examining how to effectively integrate each component of a workflow at extreme scales. First, we design an in-situ workflow benchmark that integrates seven state-of-the-art in-situ workflow systems with three synthetic applications, two real-world CFD applications, and the corresponding data analysis. Detailed performance analysis using visualized tracing shows that even the fastest existing workflow system still has 42% overhead. I then develop a novel minimized end-to-end workflow system, Zipper, which combines the fine-grain task parallelism of full asynchrony with pipelining. In addition, I design a novel concurrent data transfer optimization method that employs a multi-threaded work-stealing algorithm to transfer data over both the network and the parallel file system. It reduces the data transfer time by up to 32%, especially when the simulation application is stalled. Investigation of the speedup using OmniPath network tools shows that network congestion is alleviated by up to 80%. Finally, the scalability of the Zipper system is verified by a performance model and various large-scale workflow experiments on two HPC systems using up to 13,056 cores. Zipper is the fastest workflow system and outperforms the second fastest by up to 2.2 times.

After minimizing the end-to-end time of the LBM workflow, I turn to accelerating the memory-bound LBM algorithms themselves. We first design novel parallel 2D memory-aware LBM algorithms, and I then extend them to 3D memory-aware LBM algorithms that combine single-copy distribution, a single sweep, the swap algorithm, prism traversal, and the merging of multiple temporal time steps. Strong scalability experiments on three HPC systems show that the 2D and 3D memory-aware LBM algorithms outperform the fastest existing LBM by up to 4 times and 1.9 times, respectively. The reasons for the speedup are illustrated by theoretical algorithm analysis. Experimental roofline charts on modern CPU architectures show that the memory-aware LBM algorithms improve the arithmetic intensity (AI) of the fastest existing LBM by up to 4.6 times.
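For readers unfamiliar with the method, the snippet below is a minimal, conventional D2Q9 BGK lattice Boltzmann step in Python (periodic domain, NumPy), showing the stream-then-collide structure. It is not the thesis's Zipper workflow or its memory-aware algorithms; rather, the full per-step copy made during streaming illustrates the memory traffic that single-copy, swap, and fused-sweep schemes are designed to avoid. All parameters and the initial perturbation are illustrative.

```python
# Minimal conventional D2Q9 BGK lattice Boltzmann sketch (periodic domain, no obstacles).
import numpy as np

nx, ny, tau, steps = 64, 64, 0.8, 100
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

# Start from rest with a small density bump so something happens.
rho = np.ones((nx, ny))
rho[nx // 2, ny // 2] += 0.01
f = equilibrium(rho, np.zeros((nx, ny)), np.zeros((nx, ny)))

for _ in range(steps):
    # Streaming: conventional "two-lattice" style copy (np.roll allocates new arrays),
    # which is exactly the memory traffic that single-copy / swap / fused schemes avoid.
    f = np.array([np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
                  for i in range(9)])
    # Collision: BGK relaxation toward the local equilibrium.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau

print("mass conserved:", np.isclose(rho.sum(), nx * ny + 0.01))
```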

A New Approach for Automated Feature Selection

Gocht, Andreas 05 April 2019 (has links)
Feature selection, or variable selection, is an important step in many machine learning tasks. In the traditional approach, users specify the number of features to be selected, and an algorithm then selects features using scores such as the Joint Mutual Information (JMI). If users do not know the exact number of features to select, they need to evaluate the full learning chain for different feature counts in order to determine which count leads to the lowest training error. To overcome this drawback, we extend the JMI score and mitigate this flaw by introducing a stopping criterion into the selection algorithm that can be specified depending on the learning task. This enables developers to carry out feature selection before the actual learning is done. We call our new score Historical Joint Mutual Information (HJMI). Additionally, we compare our new algorithm, using the novel HJMI score, against traditional algorithms that use the JMI score. We demonstrate that the HJMI-based algorithm is able to automatically select a reasonable number of features: our approach delivers results as good as traditional approaches and sometimes even outperforms them, as it is not limited to a certain step size for feature evaluation.
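The sketch below illustrates greedy feature selection with the standard JMI score that the abstract builds on; the thesis's HJMI stopping criterion is not reproduced here, so this version still takes the number of features k as input. It assumes discrete, integer-coded features, uses a synthetic dataset, and relies on scikit-learn's mutual_info_score for the entropy bookkeeping.

```python
# Greedy JMI-based feature selection on synthetic discrete data (illustrative only).
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                            # binary target
X = np.column_stack([
    y ^ (rng.random(n) < 0.1),                       # informative, slightly noisy
    y ^ (rng.random(n) < 0.3),                       # informative, noisier
    rng.integers(0, 2, n),                           # irrelevant
    rng.integers(0, 4, n),                           # irrelevant
]).astype(int)

def pair_code(a, b):
    """Encode two discrete columns as a single discrete variable."""
    return a * (b.max() + 1) + b

def jmi_select(X, y, k):
    selected = []
    while len(selected) < k:
        candidates = [i for i in range(X.shape[1]) if i not in selected]
        def score(i):
            if not selected:                          # first pick: plain mutual information
                return mutual_info_score(X[:, i], y)
            # JMI score: sum over already-selected features of I((X_i, X_j); Y)
            return sum(mutual_info_score(pair_code(X[:, i], X[:, j]), y)
                       for j in selected)
        selected.append(max(candidates, key=score))
    return selected

print("selected feature indices:", jmi_select(X, y, k=2))
```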
