Time Series Analysis informed by Dynamical Systems Theory

Schumacher, Johannes (11 June 2015)
This thesis investigates time series analysis tools for prediction, as well as for the detection and characterization of dependencies, informed by dynamical systems theory. Emphasis is placed on the role of delays in information processing within dynamical systems, as well as on their effect on causal interactions between systems. Three main features characterize this work. First, time series are assumed to be measurements of complex deterministic systems; as a result, the functional mappings underlying the statistical models in all methods are justified by concepts from dynamical systems theory, and differential topology is employed to bridge the gap between that theory and the data. Second, the Bayesian paradigm of statistical inference is used to formalize uncertainty by means of a consistent theoretical apparatus with an axiomatic foundation. Third, the statistical models are strongly informed by modern nonlinear concepts from machine learning and nonparametric modeling, such as Gaussian process theory. Consequently, unbiased approximations of the functional mappings implied by the preceding system-level analysis can be achieved. Applications are considered foremost in computational neuroscience, but extend to generic time series measurements.
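The pairing of delay-coordinate reconstruction with Gaussian process models that the abstract describes can be illustrated with a short, hypothetical sketch. The snippet below is not taken from the thesis: it delay-embeds a toy signal and fits a Gaussian process regressor for one-step-ahead prediction; the embedding dimension, delay, and kernel choice are assumptions made purely for illustration.

# A minimal sketch, not the thesis's implementation: Gaussian process regression
# on a delay embedding of a scalar time series, illustrating how a deterministic-
# systems view (delay coordinates) can inform a nonparametric predictor.
# Embedding dimension, delay, and kernel choice are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def delay_embed(x, dim=3, tau=2):
    """Rows are delay vectors [x(t-(dim-1)*tau), ..., x(t-tau), x(t)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy deterministic signal standing in for a measured system
t = np.linspace(0, 20, 400)
x = np.sin(t) + 0.5 * np.sin(3 * t)

dim, tau = 3, 2
states = delay_embed(x, dim, tau)        # reconstructed states
targets = x[(dim - 1) * tau + 1:]        # one-step-ahead values
states = states[:-1]                     # align inputs with targets

# The GP prior with an RBF kernel stands in for the unknown prediction map;
# the white-noise term formalizes measurement uncertainty.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(states, targets)
mean, std = gp.predict(states[-5:], return_std=True)  # predictive mean and std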

A System Architecture for the Monitoring of Continuous Phenomena by Sensor Data Streams

Lorkowski, Peter (15 March 2019)
The monitoring of continuous phenomena such as temperature, air pollution, precipitation, and soil moisture is of growing importance. Decreasing costs for sensors and the associated infrastructure increase the availability of observational data. These data can only rarely be used directly for analysis; they need to be interpolated to cover a region in space and/or time without gaps. The objective of monitoring in a broader sense is therefore to provide data about the observed phenomenon in such an enhanced form. Notwithstanding the improvements in information and communication technology, monitoring always has to function under limited resources, namely the number of sensors, number of observations, computational capacity, time, data bandwidth, and storage space. To best exploit these limited resources, a monitoring system needs to strive for efficiency concerning sampling, hardware, algorithms, parameters, and storage formats. In that regard, this work proposes and evaluates solutions for several problems associated with the monitoring of continuous phenomena. Synthetic random fields can serve as reference models on which monitoring can be simulated and exactly evaluated; for this purpose, a generator is introduced that can create such fields with arbitrary dynamism and resolution. For efficient sampling, an estimator for the minimum density of observations is derived from the extent and dynamism of the observed field. To adapt the interpolation to the given observations, a generic algorithm for fitting kriging parameters is set out. A sequential model-merging algorithm based on the kriging variance is introduced to mitigate large workloads and to support subsequent, seamless updates of real-time models with new observations. For efficient storage utilization, a compression method is suggested; it is designed for the specific structure of field observations and supports progressive decompression. The unlimited diversity of possible configurations of the features above calls for an integrated approach to systematic variation and evaluation, so a generic tool for organizing and manipulating configurational elements in arbitrarily complex hierarchical structures is proposed. Besides the root mean square error (RMSE) as the crucial quality indicator, the computational workload is quantified in a manner that allows an analytical estimation of execution time for different parallel environments. In summary, a powerful framework for the monitoring of continuous phenomena is outlined; with its tools for systematic variation and evaluation, it supports continuous efficiency improvement.
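As a rough illustration of the interpolation and evaluation steps the abstract mentions (kriging over scattered observations, RMSE against a synthetic reference field), the sketch below implements ordinary kriging with an assumed exponential variogram. It is not the system described in the thesis; the variogram model, its parameters, and the synthetic field are hypothetical choices for demonstration only.

# A minimal sketch, not the thesis's system: ordinary kriging of scattered
# observations over a grid, evaluated by RMSE against the synthetic reference
# field that generated them. Variogram model and parameters are assumed.
import numpy as np

def exp_variogram(h, sill=1.0, vrange=0.3, nugget=0.01):
    """Exponential variogram gamma(h) = nugget + sill * (1 - exp(-h / vrange))."""
    return nugget + sill * (1.0 - np.exp(-h / vrange))

def ordinary_kriging(obs_xy, obs_z, query_xy, **vario):
    """Estimate values at query points from observations via ordinary kriging."""
    n = len(obs_xy)
    d_obs = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d_obs, **vario)
    A[-1, -1] = 0.0
    d_q = np.linalg.norm(query_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    b = np.ones((len(query_xy), n + 1))
    b[:, :n] = exp_variogram(d_q, **vario)
    weights = np.linalg.solve(A, b.T).T[:, :n]  # kriging weights per query point
    return weights @ obs_z

def reference_field(xy):
    """Synthetic 'true' field standing in for the observed phenomenon."""
    return np.sin(3 * xy[:, 0]) * np.cos(2 * xy[:, 1])

rng = np.random.default_rng(0)
obs_xy = rng.uniform(0.0, 1.0, size=(50, 2))   # sensor locations
obs_z = reference_field(obs_xy)                # observations (noise-free here)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                            np.linspace(0, 1, 25)), axis=-1).reshape(-1, 2)
estimate = ordinary_kriging(obs_xy, obs_z, grid, sill=1.0, vrange=0.3, nugget=0.01)
rmse = np.sqrt(np.mean((estimate - reference_field(grid)) ** 2))  # quality indicator
print(f"RMSE on reference grid: {rmse:.4f}")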
