221 |
Entwicklung chromatographischer und spektroskopischer Screeningmethoden zur Bestimmung der Migration aus Lebensmittelverpackungen
Paul, Nadine, 27 July 2010
Neben der Sicherheit für Lebensmittel stehen auch immer mehr die Lebensmittelverpackungen im Fokus der Öffentlichkeit. Der Übergang von Stoffen aus der Verpackung in das Lebensmittel ist unerwünscht und gesetzlich reglementiert. Um den Verbraucherschutz zu gewährleisten, müssen Grenzwerte und gesetzliche Anforderungen eingehalten werden. Der Übergang von rechtlich geregelten und nicht geregelten Substanzen muss überprüft werden, was eine analytische Herausforderung darstellt.
Die migrierenden stickstoffhaltigen Substanzen aus Doseninnenbeschichtungen wurden mittels eines Screenings aller migrierenden nicht-flüchtigen stickstoffhaltigen Substanzen mit einer molaren Masse kleiner 1000 Da untersucht. Die Anwendbarkeit eines Stickstoff-selektiven Detektors für das Screening von Coating-Extrakten, welche stickstoffhaltige Verbindungen enthalten, konnte gezeigt werden. Gegenstand der Untersuchung waren Vernetzersubstanzen, Flüssiglacke sowie Migrate der fertigen Beschichtung. Stickstoffhaltige, potenziell migrierende Substanzen wurden zunächst in den Ausgangsmaterialien der Beschichtung identifiziert, um sie anschließend im Migrat der Beschichtung zu quantifizieren. Es sollte gezeigt werden, ob Substanzen, welche als Ausgangsstoffe im Lack eingesetzt werden, oder entstehende Reaktionsprodukte in ein Lebensmittelsimulanz migrieren. Um die Relevanz der migrierenden stickstoffhaltigen Verbindungen im Hinblick auf weitere, nicht stickstoffhaltige migrierende Verbindungen zu zeigen, wurde das Gesamtmigrat der zur Verfügung stehenden Coatings bestimmt. Es zeigte sich, dass der Anteil der stickstoffhaltigen Substanzen (NCS) an den insgesamt migrierenden Verbindungen zwischen 0,2 und 6,3 % liegt.
Der Fokus des zweiten Teils der vorliegenden Arbeit liegt auf Lebensmittelverpackungen aus Kunststoff. Zunächst wurde eine HPLC-Methode mit Verdampfungslichtstreudetektor zur Bestimmung der Gesamtmigration mit dem Simulanz Sonnenblumenöl etabliert. Ziel dieser Untersuchungen war es, den Einfluss von Temperatur, Zeit und Schichtdicke auf das Migrationsverhalten von Siegelschichten für den Hochtemperaturbereich (> 70 °C) mit fetthaltigen Lebensmitteln mit Hilfe statistischer Versuchsplanung vorherzusagen. Mit Hilfe einer statistischen Software konnte eine Regressionsgleichung zur Berechnung der Gesamtmigration auf der Grundlage eines Box-Behnken-Versuchsplans erstellt werden. Dabei hatte die Temperatur den größten Einfluss auf die Gesamtmigration. Die Einflüsse von Zeit und Schichtdicke waren im untersuchten Bereich des hier gezeigten Modells linear und stiegen mit Erhöhung der Temperatur. Weiterhin konnte pro 10 °C Temperaturerhöhung eine Verdopplung des ermittelten Gesamtmigrationswertes beobachtet werden.
Die Bestimmung der Additive aus den Ersatzsimulanzien 95 % Ethanol und Iso-Octan von Verpackungen sollte ebenfalls gezeigt werden. Ein Screening-Gradient zur Bestimmung von 25 Additiven in den Ersatzsimulanzien wurde etabliert. Die Identifizierung der migrierenden Additive erfolgte mittels der Detektoren UVD (DAD), FLD, ELSD und CLND. Mit Hilfe der verschiedenen Detektionsarten ist es möglich, die strukturelle Vielfalt der eingesetzten Additive abzudecken. Eine Absicherung der Ergebnisse konnte zudem über MS-Detektion erfolgen.
Mit Hilfe der Untersuchungen wurden die gesamtmigrierenden Substanzen aus Verbundfolien zu 50 % (95 % Ethanol-Migrat) bzw. 10 % (Isooctan-Migrat) aufgeklärt. Die Konzentration der quantifizierten Additive zeigte im Verhältnis gesehen annähernd gleiche Werte. Der Unterschied in den ermittelten Gesamtmigraten (95 % Ethanol: 1,2 mg/dm2, Iso-Octan: 5,6 mg/dm2) konnte demnach nicht über die migrierenden Additive erklärt werden. Als weitere migrierende Substanzen wurden Ethylen-Oligomere identifiziert. Die Quantifizierung dieser erfolgte erstmals mit Hilfe der 1H-NMR-Spektroskopie. Die nahezu vollständige Aufklärung der Gesamtmigration einer Verbundfolie in den Ersatzsimulanzien konnte gezeigt werden. Die migrierenden Ethylen-Oligomere des Iso-Octan-Migrats wurden eingehender untersucht. Mit Hilfe von verschiedenen chromatographischen und spektroskopischen Methoden gelang eine Charakterisierung dieser im Migrat identifizierten Substanzen. / Besides the safety of food itself, food packaging is increasingly in the focus of the public. The migration of substances from the packaging into food is undesired and regulated by law. To ensure consumer protection, legal limits and requirements have to be complied with. The migration of regulated and non-regulated substances has to be verified, which poses an analytical challenge.
The determination of nitrogen-containing substances (NCS) from food can coatings by screening of migrating, non-volatile substances with a molecular mass below 1000 Da from the coatings was carried out. The applicability of a nitrogen-selective detector for the screening of coating extracts which contain nitrogen-containing substances was shown. For the investigations, crosslinking substances, liquid lacquers as well as migrates of the finished coatings were available. Nitrogen-containing, potentially migrating substances were first identified in the raw materials of the coating in order to quantify them in the migrates of the coating. It was to be shown whether substances from the raw materials or reaction products migrate into the food simulant. In order to show the relevance of the migrating nitrogen-containing substances with respect to other non-nitrogen-containing compounds, the overall migration of the available coatings was determined. It could be shown that the proportion of NCS in the overall migrating substances was between 0.2 and 6.3 %.
The focus of the second part of the work was on food packaging made of plastic. First, an HPLC method with ELS detection for the determination of the overall migration into sunflower oil was developed. The purpose of this determination was to predict the influence of temperature, time and layer thickness on the migration behaviour of sealing layers in contact with fatty food in the high-temperature range (> 70 °C) by means of design of experiments. Statistical software was used to compute a regression equation for the calculation of the overall migration based on a Box-Behnken design. The strongest influence was shown for the temperature. The model showed a doubling of the determined overall migration for every 10 °C increase in temperature.
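As a rough illustration of the design-of-experiments step described above, the following sketch fits a quadratic response-surface model for overall migration on a three-factor Box-Behnken design in temperature, time and layer thickness. All numbers and variable names are hypothetical placeholders, not data from the thesis.

```python
# Illustrative sketch (hypothetical data): quadratic response-surface fit on a
# three-factor Box-Behnken design, in the spirit of the approach described above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Coded factor levels (-1, 0, +1) of a three-factor Box-Behnken design.
design = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],   # centre-point replicates
])
# Hypothetical overall-migration results in mg/dm2 (placeholder numbers only).
migration = np.array([1.1, 4.2, 1.3, 4.8, 0.9, 3.9, 1.5, 5.1,
                      1.8, 2.1, 2.4, 2.7, 2.0, 2.1, 1.9])

# Full quadratic model: linear, interaction and squared terms.
poly = PolynomialFeatures(degree=2, include_bias=False)
X = poly.fit_transform(design)
model = LinearRegression().fit(X, migration)

for name, coef in zip(poly.get_feature_names_out(["T", "t", "d"]), model.coef_):
    print(f"{name:>8s}: {coef:+.3f}")
```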
The determination of plastic additives from the 95 % ethanol and isooctane migrates of packaging material was also to be conducted. An HPLC screening method for the determination of 25 additives in the fat substitutes was established. The identification of the migrating additives was carried out with UV detection (DAD), FLD, ELSD and CLND. By means of the different detection systems it was possible to cover the structural diversity of the additives used. To confirm the results, MS detection was used.
By means of these investigations, 50 % (95 % ethanol) and 10 % (isooctane), respectively, of the total migrating substances of a multilayer film were clarified. The relative concentrations of the quantified additives were nearly identical in both simulants. The difference in the overall migration (95 % ethanol: 1.2 mg/dm2, isooctane: 5.6 mg/dm2) could therefore not be explained by the migrating additives. Ethylene oligomers were identified as further migrating substances. Their quantification was carried out for the first time with 1H-NMR spectroscopy. An almost complete clarification of the overall migration of a multilayer film in the fat substitutes could be shown. The migrating ethylene oligomers of the isooctane migrate were investigated in more detail. With the help of different chromatographic and spectroscopic methods, a characterisation of these migrating ethylene oligomers was successful.
|
222 |
A Bridge between Short-Range and Seasonal Forecasts: Data-Based First Passage Time Prediction in Temperatures
Wulffen, Anja von, 25 January 2013
Current conventional weather forecasts are based on high-dimensional numerical models. They are usually only skillful up to a maximum lead time of around 7 days due to the chaotic nature of the climate dynamics and the related exponential growth of model and data initialisation errors. Even the fully detailed medium-range predictions made for instance at the European Centre for Medium-Range Weather Forecasts do not exceed lead times of 14 days, while even longer-range predictions are limited to time-averaged forecast outputs only.
Many sectors would profit significantly from accurate forecasts on seasonal time scales without needing the wealth of details a full dynamical model can deliver. In this thesis, we aim to study the potential of a much cheaper data-based statistical approach to provide predictions of comparable or even better skill up to seasonal lead times, using as an exemplary forecast target the time until the next occurrence of frost.
To this end, we first analyse the properties of the temperature anomaly time series obtained from measured data by subtracting a sinusoidal seasonal cycle, as well as the distribution properties of the first passage times to frost. The possibility of generating additional temperature anomaly data with the same properties by using very simple autoregressive model processes to potentially reduce the statistical fluctuations in our analysis is investigated and ultimately rejected.
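A minimal sketch of the preprocessing and target quantity described above, using synthetic data: a sinusoidal annual cycle is fitted and subtracted to obtain temperature anomalies, and first passage times to frost are collected. The synthetic series, noise level and thresholds are illustrative assumptions, not the measured data used in the thesis.

```python
# Minimal sketch with synthetic data: remove a sinusoidal annual cycle from a
# daily temperature series and collect first passage times to frost (T < 0 degC).
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(5 * 365)
temperature = 9.0 + 10.0 * np.sin(2 * np.pi * (days - 110) / 365.25) \
              + rng.normal(0.0, 3.0, days.size)

# Least-squares fit of a sinusoidal seasonal cycle: a + b*sin + c*cos.
basis = np.column_stack([np.ones_like(days, dtype=float),
                         np.sin(2 * np.pi * days / 365.25),
                         np.cos(2 * np.pi * days / 365.25)])
coeffs, *_ = np.linalg.lstsq(basis, temperature, rcond=None)
anomaly = temperature - basis @ coeffs

# First passage time to frost: days until the next temperature below 0 degC.
frost = temperature < 0.0
fpt = []
for start in range(days.size):
    ahead = np.nonzero(frost[start + 1:])[0]
    if ahead.size:
        fpt.append(ahead[0] + 1)
print("mean first passage time to frost: %.1f days" % np.mean(fpt))
```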
In a next step, we study the potential for predictability using only conditional first passage time distributions derived from the temperature anomaly time series and confirm a significant dependence of the distributions on the initial conditions. After this preliminary analysis, we issue data-based out-of-sample forecasts for three different prediction targets: The specific date of first frost, the probability of observing frost before summer for forecasts issued in spring, and the full probability distribution of the first passage times to frost.
We then study the possibility of improving the forecast quality first by enhancing the stationarity of the temperature anomaly time series and then by adding as an additional input variable the state of the North Atlantic Oscillation on the date the predictions are issued.
We are able to obtain significant forecast skill up to seasonal lead times when comparing our results to an unskilled reference forecast.
A first comparison between the data-based forecasts and corresponding predictions gathered from a dynamical weather model, necessarily using a lead time of only up to 15 days, shows that our simple statistical schemes are only outperformed (and then only slightly) if further statistical post-processing is applied to the model output. / Aktuelle Wetterprognosen werden mit Hilfe von hochdimensionalen, numerischen Modellen generiert. Durch die dem Klima zugrunde liegende chaotische Dynamik wachsen Modellfehler und Ungenauigkeiten in der Modellinitialisierung exponentiell an, sodass Vorhersagen mit signifikanter Güte üblicherweise nur für eine Vorlaufzeit von maximal sieben Tagen möglich sind. Selbst die detaillierten Prognosen des Europäischen Zentrums für mittelfristige Wettervorhersagen gehen nicht über eine Vorlaufzeit von 14 Tagen hinaus, während noch längerfristigere Vorhersagen auf zeitgemittelte Größen beschränkt sind.
Viele Branchen würden signifikant von akkuraten Vorhersagen auf saisonalen Zeitskalen profitieren, ohne das ganze Ausmaß an Details zu benötigen, das von einem vollständigen dynamischen Modell geliefert werden kann. In dieser Dissertation beabsichtigen wir, am Beispiel einer Vorhersage der Zeitdauer bis zum nächsten Eintreten von Frost zu untersuchen, inwieweit deutlich kostengünstigere, datenbasierte statistische Verfahren Prognosen von gleicher oder sogar besserer Güte auf bis zu saisonalen Zeitskalen liefern können.
Dazu analysieren wir zunächst die Eigenschaften der Zeitreihe der Temperaturanomalien, die aus den Messdaten durch das Subtrahieren eines sinusförmigen Jahresganges erhalten werden, sowie die Charakteristiken der Wahrscheinlichkeitsverteilungen der Zeitdauer bis zum nächsten Eintreten von Frost. Die Möglichkeit, durch einen einfachen autoregressiven Modellprozess zusätzliche Datenpunkte gleicher statistischer Eigenschaften wie der Temperaturanomalien zu generieren, um die statistischen Fluktuationen in der Analyse zu reduzieren, wird untersucht und letztendlich verworfen.
Im nächsten Schritt analysieren wir das Vorhersagepotential, wenn ausschließlich aus den Temperaturanomalien gewonnene bedingte Wahrscheinlichkeitsverteilungen der Wartezeit bis zum nächsten Frost verwendet werden, und können eine signifikante Abhängigkeit der Verteilungen von den Anfangsbedingungen nachweisen. Nach dieser einleitenden Untersuchung erstellen wir datenbasierte Prognosen für drei verschiedene Vorhersagegrößen: Das konkrete Datum, an dem es das nächste Mal Frost geben wird; die Wahrscheinlichkeit, noch vor dem Sommer Frost zu beobachten, wenn die Vorhersagen im Frühjahr ausgegeben werden; und die volle Wahrscheinlichkeitsverteilung der Zeitdauer bis zum nächsten Eintreten von Frost.
Anschließend untersuchen wir die Möglichkeit, die Vorhersagegüte weiter zu erhöhen - zunächst durch eine Verbesserung der Stationarität der Temperaturanomalien und dann durch die zusätzliche Berücksichtigung der Nordatlantischen Oszillation als einer zweiten, den Anfangszustand charakterisierenden Variablen im Vorhersageschema.
Wir sind in der Lage, im Vergleich mit einem naiven Referenzvorhersageschema eine signifikante Verbesserung der Vorhersagegüte auch auf saisonalen Zeitskalen zu erreichen.
Ein erster Vergleich zwischen den datenbasierten Vorhersagen und entsprechenden, aus den dynamischen Wettermodellen gewonnenen Prognosen, der sich notwendigerweise auf eine Vorlaufzeit der Vorhersagen von lediglich 15 Tagen beschränkt, zeigt, dass letztere unsere simplen statistischen Vorhersageschemata nur schlagen (und zwar knapp), wenn der Modelloutput noch einer statistischen Nachbearbeitung unterzogen wird.
|
223 |
Normal forms for multidimensional databases
Lehner, Wolfgang; Albrecht, J.; Wedekind, H., 02 June 2022
In the area of online analytical processing (OLAP), the concept of multidimensional databases is gaining much popularity. Thus, a number of different multidimensional data models have been proposed by research as well as by commercial product vendors, each emphasizing different perspectives. However, very little work has been done on guidelines for good schema design within such a multidimensional data model. Based on a logical reconstruction of multidimensional schema design, this paper proposes two multidimensional normal forms. These normal forms define modeling constraints for summary attributes describing the cells within a multidimensional data cube and constraints to model complex dimensional structures appropriately. Multidimensional schemas compliant with these normal forms not only ensure the validity of analytical computations on the multidimensional database, but also favor an efficient physical database design.
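To give a flavour of the kind of constraint such normal forms formalise (an informal illustration, not the paper's formal definition): levels of a dimension hierarchy, e.g. city -> state -> country, should roll up functionally, so that every city determines exactly one state; otherwise aggregates along the hierarchy become ambiguous. A toy check of this roll-up property:

```python
# Toy functional-dependency check for a dimension hierarchy level (city -> state).
# Hypothetical data; violations indicate a schema that breaks the roll-up property.
from collections import defaultdict

rows = [
    ("Dresden", "Saxony"), ("Leipzig", "Saxony"),
    ("Munich", "Bavaria"), ("Dresden", "Saxony"),
]

mapping = defaultdict(set)
for city, state in rows:
    mapping[city].add(state)

violations = {city: states for city, states in mapping.items() if len(states) > 1}
print("roll-up city -> state is functional:", not violations)
```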
|
224 |
Optimisation of water quality monitoring network design considering compliance criterion and trade-offs between monitoring information and costs
Nguyen, Thuy Hoang, 03 February 2022
Water quality monitoring (WQM) is crucial for managing and protecting riverine ecosystems. There is a plethora of methods to select monitoring sites, water quality parameters (WQPs), and monitoring frequencies; however, no standard method or strategy has been accepted for river systems. Water managers have faced difficulties in adapting WQM network design methods to their local boundary conditions, monitoring objectives, monitoring costs, and legal regulations. Given the elevated cost and time consumption of monitoring, approaches to evaluate and redesign monitoring networks based on monitoring goal achievement are crucial for water managers. Hence, the overall aim of this thesis is to develop and employ a reliable yet straightforward approach to optimise and quantify the effectiveness of WQM networks in rivers. The objectives are to (i) identify the commonly used methods and the boundary conditions for applying these methods in the assessment and design of WQM networks in rivers; (ii) optimise river WQM network design based on compliance criteria; and (iii) optimise river WQM network design based on the trade-offs between the information provided by the monitoring network and the monitoring expenses.
A systematic review of the commonly used design methods and their resulting monitoring setups in Chapter 2 shows that multivariate statistical analysis (MVA) is a promising tool to reduce the number of monitoring sites and water quality parameters. Most of the reported studies overlook small streams and trace pollutants such as heavy metals and organic microcontaminants in the analysis. Data availability and expert judgement seem to affect the selection of design methods rather than river size and the extent of the monitoring networks.
The commonly used statistical methods are applied to the case study of the Freiberger Mulde (FM) river basin in eastern Germany to optimise its current monitoring network. Chapter 3 is dedicated to redesigning the monitoring network for compliance monitoring purposes. In Chapter 3, 82 non-biological parameters are initially screened and analysed for violations of the environmental quality standards. The results suggest that polycyclic aromatic hydrocarbons, heavy metals, and phosphorus are the most abundant stressors, causing more than 50% of the streams in the FM river basin to fail to achieve good status. The proposed approach, using hierarchical cluster analysis and a weighted violation factor derived from 22 relevant WQPs, allows a reduction of 42 monitoring sites from the current 158 sites. The Mann-Kendall trend test suggests increasing the monitoring frequency of the priority substances to 12 times per year, and decreasing the number of sampling events for metals and general physicochemical parameters to quarterly. Overall, the results suggest that the authorities of the Saxony region should develop proper management measures targeting heavy metals and organic micropollutants in order to achieve good water quality status by 2027 at the latest.
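A hedged sketch of the type of analysis mentioned above: hierarchical clustering of monitoring sites on standardised water-quality profiles to reveal redundant sites. The data, parameter set and cluster cut-off are made up for illustration and do not reproduce the thesis procedure.

```python
# Sketch with made-up numbers: Ward clustering of monitoring sites on
# standardised water-quality profiles to group sites with similar behaviour.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(1)
# 10 sites x 5 parameters (e.g. P, Cd, Pb, a PAH index, conductivity), hypothetical.
profiles = rng.normal(size=(10, 5)) + np.repeat([[0.0], [3.0]], 5, axis=0)

Z = linkage(zscore(profiles, axis=0), method="ward")
clusters = fcluster(Z, t=2, criterion="maxclust")
print("cluster label per site:", clusters)
```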
In Chapter 4, regularly monitored parameters with less than 15% censored data are analysed. A combination of principal component analysis and Pearson's correlation analysis allows the identification of 14 critical parameters that explain 75.1% of the data variability in the FM river basin. Weathering processes, historical mining, wastewater discharges, and seasonality are the leading causes of water quality variability. Effects of both sampling location and sampling period are observed: the mineral content varies between locations, while the organic and oxygen content differs depending on the monitored time period. The monitoring costs are estimated for one monitoring event based on laboratory, transportation, and sampling costs. The results show that under the current monitoring-intensive conditions, preserving monitoring variables rather than sites seems to be more economical than the opposite practice.
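The variable-reduction step can be illustrated in the same spirit with principal component analysis plus a Pearson correlation screen on a standardised parameter matrix; the random placeholder data and thresholds below are assumptions for demonstration only.

```python
# Illustrative sketch (random placeholder data): PCA on standardised parameters
# and a Pearson correlation screen between parameters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))            # 200 samples x 20 water-quality parameters

pca = PCA()
pca.fit(StandardScaler().fit_transform(X))
explained = np.cumsum(pca.explained_variance_ratio_)
print("components needed for 75 % variance:", int(np.searchsorted(explained, 0.75)) + 1)

corr = np.corrcoef(X, rowvar=False)       # Pearson correlations between parameters
print("strongly correlated pairs:", int(np.sum(np.abs(np.triu(corr, k=1)) > 0.8)))
```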
The current study provides and employs two statistical approaches to optimise the WQM network for the FM river basin in eastern Germany. The proposed methods can be of interest to other river basins where historical data are available and monitoring costs are a constraint. The presented research also raises open questions for future research regarding the application of statistical methods to optimise WQM networks, which are discussed in Chapter 5.
|
225 |
Investigation of the emergence of thermodynamic behavior in closed quantum systems and its relation to standard stochastic descriptions
Schmidtke, Daniel, 20 August 2018
Our everyday experiences teach us that any imbalance like temperature gradients, non-uniform particle densities etc. will approach some equilibrium state if not subjected to any external force. Phenomenological descriptions of these empirical findings date back to the 19th century, when Fourier and Fick presented descriptions of relaxation in macroscopic systems based on stochastic approaches. However, one of the main goals of thermodynamics has remained the derivation of these phenomenological descriptions from basic microscopic principles. This task has attracted much attention since the foundation of quantum mechanics about 100 years ago. However, up to now no such conclusive derivation has been presented. In this dissertation we investigate whether closed quantum systems may show equilibration, and if so, to what extent such dynamics are in accordance with standard thermodynamic behavior as described by stochastic approaches. To this end we consider, among others, Markovian dynamics, Fokker-Planck and diffusion equations. Furthermore, we consider fluctuation theorems, as given e.g. by the Jarzynski relation, beyond strictly Gibbsian initial states. Overall, we indeed find good agreement for selected quantum systems.
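For context, the Jarzynski relation referred to above is commonly written in its standard form as

\langle e^{-\beta W} \rangle = e^{-\beta \Delta F},

where W is the work performed in a single realization of the driving protocol, \Delta F is the free-energy difference between the corresponding equilibrium states, and \beta = 1/(k_B T); the thesis studies such fluctuation theorems beyond strictly Gibbsian initial states.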
|
226 |
Data-driven modeling and simulation of spatiotemporal processes with a view toward applications in biology
Maddu Kondaiah, Suryanarayana, 11 January 2022
Mathematical modeling and simulation has emerged as a fundamental means to understand the physical processes around us, with countless real-world applications in applied science and engineering. However, heavy reliance on first principles, symmetry relations, and conservation laws has limited its applicability to a few scientific domains and even fewer real-world scenarios. Especially in disciplines like biology, the underlying living constituents exhibit a myriad of complexities like non-linearities, non-equilibrium physics, self-organization and plasticity that routinely escape mathematical treatment based on governing laws. Meanwhile, recent decades have witnessed rapid advancement in computing hardware, sensing technologies, and algorithmic innovations in machine learning. This progress has helped propel data-driven paradigms to unprecedented practical success in fields such as image processing and computer vision, natural language processing, and autonomous transport. In the current thesis, we explore, apply, and advance statistical and machine learning strategies that help bridge the gap between data and mathematical models, with a view toward modeling and simulation of spatiotemporal processes in biology.
First, we address the problem of learning interpretable mathematical models of biological processes from limited and noisy data. For this, we propose a statistical learning framework called PDE-STRIDE based on the theory of stability selection and ℓ0-based sparse regularization for parsimonious model selection. The PDE-STRIDE framework enables model learning with relaxed dependencies on tuning parameters, sample size and noise levels. We demonstrate the practical applicability of our method on real-world data by considering a purely data-driven re-evaluation of the advective triggering hypothesis explaining the embryonic patterning event in the C. elegans zygote.
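The following sketch shows the generic idea of library-based sparse regression for model discovery, here with simple iterative hard thresholding on synthetic data. It is a stand-in to illustrate the principle only; PDE-STRIDE itself combines stability selection with ℓ0-based regularization and is not reproduced here.

```python
# Generic sketch: select a few active terms from a candidate library Theta such
# that u_t ~ Theta @ xi, using iterative hard thresholding on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
# Columns of Theta stand in for candidate terms such as u, u_x, u_xx, u*u_x, ...
theta = rng.normal(size=(n, 6))
true_xi = np.array([0.0, 0.0, 0.5, -1.0, 0.0, 0.0])     # sparse ground truth
ut = theta @ true_xi + 0.01 * rng.normal(size=n)        # noisy "time derivative"

xi = np.linalg.lstsq(theta, ut, rcond=None)[0]
for _ in range(10):                                      # iterative hard thresholding
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    big = ~small
    xi[big] = np.linalg.lstsq(theta[:, big], ut, rcond=None)[0]
print("recovered coefficients:", np.round(xi, 3))
```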
As a next natural step, we extend our PDE-STRIDE framework to leverage prior knowledge from physical principles to learn biologically plausible and physically consistent models rather than models that simply fit the data best. For this, we modify the PDE-STRIDE framework to handle structured sparsity constraints for grouping features, which enables us to: 1) enforce conservation laws, 2) extract spatially varying non-observables, and 3) encode symmetry relations associated with the underlying biological process. We show several applications from systems biology demonstrating the claim that enforcing priors dramatically enhances the robustness and consistency of the data-driven approaches.
In the following part, we apply our statistical learning framework to learning mean-field deterministic equations of active matter systems directly from stochastic simulations of self-propelled active particles. We investigate two examples of particle models which differ in the microscopic interaction rules used. First, we consider a self-propelled particle model endowed with density-dependent motility. For the chosen hydrodynamic variables, our data-driven framework learns continuum partial differential equations that are in excellent agreement with analytically derived coarse-grained equations from the Boltzmann approach. In addition, our structured sparsity framework is able to decode the hidden dependency between particle speed and local density intrinsic to the self-propelled particle model. As a second example, the learning framework is applied to coarse-graining a popular stochastic particle model employed for studying collective cell motion in epithelial sheets. The PDE-STRIDE framework is able to infer a novel PDE model that quantitatively captures the flow statistics of the particle model in the regime of low density fluctuations.
Modern microscopy techniques produce gigabytes and terabytes of data while imaging the spatiotemporal developmental dynamics of living organisms. However, classical statistical learning based on penalized linear regression models struggles with issues like the accurate computation of derivatives in the candidate library and with computational scalability for application to “big” and noisy data sets. For this reason we exploit the rich parameterization of neural networks, which can efficiently learn from large data sets. Specifically, we explore the framework of Physics-Informed Neural Networks (PINN), which allows for seamless integration of physics priors with measurement data. We propose novel strategies for multi-objective optimization that allow for adapting the PINN architecture to multi-scale modeling problems arising in biology. We showcase application examples for both forward and inverse modeling of the mesoscale active turbulence phenomenon observed in dense bacterial suspensions. Employing our strategies, we demonstrate orders-of-magnitude gains in accuracy and convergence in comparison with the conventional formulation for solving multi-objective optimization in PINNs.
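For orientation, a generic physics-informed training objective combines a data misfit with a PDE residual term,

L(\theta) = \lambda_{d} \, \frac{1}{N_d} \sum_{i} \big| u_\theta(x_i, t_i) - u_i \big|^2 \; + \; \lambda_{r} \, \frac{1}{N_r} \sum_{j} \big| \mathcal{N}[u_\theta](x_j, t_j) \big|^2,

where \mathcal{N}[\cdot] denotes the PDE residual operator and the weights \lambda_d, \lambda_r balance the competing objectives. This is only the standard generic form; the multi-objective strategies developed in the thesis concern precisely how such competing objectives are handled and are not captured by it.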
In the concluding chapter of the thesis, we set aside model interpretability and focus on learning computable models directly from noisy data for the purpose of pure dynamics forecasting. We propose STENCIL-NET, an artificial neural network architecture that learns a solution-adaptive spatial discretization of an unknown PDE model that can be stably integrated in time with negligible loss in accuracy. To support this claim, we present numerical experiments on long-term forecasting of chaotic PDE solutions on coarse spatio-temporal grids, and also showcase a de-noising application that helps separate spatiotemporal dynamics from noise in an equation-free manner.
|
227 |
SEM-Based Automated Mineralogy and Its Application in Geo- and Material Sciences
Schulz, Bernhard; Sandmann, Dirk; Gilbricht, Sabine, 17 January 2022
Scanning electron microscopy based automated mineralogy (SEM-AM) is a combined analytical tool initially designed for the characterisation of ores and mineral processing products. Measurements begin with the collection of backscattered electron (BSE) images and their handling with image analysis software routines. Subsequently, energy dispersive X-ray spectra (EDS) are acquired at selected points according to the BSE image adjustments. Classification of the sample EDS spectra against a list of approved reference EDS spectra completes the measurement. Different classification algorithms and four principal SEM-AM measurement routines for point counting modal analysis, particle analysis, sparse phase search and EDS spectral mapping are offered by the relevant software providers. Application of SEM-AM requires high-quality preparation of samples. Suitable non-evaporating and electron-beam-stable epoxy resin mixtures and the polishing of relief-free surfaces in particles and materials of very different hardness are the main challenges. As demonstrated by case examples in this contribution, the EDS spectral mapping methods appear to have the most promising potential for novel applications in metamorphic, igneous and sedimentary petrology, ore fingerprinting, ash particle analysis, characterisation of slags, forensic sciences, archaeometry and investigations of stoneware and ceramics. SEM-AM allows the quantification of the sizes, geometries and liberation of particles with different chemical compositions within a bulk sample and without previous phase separation. In addition, virtual filtering of bulk particle samples by application of numerous filter criteria is possible. For complete mineral phase identification, X-ray diffraction data should accompany the EDS chemical analysis. Many of the materials which could potentially be characterised by SEM-AM consist of amorphous and glassy phases. In such cases, the generic labelling of reference EDS spectra and their subsequent grouping into target components allow SEM-AM to be used for interesting and novel studies on many kinds of solid and particulate matter which are not feasible with other analytical methods.
|
228 |
Models of Discrete-Time Stochastic Processes and Associated Complexity Measures
Löhr, Wolfgang, 12 May 2010
Many complexity measures are defined as the size of a minimal representation in a specific model class. One such complexity measure, which is important because it is widely applied, is statistical complexity. It is defined for discrete-time, stationary stochastic processes within a theory called computational mechanics. Here, a mathematically rigorous, more general version of this theory is presented, and abstract properties of statistical complexity as a function on the space of processes are investigated. In particular, weak-* lower semi-continuity and concavity are shown, and it is argued that these properties should be shared by all sensible complexity measures. Furthermore, a formula for the ergodic decomposition is obtained.
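For orientation, statistical complexity is commonly defined in computational mechanics as the Shannon entropy of the distribution over causal states,

C_\mu = H[S] = -\sum_{s} \Pr(S = s) \log \Pr(S = s),

where S denotes the causal state of the process; the thesis develops a measure-theoretically rigorous generalisation of this notion.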
The same results are also proven for two other complexity measures that are defined by different model classes, namely process dimension and generative complexity. These two quantities, and also the information theoretic complexity measure called excess entropy, are related to statistical complexity, and this relation is discussed here.

It is also shown that computational mechanics can be reformulated in terms of Frank Knight's prediction process, which is of both conceptual and technical interest. In particular, it allows for a unified treatment of different processes and facilitates topological considerations. Continuity of the Markov transition kernel of a discrete version of the prediction process is obtained as a new result.
|
229 |
Aspects of Non-Equilibrium Behavior in Isolated Quantum Systems
Heveling, Robin, 06 September 2022
Based on the publications [P1–P6], the cumulative dissertation at hand addresses quite diverse aspects of non-equilibrium behavior in isolated quantum systems. The works presented in publications [P1, P2] concern the issue of finding generally valid upper bounds on equilibration times, which ensure the eventual occurrence of equilibration in isolated quantum systems. Recently, a particularly compelling bound for physically relevant observables has been proposed. Said bound is examined analytically as well as numerically. It is found that the bound fails to give meaningful results in a number of standard physical scenarios. Continuing, publication [P4] examines a particular integral fluctuation theorem (IFT) for the total entropy production of a small system coupled to a substantially larger but finite bath. While said IFT is known to hold for canonical states, it is shown to be valid for microcanonical and even pure energy eigenstates as well by invoking the physically natural conditions of “stiffness” and “smoothness” of transition probabilities. The validity of the IFT and the existence of stiffness and smoothness are numerically investigated for various lattice models.
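For reference, the integral fluctuation theorem in question has the standard form

\langle e^{-\Delta s_{\mathrm{tot}}} \rangle = 1,

where \Delta s_{\mathrm{tot}} is the total entropy production (in units of k_B) accumulated along a single realization and the average is taken over many realizations.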
of the route to equilibrium, i.e., to explain the omnipresence of certain relaxation dynamics in
nature, while other, more exotic relaxation patterns are practically never observed, even though
they are a priori not disfavored by the microscopic laws of motion. Regarding this question, the
existence of stability in a larger class of dynamics consisting of exponentially damped oscillations
is corroborated in publication [P6]. In the same vein, existing theories on the ubiquity of certain
dynamics are numerically scrutinized in publication [P3]. Finally, in publication [P5], the recently
proposed “universal operator growth hypothesis”, which characterizes the complexity growth of
operators during unitary time evolution, is numerically probed for various spin-based systems in
the thermodynamic limit. The hypothesis is found to be valid within the limits of the numerical
approach.
|
230 |
Transparent Forecasting Strategies in Database Management Systems
Fischer, Ulrike; Lehner, Wolfgang, 02 February 2023
Whereas traditional data warehouse systems assume that data is complete or has been carefully preprocessed, increasingly more data is imprecise, incomplete, and inconsistent. This is especially true in the context of big data, where massive amounts of data arrive continuously in real time from vast data sources. Nevertheless, modern data analysis involves sophisticated statistical algorithms that go well beyond traditional BI and, additionally, is increasingly performed by non-expert users. Both trends require transparent data mining techniques that efficiently handle missing data and present a complete view of the database to the user. Time series forecasting estimates future, not yet available, data of a time series and represents one way of dealing with missing data. Moreover, it enables queries that retrieve a view of the database at any point in time - past, present, and future. This article presents an overview of forecasting techniques in database management systems. After discussing possible application areas for time series forecasting, we give a short mathematical background of the main forecasting concepts. We then outline various general strategies for integrating time series forecasting inside a database and discuss some individual techniques from the database community. We conclude this article by introducing a novel forecasting-enabled database management architecture that natively and transparently integrates forecast models.
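A toy sketch of the core idea (hypothetical data and function names, not the architecture introduced in the article): a query that asks for a period beyond the stored history is answered by transparently substituting forecasts from a simple model for the missing future values.

```python
# Toy illustration: transparently answer queries about future periods by
# plugging in forecasts from a simple trend model when no stored value exists.
import numpy as np

history = np.array([102., 98., 110., 120., 115., 125., 130., 128., 140., 142.])

def query_sales(period: int) -> float:
    """Return the stored value if present, otherwise a forecast from a trend model."""
    if period < len(history):
        return float(history[period])
    # Least-squares linear trend as a stand-in for a proper forecast model.
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    return float(intercept + slope * period)

print(query_sales(5))    # served from stored data
print(query_sales(12))   # served transparently from the forecast model
```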
|