251 |
Novel Transport in Quantum Phases and Entanglement Dynamics Beyond Equilibrium
Szabo, Joseph Charles 06 September 2022 (has links)
No description available.
|
252 |
Objective Bayesian Analysis of Kullback-Leibler Divergence of Two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model
Li, Zhonggai 22 July 2008 (has links)
This dissertation consists of four independent but related parts, each in a chapter. The first part is introductory: it provides background and preparation for the later parts. The second part discusses two multivariate normal populations with a common covariance matrix. The goal of this part is to derive objective/non-informative priors for the parameterizations and to use these priors to build constructive random posteriors of the Kullback-Leibler (KL) divergence of the two multivariate normal populations, which is proportional to the distance between the two means, weighted by the common precision matrix. We use the Cholesky decomposition to re-parameterize the precision matrix. For two multivariate normal populations with a common covariance matrix, the KL divergence is a true distance measure. Frequentist properties of the Bayesian procedure using these objective priors are studied through analytical and numerical tools. The third part considers the star-shape Gaussian graphical model, a special case of undirected Gaussian graphical models. It is a multivariate normal distribution whose variables are grouped into one "global" set and several "local" sets; conditioned on the global set, the local sets are independent of each other. We adopt the Cholesky decomposition to re-parameterize the precision matrix and derive Jeffreys' prior, reference priors, and invariant priors for the new parameterizations. The frequentist properties of the Bayesian procedure using these objective priors are also studied. The last part concentrates on objective Bayesian analysis of the partial correlation coefficient and its application to multivariate Gaussian models. / Ph. D.
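For two multivariate normal populations sharing a covariance matrix, the KL divergence collapses to half the squared Mahalanobis distance between the means, weighted by the common precision matrix. A minimal numerical sketch (NumPy; the Cholesky-based solve echoes the re-parameterization idea, but the function and variable names are illustrative, not the dissertation's code):

```python
import numpy as np

def kl_common_cov(mu1, mu2, cov):
    """KL divergence between N(mu1, cov) and N(mu2, cov).

    With a common covariance matrix the trace and log-determinant
    terms cancel, leaving half the squared Mahalanobis distance
    between the means, weighted by the precision matrix.
    """
    d = np.asarray(mu1, float) - np.asarray(mu2, float)
    # Solve via the Cholesky factor rather than forming the inverse.
    L = np.linalg.cholesky(cov)
    z = np.linalg.solve(L, d)
    return 0.5 * float(z @ z)

cov = np.array([[2.0, 0.5], [0.5, 1.0]])
print(kl_common_cov([0, 0], [0, 0], cov))  # identical means: 0.0
print(kl_common_cov([1, 0], [0, 0], cov))  # half squared Mahalanobis distance
```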
|
253 |
PMU-Based Applications for Improved Monitoring and Protection of Power Systems
Pal, Anamitra 07 May 2014 (has links)
Monitoring and protection of power systems is a task with manifold objectives. Among others, it involves performing data mining, optimizing available resources, assessing system stresses, and conditioning data. The role of PMUs in fulfilling these four objectives forms the basis of this dissertation. Classification and regression trees (CART) built using phasor data have been used extensively in power systems. The splits in CART are based on a single attribute or a combination of variables chosen by CART itself rather than by the user. But as PMU data consist of complex numbers, both attributes should be considered simultaneously when making critical decisions. An algorithm is proposed here that expresses high-dimensional, multivariate data as a single attribute in order to successfully perform splits in CART.
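One way to realize such a single-attribute split, sketched here purely for illustration (the dissertation's own projection is not reproduced): collapse the two real attributes of complex phasor data onto Fisher's linear discriminant direction and let a univariate tree split on the projected value.

```python
import numpy as np

def fisher_direction(X, y):
    """Direction w maximizing binary class separation.

    One possible way to collapse multivariate (e.g. complex phasor)
    data into a single attribute: project onto Fisher's linear
    discriminant, then split on the projected scalar.
    """
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) + np.cov(X1.T)              # within-class scatter
    w = np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# Complex measurements: stack real and imaginary parts as two attributes.
z0 = rng.normal(0, 1, (50, 2))
z1 = rng.normal(3, 1, (50, 2))
X = np.vstack([z0, z1])
y = np.repeat([0, 1], 50)
w = fisher_direction(X, y)
proj = X @ w                                       # the single split attribute
print(proj[y == 0].mean() < proj[y == 1].mean())   # True
```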
In order to reap maximum benefits from placement of PMUs in the power grid, their locations must be selected judiciously. A gradual PMU placement scheme is developed here that ensures observability as well as protects critical parts of the system. In order to circumvent the computational burden of the optimization, this scheme is combined with a topology-based system partitioning technique to make it applicable to virtually any sized system.
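As a toy illustration of observability-driven placement (a greedy set-cover heuristic, not the optimization or partitioning scheme developed in the dissertation), assume a PMU at a bus observes that bus and its neighbors:

```python
def greedy_pmu_placement(adj):
    """Greedy sketch of PMU placement for full observability.

    A PMU at a bus observes that bus and its neighbors; repeatedly
    place a PMU where it covers the most still-unobserved buses.
    (Illustrative set-cover heuristic only.)
    """
    unobserved = set(adj)
    placement = []
    while unobserved:
        best = max(adj, key=lambda b: len(unobserved & ({b} | set(adj[b]))))
        placement.append(best)
        unobserved -= {best} | set(adj[best])
    return placement

# 5-bus example: star around bus 0 plus a chain 3-4.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
placement = greedy_pmu_placement(adj)
print(placement)  # e.g. [0, 3]
```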
A power system is a dynamic entity whose health must be monitored at all times. Two metrics are proposed here to monitor the stress of a power system in real time. The angle difference between buses located across the network and the voltage sensitivity of buses lying in the middle are found to accurately reflect the static and dynamic stress of the system. The results indicate that by setting appropriate alert/alarm limits based on these two metrics, more secure power system operation can be realized.
A PMU-only linear state estimator is intrinsically superior to its predecessors with respect to performance and reliability. However, ensuring quality of the data stream that leaves this estimator is crucial. A methodology for performing synchrophasor data conditioning and validation that fits neatly into the existing linear state estimation formulation is developed here. The results indicate that the proposed methodology provides a computationally simple, elegant solution to the synchrophasor data quality problem. / Ph. D.
|
254 |
A general L-curve technique for ill-conditioned inverse problems based on the Cramer-Rao lower bound
Kattuparambil Sreenivasan, Sruthi; Farooqi, Simrah January 2024 (has links)
This project is concerned with statistical methods for finding the unknown parameters of a model: a statistical investigation of the algorithm's accuracy (via the Cramer-Rao bound and the L-curve technique) and the optimization of the algorithmic parameters. The project aims to estimate the true (final) temperature of a liquid in a container using initial readings from a temperature probe with a known time constant; in other words, the final temperature of the liquid is estimated before the probe reaches its final reading. The probe obeys a simple first-order differential equation model. Based on the probe model and the measurement data, the "true" temperature in the container was estimated using a maximum likelihood approach to parameter estimation. The initial temperature was also investigated. Modelling, analysis, calculations, and simulations of this problem are presented.
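A minimal sketch of the estimation idea, under the stated first-order model and an assumed Gaussian noise model (all numbers illustrative): with a known time constant tau, the probe reading is linear in the initial and final temperatures, so maximum likelihood reduces to least squares.

```python
import numpy as np

# First-order probe model: T(t) = Tf + (T0 - Tf) * exp(-t / tau).
# With i.i.d. Gaussian measurement noise, maximum likelihood reduces
# to least squares in the unknowns (Tf, T0); tau is assumed known.
tau = 5.0                        # known probe time constant (s), assumed
t = np.arange(0.0, 10.0, 0.5)    # early readings only
Tf_true, T0_true = 80.0, 20.0
rng = np.random.default_rng(1)
y = Tf_true + (T0_true - Tf_true) * np.exp(-t / tau) + rng.normal(0, 0.2, t.size)

# Linear in (Tf, T0): y = Tf * (1 - e) + T0 * e, with e = exp(-t / tau).
e = np.exp(-t / tau)
A = np.column_stack([1 - e, e])
Tf_hat, T0_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(round(Tf_hat, 1), round(T0_hat, 1))  # close to 80.0 and 20.0
```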
|
255 |
Cosmology with Galaxy Clusters in the Dark Energy Survey
Silva, Michel Aguena da 03 August 2017 (has links)
Galaxy clusters are the largest bound structures of the Universe. Their distribution maps the dark matter halos formed in the deep potential wells of the dark matter field. As a result, the abundance of galaxy clusters is highly sensitive to the expansion of the universe as well as the growth of dark matter perturbations, representing a powerful tool for cosmological purposes. In the current era of large-scale surveys with enormous volumes of data, the statistical quantities from the objects surveyed (galaxies, clusters, supernovae, quasars, etc.) can be used to extract cosmological information. The main goal of this thesis is to explore the potential use of galaxy clusters for constraining cosmology. To that end, we study halo formation theory, the detection of halos and clusters, the statistical tools required to extract cosmological information from detected clusters, and finally the effects of optical detection. In constructing the theoretical prediction for the halo number counts, we analyze how each cosmological parameter of interest affects the halo abundance, the importance of the halo covariance, and the effectiveness of halos for cosmological constraints. The redshift range and the use of prior knowledge of parameters are also investigated in detail. The theoretical prediction is tested on a dark matter simulation, where the cosmology is known and a dark matter halo catalog is available. In the analysis of the simulation we find that it is possible to obtain good constraints for some parameters (Omega_m, w, sigma_8, n_s), while other parameters (h, Omega_b) require external priors from different cosmological probes.
In the statistical methods, we discuss the concepts of likelihood, priors, and the posterior distribution. The Fisher Matrix formalism and its application to galaxy clusters is presented and used to forecast constraints from ongoing and future surveys. For the analysis of real data we introduce Markov Chain Monte Carlo (MCMC) methods, which do not assume Gaussianity of the parameter distributions but have a much higher computational cost relative to the Fisher Matrix. The observational effects are studied in detail. Using the Fisher Matrix approach, we carefully explore the effects of completeness and purity. We determine in which cases it is worthwhile to include extra parameters in order to lower the mass threshold. An interesting finding is that including completeness and purity parameters along with cosmological parameters does not degrade dark energy constraints if other observational effects are already being considered. The use of priors on nuisance parameters does not affect the dark energy constraints unless these priors are better than 1%. The WaZp cluster finder was run on a cosmological simulation, producing a cluster catalog. Comparing the detected galaxy clusters to the dark matter halos, the observational effects were investigated and measured. Using these measurements, we were able to include corrections in the prediction of cluster counts, resulting in good agreement with the detected cluster abundance. The results and tools developed in this thesis can provide a framework for the analysis of galaxy clusters for cosmological purposes. Several codes were created and tested during this work; among them are an efficient code to compute theoretical predictions of halo abundance and covariance, a code to estimate the abundance and covariance of galaxy clusters including multiple observational effects, and a pipeline to match and compare halo/cluster catalogs.
This pipeline has been integrated into the Science Portal of the Laboratório Interinstitucional de e-Astronomia (LIneA) and is being used to automatically assess the quality of cluster catalogs produced by the Dark Energy Survey (DES) collaboration; it will also be used in future surveys.
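The Fisher Matrix forecasting described above can be sketched for a toy abundance model (Poisson counts per bin; the model form and parameter names are illustrative assumptions, not the thesis pipeline):

```python
import numpy as np

def fisher_matrix(model, theta0, eps=1e-4):
    """Fisher matrix for Poisson-distributed counts N_b(theta):
    F_ij = sum_b (dN_b/dtheta_i)(dN_b/dtheta_j) / N_b,
    with derivatives taken by central finite differences.
    """
    theta0 = np.asarray(theta0, float)
    N0 = model(theta0)
    grads = []
    for i in range(theta0.size):
        dp, dm = theta0.copy(), theta0.copy()
        dp[i] += eps
        dm[i] -= eps
        grads.append((model(dp) - model(dm)) / (2 * eps))
    G = np.array(grads)
    return G @ np.diag(1.0 / N0) @ G.T

# Toy abundance model: counts in two mass bins depending on (A, slope).
model = lambda th: th[0] * np.exp(-th[1] * np.array([1.0, 2.0]))
F = fisher_matrix(model, [100.0, 0.5])
sigma = np.sqrt(np.diag(np.linalg.inv(F)))   # forecast 1-sigma errors
print(sigma.shape)  # (2,)
```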
|
256 |
Unsupervised 3D image clustering and extension to joint color and depth segmentation
Hasnat, Md Abul 01 October 2014 (has links)
Access to 3D images at a reasonable frame rate is now widespread, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is strong demand for enhancing existing computer vision applications by incorporating 3D information. Indeed, numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for the task of indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from the 3D image and cluster them with methods based on finite statistical mixture models.
We use the Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel model-based clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application to 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this aim, we first propose a statistical image generation model for RGB-D images. Then, we propose a novel RGB-D segmentation method using joint color-spatial-axial clustering and a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods and requires less computation time. Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable to clustering other types of data, such as speech and gene expression data. Moreover, they can be used for complex tasks, such as joint image-speech data analysis.
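A stripped-down illustration of clustering unit surface normals (hard assignments by cosine similarity; the thesis itself uses soft Bregman/EM clustering with von Mises-Fisher and Watson components, which this sketch only approximates):

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    """Cluster unit vectors (e.g. surface normals) by cosine similarity.

    A hard-assignment sketch of mixture-model clustering on the sphere;
    soft (Bregman/EM) versions replace argmax with responsibilities.
    """
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(X @ mu.T, axis=1)       # nearest mean direction
        for j in range(k):
            m = X[labels == j].sum(axis=0)
            if np.linalg.norm(m) > 0:
                mu[j] = m / np.linalg.norm(m)      # renormalize to the sphere
    return labels, mu

rng = np.random.default_rng(3)
a = rng.normal([0, 0, 1], 0.1, (40, 3))            # normals near +z
b = rng.normal([1, 0, 0], 0.1, (40, 3))            # normals near +x
X = np.vstack([a, b])
X /= np.linalg.norm(X, axis=1, keepdims=True)
labels, mu = spherical_kmeans(X, 2)
print(labels.shape)  # (80,)
```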
|
257 |
A narrative analysis of Captain America's New Deal
Ledbetter, Forest L. 31 May 2012 (has links)
In response to the events of September 11, various media attempted to make sense of a seemingly radically altered political landscape. Comic books, though traditionally framed as lowbrow pulp, were no exception. This thesis is a work of rhetorical criticism. It applies Walter Fisher's Narrative Paradigm to a specific set of artifacts: John Ney Rieber and John Cassaday's six-part comic series, collectively titled Captain America: The New Deal (2010). The question at the focus of this thesis is: does The New Deal, framed as a response to the events surrounding September 11, form a rhetorically effective narrative? The analysis that follows demonstrates the importance of meeting audience expectations when presenting them with controversial viewpoints. / Graduation date: 2012
|
258 |
Univariate and Multivariate Symmetry: Statistical Inference and Distributional Aspects
Ley, Christophe C. 26 November 2010 (has links)
This thesis deals with several statistical and probabilistic aspects of symmetry and asymmetry, both in a univariate and multivariate context, and is divided into three distinct parts.
The first part, composed of Chapters 1, 2 and 3 of the thesis, solves two conjectures associated with multivariate skew-symmetric distributions. Since the introduction in 1985 by Adelchi Azzalini of the most famous representative of that class of distributions, namely the skew-normal distribution, it has been well known that, in the vicinity of symmetry, the Fisher information matrix is singular and the profile log-likelihood function for skewness admits a stationary point whatever the sample under consideration. Since then, researchers have tried to determine the subclasses of skew-symmetric distributions that suffer from each of these problems, which led to the aforementioned two conjectures. This thesis completely solves both problems.
The second part of the thesis, namely Chapters 4 and 5, aims at applying and constructing extremely general skewing mechanisms. In Chapter 4, we use the univariate mechanism of Ferreira and Steel (2006) to build optimal (in the Le Cam sense) tests for univariate symmetry that are very flexible. Because their mechanism can turn a given symmetric distribution into any asymmetric distribution, the alternatives to the null hypothesis of symmetry can take any possible shape. These univariate mechanisms, besides that surjectivity property, enjoy numerous good properties, but cannot be extended to higher dimensions in a satisfactory way. For this reason, we propose in Chapter 5 different general mechanisms, sharing all the nice properties of their competitors in Ferreira and Steel (2006), but which moreover can be extended to any dimension. We formally prove that the surjectivity property holds in dimensions k > 1 and we study the principal characteristics of these new multivariate mechanisms.
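For reference, the classical skew-symmetric construction that such mechanisms generalize can be written as follows (standard notation from the literature, not the thesis's own):

```latex
% A density g symmetric about the origin is skewed by a function \Pi
% satisfying \Pi(-x) + \Pi(x) = 1 (e.g. the cdf of a symmetric law):
f(x) = 2\, g(x)\, \Pi(x), \qquad x \in \mathbb{R}.
% Azzalini's skew-normal is the special case
f_{\mathrm{SN}}(x;\alpha) = 2\, \phi(x)\, \Phi(\alpha x),
% whose Fisher information matrix is singular at \alpha = 0.
```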
Finally, the third part of this thesis, composed of Chapter 6, proposes a test for multivariate central symmetry using the concepts of statistical depth and runs. This test extends the celebrated univariate runs test of McWilliams (1990) to higher dimensions. We analyze its asymptotic behavior (especially in dimension k = 2) under the null hypothesis and its invariance and robustness properties. We conclude with an overview of possible modifications of these new tests.
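The univariate ingredient being extended can be sketched as follows (an illustrative reading of the runs idea, not McWilliams's exact formulation):

```python
import numpy as np

def runs_statistic(x):
    """Runs-type statistic for symmetry about 0.

    Order the sample by absolute value, keep the signs, and count the
    number of runs in the resulting sign sequence; markedly few runs
    suggests asymmetry. (Illustrative sketch only.)
    """
    signs = np.sign(np.asarray(x, float)[np.argsort(np.abs(x))])
    signs = signs[signs != 0]                      # drop exact zeros
    return 1 + int(np.sum(signs[1:] != signs[:-1]))

print(runs_statistic([-1, 2, -3, 4]))    # alternating signs: 4 runs
print(runs_statistic([1, 2, 3, 4, -5]))  # 2 runs
```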
|
259 |
Information Geometry and the Wright-Fisher model of Mathematical Population Genetics
Tran, Tat Dat 31 July 2012 (has links) (PDF)
My thesis addresses a systematic approach to stochastic models in population genetics, in particular the Wright-Fisher models affected only by random genetic drift. I used various mathematical methods, such as probability, PDEs, and geometry, to answer an important question: how do genetic change factors (random genetic drift, selection, mutation, migration, random environment, etc.) affect the behavior of gene frequencies or genotype frequencies across generations?
In a Hardy-Weinberg model, a Mendelian population model of a very large number of individuals without genetic change factors, the answer is given simply by the Hardy-Weinberg principle: gene frequencies remain unchanged from generation to generation, and genotype frequencies remain unchanged from the second generation onward.
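The principle can be checked in a few lines (illustrative arithmetic only):

```python
# Hardy-Weinberg: with allele frequencies p and q = 1 - p, random
# mating gives genotype frequencies (p^2, 2pq, q^2), and the allele
# frequency computed from them is again p, so nothing changes.
p = 0.3
q = 1 - p
AA, Aa, aa = p * p, 2 * p * q, q * q
p_next = AA + 0.5 * Aa          # allele frequency in the next generation
print(round(AA + Aa + aa, 10))  # 1.0
print(round(p_next, 10))        # 0.3
```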
With directional genetic change factors (selection, mutation, migration), we obtain a deterministic dynamics of gene frequencies, which has been studied in considerable detail. With non-directional genetic change factors (random genetic drift, random environment), we obtain a stochastic dynamics of gene frequencies, which has been studied with much interest. Combinations of these factors have also been considered.
We consider a monoecious diploid population of fixed size N with n + 1 possible alleles at a given locus A, and assume that the evolution of the population is affected only by random genetic drift. The question is how the distribution of the relative allele frequencies behaves in time, and what its associated stochastic quantities are.
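The neutral Wright-Fisher chain described here can be sketched as binomial resampling of allele frequencies (illustrative code for the two-allele case, not from the thesis):

```python
import numpy as np

def wright_fisher(N, p0, generations, seed=0):
    """Neutral Wright-Fisher drift for two alleles in a diploid
    population of size N: each generation, 2N gene copies are drawn
    binomially from the current allele frequency."""
    rng = np.random.default_rng(seed)
    p = p0
    traj = [p]
    for _ in range(generations):
        p = rng.binomial(2 * N, p) / (2 * N)
        traj.append(p)
    return traj

traj = wright_fisher(N=100, p0=0.5, generations=200)
print(0.0 <= min(traj) and max(traj) <= 1.0)  # True
```

Iterating the chain long enough shows the characteristic outcome of pure drift: the frequency wanders until one allele fixes or is lost.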
When N is large enough, we can approximate this discrete Markov chain by a continuous Markov process with the same characteristics. In 1931, Kolmogorov introduced an elegant relation between a continuous Markov process and diffusion equations. These equations, called the backward and forward Kolmogorov equations, were first applied in population genetics by Wright in 1945.
Note that these equations are singular parabolic equations (the diffusion coefficients vanish on the boundary). To solve them, we use generalized hypergeometric functions. To understand what happens after the first exit time or, more generally, the behavior of the whole process, in joint work with J. Hofrichter we define the global solution by moment conditions and calculate the component solutions by a boundary flux method and a combinatorial method.
One interesting property is that some statistical quantities of interest are solutions of a singular elliptic second-order linear equation with discontinuous (or incomplete) boundary values. Many papers and textbooks have used this property to find those quantities; however, uniqueness for these problems had not been proved. Littler took up the uniqueness problem in his 1975 PhD thesis, but his proof, in my view, is not rigorous. In joint work with J. Hofrichter, we give two different rigorous proofs of uniqueness. The first is an approximation method; the second is a blow-up method, carried out by J. Hofrichter.
By applying information geometry, first introduced by Amari in 1985, we see that the local state space is an Einstein space and also a dually flat manifold with the Fisher metric; the differential operator of the Kolmogorov equation is the affine Laplacian, which can be represented in various coordinates and on various spaces. Dynamics on the whole state space explains some biological phenomena.
|
260 |
From local to global: Complex behavior of spatiotemporal systems with fluctuating delay times
Wang, Jian 17 April 2014 (links) (PDF)
The aim of this thesis is to investigate the dynamical behavior of spatially extended systems with fluctuating time delays. In recent years, the study of spatially extended systems and of systems with fluctuating delays has grown rapidly. In many natural and laboratory situations, understanding the action of time-delayed signals is crucial for understanding the dynamical behavior of these systems. Frequently, the length of the delay is found to change with time. Spatially extended systems are widely studied in many fields, such as chemistry, ecology, and biology. Self-organization, turbulence, and related nonlinear dynamic phenomena in spatially extended systems have developed into one of the most exciting topics in modern science. The first part of this thesis considers discrete systems. Diffusively coupled map lattices with a fluctuating delay are used in the study. The uncoupled local dynamics of the considered system are represented by the delayed logistic map. In particular, the influences of diffusive coupling and fluctuating delay are studied. To observe and understand these influences, the results for the considered system are compared with coupled map lattices without delay and with a constant delay, as well as with the uncoupled logistic map with fluctuating delays. Identifying different patterns, determining the existence of traveling wave solutions, and specifying the fully synchronized stable state are the focus of this part of the study. The Lyapunov exponent, the master stability function, spectrum analysis, and the structure factor are used to characterize the different states and the transitions between them. The second part examines continuous systems. The delay is introduced into the reaction term of the Fisher-KPP equation. The focus of this part of the study is the time-delay-induced Turing instability in one-component reaction-diffusion systems.
Turing instability has previously been found only in multiple-component reaction-diffusion systems. However, this work demonstrates, with the help of the stability exponent, that a fluctuating delay can produce Turing instability in one-component reaction-diffusion systems as well.
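The discrete setup of the first part (a diffusively coupled logistic map lattice with per-site fluctuating delays) can be sketched as follows; the parameter values and the exact delayed-map form are illustrative assumptions, not the thesis's specification:

```python
import numpy as np

def cml_step(x_hist, tau, eps, r=3.9):
    """One step of a diffusively coupled logistic map lattice where
    each site iterates a delayed state x(t - tau_i).

    x_hist: array of shape (max_delay + 1, L), newest row last.
    tau: integer delay per site (may fluctuate in time);
    eps: diffusive coupling strength.
    """
    L = x_hist.shape[1]
    delayed = x_hist[-1 - tau, np.arange(L)]       # per-site delayed state
    f = r * delayed * (1 - delayed)                # delayed logistic map
    # Diffusive coupling with periodic boundaries:
    return (1 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

rng = np.random.default_rng(0)
L, max_tau, steps = 64, 3, 200
hist = rng.uniform(0.1, 0.9, (max_tau + 1, L))
for _ in range(steps):
    tau = rng.integers(0, max_tau + 1, L)          # fluctuating delays
    new = cml_step(hist, tau, eps=0.2)
    hist = np.vstack([hist[1:], new])              # roll the history window
print(hist.shape)  # (4, 64)
```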
|