241

Outils statistiques pour le positionnement optimal de capteurs dans le contexte de la localisation de sources / Statistical tool for the array geometry optimization in the context of the sources localization

Vu, Dinh Thang 19 October 2011 (has links)
Cette thèse porte sur l'étude du positionnement optimal des réseaux de capteurs pour la localisation de sources. Nous avons étudié deux approches : l'approche basée sur les performances d'estimation en termes d'erreur quadratique moyenne et l'approche basée sur le seuil statistique de résolution (SSR). Pour la première approche, nous avons considéré les bornes inférieures de l'erreur quadratique moyenne, qui sont généralement utilisées pour évaluer la performance d'estimation indépendamment du type d'estimateur considéré. Nous avons étudié deux types de bornes : la borne de Cramér-Rao (BCR) pour le modèle où les paramètres sont supposés déterministes et la borne de Weiss-Weinstein (BWW) pour le modèle où les paramètres sont supposés aléatoires. Nous avons dérivé les expressions analytiques de ces bornes afin de développer des outils statistiques pour optimiser la géométrie des réseaux de capteurs. Par rapport à la BCR, la BWW peut capturer le décrochement de l'EQM des estimateurs dans la zone non-asymptotique. De plus, les expressions analytiques de la BWW pour un modèle gaussien général à moyenne paramétrée ou à matrice de covariance paramétrée sont données explicitement. Sur la base de ces expressions analytiques, nous avons étudié l'impact de la géométrie des réseaux de capteurs sur les performances d'estimation, en utilisant des réseaux de capteurs 3D et 2D, pour deux modèles d'observations concernant les signaux sources : (i) le modèle déterministe et (ii) le modèle stochastique. Nous en avons ensuite déduit des conditions concernant les propriétés d'isotropie et de découplage. Pour la deuxième approche, nous avons considéré le seuil statistique de résolution, qui caractérise la séparation minimale entre deux sources. Dans cette thèse, nous avons étudié le SSR dans le contexte bayésien, moins étudié dans la littérature. Nous avons introduit un modèle d'observations linéarisé basé sur le critère de probabilité d'erreur minimale. Ensuite, nous avons présenté deux approches bayésiennes du SSR, l'une basée sur la théorie de l'information et l'autre basée sur la théorie de la détection. Ces approches pourront être utilisées pour améliorer la capacité de résolution des systèmes. / This thesis deals with the array geometry optimization problem in the context of source localization. We have considered two approaches to array geometry optimization: one based on estimation performance in terms of mean square error, and one based on the statistical resolution limit (SRL). In the first approach, we have considered the lower bounds on the mean square error that are usually used in array processing to evaluate estimation performance independently of the considered estimator. We have investigated two kinds of lower bounds: the well-known Cramér-Rao bound (CRB), for the deterministic model in which the parameters are assumed to be deterministic, and the less-studied Weiss-Weinstein bound (WWB), for the Bayesian model in which the parameters are assumed to be random with some prior distributions. We have proposed closed-form expressions of these bounds, which can be used as a statistical tool for array geometry design. Compared to the CRB, the WWB can predict the threshold effect of the MSE in the non-asymptotic region. Moreover, the closed-form expressions of the WWB proposed for a general Gaussian model with parameterized mean or parameterized covariance matrix can also be useful for other problems.
Based on these closed-form expressions, the 3D array geometry and the classical planar array geometry have been investigated under (i) the conditional observation model, in which the source signal is modeled as a deterministic sequence, and (ii) the unconditional observation model, in which the source signal is modeled as a Gaussian random process. Conditions concerning the isotropy and uncoupling properties were then derived. In the second approach, we have considered the statistical resolution limit, which characterizes the minimal separation between two closely spaced sources that still allows the number of sources to be determined correctly. In this thesis, we are interested in the SRL in the Bayesian context, which is less studied in the literature. Based on a linearized observation model and the minimum probability of error criterion, we have introduced two Bayesian approaches to the SRL, one based on detection theory and one on information theory, which could lead to useful tools for system design.
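A minimal numerical sketch of the first approach may help fix ideas: for a single narrowband source observed by a planar array with a known deterministic signal in white Gaussian noise, the Cramér-Rao bound on the azimuth follows from the Fisher information of the steering vector, and its dependence on the sensor positions makes the geometry question concrete. The function name, the signal model and the two example geometries below are illustrative assumptions, not the thesis's exact models (which also cover the Weiss-Weinstein bound and stochastic signals).

```python
import numpy as np

def crb_azimuth(positions, theta, snr, n_snapshots, wavelength=1.0):
    """CRB (rad^2) on the azimuth of a single narrowband source seen by a planar array,
    assuming a known deterministic signal in white Gaussian noise (illustrative sketch)."""
    k = 2 * np.pi / wavelength
    px, py = positions[:, 0], positions[:, 1]
    # derivative of each sensor's steering-vector phase with respect to azimuth
    dphase = k * (-px * np.sin(theta) + py * np.cos(theta))
    # Fisher information: 2 * (snapshots) * (per-element SNR) * ||d a / d theta||^2
    fim = 2.0 * n_snapshots * snr * np.sum(dphase ** 2)
    return 1.0 / fim

# Compare two 6-sensor geometries: a uniform linear array and a uniform circular array.
ula = np.c_[np.arange(6) * 0.5, np.zeros(6)]
uca = 0.5 * np.c_[np.cos(2 * np.pi * np.arange(6) / 6), np.sin(2 * np.pi * np.arange(6) / 6)]
# Angles avoid 0 and pi because a ULA has vanishing Fisher information at endfire.
angles = np.linspace(0.2, np.pi - 0.2, 5)
for name, pos in [("ULA", ula), ("UCA", uca)]:
    bounds = [crb_azimuth(pos, th, snr=10.0, n_snapshots=100) for th in angles]
    print(name, np.round(bounds, 6))  # the UCA bound is nearly constant in theta: isotropy
```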
242

Performance bounds in terms of estimation and resolution and applications in array processing / Performances limites en termes d’estimation et de résolution et applications aux traitements d’antennes

Tran, Nguyen Duy 24 September 2012 (has links)
Cette thèse porte sur l'analyse des performances en traitement du signal et se compose de deux parties. Premièrement, nous étudions les bornes inférieures pour la caractérisation et la prédiction des performances en termes d'erreur quadratique moyenne (EQM). Les bornes inférieures de l'EQM donnent la variance minimale qu'un estimateur peut atteindre et peuvent être divisées en deux catégories : les bornes déterministes pour le modèle où les paramètres sont supposés déterministes (mais inconnus), et les bornes bayésiennes pour le modèle où les paramètres sont supposés aléatoires. En particulier, nous dérivons les expressions analytiques de ces bornes pour deux applications différentes : (i) la première est la localisation de sources en utilisant un radar multiple-input multiple-output (MIMO), pour laquelle nous considérons les bornes inférieures dans deux contextes, c'est-à-dire avec ou sans erreurs de modèle ; (ii) la deuxième est l'estimation de phase d'impulsion de pulsars à rayons X, qui est une solution potentielle pour la navigation autonome dans l'espace. Pour cette application, nous avons calculé plusieurs bornes inférieures de l'EQM dans le contexte de données modélisées par une loi de Poisson (complétant ainsi les travaux disponibles dans la littérature, où les données sont modélisées par une loi gaussienne). Deuxièmement, nous étudions le seuil statistique de résolution (SRL), qui est la distance minimale, en termes des paramètres d'intérêt, entre deux signaux permettant de séparer/estimer correctement ces paramètres. Plus précisément, nous dérivons le SRL dans deux contextes, le traitement d'antenne et le radar MIMO, en utilisant deux approches basées sur la théorie de l'estimation et sur la théorie de l'information. Finalement, nous proposons des expressions compactes du SRL dans le cas d'erreurs de modèle. / This manuscript concerns performance analysis in signal processing and consists of two parts. First, we study lower bounds for characterizing and predicting estimation performance in terms of mean square error (MSE). Lower bounds on the MSE give the minimum variance that an estimator can expect to achieve, and they can be divided into two categories depending on the parameter assumption: the so-called deterministic bounds, dealing with deterministic unknown parameters, and the so-called Bayesian bounds, dealing with random unknown parameters. In particular, we derive closed-form expressions of the lower bounds for two applications in two different fields: (i) the first is target localization using multiple-input multiple-output (MIMO) radar, for which we derive the lower bounds both with and without modeling errors; (ii) the second is pulse phase estimation for X-ray pulsars, which is a potential solution for autonomous deep-space navigation. In this application, we show the potential universality of lower bounds by tackling a problem whose parameterized probability density function (pdf) differs from the classical Gaussian pdf, since in X-ray pulse phase estimation the observations are modeled with a Poisson distribution. Second, we study the statistical resolution limit (SRL), which is the minimal distance, in terms of the parameters of interest, between two signals that allows the parameters of interest to be correctly separated/estimated. More precisely, we derive the SRL in two contexts, array processing and MIMO radar, using two approaches based on estimation theory and information theory. Finally, we propose compact expressions of the SRL in the case of modeling errors.
We also illustrate the usefulness of the SRL in optimizing the array system.
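For the X-ray pulsar application, the Poisson Cramér-Rao bound mentioned above has a simple form: for independent Poisson bin counts with rate λ_n(φ), the Fisher information is Σ_n λ'_n(φ)² / λ_n(φ). The sketch below assumes a raised-cosine pulse profile purely for illustration; the thesis's actual bounds and pulsar templates are not reproduced here.

```python
import numpy as np

def poisson_crb_phase(phase, n_bins=64, background=5.0, amplitude=20.0, t_bin=1.0):
    """CRB (in cycles^2) on the pulse phase from one period of binned photon counts,
    assuming independent Poisson bins and a raised-cosine profile (illustrative only;
    real pulsar templates are empirical)."""
    centers = (np.arange(n_bins) + 0.5) / n_bins  # bin centres in cycles
    lam = t_bin * (background + amplitude * (1 + np.cos(2 * np.pi * (centers - phase))) / 2)
    dlam = t_bin * amplitude * np.pi * np.sin(2 * np.pi * (centers - phase))
    fisher = np.sum(dlam ** 2 / lam)              # Fisher information for one pulse period
    return 1.0 / fisher

# The bound scales as 1/K over K observed periods.
crb_one = poisson_crb_phase(phase=0.3)
print("CRB (1 period):", crb_one, " CRB (1000 periods):", crb_one / 1000)
```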
243

Robust aspects of hedging and valuation in incomplete markets and related backward SDE theory

Tonleu, Klebert Kentia 16 March 2016 (has links)
Diese Arbeit beginnt mit einer Analyse von stochastischen Rückwärtsdifferentialgleichungen (BSDEs) mit Sprüngen, getragen von zufälligen Maßen mit ggf. unendlicher Aktivität und zeitlich inhomogenem Kompensator. Unter konkreten, in Anwendungen leicht verifizierbaren Bedingungen liefern wir Existenz-, Eindeutigkeits- und Vergleichsergebnisse beschränkter Lösungen für eine Klasse von Generatorfunktionen, die nicht global Lipschitz-stetig im Sprungintegranden zu sein brauchen. Der übrige Teil der Arbeit behandelt robuste Bewertung und Hedging in unvollständigen Märkten. Wir verfolgen den No-Good-Deal-Ansatz, der Good-Deal-Grenzen liefert, indem nur eine Teilmenge der risikoneutralen Maße mit ökonomischer Bedeutung betrachtet wird (z.B. Grenzen für instantanen Sharpe-Ratio, optimale Wachstumsrate oder erwarteten Nutzen). Durchweg untersuchen wir ein Konzept des Good-Deal-Hedgings, für welches Hedgingstrategien als Minimierer geeigneter dynamischer Risikomaße auftreten, was optimale Risikoteilung mit dem Markt erlaubt. Wir zeigen, dass Hedging mindestens im-Mittel-selbstfinanzierend ist, also dass Hedgefehler unter geeigneten A-priori-Bewertungsmaßen eine Supermartingaleigenschaft haben. Wir leiten konstruktive Ergebnisse zu Good-Deal-Bewertung und -Hedging im Rahmen von Prozessen mit Sprüngen durch BSDEs mit Sprüngen, sowie im Brown'schen Fall mit Driftunsicherheit durch klassische BSDEs und mit Volatilitätsunsicherheit durch BSDEs zweiter Ordnung her. Wir liefern neue Beispiele, die insbesondere für versicherungs- und finanzmathematische Anwendungen von Bedeutung sind. Bei Ungewissheit des Real-World-Maßes führt ein Worst-Case-Ansatz bei Annahme mehrerer Referenzmaße zu Good-Deal-Hedging, welches robust bzgl. Unsicherheit ist, im Sinne von gleichmäßig über alle Referenzmaße mindestens im-Mittel-selbstfinanzierend. Daher ist Good-Deal-Hedging bei hinreichend großer Driftunsicherheit äquivalent zur Risikominimierung. / This thesis starts with an analysis of backward stochastic differential equations (BSDEs) with jumps driven by random measures, possibly of infinite activity and with time-inhomogeneous compensators. Under concrete conditions that are easy to verify in applications, we prove existence, uniqueness and comparison results for bounded solutions for a class of generators that are not required to be globally Lipschitz in the jump integrand. The rest of the thesis deals with robust valuation and hedging in incomplete markets. The focus is on the no-good-deal approach, which computes good-deal valuation bounds by using only a subset of the risk-neutral measures with economic meaning (e.g. bounds on instantaneous Sharpe ratios, optimal growth rates, or expected utilities). Throughout, we study a notion of good-deal hedging that consists in minimizing suitable dynamic risk measures, allowing for optimal risk sharing with the market. Hedging is shown to be at least mean-self-financing in that hedging errors satisfy a supermartingale property under suitable valuation measures. We derive constructive results on good-deal valuation and hedging in a jump framework using BSDEs with jumps, as well as in a Brownian setting with drift uncertainty using classical BSDEs and with volatility uncertainty using second-order BSDEs. We provide new examples which are particularly relevant for actuarial and financial applications. Under ambiguity about the real-world measure, a worst-case approach under multiple reference priors leads to good-deal hedging that is robust with respect to uncertainty, in that it is at least mean-self-financing uniformly over all priors. It follows that good-deal hedging is equivalent to risk-minimization if drift uncertainty is sufficiently large.
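For orientation, a generic textbook form of a BSDE with jumps of the kind referred to above is written out below; the notation is standard and does not reproduce the thesis's exact setting (infinite-activity random measures, time-inhomogeneous compensators, or the specific generator conditions).

```latex
% Generic BSDE with jumps (orientation only): find an adapted triple (Y, Z, U) with
\begin{equation*}
  Y_t \;=\; \xi \;+\; \int_t^T f\bigl(s, Y_s, Z_s, U_s\bigr)\,\mathrm{d}s
        \;-\; \int_t^T Z_s\,\mathrm{d}W_s
        \;-\; \int_t^T\!\!\int_E U_s(e)\,\tilde{\mu}(\mathrm{d}s,\mathrm{d}e),
  \qquad t \in [0,T],
\end{equation*}
% where W is a Brownian motion, \tilde{\mu} is the compensated random measure of the
% jumps, \xi is the terminal condition and f is the generator.
```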
244

On the numerical analysis of eigenvalue problems

Gedicke, Joscha Micha 05 November 2013 (has links)
Die vorliegende Arbeit zum Thema der numerischen Analysis von Eigenwertproblemen befasst sich mit fünf wesentlichen Aspekten der numerischen Analysis von Eigenwertproblemen. Der erste Teil präsentiert einen Algorithmus von asymptotisch quasi-optimaler Rechenlaufzeit, der die adaptive Finite-Elemente-Methode mit einem iterativen algebraischen Eigenwertlöser kombiniert. Der zweite Teil präsentiert explizite beidseitige Schranken für die Eigenwerte des Laplace-Operators auf beliebig groben Gittern, basierend auf einer Approximation der zugehörigen Eigenfunktion in dem nichtkonformen Finite-Elemente-Raum von Crouzeix und Raviart und einem Postprocessing. Die Effizienz der garantierten Schranke des Eigenwertfehlers hängt von der globalen Gitterweite ab. Der dritte Teil betrachtet eine adaptive Finite-Elemente-Methode, basierend auf Verfeinerungen von Knoten-Patchen. Dieser Algorithmus zeigt eine asymptotische Fehlerreduktion der adaptiven Sequenz von einfachen Eigenwerten und Eigenfunktionen des Laplace-Operators. Die hier erstmals bewiesene Eigenschaft der Saturation des Eigenwertfehlers zeigt Zuverlässigkeit und Effizienz für eine Klasse von hierarchischen a posteriori Fehlerschätzern. Der vierte Teil betrachtet a posteriori Fehlerschätzer für Konvektions-Diffusions-Eigenwertprobleme, wie sie von Heuveline und Rannacher (2001) im Kontext der dual-gewichteten residualen Methode (DWR) diskutiert wurden. Zwei neue dual-gewichtete a posteriori Fehlerschätzer werden vorgestellt. Der letzte Teil beschäftigt sich mit drei adaptiven Algorithmen für Eigenwertprobleme nicht selbstadjungierter partieller Differentialoperatoren. Alle drei Algorithmen basieren auf einer Homotopie-Methode, die vom einfacheren selbstadjungierten Problem startet. Neben der Gitterverfeinerung wird der Prozess der Homotopie sowie die Anzahl der Iterationen des algebraischen Lösers adaptiv gesteuert und die verschiedenen Anteile am gesamten Fehler ausbalanciert. / This thesis on the numerical analysis of eigenvalue problems addresses five major aspects of the numerical analysis of adaptive finite element methods for eigenvalue problems. The first part presents an algorithm of asymptotically quasi-optimal computational complexity that combines an adaptive finite element method with an iterative algebraic eigenvalue solver for a symmetric eigenvalue problem. The second part introduces fully computable two-sided bounds on the eigenvalues of the Laplace operator on arbitrarily coarse meshes, based on an approximation of the corresponding eigenfunction in the nonconforming Crouzeix-Raviart finite element space plus some postprocessing. The efficiency of the guaranteed eigenvalue error bounds depends on the global mesh size and is proven for the large class of graded meshes. The third part presents an adaptive finite element method (AFEM) based on nodal-patch refinement that leads to an asymptotic error reduction property for the adaptive sequence of simple eigenvalues and eigenfunctions of the Laplace operator. The saturation property of the eigenvalue error, proven here for the first time, yields reliability and efficiency for a class of hierarchical a posteriori error estimators. The fourth part considers a posteriori error estimators for convection-diffusion eigenvalue problems, as discussed by Heuveline and Rannacher (2001) in the context of the dual-weighted residual method (DWR). Two new dual-weighted a posteriori error estimators are presented. The last part presents three adaptive algorithms for eigenvalue problems associated with non-selfadjoint partial differential operators.
The basis for the developed algorithms is a homotopy method which starts from a simpler, well-understood selfadjoint problem. Apart from the adaptive grid refinement, the progress of the homotopy as well as the number of iterations of the algebraic solver are adaptively controlled to balance the contributions of the different error sources.
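As a toy illustration of finite-element eigenvalue approximation (far simpler than the adaptive 2D/3D methods treated in the thesis), the sketch below computes the smallest Dirichlet eigenvalue of the 1D Laplacian with conforming P1 elements; by the min-max principle the discrete value bounds the exact value π² from above. The 1D setting and mesh sizes are assumptions for this example only.

```python
import numpy as np
from scipy.linalg import eigh

def p1_laplace_smallest_eig(n_elements):
    """Smallest Dirichlet eigenvalue of -u'' = lambda*u on (0,1) with P1 finite elements.
    Conforming elements give an upper bound on the exact value pi^2 (toy 1-D example)."""
    h = 1.0 / n_elements
    n = n_elements - 1                                           # interior nodes
    A = np.diag(np.full(n, 2.0 / h)) \
        + np.diag(np.full(n - 1, -1.0 / h), 1) \
        + np.diag(np.full(n - 1, -1.0 / h), -1)                  # stiffness matrix
    M = np.diag(np.full(n, 4.0 * h / 6.0)) \
        + np.diag(np.full(n - 1, h / 6.0), 1) \
        + np.diag(np.full(n - 1, h / 6.0), -1)                   # mass matrix
    return eigh(A, M, eigvals_only=True)[0]                      # generalized eigenproblem

exact = np.pi ** 2
for n in (8, 16, 32, 64):
    approx = p1_laplace_smallest_eig(n)
    print(f"n={n:3d}  lambda_h={approx:.6f}  error={approx - exact:.2e}")  # error ~ O(h^2), >= 0
```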
245

Monetary policy and economic growth : lessons from East African countries

Nyorekwa, Enock Twinoburyo 07 1900 (has links)
This study empirically examines the impact of monetary policy on economic growth in three East African countries (Uganda, Kenya and Tanzania). The role of monetary policy in promoting economic growth remains an open empirical research question: the empirical and theoretical underpinnings are not universal, and the results remain varied, inconsistent, and inconclusive. This study may be the first of its kind to examine in detail the impact of monetary policy on economic growth in Uganda, Kenya and Tanzania using the autoregressive distributed lag (ARDL) bounds-testing approach. The study used two proxies of monetary policy, namely money supply and the interest rate, to examine this linkage. The results were found to differ from country to country and over time. The Uganda empirical results reveal that money supply has a positive impact on economic growth, both in the short run and in the long run. However, the interest rate was found to have a positive impact on economic growth only in the short run; in the long run, the interest rate has no significant impact on economic growth. In Kenya, both short-run and long-run empirical results support monetary policy neutrality, implying that monetary policy has no effect on economic growth in either the short run or the long run. The results from Tanzania also reveal no impact of monetary policy on economic growth in the long run, irrespective of the proxy used to measure monetary policy. However, the short-run results reveal no impact of monetary policy on economic growth only when the interest rate is used as a proxy for monetary policy; when money supply is used to measure monetary policy, a negative relationship between monetary policy and economic growth is found to dominate. Overall, the study finds that monetary policy is only relevant for economic growth in Uganda, and only when money supply is used as the monetary policy variable. Therefore, this study recommends a money-supply-based monetary policy framework for Uganda. The findings also suggest that monetary policy may not be a panacea for economic growth in Kenya and Tanzania. / Economics / M. Com. (Economics)
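A compressed sketch of the ARDL bounds-testing idea (Pesaran, Shin and Smith) may clarify the method referred to above: regress the differenced dependent variable on lagged levels and lagged differences, then F-test the joint significance of the lagged-level terms against tabulated lower and upper critical bounds. The data frame, column names and lag order below are placeholders; the study's actual specification, lag selection and critical values are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

def bounds_test(df, y="growth", x=("money_supply", "interest_rate"), lags=1):
    """F-test that the lagged-level terms are jointly zero in an unrestricted
    error-correction regression (the PSS bounds test, in sketch form)."""
    d = pd.DataFrame({"dy": df[y].diff()})
    d["y_l1"] = df[y].shift(1)
    level_terms = ["y_l1"]
    for v in x:
        d[v + "_l1"] = df[v].shift(1)      # lagged level of each regressor
        d["d_" + v] = df[v].diff()         # contemporaneous difference
        level_terms.append(v + "_l1")
    for k in range(1, lags + 1):
        d["dy_l%d" % k] = d["dy"].shift(k) # lagged differences of the dependent variable
    d = d.dropna()
    rhs = " + ".join(c for c in d.columns if c != "dy")
    res = smf.ols("dy ~ " + rhs, data=d).fit()
    f_res = res.f_test(", ".join(t + " = 0" for t in level_terms))
    # Compare f_res.fvalue with the PSS lower/upper critical bounds for the chosen
    # case and number of regressors (values must be taken from published tables).
    return f_res, res

# usage sketch: f_res, fit = bounds_test(quarterly_df)
# cointegration (a long-run relationship) is suggested if the F statistic exceeds the upper bound.
```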
246

Uncertainty in Aquatic Toxicological Exposure-Effect Models: the Toxicity of 2,4-Dichlorophenoxyacetic Acid and 4-Chlorophenol to Daphnia carinata

Dixon, William J., bill.dixon@dse.vic.gov.au January 2005 (has links)
Uncertainty is pervasive in risk assessment. In ecotoxicological risk assessments, it arises from such sources as a lack of data, the simplification and abstraction of complex situations, and ambiguities in assessment endpoints (Burgman 2005; Suter 1993). When evaluating and managing risks, uncertainty needs to be explicitly considered in order to avoid erroneous decisions and to be able to make statements about the confidence that we can place in risk estimates. Although informative, previous approaches to dealing with uncertainty in ecotoxicological modelling have been found to be limited, inconsistent and often based on assumptions that may be false (Ferson & Ginzburg 1996; Suter 1998; Suter et al. 2002; van der Hoeven 2004; van Straalen 2002a; Verdonck et al. 2003a). In this thesis, a Generalised Linear Modelling approach is proposed as an alternative, congruous framework for the analysis and prediction of a wide range of ecotoxicological effects. This approach was used to investigate the results of toxicity experiments on the effect of 2,4-Dichlorophenoxyacetic Acid (2,4-D) formulations and 4-Chlorophenol (4-CP, an associated breakdown product) on Daphnia carinata. Differences between frequentist Maximum Likelihood (ML) and Bayesian Markov-Chain Monte-Carlo (MCMC) approaches to statistical reasoning and model estimation were also investigated. These approaches are inferentially disparate and place different emphasis on aleatory and epistemic uncertainty (O'Hagan 2004). Bayesian MCMC and Probability Bounds Analysis methods for propagating uncertainty in risk models are also compared for the first time. For simple models, Bayesian and frequentist approaches to Generalised Linear Model (GLM) estimation were found to produce very similar results when non-informative prior distributions were used for the Bayesian models. Potency estimates and regression parameters were found to be similar for identical models, signifying that Bayesian MCMC techniques are at least a suitable and objective replacement for frequentist ML for the analysis of exposure-response data. Applications of these techniques demonstrated that Amicide formulations of 2,4-D are more toxic to Daphnia than their unformulated, Technical Acid parent. Different results were obtained from Bayesian MCMC and ML methods when more complex models and data structures were considered. In the analysis of 4-CP toxicity, the treatment of two different factors as fixed or random in standard and mixed-effect models was found to affect variance estimates to the degree that different conclusions would be drawn from the same model fit to the same data. Associated discrepancies in the treatment of overdispersion between ML and Bayesian MCMC analyses were also found to affect results. Bayesian MCMC techniques were found to be superior to the ML ones employed for the analysis of complex models because they enabled the correct formulation of hierarchical (nested) data structures within a binomial logistic GLM. Application of these techniques to the analysis of results from 4-CP toxicity testing on two strains of Daphnia carinata found that between-experiment variability was greater than that within experiments or between strains. Perhaps surprisingly, this indicated that long-term laboratory culture had not significantly affected the sensitivity of one strain when compared to cultures of another strain that had recently been established from field populations.
The results from this analysis highlighted the need for repetition of experiments, proper model formulation in complex analyses and careful consideration of the effects of pooling data on characterising variability and uncertainty. The GLM framework was used to develop three-dimensional surface models of the effects of different-length pulse exposures, and subsequent delayed toxicity, of 4-CP on Daphnia. These models described the relationship between exposure duration and intensity (concentration) and toxicity, and were constructed for both pulse and delayed effects. Statistical analysis of these models found that significant delayed effects occurred following the full range of pulse exposure durations, and that both exposure duration and intensity interacted significantly and concurrently with the delayed effect. These results indicated that failure to consider delayed toxicity could lead to significant underestimation of the effects of pulse exposure, and therefore increase uncertainty in risk assessments. A number of new approaches to modelling ecotoxicological risk and to propagating uncertainty were also developed and applied in this thesis. In the first of these, a method for describing and propagating uncertainty in conventional Species Sensitivity Distribution (SSD) models was presented. This utilised Probability Bounds Analysis to construct a nonparametric 'probability box' on an SSD based on EC05 estimates and their confidence intervals. Predictions from this uncertain SSD and the confidence interval extrapolation methods described by Aldenberg and colleagues (2000; 2002a) were compared. It was found that the extrapolation techniques underestimated the width of uncertainty (confidence) intervals by 63% and the upper bound by 65%, when compared to the Probability Bounds (P-Bounds) approach, which was based on actual confidence estimates derived from the original data. An alternative formulation of ecotoxicological risk modelling was also proposed, based on a binomial GLM. In this formulation, the model is first fit to the available data in order to derive mean and uncertainty estimates for the parameters. This 'uncertain' GLM model is then used to predict the risk of effect from possible or observed exposure distributions. This risk is described as a whole distribution, with a central tendency and uncertainty bounds derived from the original data and the exposure distribution (if this is also 'uncertain'). Bayesian and P-Bounds approaches to propagating uncertainty in this model were compared using an example of the risk of exposure to a hypothetical (uncertain) distribution of 4-CP for the two Daphnia strains studied. This comparison found that the Bayesian and P-Bounds approaches produced very similar mean and uncertainty estimates, with the P-Bounds intervals always being wider than the Bayesian ones. This difference is due to the different methods for dealing with dependencies between model parameters by the two approaches, and is confirmation that the P-Bounds approach is better suited to situations where data and knowledge are scarce. The advantages of the Bayesian risk assessment and uncertainty propagation method developed are that it allows calculation of the likelihood of any effect occurring, not just the (probability) bounds, and that the same software (WinBUGS) and model construction may be used to fit regression models and predict risks simultaneously.
The GLM risk modelling approaches developed here are able to explain a wide range of response shapes (including hormesis) and underlying (non-normal) distributions, and do not involve expression of the exposure-response as a probability distribution, hence solving a number of problems found with previous formulations of ecotoxicological risk. The approaches developed can also be easily extended to describe communities, include modifying factors, mixed-effects, population growth, carrying capacity and a range of other variables of interest in ecotoxicological risk assessments. While the lack of data on the toxicological effects of chemicals is the most significant source of uncertainty in ecotoxicological risk assessments today, methods such as those described here can assist by quantifying that uncertainty so that it can be communicated to stakeholders and decision makers. As new information becomes available, these techniques can be used to develop more complex models that will help to bridge the gap between the bioassay and the ecosystem.
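A minimal frequentist (maximum-likelihood) version of the binomial logistic GLM dose-response fit described above is sketched here, with an EC50 read off the fitted coefficients; the concentration-response counts are invented for illustration and are not the thesis's Daphnia data, and the Bayesian MCMC and probability-bounds machinery is not reproduced.

```python
import numpy as np
import statsmodels.api as sm

# Made-up immobilisation counts at five concentrations (mg/L) -- illustration only.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n_exposed = np.array([20, 20, 20, 20, 20])
n_affected = np.array([1, 3, 9, 16, 19])

X = sm.add_constant(np.log10(conc))                  # logit(p) = b0 + b1 * log10(conc)
y = np.column_stack([n_affected, n_exposed - n_affected])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

b0, b1 = fit.params
ec50 = 10 ** (-b0 / b1)                              # concentration at which p = 0.5
print(fit.summary().tables[1])
print(f"EC50 estimate: {ec50:.2f} mg/L")
```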
247

On some damage processes in risk and epidemic theories

Gathy, Maude 14 September 2010 (has links)
This thesis deals with damage processes in risk theory and biomathematics. In risk theory, the damage process studied is that of the claims incurred by an insurance company. The first chapter examines the Markov-Polya distribution as a possible law for modelling the number of claims and establishes some links with the Katz/Panjer family of distributions. We construct the Markov-Polya distribution from a claim-occurrence model and show that it satisfies an elegant recurrence. This recurrence notably yields an efficient algorithm for the corresponding compound distribution. We derive the Katz/Panjer family as a limiting family of the Markov-Polya distribution. The second chapter deals with the so-called "Lagrangian Katz" family, which extends the Katz/Panjer family. We motivate its use as a claim-number distribution through a first-passage problem. We characterize all the distributions belonging to it and derive an efficient algorithm for the compound distribution. We also examine its index of dispersion and its asymptotic behaviour. In the third chapter, we study the finite-horizon ruin probability in a discrete model with positive interest rates. We derive an algorithm as well as various bounds for this probability. A particular bound allows us to construct two risk measures. We also examine the possibility of using proportional reinsurance with equal or different retention levels over successive periods. In the framework of epidemic processes, the damage studied is the spread of a disease of SIE (susceptible - infected - eliminated) type. The way in which an infective contaminates the susceptibles is described by particular survival distributions. From this we derive the distribution of the total number of people infected by the end of the epidemic. We examine in detail the so-called Markov-Polya and hypergeometric epidemics. We then approximate this distribution by a branching process. We also study a similar damage process in reliability theory, where the damage consists of cascading failures propagating through a system of interconnected components.
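The Panjer recursion for the (a, b, 0)/Katz class mentioned above can be stated compactly; the sketch below uses a compound Poisson example with an arbitrary discrete severity purely for illustration (the thesis's Markov-Polya and Lagrangian Katz algorithms are analogous but not reproduced here).

```python
import numpy as np

def panjer_compound(a, b, p0, severity, s_max):
    """Panjer recursion for the distribution g of S = X_1 + ... + X_N, where the claim
    count N lies in the (a, b, 0)/Katz class (p_k = (a + b/k) p_{k-1}) and the severity
    pmf 'severity' lives on {1, 2, ...} (so f(0) = 0)."""
    f = np.zeros(s_max + 1)
    f[1:len(severity) + 1] = severity
    g = np.zeros(s_max + 1)
    g[0] = p0                                   # P(S = 0) = P(N = 0) when f(0) = 0
    for s in range(1, s_max + 1):
        j = np.arange(1, s + 1)
        g[s] = np.sum((a + b * j / s) * f[j] * g[s - j])
    return g

# Example: compound Poisson with mean 3 claims (a = 0, b = lambda, p0 = exp(-lambda)).
lam = 3.0
sev = [0.5, 0.3, 0.2]                           # P(X = 1), P(X = 2), P(X = 3)
g = panjer_compound(a=0.0, b=lam, p0=np.exp(-lam), severity=sev, s_max=30)
print("P(S = 0..5):", np.round(g[:6], 4), " total mass:", round(g.sum(), 4))
```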
248

Zero-energy states in supersymmetric matrix models

Lundholm, Douglas January 2010 (has links)
The work of this Ph.D. thesis in mathematics concerns the problem of determining existence, uniqueness, and structure of zero-energy states in supersymmetric matrix models, which arise from a quantum mechanical description of the physics of relativistic membranes, of reduced Yang-Mills gauge theory, and of nonperturbative features of string theory and M-theory. Several new approaches to this problem are introduced and considered in the course of seven scientific papers, including: construction by recursive methods (Papers A and D), deformations and alternative models (Papers B and C), averaging with respect to symmetries (Paper E), and weighted supersymmetry and index theory (Papers F and G). The mathematical tools used and developed for these approaches include Clifford algebras and associated representation theory, the structure of supersymmetric quantum mechanics, and the spectral theory of (matrix-)Schrödinger operators.
249

Quantum stabilizer codes and beyond

Sarvepalli, Pradeep Kiran 10 October 2008 (has links)
The importance of quantum error correction in paving the way to build a practical quantum computer is no longer in doubt. Despite the large body of literature in quantum coding theory, many important questions, especially those centering on the issue of "good codes", remain unresolved. In this dissertation the dominant underlying theme is that of constructing good quantum codes. It approaches this problem from three rather different but not mutually exclusive strategies. Broadly, its contribution to the theory of quantum error correction is threefold. Firstly, it extends the framework of an important class of quantum codes - nonbinary stabilizer codes. It clarifies the connections of stabilizer codes to classical codes over quadratic extension fields, provides many new constructions of quantum codes, and develops further the theory of optimal quantum codes and punctured quantum codes. In particular it provides many explicit constructions of stabilizer codes; most notably, it simplifies the criteria by which quantum BCH codes can be constructed from classical codes. Secondly, it contributes to the theory of operator quantum error-correcting codes, also called subsystem codes. These codes are expected to have more efficient error recovery schemes than stabilizer codes. Prior to our work, however, systematic methods to construct these codes were few, and it was not clear how to fairly compare them with other classes of quantum codes. This dissertation develops a framework for the study and analysis of subsystem codes using character-theoretic methods. In particular, this work established a close link between subsystem codes and classical codes, and it became clear that subsystem codes can be constructed from arbitrary classical codes. Thirdly, it seeks to exploit the knowledge of noise to design efficient quantum codes and considers more realistic channels than the commonly studied depolarizing channel. It gives systematic constructions of asymmetric quantum stabilizer codes that exploit the asymmetry of errors in certain quantum channels. This approach is based on a Calderbank-Shor-Steane construction that combines BCH and finite-geometry LDPC codes.
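A small, standard example of the CSS flavour of stabilizer construction discussed above: taking both classical codes to be the [7,4,3] Hamming code yields the [[7,1,3]] Steane code, and the commutation condition reduces to H Hᵀ = 0 over GF(2). This is a textbook check, not one of the dissertation's new constructions, and the matrix layout below is one common convention.

```python
import numpy as np

# Parity-check matrix of the classical [7,4,3] Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# CSS construction with C_X = C_Z = Hamming code: X-type and Z-type stabilizer generators
HX, HZ = H, H

# CSS/commutation condition: HX @ HZ.T must vanish over GF(2)
assert not np.any((HX @ HZ.T) % 2), "stabilizers would not commute"

# Full binary check matrix [HX | 0 ; 0 | HZ] of the [[7,1,3]] Steane code
zeros = np.zeros_like(H)
stab = np.block([[HX, zeros], [zeros, HZ]])
n, k = 7, 7 - stab.shape[0]          # 7 qubits minus 6 independent stabilizers = 1 logical qubit
print(f"[[{n},{k}]] code, stabilizer check matrix:\n{stab}")
```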
250

Range Searching Data Structures with Cache Locality

Hamilton, Christopher 17 March 2011 (has links)
This thesis focuses on range searching data structures, an elementary problem in computational geometry with research spanning decades. These problems often involve very large data sets. Processor speeds increase faster than memory speeds; thus the gap between the rate at which CPUs can process data and the rate at which it can be retrieved is increasing. To bridge this gap, various levels of cache are used. Since cache misses are costly, algorithms should be cache-friendly. The input-output (I/O) model was the first model for constructing cache-efficient algorithms, focusing on a two-level memory hierarchy. Algorithms for this model require manual tuning to determine optimal values for hardware-dependent parameters, and are only optimal at a single level of a memory hierarchy. Cache-oblivious (CO) algorithms are built without knowledge of the hierarchy, allowing them to be optimal across all levels at once. There exist strong theoretical and practical results for I/O-efficient range searching. Recently, the CO model has received attention, but range searching remains poorly understood. This thesis explores data structures for CO range counting and reporting. It presents the first space-optimal and worst-case query-time-optimal approximate range counting structure for a family of related problems, and associated O(N log N)-space query-optimal reporting structures. The approximate counting structure is the first of its kind in the internal-memory, I/O and CO models. Researchers have been trying to create linear-space query-optimal CO reporting structures. This thesis shows that for a variety of problems, linear space is in fact impossible. Heuristics are also used for building cache-friendly algorithms. Space-filling curves are continuous functions mapping multi-dimensional sets into one-dimensional ones. They are used to build search structures in the hopes that objects that were close in the original space remain close in the resulting ordering. This results in queries incurring fewer page swaps when traversing the structure. The Hilbert curve is notably good at this, but often imposes a space or time penalty. This thesis introduces compact Hilbert indices, which remove the inefficiency inherent in input point sets with bounding boxes smaller than their bounding hypercubes.
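For concreteness, the classical 2D Hilbert-curve index computation (the xy-to-d mapping) is sketched below; it illustrates the locality-preserving ordering discussed above, but it is not the compact Hilbert index algorithm introduced in the thesis, which additionally handles bounding boxes smaller than the bounding hypercube.

```python
def hilbert_index(order, x, y):
    """Map a point (x, y) on a 2**order x 2**order grid to its position along the
    Hilbert curve (classical xy->d algorithm; the thesis's compact indices for
    non-cubic grids are not reproduced here)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the sub-curve is traversed in the right orientation
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# Nearby grid points tend to receive nearby indices, which is what makes Hilbert
# orderings useful for building cache- and disk-friendly search structures.
points = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2)]
print([(p, hilbert_index(2, *p)) for p in points])   # expected indices: 0, 1, 2, 3, 4
```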
