241

Homogénéisation des composites linéaires : Etude des comportements apparents et effectif / Homogenization of linear elastic matrix-inclusion composites : a study of their apparent and effective behaviors

Salmi, Moncef 02 July 2012
This work is devoted to the derivation of improved bounds on the effective behavior of random linear elastic matrix-inclusion composites. In order to bound their effective behavior, we present a new numerical approach, inspired by the work of Huet (J. Mech. Phys. Solids 1990; 38:813-41), which relies on the computation of the apparent behaviors associated with non-square (or non-cubic) volume elements (VEs) built from assemblages of Voronoï cells, each cell containing a single inclusion surrounded by matrix. Such non-square VEs avoid any direct application of boundary conditions to the particles, which is responsible for the artificial overestimation of the apparent behaviors observed for square VEs. By making use of the classical bounding theorems of linear elasticity and appropriate averaging procedures, new bounds are derived from ensemble averages of the apparent behaviors associated with non-square VEs. Their application to a two-phase composite composed of an isotropic matrix and aligned identical fibers randomly and isotropically distributed in the transverse plane leads to sharper bounds than those obtained by Huet. Then, using this new numerical approach, a statistical study of the apparent behaviors is carried out by means of Monte Carlo simulations. Finally, relying on the trends derived from this study, new criteria for the size of the representative volume element (RVE) are proposed.
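A loose numerical illustration of the averaging step (not the finite-element computations used in the thesis): assuming we already have, for each Monte Carlo realization of a volume element, a scalar apparent modulus under kinematic (KUBC) and static (SUBC) boundary conditions, Huet-type hierarchical bounds on the effective modulus follow from an arithmetic ensemble average of the KUBC moduli and a harmonic ensemble average of the SUBC moduli. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical apparent moduli (GPa) from N volume-element realizations.
# In the thesis these would come from FE computations on Voronoi-cell
# assemblages; here they are just synthetic numbers for illustration.
n_realizations = 200
c_kubc = 10.0 + rng.normal(0.0, 0.5, n_realizations)                 # kinematic (stiff) BCs
c_subc = c_kubc - np.abs(rng.normal(1.0, 0.3, n_realizations))       # static (soft) BCs

# Huet-type hierarchy: the ensemble (Voigt-like) average of KUBC apparent moduli
# bounds the effective modulus from above, while the harmonic (Reuss-like)
# average of SUBC apparent moduli bounds it from below.
upper_bound = c_kubc.mean()
lower_bound = 1.0 / np.mean(1.0 / c_subc)

print(f"effective modulus bracketed in [{lower_bound:.2f}, {upper_bound:.2f}] GPa")
```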
242

Advanced Signal Processing Methods for GNSS Positioning with NLOS/Multipath Signals / Approches avancées de traitement de signal pour la navigation GNSS en présence des signaux multi-trajets ou sans ligne de vue directe (NLOS)

Kbayer, Nabil 09 October 2018
Recent trends in Global Navigation Satellite System (GNSS) applications in urban environments have led to a proliferation of studies that seek to mitigate the adverse effects of multipath (MP) and non-line-of-sight (NLOS) reception. For such harsh urban settings, this dissertation proposes an original methodology for the constructive use of degraded MP/NLOS signals, instead of their elimination, by applying advanced signal processing techniques or by using additional information from a 3D GNSS propagation simulator. First, we studied different signal processing frameworks, namely robust estimation and regularized estimation, to tackle this GNSS problem without using external information. Then, we established the maximum achievable level (lower bounds) of GNSS stand-alone positioning accuracy in the presence of MP/NLOS conditions. To improve on this accuracy level, we proposed to compensate for the MP/NLOS errors using a 3D GNSS signal propagation simulator to predict the biases and integrate them as observations in the estimation method, either by correcting the degraded measurements or by scoring an array of candidate positions. In addition, new metrics on the maximum acceptable errors in the MP/NLOS bias predictions obtained from GNSS simulations have been established. Experimental results using real GNSS data in a deep urban environment show that using this additional information provides a substantial improvement in positioning performance, despite the intensive computational load of 3D GNSS simulation.
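A rough sketch of the candidate-scoring idea under invented numbers: for each candidate position on a grid, the simulator-predicted MP/NLOS biases are subtracted from the measured pseudoranges, and the candidate with the smallest corrected residual norm is retained. The satellite geometry, bias values, noise level, and the perfect-prediction assumption are all illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

sats = np.array([[15e6, 10e6, 20e6],
                 [-12e6, 18e6, 21e6],
                 [5e6, -20e6, 19e6],
                 [-8e6, -15e6, 22e6]])          # satellite positions (m), illustrative
true_pos = np.array([1000.0, 2000.0, 50.0])

def ranges(p):
    return np.linalg.norm(sats - p, axis=1)

# Measured pseudoranges = geometric range + NLOS bias + noise (clock offset ignored here).
nlos_bias = np.array([30.0, 0.0, 45.0, 0.0])    # hypothetical biases (m)
measured = ranges(true_pos) + nlos_bias + rng.normal(0.0, 2.0, 4)

# Grid of candidate positions; a 3D simulator would predict the bias at each one.
candidates = true_pos + np.array([[dx, dy, 0.0]
                                  for dx in range(-50, 51, 10)
                                  for dy in range(-50, 51, 10)])

def predicted_bias(p):
    # Stand-in for the 3D GNSS simulator's MP/NLOS bias prediction
    # (assumed perfect in this toy example).
    return nlos_bias

def score(p):
    resid = measured - predicted_bias(p) - ranges(p)
    return np.sum(resid**2)

best = min(candidates, key=score)
print("selected candidate:", best, "error (m):", np.linalg.norm(best - true_pos))
```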
243

Outils statistiques pour le positionnement optimal de capteurs dans le contexte de la localisation de sources / Statistical tool for the array geometry optimization in the context of the sources localization

Vu, Dinh Thang 19 October 2011
This thesis deals with the array geometry optimization problem in the context of source localization. We considered two approaches to array geometry optimization: one based on estimation performance in terms of mean square error (MSE) and one based on the statistical resolution limit (SRL). In the first approach, we considered the lower bounds on the MSE that are commonly used in array processing to evaluate estimation performance independently of any particular estimator. We investigated two kinds of lower bounds: the well-known Cramér-Rao bound (CRB) for the deterministic model, in which the parameters are assumed to be deterministic, and the less-studied Weiss-Weinstein bound (WWB) for the Bayesian model, in which the parameters are assumed to be random with prior distributions. We proposed closed-form expressions of these bounds, which can be used as statistical tools for array geometry design. Compared with the CRB, the WWB can predict the threshold effect of the MSE in the non-asymptotic region. Moreover, the closed-form expressions of the WWB derived for a general Gaussian model with parameterized mean or parameterized covariance matrix can also be useful for other problems. Based on these closed-form expressions, 3D array geometries and the classical planar array geometry were investigated under (i) the conditional observation model, in which the source signal is modeled as a deterministic sequence, and (ii) the unconditional observation model, in which the source signal is modeled as a Gaussian random process. Conditions for the isotropy and decoupling properties were then derived. In the second approach, we considered the statistical resolution limit, which characterizes the minimal separation between two closely spaced sources that still allows the number of sources to be correctly determined. In this thesis, we are interested in the SRL in the Bayesian context, which is less studied in the literature. Based on a linearized observation model and the minimum-probability-of-error criterion, we introduced two Bayesian approaches to the SRL, one based on detection theory and one based on information theory, which could provide useful tools for system design and for improving the resolution capability of such systems.
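To give a flavour of how such bounds feed into geometry design, here is a minimal numeric sketch (not the thesis' derivations) of the deterministic CRB on the azimuth of a single narrowband source with known amplitude and noise variance, using the standard Fisher-information expression for a Gaussian model with parameterized mean; two hypothetical eight-sensor planar geometries are compared.

```python
import numpy as np

def crb_azimuth(sensor_xy, theta, wavelength=1.0, sigma2=0.1):
    """CRB on the azimuth of one narrowband source with known amplitude,
    single snapshot, for a planar array (toy model for geometry comparison)."""
    x, y = sensor_xy[:, 0], sensor_xy[:, 1]
    k = 2.0 * np.pi / wavelength
    # derivative of each sensor's steering-vector phase with respect to azimuth
    dphase = k * (-x * np.sin(theta) + y * np.cos(theta))
    fim = (2.0 / sigma2) * np.sum(dphase**2)    # scalar Fisher information
    return 1.0 / fim

theta = np.deg2rad(30.0)
# two hypothetical 8-sensor geometries: uniform linear vs. uniform circular
ula = np.column_stack([0.5 * np.arange(8), np.zeros(8)])
phi = 2.0 * np.pi * np.arange(8) / 8
uca = np.column_stack([np.cos(phi), np.sin(phi)])

print("ULA CRB:", crb_azimuth(ula, theta))
print("UCA CRB:", crb_azimuth(uca, theta))
```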
244

Performance bounds in terms of estimation and resolution and applications in array processing / Performances limites en termes d’estimation et de résolution et applications aux traitements d’antennes

Tran, Nguyen Duy 24 September 2012
This manuscript concerns performance analysis in signal processing and consists of two parts. First, we study lower bounds for characterizing and predicting estimation performance in terms of mean square error (MSE). The lower bounds on the MSE give the minimum variance that an estimator can expect to achieve, and they can be divided into two categories depending on the parameter assumption: the so-called deterministic bounds, dealing with deterministic unknown parameters, and the so-called Bayesian bounds, dealing with random unknown parameters. In particular, we derive closed-form expressions of these lower bounds for two applications in two different fields: (i) the first is target localization using multiple-input multiple-output (MIMO) radar, for which we derive the lower bounds with and without modeling errors; (ii) the second is pulse phase estimation for X-ray pulsars, a potential solution for autonomous deep-space navigation. In this application, we show the potential universality of the lower bounds for problems whose parameterized probability density function (pdf) differs from the classical Gaussian pdf, since in X-ray pulse phase estimation the observations are modeled with a Poisson distribution (complementing the work available in the literature, where the data are modeled as Gaussian). Second, we study the statistical resolution limit (SRL), which is the minimal distance, in terms of the parameters of interest, between two signals that still allows the parameters of interest to be correctly separated and estimated. More precisely, we derive the SRL in two contexts, array processing and MIMO radar, using two approaches based on estimation theory and information theory. Finally, we propose compact expressions of the SRL in the presence of modeling errors. We also demonstrate the usefulness of the SRL in optimizing the array system.
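For the Poisson-data setting mentioned above, the Fisher information for a parameter of the rate is I(theta) = sum_i lambda_i'(theta)^2 / lambda_i(theta). The sketch below evaluates the resulting CRB for the phase of a toy sinusoidal X-ray pulse profile; the profile shape, rates, and exposure are invented for illustration and are not the thesis' models.

```python
import numpy as np

def poisson_crb_phase(phase, n_bins=64, background=1.0, amplitude=0.5, exposure=100.0):
    """CRB on pulse phase for binned Poisson counts with a sinusoidal profile:
    rate_i = exposure * (background + amplitude * (1 + cos(2*pi*(t_i - phase))))."""
    t = (np.arange(n_bins) + 0.5) / n_bins
    lam = exposure * (background + amplitude * (1.0 + np.cos(2*np.pi*(t - phase))))
    dlam = exposure * amplitude * 2*np.pi * np.sin(2*np.pi*(t - phase))
    fisher = np.sum(dlam**2 / lam)          # I(phase) for independent Poisson bins
    return 1.0 / fisher

print("CRB on phase:", poisson_crb_phase(phase=0.3))
print("CRB with 10x exposure:", poisson_crb_phase(phase=0.3, exposure=1000.0))
```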
245

Monetary policy and economic growth : lessons from East African countries

Nyorekwa, Enock Twinoburyo 07 1900
This study empirically examines the impact of monetary policy on economic growth in three East African countries (Uganda, Kenya and Tanzania). The role of monetary policy in promoting economic growth remains an empirically open research question, as both the empirical and theoretical underpinnings are not universal, and the results remain varied, inconsistent, and inconclusive. This study may be the first of its kind to examine in detail the impact of monetary policy on economic growth in Uganda, Kenya and Tanzania – using the autoregressive distributed lag (ARDL) bounds-testing approach. This study used two proxies of monetary policy, namely money supply and the interest rate, to examine this linkage. The results were found to differ from country to country and over time. The Uganda empirical results reveal that money supply has a positive impact on economic growth, both in the short run and in the long run. However, the interest rate was found to have a positive impact on economic growth only in the short run. In the long run, the interest rate has no significant impact on economic growth. In Kenya, both short-run and long-run empirical results support monetary policy neutrality, implying that monetary policy has no effect on economic growth – both in the short run and in the long run. The results from Tanzania also reveal no impact of monetary policy on economic growth in the long run – irrespective of the proxy used to measure monetary policy. However, in the short run, monetary policy is found to have no impact on economic growth only when the interest rate is used as the proxy for monetary policy. When money supply is used to measure monetary policy, a negative relationship between monetary policy and economic growth is found to dominate. Overall, the study finds that monetary policy is only relevant for economic growth in Uganda, and only when money supply is used as the monetary policy variable. This study therefore recommends a money-supply-based monetary policy framework for Uganda. The study findings also suggest that monetary policy may not be a panacea for economic growth in Kenya and Tanzania. / Economics / M. Com. (Economics)
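To make the ARDL idea concrete, here is a minimal sketch, on simulated series rather than the study's data, of an ARDL(1,1) regression of growth on money-supply growth estimated by ordinary least squares, together with the implied long-run multiplier; the bounds-testing step itself (comparing an F statistic with the Pesaran-Shin-Smith critical bounds) is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual series: GDP growth (y) and money-supply growth (m), illustrative only.
T = 60
m = rng.normal(8.0, 2.0, T)
y = np.empty(T)
y[0] = 5.0
for t in range(1, T):
    y[t] = 1.0 + 0.5 * y[t-1] + 0.2 * m[t] + 0.1 * m[t-1] + rng.normal(0.0, 0.5)

# ARDL(1,1): y_t = c + a1*y_{t-1} + b0*m_t + b1*m_{t-1} + e_t, fitted by OLS.
Y = y[1:]
X = np.column_stack([np.ones(T - 1), y[:-1], m[1:], m[:-1]])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("c, a1, b0, b1 =", np.round(coef, 3))

# Long-run multiplier of money-supply growth implied by the ARDL(1,1) fit.
long_run = (coef[2] + coef[3]) / (1.0 - coef[1])
print("long-run effect of m on y:", round(long_run, 3))
# A bounds test would re-estimate this in error-correction form and compare the
# F statistic on the lagged levels with the Pesaran-Shin-Smith critical bounds.
```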
246

Uncertainty in Aquatic Toxicological Exposure-Effect Models: the Toxicity of 2,4-Dichlorophenoxyacetic Acid and 4-Chlorophenol to Daphnia carinata

Dixon, William J., bill.dixon@dse.vic.gov.au January 2005
Uncertainty is pervasive in risk assessment. In ecotoxicological risk assessments, it arises from such sources as a lack of data, the simplification and abstraction of complex situations, and ambiguities in assessment endpoints (Burgman 2005; Suter 1993). When evaluating and managing risks, uncertainty needs to be explicitly considered in order to avoid erroneous decisions and to be able to make statements about the confidence that we can place in risk estimates. Although informative, previous approaches to dealing with uncertainty in ecotoxicological modelling have been found to be limited, inconsistent and often based on assumptions that may be false (Ferson & Ginzburg 1996; Suter 1998; Suter et al. 2002; van der Hoeven 2004; van Straalen 2002a; Verdonck et al. 2003a). In this thesis, a Generalised Linear Modelling approach is proposed as an alternative, congruous framework for the analysis and prediction of a wide range of ecotoxicological effects. This approach was used to investigate the results of toxicity experiments on the effect of 2,4-Dichlorophenoxyacetic Acid (2,4-D) formulations and 4-Chlorophenol (4-CP, an associated breakdown product) on Daphnia carinata. Differences between frequentist Maximum Likelihood (ML) and Bayesian Markov-Chain Monte-Carlo (MCMC) approaches to statistical reasoning and model estimation were also investigated. These approaches are inferentially disparate and place different emphasis on aleatory and epistemic uncertainty (O'Hagan 2004). Bayesian MCMC and Probability Bounds Analysis methods for propagating uncertainty in risk models are also compared for the first time. For simple models, Bayesian and frequentist approaches to Generalised Linear Model (GLM) estimation were found to produce very similar results when non-informative prior distributions were used for the Bayesian models. Potency estimates and regression parameters were found to be similar for identical models, signifying that Bayesian MCMC techniques are at least a suitable and objective replacement for frequentist ML for the analysis of exposure-response data. Applications of these techniques demonstrated that Amicide formulations of 2,4-D are more toxic to Daphnia than their unformulated, Technical Acid parent. Different results were obtained from Bayesian MCMC and ML methods when more complex models and data structures were considered. In the analysis of 4-CP toxicity, the treatment of two different factors as fixed or random in standard and Mixed-Effect models was found to affect variance estimates to the degree that different conclusions would be drawn from the same model, fit to the same data. Associated discrepancies in the treatment of overdispersion between ML and Bayesian MCMC analyses were also found to affect results. Bayesian MCMC techniques were found to be superior to the ML ones employed for the analysis of complex models because they enabled the correct formulation of hierarchical (nested) data structures within a binomial logistic GLM. Application of these techniques to the analysis of results from 4-CP toxicity testing on two strains of Daphnia carinata found that between-experiment variability was greater than that within experiments or between strains. Perhaps surprisingly, this indicated that long-term laboratory culture had not significantly affected the sensitivity of one strain when compared to cultures of another strain that had recently been established from field populations.
The results from this analysis highlighted the need for repetition of experiments, proper model formulation in complex analyses and careful consideration of the effects of pooling data on characterising variability and uncertainty. The GLM framework was used to develop three-dimensional surface models of the effects of different-length pulse exposures, and subsequent delayed toxicity, of 4-CP on Daphnia. These models described the relationship between exposure duration and intensity (concentration) on toxicity, and were constructed for both pulse and delayed effects. Statistical analysis of these models found that significant delayed effects occurred following the full range of pulse exposure durations, and that both exposure duration and intensity interacted significantly and concurrently with the delayed effect. These results indicated that failure to consider delayed toxicity could lead to significant underestimation of the effects of pulse exposure, and therefore increase uncertainty in risk assessments. A number of new approaches to modelling ecotoxicological risk and to propagating uncertainty were also developed and applied in this thesis. In the first of these, a method for describing and propagating uncertainty in conventional Species Sensitivity Distribution (SSD) models was described. This utilised Probability Bounds Analysis to construct a nonparametric 'probability box' on an SSD based on EC05 estimates and their confidence intervals. Predictions from this uncertain SSD and the confidence interval extrapolation methods described by Aldenberg and colleagues (2000; 2002a) were compared. It was found that the extrapolation techniques underestimated the width of uncertainty (confidence) intervals by 63% and the upper bound by 65%, when compared to the Probability Bounds (P-Bounds) approach, which was based on actual confidence estimates derived from the original data. An alternative approach to formulating ecotoxicological risk modelling was also proposed and was based on a Binomial GLM. In this formulation, the model is first fitted to the available data in order to derive mean and uncertainty estimates for the parameters. This 'uncertain' GLM model is then used to predict the risk of effect from possible or observed exposure distributions. This risk is described as a whole distribution, with a central tendency and uncertainty bounds derived from the original data and the exposure distribution (if this is also 'uncertain'). Bayesian and P-Bounds approaches to propagating uncertainty in this model were compared using an example of the risk of exposure to a hypothetical (uncertain) distribution of 4-CP for the two Daphnia strains studied. This comparison found that the Bayesian and P-Bounds approaches produced very similar mean and uncertainty estimates, with the P-bounds intervals always being wider than the Bayesian ones. This difference is due to the different methods for dealing with dependencies between model parameters by the two approaches, and is confirmation that the P-bounds approach is better suited to situations where data and knowledge are scarce. The advantages of the Bayesian risk assessment and uncertainty propagation method developed are that it allows calculation of the likelihood of any effect occurring, not just the (probability) bounds, and that the same software (WinBUGS) and model construction may be used to fit regression models and predict risks simultaneously.
The GLM risk modelling approaches developed here are able to explain a wide range of response shapes (including hormesis) and underlying (non-normal) distributions, and do not involve expression of the exposure-response as a probability distribution, hence solving a number of problems found with previous formulations of ecotoxicological risk. The approaches developed can also be easily extended to describe communities, include modifying factors, mixed-effects, population growth, carrying capacity and a range of other variables of interest in ecotoxicological risk assessments. While the lack of data on the toxicological effects of chemicals is the most significant source of uncertainty in ecotoxicological risk assessments today, methods such as those described here can assist by quantifying that uncertainty so that it can be communicated to stakeholders and decision makers. As new information becomes available, these techniques can be used to develop more complex models that will help to bridge the gap between the bioassay and the ecosystem.
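As a toy counterpart to the GLM framework described above, the following sketch fits a binomial logistic dose-response model to invented grouped toxicity data by maximum likelihood with statsmodels and reports an EC50 estimate; the priors, random effects, and MCMC/P-Bounds machinery discussed in the thesis are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical grouped toxicity data: concentration (mg/L), number exposed, number affected.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
n = np.array([20, 20, 20, 20, 20, 20])
affected = np.array([1, 3, 7, 12, 17, 19])

# Binomial logistic GLM: logit(p) = b0 + b1 * log10(concentration).
X = sm.add_constant(np.log10(conc))
model = sm.GLM(np.column_stack([affected, n - affected]), X,
               family=sm.families.Binomial())
fit = model.fit()
b0, b1 = fit.params

# EC50 = concentration at which the predicted probability of effect is 0.5.
ec50 = 10 ** (-b0 / b1)
print(fit.summary().tables[1])
print("EC50 estimate (mg/L):", round(ec50, 2))
```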
247

On some damage processes in risk and epidemic theories

Gathy, Maude 14 September 2010
This thesis deals with damage processes in risk theory and in biomathematics. In risk theory, the damage process studied is that of the claims borne by an insurance company. The first chapter examines the Markov-Polya distribution as a possible law for modelling the number of claims and establishes some links with the Katz/Panjer family of distributions. We construct the Markov-Polya distribution from a claim-occurrence model and show that it satisfies an elegant recurrence, from which an efficient algorithm for the corresponding compound distribution follows. We obtain the Katz/Panjer family as a limiting family of the Markov-Polya distribution. The second chapter deals with the so-called "Lagrangian Katz" family, which extends the Katz/Panjer family. We motivate its use as a claim-number distribution through a first-passage problem. We characterize all the distributions belonging to this family and derive an efficient algorithm for the compound distribution. We also examine its index of dispersion and its asymptotic behaviour. In the third chapter, we study the finite-horizon ruin probability in a discrete-time model with positive interest rates. We derive an algorithm as well as several bounds for this probability. One particular bound allows us to construct two risk measures. We also examine the use of proportional reinsurance with retention levels that are equal or different over successive periods. In the context of epidemic processes, the damage studied is the spread of a disease of SIE type (susceptible - infected - eliminated). The way an infective contaminates the susceptibles is described by particular survival distributions. From these we derive the distribution of the total number of people infected by the end of the epidemic. We examine in detail the so-called Markov-Polya and hypergeometric epidemics, and then approximate this distribution by a branching process. We also study a similar damage process in reliability theory, where the deterioration consists of the propagation of cascading failures in a system of interconnected components.
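To illustrate the kind of "efficient algorithm for the compound distribution" referred to above, here is a generic Panjer-type recursion for a claim-number law satisfying p_n = (a + b/n) p_{n-1}, shown for the Poisson special case; this is a standard textbook sketch, not the specific Markov-Polya or Lagrangian Katz recursions derived in the thesis.

```python
import math

def panjer_compound(a, b, p0, severity, s_max):
    """Panjer recursion for the compound distribution of S = X_1 + ... + X_N,
    where the claim count satisfies p_n = (a + b/n) * p_{n-1} and the severity
    pmf 'severity[j]' lives on j = 0, 1, 2, ... with severity[0] == 0."""
    g = [0.0] * (s_max + 1)
    g[0] = p0
    for s in range(1, s_max + 1):
        total = 0.0
        for j in range(1, min(s, len(severity) - 1) + 1):
            total += (a + b * j / s) * severity[j] * g[s - j]
        g[s] = total / (1.0 - a * severity[0])
    return g

# Poisson(lam) claim counts correspond to a = 0, b = lam, p0 = exp(-lam).
lam = 2.0
severity = [0.0, 0.5, 0.3, 0.2]          # claims of size 1, 2 or 3
g = panjer_compound(a=0.0, b=lam, p0=math.exp(-lam), severity=severity, s_max=10)
print("P(S = s) for s = 0..10:", [round(x, 4) for x in g])
print("total probability captured:", round(sum(g), 4))
```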
248

Zero-energy states in supersymmetric matrix models

Lundholm, Douglas January 2010
The work of this Ph.D. thesis in mathematics concerns the problem of determining the existence, uniqueness, and structure of zero-energy states in supersymmetric matrix models, which arise from quantum mechanical descriptions of the physics of relativistic membranes, of reduced Yang-Mills gauge theory, and of nonperturbative features of string theory and M-theory. Several new approaches to this problem are introduced and considered in the course of seven scientific papers, including: construction by recursive methods (Papers A and D), deformations and alternative models (Papers B and C), averaging with respect to symmetries (Paper E), and weighted supersymmetry and index theory (Papers F and G). The mathematical tools used and developed for these approaches include Clifford algebras and their representation theory, the structure of supersymmetric quantum mechanics, and the spectral theory of (matrix-)Schrödinger operators.
249

Quantum stabilizer codes and beyond

Sarvepalli, Pradeep Kiran 10 October 2008
The importance of quantum error correction in paving the way to build a practical quantum computer is no longer in doubt. Despite the large body of literature in quantum coding theory, many important questions, especially those centering on the issue of "good codes", remain unresolved. In this dissertation, the dominant underlying theme is that of constructing good quantum codes. It approaches this problem through three rather different but not mutually exclusive strategies. Broadly, its contribution to the theory of quantum error correction is threefold. Firstly, it extends the framework of an important class of quantum codes - nonbinary stabilizer codes. It clarifies the connections of stabilizer codes to classical codes over quadratic extension fields, provides many new constructions of quantum codes, and develops further the theory of optimal quantum codes and punctured quantum codes. In particular, it provides many explicit constructions of stabilizer codes; most notably, it simplifies the criteria by which quantum BCH codes can be constructed from classical codes. Secondly, it contributes to the theory of operator quantum error-correcting codes, also called subsystem codes. These codes are expected to have more efficient error recovery schemes than stabilizer codes. Prior to our work, however, systematic methods to construct these codes were few, and it was not clear how to fairly compare them with other classes of quantum codes. This dissertation develops a framework for the study and analysis of subsystem codes using character-theoretic methods. In particular, this work establishes a close link between subsystem codes and classical codes, showing that subsystem codes can be constructed from arbitrary classical codes. Thirdly, it seeks to exploit the knowledge of noise to design efficient quantum codes and considers more realistic channels than the commonly studied depolarizing channel. It gives systematic constructions of asymmetric quantum stabilizer codes that exploit the asymmetry of errors in certain quantum channels. This approach is based on a Calderbank-Shor-Steane construction that combines BCH and finite geometry LDPC codes.
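A small numeric sketch of the classical-to-quantum link mentioned above: the CSS (Calderbank-Shor-Steane) construction takes two classical binary codes with one dual contained in the other and uses their parity-check matrices for the X-type and Z-type stabilizers, the commutation requirement reducing to H_X H_Z^T = 0 over GF(2). The check below uses the dual-containing [7,4] Hamming code, which yields the Steane [[7,1,3]] code; it is a textbook illustration, not the thesis' BCH or LDPC constructions.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (columns are 1..7 in binary).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

# CSS construction with C1 = C2 = Hamming[7,4] (dual-containing code):
# use H for both the X-type and the Z-type stabilizer generators.
HX, HZ = H, H

# Stabilizers commute iff HX @ HZ.T == 0 over GF(2).
commute = np.all((HX @ HZ.T) % 2 == 0)
n = H.shape[1]
k = n - (np.linalg.matrix_rank(HX) + np.linalg.matrix_rank(HZ))  # logical qubits
print("X/Z stabilizers commute:", commute)
print(f"CSS code parameters: [[{n}, {k}]]")
```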
250

Range Searching Data Structures with Cache Locality

Hamilton, Christopher 17 March 2011
This thesis focuses on range searching data structures, which address an elementary problem in computational geometry with research spanning decades. These problems often involve very large data sets. Processor speeds increase faster than memory speeds; thus the gap between the rate at which CPUs can process data and the rate at which it can be retrieved is growing. To bridge this gap, various levels of cache are used. Since cache misses are costly, algorithms should be cache-friendly. The input-output (I/O) model was the first model for constructing cache-efficient algorithms, focusing on a two-level memory hierarchy. Algorithms for this model require manual tuning to determine optimal values for hardware-dependent parameters, and are only optimal at a single level of a memory hierarchy. Cache-oblivious (CO) algorithms are built without knowledge of the hierarchy, allowing them to be optimal across all levels at once. There exist strong theoretical and practical results for I/O-efficient range searching. Recently, the CO model has received attention, but range searching remains poorly understood. This thesis explores data structures for CO range counting and reporting. It presents the first space- and worst-case query-time-optimal approximate range counting structure for a family of related problems, and associated O(N log N)-space query-optimal reporting structures. The approximate counting structure is the first of its kind in the internal-memory, I/O and CO models. Researchers have been trying to create linear-space query-optimal CO reporting structures. This thesis shows that for a variety of problems, linear space is in fact impossible. Heuristics are also used for building cache-friendly algorithms. Space-filling curves are continuous functions mapping multi-dimensional sets into one-dimensional ones. They are used to build search structures in the hope that objects that were close in the original space remain close in the resulting ordering. This results in queries incurring fewer page swaps when traversing the structure. The Hilbert curve is notably good at this, but often imposes a space or time penalty. This thesis introduces compact Hilbert indices, which remove the inefficiency inherent in input point sets with bounding boxes smaller than their bounding hypercubes.
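For readers unfamiliar with the space-filling-curve heuristic, here is a compact sketch of the classical 2D Hilbert mapping from grid coordinates (x, y) to a one-dimensional index, the building block that compact Hilbert indices generalize to non-square domains; the implementation is the well-known textbook xy-to-d algorithm, not the thesis' compact variant.

```python
def hilbert_index(order, x, y):
    """Map a point (x, y) on a 2**order x 2**order grid to its 1D Hilbert index
    (classical xy-to-d algorithm)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the curve stays continuous
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# Nearby grid points tend to receive nearby indices, which is what makes the
# curve useful for laying out search structures with good cache locality.
for p in [(3, 4), (3, 5), (4, 4), (12, 13)]:
    print(p, "->", hilbert_index(4, *p))
```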
