371

GHOST IMAGE ANALYSIS FOR OPTICAL SYSTEMS

Abd El-Maksoud, Rania Hassan January 2009 (has links)
Ghost images are caused by the inter-reflections of light from optical surfaces that have transmittances less than unity. Ghosts can reduce contrast, provide misleading information, and, if severe, can veil parts of the nominal image. This dissertation develops several methodologies to simulate ghost effects arising from an even number of light reflections between the surfaces of multi-element lens systems. We present an algorithm to generate the ghost layouts produced by two, four, and up to N (even) reflections. For each possible ghost layout, paraxial ray tracing is performed to calculate the locations of the Gaussian cardinal points, the locations and diameters of the ghost entrance and exit pupils, the locations and diameters of the ghost entrance and exit windows, and the ghost chief and marginal ray heights and angles at each surface in the ghost layout. The paraxial ray trace data are used to estimate the fourth-order ghost aberration coefficients. Petzval, tangential, and sagittal ghost image surfaces are introduced. Potential ghosts are formed at the intersection points between the ghost image surfaces and the Gaussian nominal image plane. A paraxial radiometric methodology is developed to estimate the ghost irradiance point spread function at the nominal image plane. Contrast reduction by ghosts can cause a reduction in the depth of field, and a simulation model and experimental technique that can be used to measure the depth of field are presented. Finally, ghost simulation examples are provided and discussed.
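As a rough illustration of the ghost-layout enumeration step, the sketch below lists the surface-hit sequences produced by all double-reflection ghosts in an N-surface system; the function name and sequence convention are assumptions for illustration, not the author's implementation.

```python
from itertools import combinations

def two_reflection_ghost_paths(n_surfaces):
    """Enumerate surface sequences for double-reflection ghosts.

    A ghost arises when light reflects backward at surface j, travels back
    to an earlier surface i (i < j), reflects forward again, and then
    proceeds to the image plane.  Each (j, i) pair defines one ghost
    layout; surface indices are 1-based.
    """
    paths = []
    for i, j in combinations(range(1, n_surfaces + 1), 2):
        # Forward to j, back to i, then forward through the remaining surfaces.
        sequence = (list(range(1, j + 1)) + list(range(j - 1, i - 1, -1))
                    + list(range(i + 1, n_surfaces + 1)))
        paths.append(((j, i), sequence))
    return paths

# A hypothetical 4-surface (two-element) lens has 4*3/2 = 6 double-reflection ghosts.
for pair, seq in two_reflection_ghost_paths(4):
    print(pair, seq)
```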
372

Bayesian numerical analysis : global optimization and other applications

Fowkes, Jaroslav Mrazek January 2011 (has links)
We present a unifying framework for the global optimization of functions which are expensive to evaluate. The framework is based on a Bayesian interpretation of radial basis function interpolation which incorporates existing methods such as Kriging, Gaussian process regression and neural networks. This viewpoint enables the application of Bayesian decision theory to derive a sequential global optimization algorithm which can be extended to include existing algorithms of this type in the literature. By posing the optimization problem as a sequence of sampling decisions, we optimize a general cost function at each stage of the algorithm. An extension to multi-stage decision processes is also discussed. The key idea of the framework is to replace the underlying expensive function by a cheap surrogate approximation. This enables the use of existing branch and bound techniques to globally optimize the cost function. We present a rigorous analysis of the canonical branch and bound algorithm in this setting as well as newly developed algorithms for other domains including convex sets. In particular, by making use of Lipschitz continuity of the surrogate approximation, we develop an entirely new algorithm based on overlapping balls. An application of the framework to the integration of expensive functions over rectangular domains and spherical surfaces in low dimensions is also considered. To assess performance of the framework, we apply it to canonical examples from the literature as well as an industrial model problem from oil reservoir simulation.
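A minimal sketch of the surrogate idea described above, with a zero-mean Gaussian process and a squared-exponential (radial basis function) kernel standing in for the expensive objective; the names and hyperparameter values are illustrative, not the thesis implementation.

```python
import numpy as np

def gp_posterior(X, y, Xs, lengthscale=1.0, noise=1e-6):
    """Posterior mean and variance of a GP surrogate at test points Xs,
    given expensive evaluations y at points X."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d2 / lengthscale**2)

    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    mu = Ks.T @ alpha                                      # posterior mean
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v**2, axis=0)                       # posterior variance
    return mu, var

# Toy use: five expensive evaluations, cheap predictions on a grid.
X = np.linspace(0, 1, 5).reshape(-1, 1)
y = np.sin(6 * X).ravel()
Xs = np.linspace(0, 1, 50).reshape(-1, 1)
mu, var = gp_posterior(X, y, Xs)
```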
373

On probabilistic inference approaches to stochastic optimal control

Rawlik, Konrad Cyrus January 2013 (has links)
While stochastic optimal control, together with associated formulations like Reinforcement Learning, provides a formal approach to, amongst others, motor control, it remains computationally challenging for most practical problems. This thesis is concerned with the study of relations between stochastic optimal control and probabilistic inference. Such dualities – exemplified by the classical Kalman Duality between the Linear-Quadratic-Gaussian control problem and the filtering problem in Linear-Gaussian dynamical systems – make it possible to exploit advances made within the separate fields. In this context, the emphasis in this work lies with the utilisation of approximate inference methods for the control problem. Rather than concentrating on special cases which yield analytical inference problems, we propose a novel interpretation of stochastic optimal control in the general case in terms of minimisation of certain Kullback-Leibler divergences. Although these minimisations remain analytically intractable, we show that natural relaxations of the exact dual lead to new practical approaches. We introduce two general iterative methods: ψ-Learning, which has global convergence guarantees and provides a unifying perspective on several previously proposed algorithms, and Posterior Policy Iteration, which allows direct application of inference methods. From these, practical algorithms for Reinforcement Learning, based on a Monte Carlo approximation to ψ-Learning, and model-based stochastic optimal control, using a variational approximation of posterior policy iteration, are derived. In order to overcome the inherent limitations of parametric variational approximations, we furthermore introduce a new approach for non-parametric approximate stochastic optimal control based on a reproducing kernel Hilbert space embedding of the control problem. Finally, we address the general problem of temporal optimisation, i.e., joint optimisation of controls and temporal aspects, e.g., duration, of the task. Specifically, we introduce a formulation of temporal optimisation based on a generalised form of the finite horizon problem. Importantly, we show that the generalised problem has a dual finite horizon problem of the standard form, thus bringing temporal optimisation within the reach of most commonly used algorithms. Throughout, problems from the area of motor control of robotic systems are used to evaluate the proposed methods and demonstrate their practical utility.
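For orientation, one common way such a KL-divergence reformulation of control is written (a generic form, not necessarily the exact objective used in the thesis) is

$$ \pi^{*} = \arg\min_{\pi} \; \mathrm{KL}\!\left( q_{\pi}(\tau) \,\middle\|\, \tfrac{1}{Z}\, q_{0}(\tau)\, e^{-C(\tau)} \right), $$

where $q_{\pi}(\tau)$ is the trajectory distribution under policy $\pi$, $q_{0}(\tau)$ the distribution under the uncontrolled (passive) dynamics, $C(\tau)$ the accumulated cost, and $Z$ a normalising constant. Relaxing or approximating this minimisation is what yields the practical iterative schemes mentioned above.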
374

Improving Misfire Detection Using Gaussian Processes and Flywheel Error Compensation

Romeling, Gustav January 2016 (has links)
The area of misfire detection is important because of the effects of misfires on both the environment and the exhaust system. Increasing requirements on detection performance mean that improvements are always of interest. In this thesis, potential improvements to an existing misfire detection algorithm are evaluated. The improvements evaluated are: using Gaussian processes to model the classifier, alternative signal treatments for detection of multiple misfires, and the effects of where flywheel tooth angle error estimation is performed. The improvements are also evaluated for their suitability for use on-line. Both the use of Gaussian processes and the detection of multiple misfires are hard problems to solve while maintaining detection performance. Gaussian processes most likely lose performance due to loss of dependence between the weights of the classifier. They can give performance similar to the original classifier, but with greatly increased complexity. For multiple misfires, the performance can be slightly improved without loss of single-misfire performance. Greater improvements are possible, but at the cost of single-misfire performance; the choice ultimately comes down to the desired trade-off. The flywheel tooth angle error compensation gives nearly identical performance regardless of where it is estimated. Consequently the error estimation can be separated from the signal processing, allowing the implementation to be modular. Using an EKF for estimating the flywheel errors on-line is found to be both feasible and to give good performance. Combining this separation of the error estimation from the signal treatment with an EKF that is heavily restricted after initial convergence vastly reduces the computational load at only a moderate loss of performance.
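The on-line flywheel-error estimation mentioned above relies on an extended Kalman filter; the abstract does not give the state or measurement model, so the following is only a generic EKF recursion with hypothetical placeholders for those models.

```python
import numpy as np

def ekf_step(x, P, z, f, h, F, H, Q, R):
    """One generic extended Kalman filter iteration.

    x, P : current state estimate and covariance
    z    : new measurement (e.g. a tooth-period sample)
    f, h : process and measurement functions; F, H return their Jacobians
    Q, R : process and measurement noise covariances
    The flywheel-error state and measurement models themselves are not
    specified in the abstract; this only illustrates the recursion.
    """
    # Predict
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Update
    y = z - h(x_pred)                         # innovation
    S = H(x_pred) @ P_pred @ H(x_pred).T + R  # innovation covariance
    K = P_pred @ H(x_pred).T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H(x_pred)) @ P_pred
    return x_new, P_new
```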
375

Graphical Gaussian models with symmetries

Gehrmann, Helene January 2011 (has links)
This thesis is concerned with graphical Gaussian models with equality constraints on the concentration or partial correlation matrix introduced by Højsgaard and Lauritzen (2008) as RCON and RCOR models. The models can be represented by vertex- and edge-coloured graphs G = (V,ε), where parameters associated with equally coloured vertices or edges are restricted to being identical. In the first part of this thesis we study the problem of estimability of a non-zero model mean μ if the covariance structure Σ is restricted to satisfy the constraints of an RCON or RCOR model but is otherwise unknown. Exploiting results in Kruskal (1968), we obtain a characterisation of suitable linear spaces Ω such that if Σ is restricted as above, the maximum likelihood estimator μ̂ and the least squares estimator μ* of μ coincide for μ ∈ Ω, thus allowing μ and Σ to be estimated independently. For the special case of Ω being specified by equality relations among the entries of μ according to a partition M of the model variables V, our characterisation translates into a necessary and sufficient regularity condition on M and (V,ε). In the second part we address model selection of RCON and RCOR models. Due to the large number of models, we study the structure of four model classes lying strictly within the sets of RCON and RCOR models, each of which is defined by desirable statistical properties corresponding to colouring regularity conditions. Two of these appear in Højsgaard and Lauritzen (2008), while the other two arise from the regularity condition ensuring equality of estimators μ̂ = μ* found in the first part. We show each of the colouring classes to form a complete lattice, which qualifies the corresponding model spaces for an Edwards-Havránek model selection procedure (Edwards and Havránek, 1987). We develop a corresponding algorithm for one of the model classes and give an algorithm for a systematic search in accordance with the Edwards-Havránek principles for a second class. Both are applied to data sets previously analysed in the literature, with very encouraging performance.
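To make the RCON-style parameter tying concrete, the sketch below assembles a concentration matrix in which equally coloured vertices share one diagonal parameter and equally coloured edges share one off-diagonal parameter; the function and argument names are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def build_concentration(p, vertex_classes, edge_classes, theta_v, theta_e):
    """Assemble a concentration matrix K under RCON-style equality constraints.

    vertex_classes : list of lists of vertex indices sharing one colour
    edge_classes   : list of lists of (i, j) pairs sharing one colour
    theta_v, theta_e : one parameter per vertex / edge colour class
    Entries not covered by any edge class are structural zeros.
    """
    K = np.zeros((p, p))
    for cls, t in zip(vertex_classes, theta_v):
        for i in cls:
            K[i, i] = t
    for cls, t in zip(edge_classes, theta_e):
        for i, j in cls:
            K[i, j] = K[j, i] = t
    return K

# Four variables, two vertex colours, and one edge colour shared by edges (0,1) and (2,3).
K = build_concentration(4, [[0, 1], [2, 3]], [[(0, 1), (2, 3)]],
                        theta_v=[1.5, 2.0], theta_e=[-0.4])
```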
376

Analysis of sparse systems

Duff, Iain Spencer January 1972 (has links)
The aim of this thesis is to conduct a general investigation in the field of sparse matrices, to investigate and compare various techniques for handling sparse systems suggested in the literature, to develop some new techniques, and to discuss the feasibility of using sparsity techniques in the solution of overdetermined equations and the eigenvalue problem.
377

Statistical methods for QTL detection: new developments and applications in the mule duck

Kileh Wais, Mohamed 06 September 2012 (has links)
QTL detection by regression of phenotypes on transmission probabilities (the Haley-Knott model) is widely used when large families phenotyped for Gaussian traits are available. From a methodological point of view, the objective of this thesis is to propose a QTL detection method that accounts for small family sizes on the one hand, and for discrete traits on the other. To address the first question, we propose a QTL detection approach that incorporates, in the calculation of the genetic merit of genotyped individuals, performances computed over n generations of descendants. The use of a de-regressed proof as a substitute phenotype, proposed notably by Weller et al. (1990) and Tribout et al. (2008), is thus generalised. We then present comparisons between a model assuming normality of the data and a threshold model assuming a continuous distribution underlying the observed distribution, for QTL detection on discrete traits. We show that the threshold model is more accurate and more powerful when the studied trait has three modalities distributed unevenly in the population. In the second part of the thesis, the data from the GENECAN protocol were analysed, with the aim of identifying genomic regions, or quantitative trait loci (QTL), associated with traits of interest measured on over-fed mule ducks. The mule duck is an interspecific hybrid obtained by crossing a common duck female (Anas platyrhynchos) with a Muscovy drake (Cairina moschata). Three hundred and forty-two common duck females were generated in a back-cross (BC) design by crossing a Kaiya duck line with a heavy Pekin duck line. These BC females were mated with Muscovy drakes to produce 1600 mule ducks, on which measurements were taken of growth, metabolism during the growth and over-feeding periods, aptitude for over-feeding, and breast muscle and fatty liver quality. The phenotypic value of each genotyped BC female was estimated, for each trait, as the mean phenotype of her offspring weighted by a coefficient of determination (CD) depending on the number of offspring and the heritability of the trait. A genetic map of 91 microsatellite markers distributed over 16 linkage groups (LG) and covering a total of 778 cM was used. In the single-trait analysis, twenty-two QTL significant at the 1% chromosome-wide level were mapped. These QTL are mostly involved in the variability of breast muscle and fatty liver quality. The chromosomal regions of interest identified in this study should in the future be densified with markers for fine mapping.
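As a hedged illustration of the Haley-Knott idea referred to above (regression of phenotypes on transmission probabilities), the sketch below scans candidate positions with a simple least-squares fit and an F-statistic per position; it omits the de-regressed proofs, CD weights, and threshold model developed in the thesis.

```python
import numpy as np

def haley_knott_scan(y, trans_probs):
    """Single-QTL Haley-Knott style scan.

    y           : phenotypes, one per individual
    trans_probs : list of arrays, one per tested genome position, giving each
                  individual's probability of carrying the putative QTL allele
    Returns an F-statistic per position comparing the QTL model to the mean-only model.
    """
    n = len(y)
    sse0 = np.sum((y - y.mean())**2)           # null model: intercept only
    stats = []
    for p in trans_probs:
        X = np.column_stack([np.ones(n), p])   # intercept + transmission probability
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse1 = np.sum((y - X @ beta)**2)
        f = (sse0 - sse1) / (sse1 / (n - 2))   # one extra parameter in the QTL model
        stats.append(f)
    return np.array(stats)
```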
378

Geometric and Functional Inequalities

Lehec, Joseph 03 December 2008 (has links)
Most of this thesis is devoted to the Blaschke-Santaló inequality, which states that among symmetric convex sets, the Euclidean ball maximises the product vol(K) vol(K°), where K° denotes the polar body of K. Functional versions of this inequality were discovered by several authors (Ball, Artstein, Klartag, Milman, Fradelizi, Meyer, ...), but all of them are derived from the set inequality. The purpose of this thesis is to give direct proofs of these functional inequalities. This yields new proofs of the Santaló inequality, some of them very simple. The last chapter stands somewhat apart and concerns Gaussian chaos: we prove a sharp bound on the moments of Gaussian chaos due to Latała, using generic chaining arguments in the style of Talagrand.
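For reference, the set and functional forms of the inequality discussed above can be stated as follows (standard statements, not the thesis's own derivations). For a symmetric convex body $K \subset \mathbb{R}^n$ with polar $K^{\circ} = \{ y : \langle x, y\rangle \le 1 \ \forall x \in K \}$,

$$ \mathrm{vol}(K)\,\mathrm{vol}(K^{\circ}) \;\le\; \mathrm{vol}(B_2^n)^2 , $$

and for an even function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ with Legendre transform $f^{*}(y) = \sup_x \big( \langle x, y\rangle - f(x) \big)$,

$$ \int_{\mathbb{R}^n} e^{-f}\,dx \int_{\mathbb{R}^n} e^{-f^{*}}\,dx \;\le\; (2\pi)^n , $$

with equality when $f(x) = |x|^2/2$.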
379

Gaussian Process Kernels for Cross-Spectrum Analysis in Electrophysiological Time Series

Ulrich, Kyle Richard January 2016 (has links)
Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, the spectral mixture (SM) kernel was proposed to model the spectral density of a single task in a Gaussian process framework. This work develops a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. The expressive capabilities of the CSM kernel are demonstrated through implementation of 1) a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel, and 2) a Gaussian process factor analysis model, where factor scores represent the utilization of cross-spectral neural circuits. Results are presented for measured multi-region electrophysiological data.
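The cross-spectral mixture kernel builds on the single-output spectral mixture (SM) kernel of Wilson and Adams (2013); a minimal sketch of that starting point is given below (the multi-output power/phase extension is specific to the thesis and not reproduced here; the example parameters are illustrative).

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    """Single-output spectral mixture kernel evaluated at time lags tau.

    Each mixture component (weight, mean, variance) describes one Gaussian
    bump in the spectral density; the kernel is the corresponding sum of
    damped cosines: sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q).
    """
    k = np.zeros_like(tau, dtype=float)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2 * np.pi**2 * tau**2 * v) * np.cos(2 * np.pi * tau * mu)
    return k

# Two components: a slow oscillation near 1 Hz and a faster one near 8 Hz.
tau = np.linspace(0, 2, 400)
k = spectral_mixture_kernel(tau, weights=[1.0, 0.5], means=[1.0, 8.0],
                            variances=[0.1, 0.5])
```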
380

Bayesian Optimization of Hyperparameters Using Gaussian Processes

Arnold, Jakub January 2019 (has links)
The goal of this thesis was to implement a practical tool for optimizing hyperparameters of neural networks using Bayesian optimization. We show the theoretical foundations of Bayesian optimization, including the necessary mathematical background for Gaussian Process regression, and some extensions to Bayesian optimization. In order to evaluate the performance of Bayesian optimization, we performed multiple real-world experiments with different neural network architectures. In our comparison to a random search, Bayesian optimization usually obtained a higher objective function value, and achieved lower variance in repeated experiments. Furthermore, in three out of four experiments, the hyperparameters discovered by Bayesian optimization outperformed the manually designed ones. We also show how the underlying Gaussian Process regression can be a useful tool for visualizing the effects of each hyperparameter, as well as possible relationships between multiple hyperparameters.
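As a rough sketch of the optimisation loop described above, the code below fits a Gaussian process to hyperparameter/validation-loss pairs and selects the next trial point by expected improvement; the toy loss surface and parameter names are assumptions, not the tool's actual interface.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(mu, sigma, best):
    """Expected improvement acquisition for minimising a validation loss."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bo_step(observed_x, observed_y, candidates):
    """One Bayesian-optimisation step: fit a GP to the hyperparameter/loss
    pairs seen so far and return the candidate with the highest EI."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(observed_x, observed_y)
    mu, sigma = gp.predict(candidates, return_std=True)
    ei = expected_improvement(mu, sigma, observed_y.min())
    return candidates[np.argmax(ei)]

# Toy example: tune one hyperparameter (say, a log10 learning rate) against a
# synthetic loss surface standing in for an expensive network training run.
xs = np.array([[-4.0], [-2.0], [-1.0]])
loss = ((xs + 3.0) ** 2).ravel()
grid = np.linspace(-5, 0, 101).reshape(-1, 1)
next_x = bo_step(xs, loss, grid)
```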
