  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Bayesian state estimation in partially observable Markov processes

Gorynin, Ivan 13 December 2017 (has links)
This thesis addresses the Bayesian estimation of hybrid-valued state variables in time series. The probability density function of a hybrid-valued random variable has a finite-discrete component and a continuous component. Several general algorithms for state estimation in partially observable Markov processes are introduced and compared with sequential Monte Carlo methods from both a theoretical and a practical viewpoint. The main result is that the proposed algorithms require significantly less processing time than the classic sequential Monte Carlo methods.
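The sequential Monte Carlo baseline against which the thesis compares can be sketched as a bootstrap particle filter; the linear-Gaussian model and all constants below are illustrative choices, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear-Gaussian state-space model (not from the thesis):
#   x_t = 0.9 x_{t-1} + w_t,   y_t = x_t + v_t
T, N = 50, 1000                      # time steps, particle count
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal(0.0, 1.0)
    y[t] = x[t] + rng.normal(0.0, 0.5)

# Bootstrap particle filter: propagate, weight by the likelihood, resample
particles = rng.normal(0.0, 1.0, N)
estimates = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + rng.normal(0.0, 1.0, N)
    w = np.exp(-0.5 * ((y[t] - particles) / 0.5) ** 2)   # Gaussian likelihood
    w /= w.sum()
    estimates[t] = np.dot(w, particles)                  # posterior-mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]    # multinomial resampling
```

The per-step cost is linear in the number of particles, which is the cost the thesis's proposed algorithms aim to undercut.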
42

Some approximation schemes in polynomial optimization

Hess, Roxana 28 September 2017 (has links)
This thesis is dedicated to investigations of the moment-sums-of-squares hierarchy, a family of semidefinite programming problems in polynomial optimization, commonly called the Lasserre hierarchy. We examine different aspects of its properties and applications. As applications of the hierarchy, we approximate some potentially complicated objects, namely the polynomial abscissa and optimal designs on semialgebraic domains. Applying the Lasserre hierarchy yields approximations by polynomials of fixed degree and hence bounded complexity. With regard to the complexity of the hierarchy itself, we construct a modification of it for which an improved convergence rate can be proved. An essential concept of the hierarchy is to use quadratic modules and their duals as a tractable characterization of the cone of positive polynomials and the moment cone, respectively. We further exploit this idea to construct tight approximations of semialgebraic sets with polynomial separators.
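The moment side of the hierarchy can be illustrated in a few lines of numpy; the uniform measure and the order below are arbitrary choices for the sketch, not objects from the thesis:

```python
import numpy as np

# Moments m_k of the uniform measure on [0, 1]: m_k = 1/(k+1)
d = 3                                     # matrix indexed by monomial degrees 0..d
m = np.array([1.0 / (k + 1) for k in range(2 * d + 1)])

# Hankel moment matrix M[i, j] = m_{i+j}. Moments of a nonnegative measure
# always yield a positive semidefinite moment matrix -- this is the tractable
# constraint the hierarchy imposes on candidate moment sequences.
M = np.array([[m[i + j] for j in range(d + 1)] for i in range(d + 1)])
eigvals = np.linalg.eigvalsh(M)           # all nonnegative for a genuine measure
```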
43

Fast and accurate lithography simulation and optical proximity correction for nanometer design for manufacturing

Yu, Peng 23 October 2009 (has links)
As semiconductor manufacturing feature sizes scale into the nanometer regime, circuit layout printability is significantly reduced due to the fundamental limits of lithography systems. This dissertation studies related research topics in lithography simulation and optical proximity correction. A recursive integration method is used to reduce the errors in the transmission cross coefficient (TCC), an important factor in the Hopkins equation for aerial image simulation. The runtime is further reduced, without increasing the errors, by exploiting the fact that the TCC is usually computed on uniform grids. A flexible software framework, ELIAS, is also provided, which can compute the TCC for various lithography settings, such as different illuminations. Optimal coherent approximations (OCAs), which are used for full-chip image simulation, can be sped up by considering the symmetry properties of lithography systems. The runtime improvement can be doubled without loss of accuracy, and it applies to vectorial imaging models as well. Even where the symmetry properties do not hold strictly, the new method can be generalized so that it remains faster than the old one. Besides new numerical image simulation algorithms, variations in lithography systems are also modeled. A Variational LIthography Model (VLIM) and its calibration method are provided. The Variational Edge Placement Error (V-EPE) metric, an improvement of the original Edge Placement Error (EPE) metric, is introduced based on the model. A true process-variation-aware OPC (PV-OPC) framework is proposed using the V-EPE metric. Due to the analytical nature of VLIM, our PV-OPC is only about 2-3× slower than conventional OPC, but it explicitly considers the two main sources of process variation (exposure dose and focus) during OPC.
The EPE metric has been used in conventional OPC algorithms, but it requires many intensity simulations and accounts for the majority of the OPC runtime. By making the OPC algorithm intensity-based (IB-OPC) rather than EPE-based, we can reduce the number of intensity simulations and hence the OPC runtime. An efficient intensity-derivative computation method is also provided, which makes the new algorithm converge faster than the EPE-based algorithm. Our experimental results show a runtime speedup of more than 10× with comparable result quality relative to EPE-based OPC. The OPC algorithms mentioned above are vector based; another category is pixel based. Vector-based algorithms generally generate less complex masks, but pixel-based algorithms produce much better results in terms of contour fidelity. Observing that vector-based algorithms preserve mask shape topologies, which leads to lower mask complexity, we combine the strengths of both categories: the topology-invariant property and the pixel-based mask representation. A topological invariant pixel-based OPC (TIP-OPC) algorithm is proposed, with lithography-friendly mask topological invariant operations and an efficient Fast Fourier Transform (FFT) based cost-function sensitivity computation. Experimental results show that TIP-OPC achieves much better post-OPC contours than vector-based OPC while maintaining the mask shape topologies.
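The coherent-imaging step underlying the OCA approach can be sketched with FFTs: for each kernel, the intensity is the squared magnitude of the mask convolved with that kernel. The Gaussian kernel and mask below are stand-ins of our choosing, not an actual lithography pupil:

```python
import numpy as np

n = 64
mask = np.zeros((n, n))
mask[24:40, 28:36] = 1.0                              # a simple rectangular feature

# Gaussian stand-in for one optimal-coherent-approximation kernel (illustrative only)
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-(FX**2 + FY**2) / (2 * 0.05**2))          # low-pass, pupil-like filter

field = np.fft.ifft2(np.fft.fft2(mask) * H)           # coherent field for this kernel
intensity = np.abs(field) ** 2                        # aerial-image intensity
# A full OCA image sums weighted intensities over several such kernels.
```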
44

Algorithms for polynomial and rational approximation

Pachon, Ricardo January 2010 (has links)
Robust algorithms for the approximation of functions are studied and developed in this thesis. Novel results and algorithms on piecewise polynomial interpolation, rational interpolation and best polynomial and rational approximations are presented. Algorithms for the extension of Chebfun, a software system for the numerical computation with functions, are described. These algorithms allow the construction and manipulation of piecewise smooth functions numerically with machine precision. Breakpoints delimiting subintervals are introduced explicitly, implicitly or automatically, the latter method combining recursive subdivision and edge detection techniques. For interpolation by rational functions with free poles, a novel method is presented. When the interpolation nodes are roots of unity or Chebyshev points, the algorithm is particularly simple and relies on discrete Fourier transform matrices, which results in a fast implementation using the Fast Fourier Transform. The method is generalised for arbitrary grids, which requires the construction of polynomials orthogonal on the set of interpolation nodes. The new algorithm has connections with other methods, particularly the work of Jacobi and Kronecker, Berrut and Mittelmann, and Egecioglu and Koc. Computed rational interpolants are compared with the behaviour expected from the theory of convergence of these approximants, and the difficulties due to truncated arithmetic are explained. The appearance of common factors in the numerator and denominator due to finite precision arithmetic is characterised by the behaviour of the singular values of the linear system associated with the rational interpolation problem. Finally, new Remez algorithms for the computation of best polynomial and rational approximations are presented. These algorithms rely on interpolation for the computation of trial functions and on Chebfun for the location of trial references. For polynomials, the algorithm is particularly robust and efficient, and we report experiments with degrees in the thousands. For rational functions, we clarify the numerical issues that affect its application.
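The barycentric interpolation formula at Chebyshev points, the workhorse behind Chebfun-style computation, can be sketched as follows (a minimal implementation of our own, not Chebfun's code):

```python
import numpy as np

def cheb_interp(f, n):
    """Barycentric interpolation in n+1 Chebyshev points of the second kind on [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev points
    w = (-1.0) ** np.arange(n + 1)                    # barycentric weights (up to scaling)
    w[0] /= 2; w[-1] /= 2
    fx = f(x)
    def p(t):
        t = np.asarray(t, dtype=float)
        diff = t[..., None] - x
        exact = np.isclose(diff, 0.0)                 # queries that hit a node
        diff[exact] = 1.0                             # avoid division by zero there
        terms = w / diff
        vals = (terms @ fx) / terms.sum(-1)
        # restore exact nodal values where the query coincides with a node
        return np.where(exact.any(-1), fx[np.argmax(exact, axis=-1)], vals)
    return p

p = cheb_interp(np.exp, 20)
t = np.linspace(-1.0, 1.0, 101)
# for smooth functions the error decays geometrically in n
```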
45

Modelling the transition from channel-veins to PSBs in the early stage of fatigue tests

Zhu, Yichao January 2012 (has links)
Dislocation channel-veins and persistent slip bands (PSBs) are characteristic dislocation configurations that are of interest to both industry and academia. However, existing mathematical models are not adequate to describe the mechanism of the transition between these two states. In this thesis, a series of models is proposed to give a quantitative description of this transition. The full problem is considered from two angles. Firstly, the general motion and instabilities of arbitrary curved dislocations are studied both analytically and numerically. The law of motion and local expansions are then used to track the shapes of screw segments moving through channels, which are believed to induce dislocation multiplication by cross-slip. The second approach is to investigate the collective behaviour of a large number of dislocations, both geometrically necessary and otherwise. The traditional method of multiple scales does not apply well to the pile-up of two arrays of dislocations of opposite signs on a pair of neighbouring glide planes in two-dimensional space. Certain quantities have to be defined more accurately in the multiple-scale coordinates to capture the much more localised resultant stress caused by these dislocation pairs. Through detailed calculations, one-dimensional dipoles can be homogenised to obtain insightful results both on a local scale, where the dipole pattern is the key diagnostic, and on a macroscopic scale, on which density variations are of most interest. Equilibria of dislocation dipoles in a two-dimensional regular lattice have also been studied. Some natural transitions between different patterns can be found as a result of geometrical instabilities.
46

Analysis of traveling wave propagation in one-dimensional integrate-and-fire neural networks

Zhang, Jie 15 December 2016 (has links)
One-dimensional neural networks composed of large numbers of integrate-and-fire neurons have been widely used to model electrical activity propagation in neural slices. Despite these efforts, the vast majority of these computational models have no analytical solutions. Consequently, my Ph.D. research focuses on a specific class of homogeneous integrate-and-fire neural networks for which analytical solutions of the network dynamics can be derived. One crucial analytical finding is that the traveling-wave acceleration depends quadratically on the instantaneous speed of activity propagation, so that two wave-speed solutions exist: one fast and stable, the other slow and unstable. Via this property, we analytically compute the temporal-spatial spiking dynamics to gain insight into the stability mechanisms of traveling-wave propagation; the analytical solutions are in perfect agreement with the numerical solutions. This analytical method can also be applied to determine the effects induced by a non-conductive gap of brain tissue, and it extends to more general synaptic connectivity functions by converting the evolution equations for the network dynamics into a low-dimensional system of ordinary differential equations. Building upon these results, we investigate how periodic inhomogeneities affect the dynamics of activity propagation. In particular, two types of periodic inhomogeneity are studied: alternating regions of additional fixed excitation and inhibition, and a cosine-form inhomogeneity. Of special interest are the conditions leading to propagation failure. With similar analytical procedures, explicit expressions for the critical speeds of activity propagation are obtained under the influence of additional inhibition and excitation. An explicit formula for the speed modulation is difficult to determine in the case of the cosine-form inhomogeneity; instead of exact solutions, a series of speed approximations is constructed, with higher-order approximations yielding higher accuracy.
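The quadratic speed-acceleration relation can be illustrated with a toy ordinary differential equation; all constants below are invented for the sketch and are not the thesis's parameters:

```python
# Toy model (constants invented): acceleration depends quadratically on speed,
#   dv/dt = k * (v - v_slow) * (v_fast - v),
# giving two equilibrium speeds: v_fast is stable, v_slow is unstable.
k, v_slow, v_fast = 1.0, 0.3, 1.2

def simulate(v0, dt=1e-3, steps=20000):
    v = v0
    for _ in range(steps):
        v += dt * k * (v - v_slow) * (v_fast - v)   # forward Euler step
    return v

# Speeds perturbed above the unstable equilibrium relax onto the fast stable
# branch from either side; below v_slow the speed collapses instead,
# corresponding to propagation failure.
settled_low = simulate(v_slow + 0.01)
settled_high = simulate(2.0)
```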
47

Interpolation and Approximation

Lal, Ram 05 1900 (has links)
This paper consists of three chapters. The first chapter discusses interpolation: a theorem on the uniqueness of the solution to the general interpolation problem is proven, the problem of how to represent this unique solution is discussed, and the error involved in interpolation and the convergence of the interpolation process are developed. In the second chapter, a theorem on the uniform approximation of continuous functions is proven, and then best approximation and least-squares approximation (a special case of best approximation) are discussed. The third chapter discusses orthogonal polynomials, bounded linear functionals in Hilbert spaces, and interpolation and approximation in Hilbert space.
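The uniqueness result of the first chapter can be illustrated numerically: with distinct nodes the Vandermonde matrix is nonsingular, so the interpolation system has exactly one solution. A sketch with sample data of our choosing:

```python
import numpy as np

# n+1 distinct nodes give a nonsingular Vandermonde system V @ c = y,
# hence exactly one interpolating polynomial of degree <= n.
x = np.array([0.0, 0.5, 1.0, 1.5])
y = np.sin(x)

V = np.vander(x, increasing=True)                 # columns 1, x, x^2, x^3
c = np.linalg.solve(V, y)                         # unique coefficient vector

# Least-squares approximation (second chapter): best degree-1 fit in the 2-norm,
# which generally does not interpolate the data exactly.
coef_ls = np.polynomial.polynomial.polyfit(x, y, 1)
```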
48

Numerical techniques for the American put

Randell, Sean David 11 December 2008 (has links)
This dissertation considers an American put option written on a single underlying which does not pay dividends, for which no closed-form solution exists. As a consequence, numerical techniques have been developed to estimate the value of the American put option. These include analytical approximations, tree or lattice methods, finite difference methods, Monte Carlo simulation and integral representations. We first present the mathematical descriptions underlying these numerical techniques. We then provide an examination of a selection of algorithms from each technique, including implementation details, possible enhancements and a description of the convergence behaviour. Finally, we compare the estimates and the execution times of each of the algorithms considered.
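A minimal version of the tree-method family, here a Cox-Ross-Rubinstein binomial tree with illustrative parameters (the dissertation examines several such algorithms in far more detail):

```python
import numpy as np

def american_put_crr(S0, K, r, sigma, T, N):
    """Cox-Ross-Rubinstein binomial tree for an American put on a non-dividend stock."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)            # risk-neutral up probability
    disc = np.exp(-r * dt)
    # stock prices at maturity, then backward induction with an early-exercise check
    S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    V = np.maximum(K - S, 0.0)
    for _ in range(N):
        S = S[:-1] * d                            # prices one step earlier (since d = 1/u)
        V = np.maximum(disc * (p * V[:-1] + (1 - p) * V[1:]), K - S)
    return V[0]

price = american_put_crr(100.0, 100.0, 0.05, 0.2, 1.0, 500)
```

The early-exercise comparison at each node is what distinguishes this from the European case, for which a closed-form Black-Scholes price exists.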
49

K-Shell Ionization Cross Sections For Elements Se To Pd: 0.4 To 2.0 MeV

Criswell, Tommy L. 12 1900 (has links)
K-shell ionization cross sections for protons over the energy range of 0.4 to 2.0 MeV have been measured on thin targets of the elements Se, Br, Rb, Sr, Y, Mo and Pd. Total x-ray and ionization cross sections for the K-shell are reported. The experimental values of the ionization cross sections are compared to the non-relativistic plane-wave Born approximation, the binary-encounter approximation, the constrained binary-encounter approximation, and the plane-wave Born approximation with corrections for Coulomb-deflection and binding-energy effects.
50

Automated Discovery of Numerical Approximation Formulae Via Genetic Programming

Streeter, Matthew J 26 April 2001 (has links)
This thesis describes the use of genetic programming to automate the discovery of numerical approximation formulae. Results are presented involving the rediscovery of known approximations for harmonic numbers and the discovery of rational polynomial approximations for functions of one or more variables, the latter of which are compared to Padé approximations obtained through a symbolic mathematics package. For functions of a single variable, it is shown that evolved solutions can be superior to Padé approximations, a powerful technique from numerical analysis, under certain tradeoffs between approximation cost and accuracy. For functions of more than one variable, we are able to evolve rational polynomial approximations where no Padé approximation can be computed. Furthermore, evolved approximations can be iteratively improved through the evolution of approximations to their error functions. Based on these results, we consider genetic programming a powerful and effective technique for the automated discovery of numerical approximation formulae.
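The Padé baseline against which evolved formulae are compared can be computed with scipy; this sketch builds the [2/2] Padé approximant of exp from its Taylor coefficients, an example of our own choosing:

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Taylor coefficients of exp(x) up to degree 4, then the [2/2] Padé approximant
an = [1.0 / factorial(k) for k in range(5)]
p, q = pade(an, 2)                 # numerator and denominator as np.poly1d objects

x = 0.5
approx = p(x) / q(x)               # close to exp(0.5) near the expansion point
```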
