1

Variance reduction and outlier identification for IDDQ testing of integrated chips using principal component analysis

Balasubramanian, Vijay 25 April 2007 (has links)
Integrated circuits manufactured in current technology consist of millions of transistors with dimensions shrinking into the nanometer range. These small transistors have quiescent (leakage) currents that are increasingly sensitive to process variations, which have increased the variation in good-chip quiescent current and consequently reduced the effectiveness of IDDQ testing. This research proposes the use of a multivariate statistical technique known as principal component analysis for the purpose of variance reduction. Outlier analysis is applied to the reduced leakage current values as well as the good chip leakage current estimate, to identify defective chips. The proposed idea is evaluated using IDDQ values from multiple wafers of an industrial chip fabricated in 130 nm technology. It is shown that the proposed method achieves significant variance reduction and identifies many outliers that escape identification by other established techniques. For example, it identifies many of the absolute outliers in bad neighborhoods, which are not detected by Nearest Neighbor Residual and Nearest Current Ratio. It also identifies many of the spatial outliers that pass when using Current Ratio. The proposed method also identifies both active and passive defects.
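As a toy illustration of the pipeline described in the abstract, the sketch below applies PCA-based variance reduction to simulated leakage vectors and then flags outliers in the residual leakage. The simulated data, the single injected defect, and the 3-sigma flagging rule are illustrative assumptions, not the thesis's wafer data or tuned thresholds.

```python
# Hedged sketch: PCA variance reduction on simulated IDDQ vectors,
# followed by a simple outlier test on the residual leakage.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 chips x 10 IDDQ test patterns: a dominant chip-level
# process-variation component plus small independent per-pattern noise.
n_chips, n_patterns = 200, 10
process = rng.normal(1.0, 0.3, size=(n_chips, 1))
pattern_scale = rng.uniform(0.8, 1.2, size=(1, n_patterns))
iddq = process * pattern_scale + rng.normal(0.0, 0.02, size=(n_chips, n_patterns))
iddq[0, 3] += 1.0                      # inject one defective chip (illustrative)

# PCA via SVD on mean-centered data; removing the first principal
# component strips most of the shared good-chip process variation.
X = iddq - iddq.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_reduced = X - np.outer(X @ Vt[0], Vt[0])   # residual after removing PC1

# Outlier analysis on the residuals: flag chips whose residual norm is
# far from the population (3-sigma rule, an illustrative choice).
resid = np.linalg.norm(X_reduced, axis=1)
flags = resid > resid.mean() + 3 * resid.std()
print("flagged chips:", np.flatnonzero(flags))
```

The defect survives the PC1 removal because it affects only one pattern, whereas the removed component is spread across all patterns.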
2

Dependence concepts and selection criteria for lattice rules

Taniguchi, Yoshihiro January 2014 (has links)
Lemieux recently proposed a new approach that studies randomized quasi-Monte Carlo through dependence concepts. By analyzing the dependency structure of a rank-1 lattice, Lemieux proposed a copula-based criterion with which we can find a "good generator" for the lattice. One drawback of the criterion is that it assumes that a given function can be well approximated by a bilinear function. It is not clear if this assumption holds in general. In this thesis, we assess the validity and robustness of the copula-based criterion. We do this by working with bilinear functions, some practical problems such as Asian option pricing, and perfectly non-bilinear functions. We use the quasi-regression technique to study how bilinear a given function is. Besides assessing the validity of the bilinear assumption, we propose the bilinear-regression-based (BR) criterion, which combines quasi-regression and the copula-based criterion. We extensively test the two criteria by comparing them to other well-known criteria, such as the spectral test, through numerical experiments. We find that the copula criterion can reduce the error size by a factor of 2 when the function is bilinear. We also find that the copula-based criterion shows competitive results even when a given function does not satisfy the bilinear assumption. We also see that our newly introduced BR criterion is competitive compared to well-known criteria.
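For reference, the object these selection criteria score is the rank-1 lattice point set, whose construction is standard and can be sketched in a few lines. The generator vector below is an arbitrary illustrative choice, not one selected by any of the criteria discussed in the thesis.

```python
def rank1_lattice(n, z):
    """The n points of the rank-1 lattice with generator vector z:
    {(i * z / n) mod 1 : i = 0, ..., n-1}."""
    return [tuple((i * zj % n) / n for zj in z) for i in range(n)]

# Example: n = 8 points in 2 dimensions with (illustrative) generator z = (1, 5).
points = rank1_lattice(8, (1, 5))
for p in points:
    print(p)
```

A selection criterion (spectral, copula-based, BR) would score many candidate vectors z and keep the best; in a randomized quasi-Monte Carlo setting the whole point set is then shifted by a random vector modulo 1.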
3

Monte Carlo Simulation of Boundary Crossing Probabilities for a Brownian Motion and Curved Boundaries

Drabeck, Florian January 2005 (has links) (PDF)
We are concerned with the probability that a standard Brownian motion W(t) crosses a curved boundary c(t) on a finite interval [0, T]. Let this probability be denoted by Q(c(t); T). Due to recent advances in research, a new way of estimating Q(c(t); T) seems feasible: Monte Carlo simulation. Wang and Pötzelberger (1997) derived an explicit formula for the boundary crossing probability of piecewise linear functions, which has the form of an expectation. Based on this formula we proceed as follows: first we approximate the general boundary c(t) by a piecewise linear function c_m(t) on a uniform partition. Then we simulate Brownian sample paths in order to evaluate the expectation in the authors' formula for c_m(t). The bias resulting from estimating Q(c_m(t); T) rather than Q(c(t); T) can be bounded by a formula of Borovkov and Novikov (2005). Here the standard deviation - or the variance, respectively - is the main limiting factor when increasing the accuracy. The main goal of this dissertation is to find and evaluate variance-reducing techniques in order to enhance the quality of the Monte Carlo estimator for Q(c(t); T). Among the techniques we discuss are: antithetic sampling, stratified sampling, importance sampling, control variates, and transforming the original problem. We analyze each of these techniques thoroughly from a theoretical point of view. Further, we test each technique empirically through simulation experiments on several carefully chosen boundaries. In order to assess our results we set them in relation to a previously established benchmark. As a result of this dissertation we derive some very potent techniques that yield a substantial improvement in terms of accuracy. Further, we provide a detailed record of our simulation experiments. (author's abstract)
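A minimal sketch of the setup, with antithetic sampling as the variance-reduction step, might look as follows. For simplicity it checks crossings on a discrete grid (so it approximates Q(c_m(t); T) with an additional monitoring bias) rather than evaluating the Wang-Pötzelberger expectation directly; the boundary and simulation sizes are illustrative assumptions.

```python
# Hedged sketch: crude Monte Carlo estimate of the probability that a
# standard Brownian motion crosses the boundary c(t) on [0, T], using
# antithetic sampling (one of the techniques the dissertation studies).
import numpy as np

def crossing_prob(c, T=1.0, m=256, n_paths=20000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / m
    t = np.linspace(dt, T, m)                  # uniform monitoring grid
    z = rng.standard_normal((n_paths, m))
    estimates = []
    for incr in (z, -z):                       # antithetic pair of path sets
        w = np.cumsum(incr, axis=1) * np.sqrt(dt)   # Brownian paths on the grid
        estimates.append(np.mean((w >= c(t)).any(axis=1)))
    return 0.5 * (estimates[0] + estimates[1])

# Illustrative linear boundary c(t) = 1 + t; grid monitoring biases the
# estimate low because crossings between grid points are missed.
print(crossing_prob(lambda t: 1.0 + t))
```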
4

Policy Gradient Methods: Variance Reduction and Stochastic Convergence

Greensmith, Evan, evan.greensmith@gmail.com January 2005 (has links)
In a reinforcement learning task an agent must learn a policy for performing actions so as to perform well in a given environment. Policy gradient methods consider a parameterized class of policies and, using a policy from the class and a trajectory through the environment taken by the agent under this policy, estimate the gradient of the policy's performance with respect to the parameters. Policy gradient methods avoid some of the problems of value function methods, such as policy degradation, where inaccuracy in the value function leads to the choice of a poor policy. However, the estimates produced by policy gradient methods can have high variance.

In Part I of this thesis we study the estimation variance of policy gradient algorithms, in particular, when augmenting the estimate with a baseline, a common method for reducing estimation variance, and when using actor-critic methods. A baseline adjusts the reward signal supplied by the environment, and can be used to reduce the variance of a policy gradient estimate without adding any bias. We find the baseline that minimizes the variance. We also consider the class of constant baselines, and find the constant baseline that minimizes the variance. We compare this to the common technique of adjusting the rewards by an estimate of the performance measure. Actor-critic methods usually attempt to learn a value function accurate enough to be used in a gradient estimate without adding much bias. In this thesis we propose that in learning the value function we should also consider the variance. We show how considering the variance of the gradient estimate when learning a value function can be beneficial, and we introduce a new optimization criterion for selecting a value function.

In Part II of this thesis we consider online versions of policy gradient algorithms, where we update our policy for selecting actions at each step in time, and study the convergence of these online algorithms. For such online gradient-based algorithms, convergence results aim to show that the gradient of the performance measure approaches zero. Such a result has been shown for an algorithm based on observing trajectories between visits to a special state of the environment. However, the algorithm is not suitable in a partially observable setting, where we are unable to access the full state of the environment, and its variance depends on the time between visits to the special state, which may be large even when only a few samples are needed to estimate the gradient. To date, convergence results for algorithms that do not rely on a special state are weaker. We show that, for a certain algorithm that does not rely on a special state, the gradient of the performance measure approaches zero. We show that this continues to hold when using certain baseline algorithms suggested by the results of Part I.
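A toy illustration of the constant-baseline idea from Part I: for a score-function (REINFORCE-style) gradient estimate with scalar parameter, the variance-minimizing constant baseline is b* = E[g^2 r] / E[g^2], where g is the score and r the reward. The two-action bandit, rewards, and sample counts below are illustrative assumptions, not the thesis's setting.

```python
# Hedged sketch: optimal constant baseline vs. the common mean-reward
# baseline for a score-function gradient estimate on a toy bandit.
import math, random

random.seed(0)

theta = 0.3
p = 1.0 / (1.0 + math.exp(-theta))      # Bernoulli policy: P(a = 1)
reward = {0: 1.0, 1: 3.0}               # deterministic reward per action

def score(a):
    """d/dtheta of log pi(a; theta) for the sigmoid-Bernoulli policy."""
    return a - p

def draw():
    a = 1 if random.random() < p else 0
    return score(a), reward[a]

samples = [draw() for _ in range(100000)]

# Variance-minimizing constant baseline: b* = E[g^2 r] / E[g^2].
b_opt = (sum(g * g * r for g, r in samples)
         / sum(g * g for g, r in samples))
b_mean = sum(r for _, r in samples) / len(samples)   # common choice: E[r]

def grad_var(b):
    grads = [g * (r - b) for g, r in samples]
    m = sum(grads) / len(grads)
    return sum((x - m) ** 2 for x in grads) / len(grads)

print("var, no baseline:   ", grad_var(0.0))
print("var, E[r] baseline: ", grad_var(b_mean))
print("var, optimal b*:    ", grad_var(b_opt))
```

In this two-action toy the optimal baseline nearly zeroes the variance, while the mean-reward baseline, although a large improvement over no baseline, is not optimal.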
5

Variance Reduction for Asian Options

Galda, Galina Unknown Date (has links)
Asian options are an important family of derivative contracts with a wide variety of applications in commodity, currency, energy, interest rate, equity and insurance markets. In this master's thesis, we investigate methods for evaluating the price of Asian call options with a fixed strike. One of them is the Monte Carlo method, whose accuracy can be measured through the variance of the price estimate. We will see that this variance needs to be decreased, and variance reduction techniques serve this aim. We give evidence of the efficiency of one of these techniques - the control variate method - in a mathematical context and in a numerical comparison with the ordinary Monte Carlo method.
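A minimal sketch of the control variate method in this setting is given below. For simplicity it uses the discounted terminal stock price as the control (its expectation e^{-rT} E[S_T] = S0 is known exactly under Black-Scholes dynamics); this particular control and all market parameters are illustrative choices, not necessarily those of the thesis.

```python
# Hedged sketch: Monte Carlo pricing of a fixed-strike arithmetic Asian
# call with a control variate, compared against plain Monte Carlo.
import numpy as np

rng = np.random.default_rng(2)
S0, K, r, sigma, T, m, n = 100.0, 100.0, 0.05, 0.2, 1.0, 50, 50000

# Simulate n geometric Brownian motion paths on m time steps (log-Euler,
# which is exact for GBM on the grid).
dt = T / m
z = rng.standard_normal((n, m))
logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1)
S = np.exp(logS)

payoff = np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)   # Asian call
control = np.exp(-r * T) * S[:, -1]                              # E[control] = S0

# Regression coefficient minimizing the variance of payoff - beta*(control - S0).
beta = np.cov(payoff, control)[0, 1] / np.var(control)
cv_est = payoff - beta * (control - S0)

print("plain MC: %.4f (std err %.4f)" % (payoff.mean(), payoff.std() / np.sqrt(n)))
print("with CV:  %.4f (std err %.4f)" % (cv_est.mean(), cv_est.std() / np.sqrt(n)))
```

A stronger (and classical) control for Asian options is the geometric-average Asian call, whose price is known in closed form; the terminal-price control above was chosen only to keep the sketch short.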
6

Cycle to Cycle Manufacturing Process Control

Hardt, David E., Siu, Tsz-Sin 01 1900 (has links)
Most manufacturing processes produce parts that can only be correctly measured after the process cycle has been completed. Even if in-process measurement and control is possible, it is often too expensive or complex to practically implement. In this paper, a simple control scheme based on output measurement and input change after each processing cycle is proposed. It is shown to reduce the process dynamics to a simple gain with a delay, and reduce the control problem to a SISO discrete time problem. The goal of the controller is to both reduce mean output errors and reduce their variance. In so doing the process capability (e.g. Cpk) can be increased without additional investment in control hardware or in-process sensors. This control system is analyzed for two types of disturbance processes: independent (uncorrelated) and dependent (correlated). For the former, closed-loop control increases the output variance, whereas for the latter it can decrease it significantly. In both cases, proper controller design can reduce the mean error to zero without introducing poor transient performance. These findings were demonstrated by implementing Cycle to Cycle (CtC) control on a simple bending process (uncorrelated disturbance) and on an injection molding process (correlated disturbance). The results followed closely those predicted by the analysis. / Singapore-MIT Alliance (SMA)
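A minimal simulation of the scheme under an assumed correlated (AR(1)) disturbance might look as follows: the process is the "gain with one-cycle delay" model the paper reduces to, and an integral-style controller updates the input from the previous cycle's output error. The process gain, controller gain, and disturbance model are illustrative choices, not the paper's experimental values.

```python
# Hedged sketch: cycle-to-cycle output control under a correlated
# disturbance, compared against running the process open-loop.
import random

random.seed(3)
gain, target, Kc = 2.0, 10.0, 0.4     # process gain, setpoint, controller gain
u, d = target / gain, 0.0             # start at the nominal input
outputs_open, outputs_closed = [], []

for k in range(2000):
    d = 0.9 * d + random.gauss(0.0, 0.5)   # correlated AR(1) disturbance
    outputs_open.append(target + d)        # process at fixed nominal input
    y = gain * u + d                       # this cycle's measured output
    outputs_closed.append(y)
    u = u + Kc * (target - y) / gain       # input update applied next cycle

def mse(ys):
    return sum((y - target) ** 2 for y in ys) / len(ys)

print("open-loop MSE:  ", mse(outputs_open))
print("closed-loop MSE:", mse(outputs_closed))
```

With an uncorrelated disturbance the same feedback would instead inflate the output variance, which is the paper's point about matching the controller to the disturbance structure.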
8

Coupled Sampling Methods For Filtering

Yu, Fangyuan 13 March 2022 (has links)
More often than not, we cannot directly measure many phenomena that are crucial to us. However, we usually have access to certain partial observations on the phenomena of interest, as well as a mathematical model of them. The filtering problem seeks estimation of the phenomena given all the accumulated partial information. In this thesis, we study several topics concerning the numerical approximation of the filtering problem. First, we study the continuous-time filtering problem. Given high-frequency observations in discrete time, we perform a double discretization of the non-linear filter to allow for filter estimation with a particle filter. By using the multilevel strategy, given any ε > 0, our algorithm achieves an MSE level of O(ε^2) at a cost of O(ε^-3), while the particle filter requires a cost of O(ε^-4). Second, we propose a de-biasing scheme for the particle filter under the partially observed diffusion model. The novel scheme is free of innate particle filter bias and discretization bias, through a double randomization method of [14]. Our estimator is perfectly parallel and achieves a similar cost reduction to the multilevel particle filter. Third, we look at a high-dimensional linear Gaussian state-space model in continuous time. We propose a novel multilevel estimator which requires a cost of O(ε^-2 log(ε)^2), compared to ensemble Kalman-Bucy filters (EnKBFs), which require O(ε^-3) for an MSE target of O(ε^2). Simulation results verify our theory for models of dimension ~ 10^6. Lastly, we consider model estimation through learning an unknown parameter that characterizes the partially observed diffusions. We propose algorithms to provide unbiased estimates of the Hessian and the inverse Hessian, which allows second-order optimization for parameter learning in the model.
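A generic sketch of the multilevel strategy mentioned above, applied to a toy SDE expectation rather than the filtering problem itself: coarse and fine discretizations are coupled by sharing the same Brownian increments, and the telescoping sum E[P_L] = E[P_0] + Σ E[P_l - P_{l-1}] is estimated level by level. The model, level count, and per-level sample sizes are all illustrative assumptions.

```python
# Hedged sketch: multilevel Monte Carlo for E[X_T] under an Euler scheme
# for geometric Brownian motion, with coupled coarse/fine levels.
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0

def coupled_level(l, n):
    """n samples of P_l - P_{l-1} (or P_0 when l = 0), with P = X_T."""
    m = 2 ** l                                   # fine grid: m Euler steps
    dt = T / m
    dw = rng.standard_normal((n, m)) * np.sqrt(dt)
    xf = np.full(n, x0)
    for k in range(m):                           # fine Euler path
        xf = xf + mu * xf * dt + sigma * xf * dw[:, k]
    if l == 0:
        return xf
    xc = np.full(n, x0)
    dwc = dw[:, 0::2] + dw[:, 1::2]              # coarse increments from fine
    for k in range(m // 2):                      # coupled coarse Euler path
        xc = xc + mu * xc * 2 * dt + sigma * xc * dwc[:, k]
    return xf - xc

L = 6
# Decreasing per-level sample sizes (an illustrative schedule, not the
# variance-optimal allocation).
est = sum(coupled_level(l, 20000 // (l + 1)).mean() for l in range(L + 1))
print("MLMC estimate of E[X_T]:", est, " exact:", x0 * np.exp(mu * T))
```

The coupling makes the correction terms P_l - P_{l-1} small in variance, so most samples can be spent on the cheap coarse level; the same principle underlies the multilevel particle filter.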
9

Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters

Hanzely, Filip 20 August 2020 (has links)
Many key problems in machine learning and data science are routinely modeled as optimization problems and solved via optimization algorithms. With the increase of the volume of data and the size and complexity of the statistical models used to formulate these often ill-conditioned optimization tasks, there is a need for new efficient algorithms able to cope with these challenges. In this thesis, we deal with each of these sources of difficulty in a different way. To efficiently address the big data issue, we develop new methods which in each iteration examine a small random subset of the training data only. To handle the big model issue, we develop methods which in each iteration update a random subset of the model parameters only. Finally, to deal with ill-conditioned problems, we devise methods that incorporate either higher-order information or Nesterov’s acceleration/momentum. In all cases, randomness is viewed as a powerful algorithmic tool that we tune, both in theory and in experiments, to achieve the best results. Our algorithms have their primary application in training supervised machine learning models via regularized empirical risk minimization, which is the dominant paradigm for training such models. However, due to their generality, our methods can be applied in many other fields, including but not limited to data science, engineering, scientific computing, and statistics.
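The two randomization ideas described above (examining a random subset of the data per iteration, and updating a random subset of the parameters per iteration) can be sketched on a toy ridge-regression problem as follows. Problem sizes, step sizes, and iteration counts are illustrative assumptions, not the thesis's algorithms or tuning.

```python
# Hedged sketch: minibatch SGD (random data subset per step) and
# randomized coordinate descent (random parameter per step) on ridge
# regression, i.e. regularized empirical risk minimization.
import numpy as np

rng = np.random.default_rng(5)
n, d, lam = 500, 20, 0.1
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def loss(x):
    return 0.5 * np.mean((A @ x - b) ** 2) + 0.5 * lam * x @ x

# Minibatch SGD: unbiased gradient from a random subset of the data.
x = np.zeros(d)
for _ in range(2000):
    idx = rng.choice(n, size=32, replace=False)
    g = A[idx].T @ (A[idx] @ x - b[idx]) / len(idx) + lam * x
    x -= 0.05 * g
sgd_loss = loss(x)

# Randomized coordinate descent: exact minimization over one random
# coordinate per iteration (closed form for a quadratic objective).
x = np.zeros(d)
coord_curv = (A ** 2).mean(axis=0) + lam        # per-coordinate curvature
for _ in range(2000):
    j = rng.integers(d)
    grad_j = A[:, j] @ (A @ x - b) / n + lam * x[j]
    x[j] -= grad_j / coord_curv[j]
cd_loss = loss(x)

print("SGD loss:", sgd_loss, " CD loss:", cd_loss)
```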
10

Judgement post-stratification for designed experiments

Du, Juan 07 August 2006 (has links)
No description available.
