About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Variance reduction and outlier identification for IDDQ testing of integrated chips using principal component analysis

Balasubramanian, Vijay 25 April 2007 (has links)
Integrated circuits manufactured in current technology consist of millions of transistors with dimensions shrinking into the nanometer range. These small transistors have quiescent (leakage) currents that are increasingly sensitive to process variations, which have increased the variation in good-chip quiescent current and consequently reduced the effectiveness of IDDQ testing. This research proposes the use of a multivariate statistical technique known as principal component analysis for the purpose of variance reduction. Outlier analysis is applied to the reduced leakage current values as well as the good chip leakage current estimate, to identify defective chips. The proposed idea is evaluated using IDDQ values from multiple wafers of an industrial chip fabricated in 130 nm technology. It is shown that the proposed method achieves significant variance reduction and identifies many outliers that escape identification by other established techniques. For example, it identifies many of the absolute outliers in bad neighborhoods, which are not detected by Nearest Neighbor Residual and Nearest Current Ratio. It also identifies many of the spatial outliers that pass when using Current Ratio. The proposed method also identifies both active and passive defects.
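As an illustration of the proposed flow, the sketch below applies PCA to a matrix of per-chip leakage measurements and screens the residuals for outliers. It is a minimal sketch, not the thesis's procedure: the synthetic data, the number of retained components, and the 3-sigma cutoff are all illustrative assumptions.

```python
# Minimal sketch: PCA-based variance reduction for IDDQ screening,
# assuming one row per chip and one column per test vector.
import numpy as np

def pca_outliers(iddq, n_comp=2, k_sigma=3.0):
    """Flag chips whose residual leakage deviates from the population."""
    X = iddq - iddq.mean(axis=0)                # center each test vector
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # The leading components model process-driven ("good chip") leakage.
    estimate = U[:, :n_comp] @ np.diag(s[:n_comp]) @ Vt[:n_comp]
    residual = X - estimate                     # variance-reduced currents
    score = np.abs(residual).max(axis=1)        # per-chip outlier score
    return score > score.mean() + k_sigma * score.std()

rng = np.random.default_rng(0)
iddq = rng.lognormal(mean=0.0, sigma=0.3, size=(500, 20))  # synthetic wafer
iddq[7, 5:9] += 2.5                 # inject an elevated-leakage defect
print(np.flatnonzero(pca_outliers(iddq)))      # typically flags chip 7
```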
2

A general framework for reducing variance in agent evaluation

White, Martha Unknown Date
No description available.
3

Dependence concepts and selection criteria for lattice rules

Taniguchi, Yoshihiro January 2014 (has links)
Lemieux recently proposed a new approach that studies randomized quasi-Monte Carlo through dependency concepts. By analyzing the dependency structure of a rank-1 lattice, Lemieux proposed a copula-based criterion with which we can find a "good generator" for the lattice. One drawback of the criterion is that it assumes that a given function can be well approximated by a bilinear function. It is not clear if this assumption holds in general. In this thesis, we assess the validity and robustness of the copula-based criterion. We do this by working with bilinear functions, some practical problems such as Asian option pricing, and perfectly non-bilinear functions. We use the quasi-regression technique to study how bilinear a given function is. Besides assessing the validity of the bilinear assumption, we propose the bilinear regression (BR) criterion, which combines quasi-regression and the copula-based criterion. We extensively test the two criteria through numerical experiments, comparing them to other well-known criteria such as the spectral test. We find that the copula-based criterion can reduce the error size by a factor of 2 when the function is bilinear. We also find that the copula-based criterion shows competitive results even when a given function does not satisfy the bilinear assumption, and that our newly introduced BR criterion is competitive with well-known criteria.
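For context, the sketch below shows the object such criteria are meant to tune: a randomly shifted rank-1 lattice rule. The generator vector z and the sample sizes are illustrative placeholders, not values produced by the copula-based or BR criteria.

```python
# Minimal sketch: randomized quasi-Monte Carlo with a rank-1 lattice
# and Cranley-Patterson random shifts.
import numpy as np

def shifted_lattice_estimate(f, n, z, n_shifts=10, rng=None):
    """Average f over n lattice points, randomized by uniform shifts."""
    rng = np.random.default_rng(rng)
    i = np.arange(n)[:, None]                       # point indices 0..n-1
    base = (i * np.asarray(z)[None, :] / n) % 1.0   # rank-1 lattice in [0,1)^d
    estimates = []
    for _ in range(n_shifts):
        pts = (base + rng.random(len(z))) % 1.0     # random shift, mod 1
        estimates.append(f(pts).mean())
    est = np.array(estimates)
    return est.mean(), est.std(ddof=1) / np.sqrt(n_shifts)

# Example: a smooth test integrand on [0,1)^2 with known integral 1/4.
f = lambda u: u[:, 0] * u[:, 1]
mean, stderr = shifted_lattice_estimate(f, n=1021, z=[1, 306])
print(mean, "+/-", stderr)
```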
4

A general framework for reducing variance in agent evaluation

White, Martha 06 1900 (has links)
In this work, we present a unified, general approach to variance reduction in agent evaluation based on machine learning. Evaluating an agent's performance in a stochastic setting is necessary for agent development, scientific evaluation, and competitions. Traditionally, evaluation is done using Monte Carlo estimation (sample averages); however, the magnitude of the stochasticity in the domain or the high cost of sampling can often prevent the approach from yielding statistically significant conclusions. Recently, an advantage sum technique based on control variates has been proposed for constructing unbiased, low-variance estimates of agent performance. The technique requires an expert to define a value function over states of the system, essentially a guess of the state's unknown value. In this work, we propose learning this value function from past interactions between agents in some target population. Our learned value functions have two key advantages: they can be applied in domains where no expert value function is available, and they can result in tuned evaluation for a specific population of agents (e.g., novice versus advanced agents). This work has three main contributions. First, we consolidate previous work in using control variates for variance reduction into one unified, general framework and summarize the connections between this previous work. Second, our framework makes variance reduction practically possible in any sequential decision making task where designing the expert value function is time-consuming, difficult, or essentially impossible. We prove the optimality of our approach and extend the theoretical understanding of advantage sum estimators. In addition, we significantly extend the applicability of advantage sum estimators and discuss practical methods for using our framework in real-world scenarios. Finally, we provide low-variance estimators for three poker domains previously without variance reduction and improve strategy selection in the expert-level University of Alberta poker bot.
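The control variate recipe underlying advantage sum estimators can be sketched generically: subtract from each outcome a correlated quantity whose expectation is known. In the thesis that quantity comes from a learned value function; the sketch below stubs it with a synthetic "luck" variable and is illustrative only.

```python
# Minimal sketch: control-variate estimate of mean agent performance.
import numpy as np

def cv_mean(x, c, c_mean):
    """Unbiased estimate of E[x] using a control c with known E[c]=c_mean."""
    beta = np.cov(x, c, ddof=1)[0, 1] / c.var(ddof=1)  # variance-minimizing
    adj = x - beta * (c - c_mean)
    return adj.mean(), adj.std(ddof=1) / np.sqrt(len(x))

rng = np.random.default_rng(1)
luck = rng.normal(size=100_000)           # chance element (e.g. cards dealt)
payoff = 2.0 * luck + rng.normal(scale=0.5, size=luck.size)  # noisy outcome
naive_se = payoff.std(ddof=1) / np.sqrt(payoff.size)
est, se = cv_mean(payoff, luck, c_mean=0.0)
print(f"naive SE {naive_se:.4f}  control-variate SE {se:.4f}")
```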
5

A variance reduction technique for production cost simulation

Wise, Michael Anthony January 1989 (has links)
No description available.
6

Monte Carlo Simulation of Boundary Crossing Probabilities for a Brownian Motion and Curved Boundaries

Drabeck, Florian January 2005 (has links) (PDF)
We are concerned with the probability that a standard Brownian motion W(t) crosses a curved boundary c(t) on a finite interval [0, T]. Let this probability be denoted by Q(c(t); T). Due to recent advances in research, a new way of estimating Q(c(t); T) is feasible: Monte Carlo simulation. Wang and Pötzelberger (1997) derived an explicit formula, in the form of an expectation, for the boundary crossing probability of piecewise linear boundaries. Based on this formula we proceed as follows: first we approximate the general boundary c(t) by a piecewise linear function c_m(t) on a uniform partition; then we simulate Brownian sample paths in order to evaluate the expectation in the authors' formula for c_m(t). The bias incurred by estimating Q(c_m(t); T) rather than Q(c(t); T) can be bounded by a formula of Borovkov and Novikov (2005). The standard deviation, or equivalently the variance, of the estimator is then the main factor limiting accuracy. The main goal of this dissertation is to find and evaluate variance reduction techniques in order to enhance the quality of the Monte Carlo estimator for Q(c(t); T). Among the techniques we discuss are: antithetic sampling, stratified sampling, importance sampling, control variates, and transforming the original problem. We analyze each of these techniques thoroughly from a theoretical point of view. Further, we test each technique empirically through simulation experiments on several carefully chosen boundaries. In order to assess our results we set them in relation to a previously established benchmark. As a result of this dissertation we derive some very potent techniques that yield a substantial improvement in terms of accuracy. Further, we provide a detailed record of our simulation experiments. (author's abstract)
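A minimal sketch of this estimator follows, using antithetic sampling (one of the techniques discussed) for variance reduction and a constant boundary as a sanity check; the grid and sample sizes are illustrative choices.

```python
# Minimal sketch: Monte Carlo boundary crossing probability via the
# exact crossing probability of each linear piece given its endpoints,
# with antithetic path pairs.
import numpy as np

def crossing_prob(c, T, m=64, n=50_000, rng=None):
    """Estimate P(W crosses boundary c on [0, T])."""
    rng = np.random.default_rng(rng)
    t = np.linspace(0.0, T, m + 1)
    b = c(t)                                    # boundary on the grid
    dt = np.diff(t)
    z = rng.standard_normal((n, m))
    estimates = []
    for zz in (z, -z):                          # antithetic pair of path sets
        w = np.hstack([np.zeros((n, 1)), np.cumsum(zz * np.sqrt(dt), axis=1)])
        d0, d1 = b[:-1] - w[:, :-1], b[1:] - w[:, 1:]
        expo = np.minimum(-2.0 * d0 * d1 / dt, 0.0)
        # crossing prob. of a segment: exp(...) if both endpoints below, else 1
        seg = np.where((d0 > 0) & (d1 > 0), np.exp(expo), 1.0)
        estimates.append(1.0 - np.prod(1.0 - seg, axis=1))
    return float(np.mean(estimates))

# Constant boundary c(t) = 1 on [0, 1]: exact value 2*(1 - Phi(1)) ~ 0.3173.
print(crossing_prob(lambda t: np.ones_like(t), T=1.0))
```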
7

Policy Gradient Methods: Variance Reduction and Stochastic Convergence

Greensmith, Evan, evan.greensmith@gmail.com January 2005 (has links)
In a reinforcement learning task an agent must learn a policy for performing actions so as to perform well in a given environment. Policy gradient methods consider a parameterized class of policies, and, using a policy from the class and a trajectory through the environment taken by the agent under this policy, estimate the performance of the policy with respect to the parameters. Policy gradient methods avoid some of the problems of value function methods, such as policy degradation, where inaccuracy in the value function leads to the choice of a poor policy. However, the estimates produced by policy gradient methods can have high variance.

In Part I of this thesis we study the estimation variance of policy gradient algorithms, in particular when augmenting the estimate with a baseline, a common method for reducing estimation variance, and when using actor-critic methods. A baseline adjusts the reward signal supplied by the environment, and can be used to reduce the variance of a policy gradient estimate without adding any bias. We find the baseline that minimizes the variance. We also consider the class of constant baselines, and find the constant baseline that minimizes the variance. We compare this to the common technique of adjusting the rewards by an estimate of the performance measure. Actor-critic methods usually attempt to learn a value function accurate enough to be used in a gradient estimate without adding much bias. In this thesis we propose that in learning the value function we should also consider the variance. We show how considering the variance of the gradient estimate when learning a value function can be beneficial, and we introduce a new optimization criterion for selecting a value function.

In Part II of this thesis we consider online versions of policy gradient algorithms, where we update our policy for selecting actions at each step in time, and study the convergence of these online algorithms. For such online gradient-based algorithms, convergence results aim to show that the gradient of the performance measure approaches zero. Such a result has been shown for an algorithm based on observing trajectories between visits to a special state of the environment. However, that algorithm is not suitable in a partially observable setting, where we are unable to access the full state of the environment, and its variance depends on the time between visits to the special state, which may be large even when only a few samples are needed to estimate the gradient. To date, convergence results for algorithms that do not rely on a special state are weaker. We show that, for a certain algorithm that does not rely on a special state, the gradient of the performance measure approaches zero. We show that this continues to hold when using certain baseline algorithms suggested by the results of Part I.
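To make the baseline idea from Part I concrete, the sketch below compares the variance of REINFORCE gradient estimates with and without a constant baseline on a two-armed bandit with a logistic policy. All numbers are illustrative, and the setup is far simpler than the settings treated in the thesis.

```python
# Minimal sketch: a constant baseline leaves the policy gradient
# estimate unbiased (same mean) but can shrink its variance.
import numpy as np

rng = np.random.default_rng(2)
theta = 0.0                                    # preference parameter for arm 1
arm_means = np.array([1.0, 1.2])               # expected reward of each arm

def grad_samples(baseline, n=100_000):
    p1 = 1.0 / (1.0 + np.exp(-theta))          # P(choose arm 1), logistic policy
    a = (rng.random(n) < p1).astype(int)       # sampled actions (0 or 1)
    r = arm_means[a] + rng.normal(scale=0.1, size=n)
    score = a - p1                             # d/dtheta log pi(a; theta)
    return score * (r - baseline)              # per-sample REINFORCE estimate

for b in (0.0, arm_means.mean()):              # no baseline vs. average reward
    g = grad_samples(b)
    print(f"baseline={b:.2f}  mean={g.mean():+.4f}  var={g.var():.4f}")
```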
8

Variance Reduction for Asian Options

Galda, Galina Unknown Date (has links)
Asian options are an important family of derivative contracts with a wide variety of applications in commodity, currency, energy, interest rate, equity and insurance markets. In this master's thesis, we investigate methods for evaluating the price of the Asian call option with a fixed strike. One of them is the Monte Carlo method, whose accuracy can be judged by the variance of the price estimate. We will see that the variance of the plain Monte Carlo estimator needs to be reduced, and that variance reduction techniques serve this aim. We give evidence of the efficiency of one such technique, the control variate method, both in a mathematical context and in a numerical comparison with the ordinary Monte Carlo method.
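One classic instance of the control variate method for this problem prices the arithmetic-average call while using the geometric-average call, whose discretely monitored price is known in closed form (the geometric average of a lognormal path is lognormal), as the control. The sketch below is illustrative; the parameters are not taken from the thesis.

```python
# Minimal sketch: arithmetic Asian call priced by Monte Carlo with the
# geometric Asian call as a control variate.
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T, m = 100.0, 100.0, 0.05, 0.2, 1.0, 12

def geometric_asian_call():
    """Closed-form price of the discretely monitored geometric Asian call."""
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T * (m + 1) / (2 * m)
    var = sigma**2 * T * (m + 1) * (2 * m + 1) / (6 * m**2)
    d2 = (mu - np.log(K)) / np.sqrt(var)
    d1 = d2 + np.sqrt(var)
    return np.exp(-r * T) * (np.exp(mu + var / 2) * norm.cdf(d1)
                             - K * norm.cdf(d2))

def asian_call_cv(n=100_000, rng=None):
    rng = np.random.default_rng(rng)
    dt = T / m
    z = rng.standard_normal((n, m))
    logS = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1)
    arith = np.exp(-r * T) * np.maximum(np.exp(logS).mean(axis=1) - K, 0.0)
    geo = np.exp(-r * T) * np.maximum(np.exp(logS.mean(axis=1)) - K, 0.0)
    beta = np.cov(arith, geo, ddof=1)[0, 1] / geo.var(ddof=1)
    adjusted = arith - beta * (geo - geometric_asian_call())
    return arith.mean(), adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(n)

print(asian_call_cv())   # plain MC price, CV price, CV standard error
```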
9

Cycle to Cycle Manufacturing Process Control

Hardt, David E., Siu, Tsz-Sin 01 1900 (has links)
Most manufacturing processes produce parts that can only be correctly measured after the process cycle has been completed. Even if in-process measurement and control is possible, it is often too expensive or complex to implement in practice. In this paper, a simple control scheme based on output measurement and input change after each processing cycle is proposed. It is shown to reduce the process dynamics to a simple gain with a delay, and to reduce the control problem to a SISO discrete-time problem. The goal of the controller is both to reduce mean output errors and to reduce their variance. In so doing, the process capability (e.g. Cpk) can be increased without additional investment in control hardware or in-process sensors. This control system is analyzed for two types of disturbance processes: independent (uncorrelated) and dependent (correlated). For the former, closed-loop control increases the output variance, whereas for the latter it can decrease it significantly. In both cases, proper controller design can reduce the mean error to zero without introducing poor transient performance. These findings were demonstrated by implementing Cycle to Cycle (CtC) control on a simple bending process (uncorrelated disturbance) and on an injection molding process (correlated disturbance). The results followed closely those predicted by the analysis. / Singapore-MIT Alliance (SMA)
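A minimal simulation of the scheme, assuming a unit-gain process and a pure-integral cycle-to-cycle update; the gain and the AR(1) disturbance model are illustrative choices. It reproduces the qualitative finding: closed-loop variance rises for an uncorrelated disturbance and falls for a correlated one.

```python
# Minimal sketch: cycle-to-cycle (run-to-run) control, input updated
# once per cycle from the previous cycle's measured output.
import numpy as np

def run(disturbance, target=0.0, gain=0.5):
    """Pure-integral CtC control on a process reduced to gain + disturbance."""
    u, outputs = 0.0, []
    for d in disturbance:
        y = u + d                        # measured output of this cycle
        outputs.append(y)
        u += gain * (target - y)         # correct the next cycle's input
    return np.asarray(outputs)

rng = np.random.default_rng(3)
n = 20_000
white = rng.normal(size=n)               # independent (uncorrelated) disturbance
eps = rng.normal(size=n)
ar1 = np.zeros(n)                        # dependent (correlated) disturbance
for k in range(1, n):
    ar1[k] = 0.9 * ar1[k - 1] + eps[k]
for name, d in (("uncorrelated", white), ("correlated", ar1)):
    print(f"{name}: open-loop var {d.var():.3f}, "
          f"closed-loop var {run(d).var():.3f}")
```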
