331 |
Robustness, semiparametric estimation and goodness-of-fit of latent trait models. Tzamourani, Panagiota, January 1999 (has links)
This thesis studies the one-factor latent trait model for binary data. It examines the sensitivity of the model when its assumptions are violated, it investigates the information about the prior distribution when the model is estimated semi-parametrically, and it examines the goodness-of-fit of the model using Monte Carlo simulations. Latent trait models are applied to data arising from psychometric tests, ability tests or attitude surveys. The data are often contaminated by guessing, cheating, unwillingness to give the true answer or gross errors. To study the sensitivity of the model when the data are contaminated, we derive the Influence Function of the parameters and the posterior means, a tool developed within the framework of robust statistics. We study the behaviour of the Influence Function for changes in the data, and also the behaviour of the parameters and the posterior means when the data are artificially contaminated. We further derive the Influence Function of the parameters and the posterior means for changes in the prior distribution, and study empirically the behaviour of the model when the prior is a mixture of distributions. Semiparametric estimation involves estimation of the prior together with the item parameters. A new algorithm for fully semiparametric estimation of the model is given. The bootstrap is then used to study the information on the latent distribution that can be extracted from the data when the model is estimated semiparametrically. The use of the usual goodness-of-fit statistics has been hampered for latent trait models because of the sparseness of the tables. We propose the use of Monte Carlo simulations to derive the empirical distribution of the goodness-of-fit statistics, and also the examination of the residuals, as they may pinpoint the sources of bad fit.
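A minimal sketch of the Monte Carlo goodness-of-fit idea described above: simulate response patterns from a fitted one-factor (2PL-type) model, recompute a fit statistic for each replicate, and compare the observed statistic against the resulting empirical distribution. The item parameters, sample size, standard-normal prior and Pearson-type pattern statistic are illustrative assumptions, not values or choices taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-factor (2PL) item parameters -- assumed, not from the thesis.
discrimination = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
difficulty = np.array([-0.5, 0.0, 0.3, 1.0, -1.2])
n_persons, n_items = 500, len(difficulty)

def simulate_responses(n):
    """Draw binary response patterns from the one-factor latent trait model."""
    theta = rng.standard_normal(n)                        # latent trait ~ N(0, 1)
    probs = 1.0 / (1.0 + np.exp(-discrimination * (theta[:, None] - difficulty)))
    return (rng.random((n, n_items)) < probs).astype(int)

# Model-implied probability of each of the 2^5 response patterns, obtained by
# Gauss-Hermite-type quadrature over the latent trait distribution.
nodes, weights = np.polynomial.hermite_e.hermegauss(41)
weights = weights / weights.sum()
patterns = np.array([[int(b) for b in np.binary_repr(i, n_items)] for i in range(2 ** n_items)])
p_item = 1.0 / (1.0 + np.exp(-discrimination * (nodes[:, None] - difficulty)))
p_pattern = np.array([
    np.sum(weights * np.prod(np.where(pat, p_item, 1 - p_item), axis=1)) for pat in patterns
])

def pearson_statistic(data):
    """Pearson X^2 of observed pattern counts against model-implied counts."""
    idx = data @ (2 ** np.arange(n_items)[::-1])          # pattern -> integer code
    observed = np.bincount(idx, minlength=2 ** n_items)
    expected = n_persons * p_pattern
    return np.sum((observed - expected) ** 2 / expected)

data = simulate_responses(n_persons)                      # stands in for real data
t_obs = pearson_statistic(data)

# Empirical (Monte Carlo) distribution of the statistic under the fitted model.
t_sim = np.array([pearson_statistic(simulate_responses(n_persons)) for _ in range(500)])
print(f"X^2 = {t_obs:.1f}, Monte Carlo p-value = {np.mean(t_sim >= t_obs):.3f}")
```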
|
332 |
Monte Carlo Simulation of Optical Coherence Tomography of Media with Arbitrary Spatial Distributions. Malektaji, Siavash, 02 September 2014 (has links)
Optical Coherence Tomography (OCT) is a sub-surface imaging modality with a growing number of applications. An accurate and practical OCT simulator could be an important tool to understand the physics underlying OCT and to design OCT systems with improved performance. All available OCT simulators are restricted to imaging planar or non-planar multilayered media. In this work I developed a novel Monte Carlo based simulator of OCT imaging for turbid media with arbitrary spatial distributions. This simulator allows computation of both Class I diffusive reflectance, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectance, due to multiply scattered photons. A tetrahedron-based mesh is used to model any arbitrarily shaped medium to be simulated. I have also implemented a known importance sampling method, which reduces the computational time of simulations by up to two orders of magnitude. The simulator is verified by comparing its results to those of previously validated OCT simulators for multilayered media. I present sample simulation results for OCT imaging of non-layered media, which would not have been possible with earlier simulators.
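To illustrate the Class I / Class II distinction in the most stripped-down way, the sketch below runs a one-dimensional toy photon random walk in a homogeneous slab, gates the back-reflected photons by optical path length, and splits them by how many propagation steps they took. The 1-D geometry, all optical coefficients, the gate width and the step-count threshold are illustrative assumptions; the actual simulator uses full 3-D transport over a tetrahedral mesh.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D photon walk in a homogeneous slab (all parameters assumed).
mu_s, mu_a = 2.0, 0.1      # scattering / absorption coefficients [1/mm]
slab_depth = 2.0           # mm
coherence_gate = 0.05      # mm: path-length window accepted by the OCT gate
target_path = 2 * 0.5      # round trip to a probing depth of 0.5 mm

def trace_photon(max_steps=1000):
    """Return (escaped, n_steps, path_length) for one photon in the toy model."""
    z, direction, path = 0.0, +1.0, 0.0
    for n in range(1, max_steps + 1):
        step = rng.exponential(1.0 / (mu_s + mu_a))
        z += direction * step
        path += step
        if z <= 0.0:                          # back out through the surface
            return True, n, path
        if z >= slab_depth:                   # transmitted, never detected
            return False, n, path
        if rng.random() < mu_a / (mu_s + mu_a):
            return False, n, path             # absorbed
        direction = rng.choice([-1.0, +1.0])  # isotropic (1-D) redirection
    return False, max_steps, path

class1 = class2 = 0
for _ in range(200_000):
    escaped, n_steps, path = trace_photon()
    if escaped and abs(path - target_path) < coherence_gate:
        if n_steps <= 2:
            class1 += 1    # ballistic / quasi-ballistic (toy threshold)
        else:
            class2 += 1    # multiply scattered
print(f"Class I photons: {class1}, Class II photons: {class2}")
```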
|
333 |
Evaluation and analysis of total flexibility in the production using Monte Carlo simulation. Taudes, Alfred, Natter, Martin, Schauerhuber, Markus, January 1999 (has links) (PDF)
Nearly unpredictable turbulence at the overall economic level, radical changes in the legal framework and a shift in the moral concepts prevailing in the general public emphasize the importance of increased corporate flexibility. Most flexibility measurements suffer from the defects that they are not pecuniary, that interactions between different flexibility dimensions are not considered, and that they lack the required relatedness to the respective context. These problems contribute to a large extent to the fact that, when making investment decisions, the value of flexibility is considered only intuitively or insufficiently. Frequently, the results are irrational, myopic pseudo-decisions. The present work can be regarded as an attempt to design a pecuniary and context-related flexibility measure for three single flexibility dimensions in an extremely simplified framework and under restrictive assumptions. The primary method used is Monte Carlo simulation. The present study shows that the value of flexibility can be substantial and that taking into account the interactions of various single flexibilities when strategic investments are made can be of great importance. In this paper, we work out the connection between "environmental volatility" and the "value of flexibility". Our work shows a numerically strong positive relation between these two properties. (author's abstract) / Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
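A stylized sketch of this kind of Monte Carlo valuation: simulate stochastic demand, compute expected profit for a rigid plant and for a volume-flexible plant, and read the value of flexibility off the difference as volatility increases. The single flexibility dimension, the demand model and every parameter are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumed): price, unit cost, capacities, demand model.
price, unit_cost = 10.0, 6.0
rigid_capacity = 100.0
flex_low, flex_high = 60.0, 140.0    # a volume-flexible plant can rescale output
mean_demand = 100.0

def expected_profits(volatility, n_paths=100_000):
    demand = np.maximum(rng.normal(mean_demand, volatility * mean_demand, n_paths), 0.0)
    # Rigid plant: always produces at fixed capacity, sells what demand allows.
    rigid = price * np.minimum(demand, rigid_capacity) - unit_cost * rigid_capacity
    # Flexible plant: adapts output to demand within its feasible range.
    output = np.clip(demand, flex_low, flex_high)
    flexible = price * np.minimum(demand, output) - unit_cost * output
    return rigid.mean(), flexible.mean()

for vol in (0.1, 0.3, 0.5):
    rigid, flexible = expected_profits(vol)
    print(f"volatility = {vol:.1f}  value of flexibility ~ {flexible - rigid:7.2f}")
```

Even in this toy setting the flexibility premium grows with demand volatility, which is the qualitative relation the paper quantifies.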
|
334 |
Dependence concepts and selection criteria for lattice rules. Taniguchi, Yoshihiro, January 2014 (has links)
Lemieux recently proposed a new approach that studies randomized quasi-Monte Carlo through dependence concepts. By analyzing the dependence structure of a rank-1 lattice, Lemieux proposed a copula-based criterion with which we can find a "good generator" for the lattice. One drawback of the criterion is that it assumes that a given function can be well approximated by a bilinear function, and it is not clear whether this assumption holds in general. In this thesis, we assess the validity and robustness of the copula-based criterion. We do this by working with bilinear functions, some practical problems such as Asian option pricing, and perfectly non-bilinear functions. We use the quasi-regression technique to study how bilinear a given function is. Besides assessing the validity of the bilinear assumption, we propose the bilinear-regression-based (BR) criterion, which combines quasi-regression with the copula-based criterion. We extensively test the two criteria by comparing them to other well-known criteria, such as the spectral test, through numerical experiments. We find that the copula criterion can reduce the error size by a factor of 2 when the function is bilinear. We also find that the copula-based criterion shows competitive results even when a given function does not satisfy the bilinear assumption. We also see that our newly introduced BR criterion is competitive compared to well-known criteria.
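For readers unfamiliar with the construction being ranked by these criteria, here is a minimal sketch of a randomly shifted rank-1 lattice rule; the generator vector, sample size, dimension and test integrand are illustrative assumptions, not a generator selected by any of the criteria above.

```python
import numpy as np

rng = np.random.default_rng(7)

def rank1_lattice(n, z):
    """Rank-1 lattice point set {(i * z / n) mod 1 : i = 0, ..., n-1}."""
    i = np.arange(n)[:, None]
    return (i * np.asarray(z)[None, :] / n) % 1.0

def shifted_estimate(f, n, z, n_shifts=10):
    """Randomized QMC estimate of the integral of f over [0,1)^d via random shifts."""
    points = rank1_lattice(n, z)
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(points.shape[1])
        estimates.append(f((points + shift) % 1.0).mean())
    return np.mean(estimates), np.std(estimates, ddof=1) / np.sqrt(n_shifts)

# Toy integrand and generator vector (assumed, not a recommended generator).
f = lambda u: np.prod(1.0 + 0.5 * (u - 0.5), axis=1)   # true integral equals 1
est, err = shifted_estimate(f, n=4093, z=[1, 1487, 775, 3301])
print(f"estimate = {est:.6f} +/- {err:.6f}")
```

A selection criterion such as the copula-based or BR criterion is, in effect, a score computed over candidate generator vectors z; the best-scoring z is then used to build the point set above.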
|
335 |
On large deviations and design of efficient importance sampling algorithms. Nyquist, Pierre, January 2014 (has links)
This thesis consists of four papers, presented in Chapters 2-5, on the topics of large deviations and stochastic simulation, particularly importance sampling. The four papers make theoretical contributions to the development of a new approach for analyzing the efficiency of importance sampling algorithms by means of large deviation theory, and to the design of efficient algorithms using the subsolution approach developed by Dupuis and Wang (2007). In the first two papers of the thesis, the random output of an importance sampling algorithm is viewed as a sequence of weighted empirical measures and weighted empirical processes, respectively. The main theoretical results are a Laplace principle for the weighted empirical measures (Paper 1) and a moderate deviation result for the weighted empirical processes (Paper 2). The Laplace principle for weighted empirical measures is used to propose an alternative measure of efficiency based on the associated rate function. The moderate deviation result for weighted empirical processes is an extension of what can be seen as the empirical process version of Sanov's theorem. Together with a delta method for large deviations, established by Gao and Zhao (2011), we show moderate deviation results for importance sampling estimators of the risk measures Value-at-Risk and Expected Shortfall. The final two papers of the thesis are concerned with the design of efficient importance sampling algorithms using subsolutions of partial differential equations of Hamilton-Jacobi type (the subsolution approach). In Paper 3 we show a min-max representation of viscosity solutions of Hamilton-Jacobi equations. In particular, the representation suggests a general approach for constructing subsolutions to equations associated with terminal value problems and exit problems. Since the design of efficient importance sampling algorithms is connected to such subsolutions, the min-max representation facilitates the construction of efficient algorithms. In Paper 4 we consider the problem of constructing efficient importance sampling algorithms for a certain type of Markovian intensity model for credit risk. The min-max representation of Paper 3 is used to construct subsolutions to the associated Hamilton-Jacobi equation, and the corresponding importance sampling algorithms are investigated both theoretically and numerically. The thesis begins with an informal discussion of stochastic simulation, followed by brief mathematical introductions to large deviations and importance sampling.
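As a concrete (and much simplified) illustration of the kind of importance sampling being analyzed, the sketch below estimates a small tail probability of a sample mean of i.i.d. Gaussians using an exponentially tilted sampling distribution and likelihood-ratio weights. The Gaussian model, the event threshold and the tilting parameter are illustrative assumptions, not the algorithms studied in the papers.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Rare event: P( (X_1 + ... + X_n)/n >= a ) for i.i.d. N(0,1) variables.
n, a, n_samples = 20, 0.8, 100_000

def crude_mc():
    s = rng.standard_normal((n_samples, n)).sum(axis=1)
    return np.mean(s / n >= a)

def importance_sampling(theta=a):
    # Sample each X_i from the exponentially tilted law N(theta, 1) and
    # reweight by the likelihood ratio exp(-theta * S + n * theta**2 / 2).
    x = rng.normal(theta, 1.0, size=(n_samples, n))
    s = x.sum(axis=1)
    weights = np.exp(-theta * s + n * theta ** 2 / 2.0)
    return np.mean(weights * (s / n >= a))

exact = norm.sf(a * np.sqrt(n))   # closed form, available only because the toy model is Gaussian
print(f"exact     : {exact:.3e}")
print(f"crude MC  : {crude_mc():.3e}")
print(f"tilted IS : {importance_sampling():.3e}")
```

The efficiency questions treated in the thesis concern how the variance of such weighted estimators behaves as the event becomes rarer, and how to choose the change of measure (here, the tilt) so that the estimator remains accurate.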
|
336 |
A short-time dynamics study of Heisenberg non-collinear magnets. Zelli, Mirsaeed, 14 September 2007 (has links)
A generalized model which describes a family of antiferromagnetic Heisenberg magnets on a three-dimensional stacked triangular lattice is introduced. The model contains a constraint parameter which changes the details of the interactions but not the symmetry of the model. We investigate the question of whether a first or second order phase transition occurs in these systems using a short time dynamics method. This method does not suffer from the problem of critical slowing down which occurs in the usual equilibrium Monte Carlo simulations. The effective critical exponents are determined as a function of the constraint parameter. Our results provide strong evidence that the phase transition is first order. In addition, for a particular value of the constraint parameter, the model corresponds to an antiferromagnet on a stacked Kagome lattice. In this case, our results are not inconsistent with the existence of a finite temperature first order phase transition.
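The short-time dynamics method referred to above starts the system in a fully ordered (or fully disordered) state, evolves it with local Monte Carlo updates, and analyzes the early-time power-law relaxation of the order parameter instead of waiting for equilibrium. A minimal sketch for a classical Heisenberg ferromagnet on a simple cubic lattice follows; the unfrustrated ferromagnetic model, lattice size and temperature are illustrative assumptions, whereas the thesis studies frustrated stacked-triangular antiferromagnets.

```python
import numpy as np

rng = np.random.default_rng(11)

L, J, T = 8, 1.0, 1.45          # lattice size, coupling, temperature (assumed near T_c)
spins = np.zeros((L, L, L, 3))
spins[..., 2] = 1.0             # fully ordered initial state along z

def random_unit_vectors(n):
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def local_field(s, x, y, z):
    """Sum of the six nearest-neighbour spins (periodic boundaries)."""
    return (s[(x + 1) % L, y, z] + s[(x - 1) % L, y, z] +
            s[x, (y + 1) % L, z] + s[x, (y - 1) % L, z] +
            s[x, y, (z + 1) % L] + s[x, y, (z - 1) % L])

def metropolis_sweep(s):
    trial = random_unit_vectors(L ** 3).reshape(L, L, L, 3)
    for x in range(L):
        for y in range(L):
            for z in range(L):
                h = local_field(s, x, y, z)
                dE = -J * np.dot(trial[x, y, z] - s[x, y, z], h)
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[x, y, z] = trial[x, y, z]

# Magnetization relaxation m(t) over the first Monte Carlo sweeps; near T_c the
# short-time analysis fits a power law to this decay to extract critical exponents.
for t in range(1, 21):
    metropolis_sweep(spins)
    m = np.linalg.norm(spins.mean(axis=(0, 1, 2)))
    print(f"sweep {t:2d}: |m| = {m:.4f}")
```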
|
337 |
A test of lepton universality using the Tau [formula] decay. Lawson, Ian Timothy, 10 November 2011 (has links)
Graduate
|
338 |
SHARP: Sustainable Hardware Acceleration for Rapidly-evolving Pre-existing systems. Beeston, Julie, 13 September 2012 (has links)
The goal of this research is to present a framework to accelerate the execution of legacy software systems without having to redesign them or limit future changes. The speedup is accomplished through hardware acceleration, based on a semi-automatic infrastructure which supports design decisions and simulates their impact. Many programs are available for translating code written in C into VHDL (Very High Speed Integrated Circuit Hardware Description Language). What is missing are simpler and more direct strategies to incorporate encapsulatable portions of the code, translate them to VHDL, and allow the VHDL code and the C code to communicate through a flexible interface. SHARP is a streamlined, easily understood infrastructure which facilitates this process in two phases. In the first phase, the SHARP GUI (an interactive graphical user interface) is used to load a program written in a high-level general-purpose programming language, to scan the code for SHARP POINTs (Portions Only Including Non-interscoping Types) based on user-defined constraints, and then to automatically translate such POINTs to an HDL. Finally, the infrastructure needed to co-execute the updated program is generated. SHARP POINTs have a clearly defined interface and can be used by the SHARP scheduler. In the second phase, the SHARP scheduler allows the SHARP POINTs to run on the chosen reconfigurable hardware, here an FPGA (Field Programmable Gate Array), and to communicate cleanly with the original processor (for the software). The resulting system will be a good (though not necessarily optimal) acceleration of the original software application, and one that is easily maintained as the code continues to develop and evolve. / Graduate
|
339 |
UAV swarm attack: protection system alternatives for Destroyers. Pham, Loc V., Dickerson, Brandon, Sanders, James, Casserly, Michael, Maldonado, Vicente, Balbuena, Demostenes, Graves, Stephen, Pandya, Bhavisha, January 2012 (has links)
Systems Engineering Project Report / The Navy needs to protect Destroyers (DDGs) from Unmanned Aerial Vehicle (UAV) attacks. The team, focusing on improving the DDG’s defenses against small radar cross section UAVs making suicide attacks, established a DRM, identified current capability gaps, established a functional flow, created requirements, modeled the DDG’s current sensing and engagement capabilities in Microsoft Excel, and used Monte Carlo analysis of 500 simulation runs to determine that four out of eight incoming IED UAVs are likely to hit the ship. Sensitivity analysis showed that improving weapon systems is more effective than improving sensor systems, inspiring the generation of alternatives for improving UAV defense. For the eight feasible alternatives the team estimated cost, assessed risk in accordance with the requirements, simulated performance against the eight incoming UAVs, and performed cost-benefit analysis. Adding CIWS mounts is the most cost-effective alternative, reducing the average number of UAV hits from a baseline of 3.82 to 2.50, costing $816M to equip the 62-DDG fleet for a 12-year life cycle. Combining that with upgraded EW capabilities to jam remote-controlled UAVs reduces the hits to 1.56 for $1844M, and combining those with decoy launchers to defeat the radar-seeking Harpy UAVs reduces the hits to 1.12 for $2862M.
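A toy version of the kind of engagement simulation described above: each incoming UAV must survive a sequence of defensive layers to score a hit, and the average number of hits is estimated over 500 Monte Carlo runs. The layer structure and single-shot kill probabilities are illustrative assumptions, not the report's model or results.

```python
import numpy as np

rng = np.random.default_rng(2012)

n_runs, n_uavs = 500, 8

# Illustrative per-UAV kill probabilities for each defensive layer (assumed).
baseline_layers = {"missiles": 0.30, "CIWS": 0.25, "small arms": 0.10}
upgraded_layers = {"missiles": 0.30, "EW jamming": 0.35, "extra CIWS": 0.40,
                   "small arms": 0.10}

def average_hits(layers):
    """Mean number of UAVs that survive every layer, over n_runs attacks."""
    hits = np.zeros(n_runs)
    for r in range(n_runs):
        for _ in range(n_uavs):
            survived = all(rng.random() > p_kill for p_kill in layers.values())
            hits[r] += survived
    return hits.mean()

print(f"baseline : {average_hits(baseline_layers):.2f} hits per attack on average")
print(f"upgraded : {average_hits(upgraded_layers):.2f} hits per attack on average")
```

Comparing alternatives then reduces to re-running the same simulation with modified layer parameters and weighing the reduction in expected hits against the cost of each upgrade.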
|
340 |
High-temperature superconductors and the two-dimensional Hubbard model. Cullen, Peter H., January 1996 (has links)
No description available.
|