771 |
Ab Initio Simulations of Transition Metal Alloys: Towards the Multiscale Modeling
Pourovskii, Leonid January 2003 (has links)
The present thesis concerns applications of first-principles electronic structure calculations in conjunction with methods of statistical mechanics for simulations of transition metal alloys, both in the bulk and at surfaces. A fully relativistic generalization of the exact muffin-tin orbitals (EMTO) method has been developed. The method accurately takes into account spin-orbit coupling, allows one to calculate orbital polarization and magneto-crystalline anisotropy in magnetic systems, and extends the range of applicability of the EMTO method to heavy elements. A new direct-exchange Monte Carlo (DEMC) method has been proposed, which can effectively tackle statistical simulations of surface segregation in disordered and ordered alloys. The applications of the relativistic methods include calculations of spin and orbital magnetization in iron-cobalt disordered and partially ordered alloys, as well as computation of core-level shifts (CLS) in transition metal alloys. Relativistic corrections are found to be important for CLS calculations in 5d metal alloys. Properties of a Ni monolayer deposited on a Cu surface have been studied. The monolayer is found to be unstable in the top layer, and its magnetization depends greatly on the surface orientation. Two distinct energy levels have been found to exist in Co/Cu/Ni trilayers deposited on the (100) Cu surface, corresponding to a completely paramagnetic trilayer and to the case in which only Ni is paramagnetic. Vacancy ordering in substoichiometric titanium carbides TiCx has been simulated. The existence of three ordered phases in the carbon concentration range x = 0.5–1.0 has been revealed, and a theoretical phase diagram has been proposed. Surface segregation has been calculated in disordered Ni50Pt50 and Ni50Pd50 as well as in ordered NiPt alloys. Segregation reversal has been observed in the Ni50Pt50 alloy, with Pt segregation at the (111) surface and Ni segregation at the (110) surface. In the ordered NiPt alloys the segregation behaviour is found to be strongly affected by small deviations from the exact stoichiometric bulk composition. Surface magnetization in bcc PdV and MoV alloys has been studied. In PdV alloys surface segregation suppresses magnetic order at the surface, while in MoV alloys the magnetization is substantially enhanced by the segregation.
|
772 |
Quasi Importance Sampling
Hörmann, Wolfgang, Leydold, Josef January 2005 (has links) (PDF)
Two problems arise when the expectation of some function with respect to a nonuniform multivariate distribution has to be computed by (quasi-) Monte Carlo integration: the integrand can have singularities when the domain of the distribution is unbounded, and it can be very expensive or even impossible to sample points from a general multivariate distribution. We show that importance sampling is a simple method to overcome both problems. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
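A minimal sketch of the quasi importance sampling idea described in the abstract (not the authors' implementation): a low-discrepancy Sobol sequence is mapped through the inverse CDF of a tractable, heavier-tailed proposal, and importance weights correct the estimate back to the target distribution. The target, proposal and integrand below are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm, t, qmc

# Estimate E_p[f(X)] by importance sampling: E_p[f(X)] = E_q[f(X) p(X)/q(X)],
# with the proposal q sampled through a quasi-Monte Carlo (Sobol) sequence
# via the inverse CDF.
f = lambda x: x**2                     # integrand (true value is 1 under N(0,1))
target = norm(0, 1)                    # target distribution p (illustrative)
proposal = t(df=3)                     # heavier-tailed proposal q, so the weights stay bounded

sobol = qmc.Sobol(d=1, scramble=True, seed=0)
u = sobol.random(2**14).ravel()        # low-discrepancy points in (0, 1)
u = np.clip(u, 1e-12, 1 - 1e-12)       # guard the inverse CDF against the endpoints
x = proposal.ppf(u)                    # quasi-random draws from the proposal
w = target.pdf(x) / proposal.pdf(x)    # importance weights p/q

print("quasi importance sampling estimate of E[X^2]:", np.mean(f(x) * w))
```

Because the Student-t proposal dominates the normal target, the weights remain bounded, which is the usual requirement for importance sampling to behave well; the printed estimate should be close to 1.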
|
773 |
Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC
Ji, Chunlin January 2009 (has links)
The modelling and analysis of complex stochastic systems with increasingly large data sets, state-spaces and parameters provides major stimulus to research in Bayesian nonparametric methods and Bayesian computation. This dissertation presents advances in both nonparametric modelling and statistical computation, stimulated by challenging problems of analysis in complex spatio-temporal systems and by core computational issues in model fitting and model assessment. The first part of the thesis, represented by chapters 2 to 4, concerns novel nonparametric Bayesian mixture models for spatial point processes, with advances in modelling, computation and applications in biological contexts. Chapter 2 describes and develops models for spatial point processes in which the point outcomes are latent, where indirect observations related to the point outcomes are available, and in which the underlying spatial intensity functions are typically highly heterogeneous. Spatial intensities of inhomogeneous Poisson processes are represented via flexible nonparametric Bayesian mixture models. Computational approaches are presented for this new class of spatial point process mixtures and extended to the context of unobserved point process outcomes. Two examples drawn from a central, motivating context, that of immunofluorescence histology analysis in biological studies generating high-resolution imaging data, demonstrate the modelling approach and computational methodology. Chapters 3 and 4 extend this framework to define a class of flexible Bayesian nonparametric models for inhomogeneous spatio-temporal point processes, adding dynamic models for underlying intensity patterns. Dependent Dirichlet process mixture models are introduced as core components of this new time-varying spatial model. Utilizing such nonparametric mixture models for the spatial process intensity functions allows the introduction of time variation via dynamic, state-space models for parameters characterizing the intensities. Bayesian inference and model fitting are addressed via novel particle filtering ideas and methods. Illustrative simulation examples include studies in extended target tracking, and substantive data analysis is presented for cell tracking problems in fluorescence microscopy imaging.

The second part of the thesis, consisting of chapters 5 and 6, concerns advances in computational methods for some core and generic Bayesian inferential problems. Chapter 5 develops a novel approach to the estimation of upper and lower bounds for marginal likelihoods in Bayesian modelling using refinements of existing variational methods. Traditional variational approaches provide only lower-bound estimation; this new lower/upper bound analysis is able to provide accurate and tight bounds in many problems, and so facilitates more reliable computation for Bayesian model comparison while also providing a way to assess the adequacy of variational densities as approximations to exact, intractable posteriors. The advances also include demonstration of the significant improvements that may be achieved in marginal likelihood estimation by marginalizing some parameters in the model. A distinct contribution to Bayesian computation is covered in chapter 6. This concerns a generic framework for designing adaptive MCMC algorithms, emphasizing the adaptive Metropolized independence sampler and an effective adaptation strategy using a family of mixture distribution proposals.
This work is coupled with the development of a novel adaptive approach to computation in nonparametric modelling with large data sets; here a sequential learning approach is defined that iteratively utilizes smaller data subsets. Under the general framework of importance sampling based marginal likelihood computation, the proposed adaptive Monte Carlo method and sequential learning approach can facilitate improved accuracy in marginal likelihood computation. The approaches are exemplified in studies of both synthetic data examples and a real data analysis arising in astro-statistics.

Finally, chapter 7 summarizes the dissertation and discusses possible extensions of the specific modelling and computational innovations, as well as potential future work. / Dissertation
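As a rough illustration of the adaptive Metropolized independence sampler mentioned above (a sketch only, using a single adapted Gaussian proposal rather than the mixture-distribution family developed in the dissertation), the proposal can be periodically refit to the chain's history so that it tracks the target posterior. The target density and all tuning constants below are invented for the example.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def log_target(x):
    # example target: a banana-shaped density (an assumption for the demo)
    return -0.5 * (x[0]**2 / 4.0 + (x[1] - 0.25 * x[0]**2)**2)

rng = np.random.default_rng(0)
dim, n_iter, adapt_every = 2, 20000, 1000
mu, cov = np.zeros(dim), 4.0 * np.eye(dim)        # initial proposal parameters

x, chain = np.zeros(dim), []
for it in range(n_iter):
    y = rng.multivariate_normal(mu, cov)
    # independence-sampler acceptance ratio: pi(y) q(x) / (pi(x) q(y))
    log_alpha = (log_target(y) - log_target(x)
                 + mvn.logpdf(x, mu, cov) - mvn.logpdf(y, mu, cov))
    if np.log(rng.random()) < log_alpha:
        x = y
    chain.append(x)
    if (it + 1) % adapt_every == 0:               # periodic adaptation step
        hist = np.asarray(chain)
        mu = hist.mean(axis=0)
        cov = np.cov(hist.T) + 1e-6 * np.eye(dim)

print("posterior mean estimate:", np.asarray(chain)[n_iter // 2:].mean(axis=0))
```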
|
774 |
Design of a Boron Neutron Capture Enhanced Fast Neutron Therapy Assembly
Wang, Zhonglu 22 August 2006
A boron neutron capture enhanced fast neutron therapy (BNCEFNT) assembly has been designed for the Fermilab Neutron Therapy Facility (NTF). This assembly uses a tungsten filter and collimator near the patient's head, with a graphite reflector surrounding the head, to significantly increase the dose due to boron neutron capture reactions. The assembly was designed using the Monte Carlo radiation transport code MCNP version 5 for a standard 20×20 cm² treatment beam. The calculated boron dose enhancement at 5.7-cm depth in a water-filled head phantom in the assembly with 5×5 cm² collimation was 21.9% per 100-ppm B-10 for a 5.0-cm tungsten filter and 29.8% for an 8.5-cm tungsten filter. The corresponding dose rates for the 5.0-cm and 8.5-cm thick filters were 0.221 and 0.127 Gy/min, respectively.
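A back-of-the-envelope use of the figures above, assuming (as the "per 100-ppm B-10" phrasing suggests) that the boron dose enhancement scales linearly with B-10 concentration and acts as a fractional increase over the boron-free dose rate; the 150-ppm loading is a hypothetical value, not one used in the thesis.

```python
# Reported design figures: dose rate (Gy/min) and fractional boron dose
# enhancement per 100-ppm B-10, keyed by tungsten filter thickness (cm).
dose_rate = {5.0: 0.221, 8.5: 0.127}
enhancement_per_100ppm = {5.0: 0.219, 8.5: 0.298}

b10_ppm = 150.0                                   # hypothetical tumour B-10 loading
for thickness, rate in dose_rate.items():
    boost = enhancement_per_100ppm[thickness] * b10_ppm / 100.0
    print(f"{thickness} cm W filter: {rate * (1 + boost):.3f} Gy/min "
          f"({100 * boost:.1f}% boron boost)")
```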
To validate the design calculations, a simplified BNCEFNT assembly was built using four lead bricks to form a 5×5 cm² collimator. Five 1.0-cm thick 20×20 cm² tungsten plates were used to obtain different filter thicknesses, and graphite bricks/blocks were used to form a reflector. Measurements of the dose enhancement of the simplified assembly in a water-filled head phantom were performed using a pair of tissue-equivalent ion chambers. One of the ion chambers is loaded with 1000-ppm natural boron (184-ppm 10B) to measure the dose due to boron neutron capture. The measured dose enhancement at 5.0-cm depth in the head phantom for the 5.0-cm thick tungsten filter is (16.6 ± 1.8)%, which agrees well with the MCNP simulation of the simplified BNCEFNT assembly, (16.4 ± 0.5)%. The quoted error in the calculated dose enhancement includes only statistical uncertainties. The total dose rate measured at 5.0-cm depth using the non-borated ion chamber is (0.765 ± 0.076) Gy/MU, about 61% of the fast neutron standard dose rate (1.255 Gy/MU) at 5.0-cm depth for the standard 10×10 cm² treatment beam.
The increased doses to other organs due to the use of the BNCEFNT assembly were calculated using MCNP5 and a MIRD phantom.
|
775 |
Multistage decisions and risk in Markov decision processes: towards effective approximate dynamic programming architectures
Pratikakis, Nikolaos 28 October 2008
The scientific domain of this thesis is optimization under uncertainty for discrete event stochastic systems. In particular, this thesis focuses on the practical application of the Dynamic Programming (DP) methodology to discrete event stochastic systems. Unfortunately, DP in its crude form suffers from three severe computational obstacles that make its implementation for such systems an impossible task. This thesis addresses these obstacles by developing and applying practical Approximate Dynamic Programming (ADP) techniques.
Specifically, for the purposes of this thesis we developed the following ADP techniques. The first is inspired by the Reinforcement Learning (RL) literature and is termed Real Time Approximate Dynamic Programming (RTADP). The RTADP algorithm is intended for active learning while operating the stochastic system: the agent, by constantly interacting with the uncertain environment, accumulates experience that enables it to react more optimally in similar future situations. The second technique is an off-line ADP procedure.
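A generic real-time ADP loop in the spirit of the RTADP description above (not the thesis's algorithm) on a toy stochastic shortest-path chain: the cost-to-go table is backed up only at the states the agent actually visits while operating the system, so experience accumulates where the policy needs it. All numbers are invented for the illustration.

```python
import numpy as np

n_states, goal, cost = 10, 9, 1.0
actions = [-1, +1]                                   # move left / right
V = np.zeros(n_states)                               # optimistic initial cost-to-go
rng = np.random.default_rng(1)

def clipped(s):
    return int(np.clip(s, 0, n_states - 1))

def step(s, a):
    # the intended move succeeds with probability 0.8, otherwise the agent stays put
    return clipped(s + a) if rng.random() < 0.8 else s

for episode in range(200):
    s = 0
    while s != goal:
        # one-step lookahead (Bellman backup) using the current estimates
        q = [cost + 0.8 * V[clipped(s + a)] + 0.2 * V[s] for a in actions]
        a = actions[int(np.argmin(q))]
        V[s] = min(q)                                # update only the visited state
        s = step(s, a)

print("estimated cost-to-go from state 0:", V[0])    # approaches ~11.25 = 9 / 0.8
```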
These ADP techniques are demonstrated on a variety of discrete event stochastic systems, such as: (i) a three-stage queuing manufacturing network with recycle, (ii) a supply chain for the light aromatics of a typical refinery, (iii) several stochastic shortest-path instances with a single starting and terminal state, and (iv) a general project portfolio management problem.
Moreover, this work addresses, in a systematic way, the issue of multistage risk within the DP framework by exploring the use of intra-period and inter-period risk-sensitive utility functions. In this thesis we propose a special structure for an intra-period utility and compare the derived policies in several multistage instances.
|
776 |
Multi-tree Monte Carlo methods for fast, scalable machine learning
Holmes, Michael P. 09 January 2009
As modern applications of machine learning and data mining are forced to deal with ever more massive quantities of data, practitioners quickly run into difficulty with the scalability of even the most basic and fundamental methods. We propose to provide scalability through a marriage between classical, empirical-style Monte Carlo approximation and deterministic multi-tree techniques. This union entails a critical compromise: losing determinism in order to gain speed. In the face of large-scale data, such a compromise is arguably often not only the right but the only choice. We refer to this new approximation methodology as Multi-Tree Monte Carlo. In particular, we have developed the following fast approximation methods:
1. Fast training for kernel conditional density estimation, showing speedups as high as 10⁵ on up to 1 million points.
2. Fast training for general kernel estimators (kernel density estimation, kernel regression, etc.), showing speedups as high as 10⁶ on tens of millions of points.
3. Fast singular value decomposition, showing speedups as high as 10⁵ on matrices containing billions of entries.
The level of acceleration we have shown represents improvement over the prior state of the art by several orders of magnitude. Such improvement entails a qualitative shift, a commoditization, that opens doors to new applications and methods that were previously invisible, outside the realm of practicality. Further, we show how these particular approximation methods can be unified in a Multi-Tree Monte Carlo meta-algorithm which lends itself as scaffolding to the further development of new fast approximation methods. Thus, our contribution includes not just the particular algorithms we have derived but also the Multi-Tree Monte Carlo methodological framework, which we hope will lead to many more fast algorithms that can provide the kind of scalability we have shown here to other important methods from machine learning and related fields.
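A sketch of just the Monte Carlo half of this idea for kernel density estimation (the multi-tree machinery that stratifies and prunes these sums, which is the thesis's actual contribution, is not reproduced here): the kernel sum over all N reference points is replaced by an average over a random subsample, with a standard error that quantifies the approximation. The data and bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, h = 1_000_000, 0.1
refs = rng.standard_normal(N)                        # reference data set

def kde_exact(x, refs, h):
    # classical KDE: average the Gaussian kernel over every reference point
    return np.mean(np.exp(-0.5 * ((x - refs) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

def kde_mc(x, refs, h, m=2000):
    # Monte Carlo KDE: average over a uniform subsample of m reference points
    sub = refs[rng.integers(0, len(refs), size=m)]
    contrib = np.exp(-0.5 * ((x - sub) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return contrib.mean(), contrib.std(ddof=1) / np.sqrt(m)   # estimate, standard error

x0 = 0.5
print("exact KDE :", kde_exact(x0, refs, h))
est, se = kde_mc(x0, refs, h)
print("MC KDE    :", est, "+/-", se)
```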
|
777 |
A coarse-mesh transport method for time-dependent reactor problems
Pounders, Justin Michael 06 April 2010
A new solution technique is derived for the time-dependent transport equation. This approach extends the steady-state coarse-mesh transport method that is based on global-local decompositions of large (i.e. full-core) neutron transport problems. The new method is based on polynomial expansions of the space, angle and time variables in a response-based formulation of the transport equation. The local problem (coarse mesh) solutions, which are entirely decoupled from each other, are characterized by space-, angle- and time-dependent response functions. These response functions are, in turn, used to couple an arbitrary sequence of local problems to form the solution of a much larger global problem. In the current work, the local problem (response function) computations are performed using the Monte Carlo method, while the global (coupling) problem is solved deterministically. The spatial coupling is performed by orthogonal polynomial expansions of the partial currents on the local problem surfaces, and similarly, the time-dependent response of the system (i.e. the time-varying flux) is computed by convolving the time-dependent surface partial currents and time-dependent volumetric sources against pre-computed time-dependent response kernels.
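A minimal sketch of the convolution step described in this abstract, with made-up kernels and input histories (none of the numbers come from the thesis): the time-varying flux response is assembled by convolving an incoming partial-current history and a volumetric source history against pre-computed time-dependent response kernels.

```python
import numpy as np

dt, n_t = 0.01, 200                               # time step and number of steps
t = dt * np.arange(n_t)

j_in = np.where(t < 0.5, 1.0, 0.0)                # incoming partial current history (assumed)
q_vol = 0.2 * np.exp(-t / 0.3)                    # volumetric source history (assumed)

# Pre-computed response kernels: flux response per unit incoming current / source
R_current = np.exp(-t / 0.05) / 0.05
R_source = np.exp(-t / 0.10) / 0.10

# Causal discrete convolution: phi(t_n) = dt * sum_k R(t_n - t_k) * input(t_k)
phi = dt * (np.convolve(R_current, j_in)[:n_t] + np.convolve(R_source, q_vol)[:n_t])

print("flux response at final time:", phi[-1])
```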
|
778 |
A Pareto frontier intersection-based approach for efficient multiobjective optimization of competing concept alternatives
Rousis, Damon 01 July 2011
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Coupled with the large uncertainty in the early stages of design, optimal designs are sought while avoiding the computational burden of excessive function calls when a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts.
This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near the Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the locations in the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, selection decisions are most sensitive to the preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts.
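A sketch of the Pareto bookkeeping implied above, using hypothetical objective samples for two competing concepts (the Bayesian adaptive sampling and surrogate models that the thesis uses to place these samples are not reproduced): each concept's non-dominated set is extracted, and the combined front shows roughly where selection flips from one concept to the other.

```python
import numpy as np

def pareto_mask(points):
    # True for non-dominated points (minimization of both objectives)
    mask = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if mask[i]:
            dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
            mask[dominated] = False      # p dominates these points
    return mask

rng = np.random.default_rng(2)
# hypothetical (fuel burn, noise) samples for two competing engine concepts
concept_a = rng.uniform([0.5, 1.0], [2.0, 3.0], size=(200, 2))
concept_b = rng.uniform([1.0, 0.5], [3.0, 2.0], size=(200, 2))

front_a = concept_a[pareto_mask(concept_a)]
front_b = concept_b[pareto_mask(concept_b)]

# Where the winning concept changes along the combined front approximates the
# Pareto frontier intersection, i.e. where the selection decision flips.
combined = np.vstack([front_a, front_b])
labels = np.array(["A"] * len(front_a) + ["B"] * len(front_b))
keep = pareto_mask(combined)
order = np.argsort(combined[keep][:, 0])
print("winning concept along increasing fuel burn:", "".join(labels[keep][order]))
```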
A set of algebraic test problems, along with a truss design problem, is presented as a canonical demonstration of the proposed approach. The methodology is then applied to the design of ultra-high bypass ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable-geometry bypass nozzle concepts are explored as enablers of increased bypass ratio and as potential alternatives to traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
|
779 |
Development of a 3D Neutron Transport Code Based on the Ray-Tracing Method and Investigations into the Preparation of Effective Group Cross Sections for Heterogeneous LWR Cells
Rohde, Ulrich [project leader], Beckert, Carsten 31 March 2010 (has links) (PDF)
Neutron cross-section data for reactor core calculations are conventionally prepared with 2D cell codes. The aim of this work was to develop a 3D cell code, to use it to investigate 3D effects, and to assess the need for 3D preparation of the neutron cross-section data. Neutron transport is computed with the method of first-collision probabilities, which are evaluated by ray tracing. The mathematical algorithms were implemented in the 2D/3D cell code TransRay. The geometry module of a Monte Carlo code was used for the geometry part of the program, and the ray tracing was parallelized because of the high computing times. TransRay was verified against 2D test problems. For a reference pressurized water reactor, the following 3D problems were investigated: a partially inserted control rod, and void (or moderator of reduced density) around a fuel rod as a model of a steam bubble. For comparison, all problems were also computed with the codes HELIOS (2D) and MCNP (3D). The dependence of the multiplication factor and of the averaged two-group cross sections on the insertion depth of the control rod and on the height of the steam bubble was studied. The two-group cross sections calculated in 3D were compared with three common approximations: linear interpolation, interpolation with flux weighting, and homogenization. For the 3D control rod problem, interpolation with flux weighting proved to be a good approximation, so 3D data preparation is not necessary in that case. For the test case of a single fuel rod surrounded by void (or moderator of reduced density), all three approximations for the two-group cross sections proved inadequate, so 3D data preparation is necessary there. The single fuel rod cell with void can be regarded as the limiting case of a reactor in which a phase boundary has formed.
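An illustrative comparison of two of the approximations named above for a partially inserted control rod, plain linear interpolation in the insertion depth versus interpolation with flux weighting, using invented two-group numbers (nothing here is taken from the report):

```python
import numpy as np

# Node-averaged two-group absorption cross sections for a node partially
# covered by a control rod, combined from rodded and unrodded 2D lattice values.
depth_fraction = 0.3                       # axial fraction of the node covered by the rod
sigma_rodded = np.array([0.012, 0.095])    # rodded lattice XS (illustrative)
sigma_unrodded = np.array([0.009, 0.070])  # unrodded lattice XS (illustrative)

# average group fluxes in the rodded and unrodded parts of the node (assumed;
# the flux is depressed where the rod is present)
phi_rodded = np.array([0.6, 0.5])
phi_unrodded = np.array([1.0, 1.0])

# linear interpolation weights by insertion depth only
linear = depth_fraction * sigma_rodded + (1 - depth_fraction) * sigma_unrodded

# flux weighting additionally weights each region by its flux (times volume fraction)
w_r = depth_fraction * phi_rodded
w_u = (1 - depth_fraction) * phi_unrodded
flux_weighted = (w_r * sigma_rodded + w_u * sigma_unrodded) / (w_r + w_u)

print("linear interpolation :", linear)
print("flux-weighted        :", flux_weighted)
```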
|
780 |
Numerical Methods for the Valuation of Swing Options and Other Problems on Commodities
Kourouvakalis, Stylianos; Geman, Hélyette January 2008 (has links)
Doctoral thesis: Management Science: Université Paris-Dauphine: 2008. / The general introduction is in French; the individual chapters are in English. Bibliography: 54 references. Index.
|