301 |
Applications of quasi-Monte Carlo methods in model-robust response surface designs. Yue, Rong-xian, 01 January 1997 (has links)
No description available.
|
302 |
Application of methods used in risk analysis and decision support / Aplikace metod používaných při analýze rizika a podpoře rozhodování. Nowak, Ondřej, January 2007 (has links)
Risk analysis is a relatively new field that has recently been gaining importance not only worldwide but also in the Czech Republic. The local resources, however, do not yet reflect this. No titles dealing comprehensively with the subject have so far been published in Czech, and whatever literature can be found is not aimed at readers who are not yet familiar with risk analysis. This thesis therefore aims to raise awareness of the field. The information in this document is based on reputable English-language publications on the subject and on the practical experience I gained while developing my own risk analysis tool. The intention is thus not only to serve as an academic text but also as a practical guide that will help anyone build their first models. The thesis consists of a theoretical and a practical part. The theoretical chapters are divided into sections that represent the individual steps of the whole risk analysis process. The practical part then briefly describes the application of the Profeta tool to a real project. The whole document is designed for a wide range of readers interested in an introduction to risk analysis. The reader is gradually introduced to the basics of building models, identifying key success factors, defining assumptions, finding forecasts, and interpreting simulation results. The text further covers both qualitative and quantitative risk analysis and provides instructions for applying individual methods to identify, quantify, forecast, evaluate, hedge, diversify, and manage risk. The individual methods are presented from the perspective of their real-world application in selected risk analysis tools and of their significance for the whole decision-making process.
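As an illustration of the kind of quantitative risk model the thesis walks the reader through (defining assumptions as probability distributions, running a simulation, and interpreting the resulting forecast), here is a minimal Monte Carlo sketch in Python. The project figures, distributions, and the net-profit forecast are hypothetical placeholders, not taken from the thesis or from the Profeta tool.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# Assumptions: uncertain inputs modelled as probability distributions
# (all figures below are invented for illustration).
revenue = rng.triangular(left=0.8e6, mode=1.0e6, right=1.5e6, size=n_trials)
cost = rng.normal(loc=0.7e6, scale=0.1e6, size=n_trials)
delay_penalty = rng.binomial(1, 0.2, size=n_trials) * 0.15e6  # 20% chance of a fixed penalty

# Forecast: the output variable whose distribution we want to study.
net_profit = revenue - cost - delay_penalty

# Interpretation of the simulation results.
print(f"mean net profit      : {net_profit.mean():,.0f}")
print(f"5th/95th percentiles : {np.percentile(net_profit, [5, 95])}")
print(f"P(loss)              : {(net_profit < 0).mean():.1%}")
```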
|
303 |
Stochastic methods in project management / Stochastické metody v řízení projektů. Zemenová, Hana, January 2007 (has links)
Every project is, by its very nature, associated with a certain amount of risk and uncertainty, which must be taken into account when choosing adequate methods for managing it. The aim of this thesis is to classify and compare these methods and to apply them in a case study from business practice. The methods that proved suitable for the project examined in the case study are discussed in more detail: the CPM/PERT method, Monte Carlo simulation, and project analysis by means of Bayesian networks.
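To make the combination of PERT-style duration estimates with Monte Carlo simulation concrete, the following sketch simulates a small, hypothetical four-activity network; the activities, three-point estimates, and precedence relations are invented for illustration and do not come from the case study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

def pert_sample(o, m, p, size):
    """Sample activity durations from a triangular(o, m, p) distribution,
    a common stand-in for the PERT beta estimate."""
    return rng.triangular(o, m, p, size)

# Hypothetical network: A precedes B and C (in parallel), both precede D.
A = pert_sample(2, 4, 8, n)
B = pert_sample(5, 6, 12, n)
C = pert_sample(4, 7, 9, n)
D = pert_sample(1, 2, 3, n)

duration = A + np.maximum(B, C) + D          # project duration per trial
deadline = 16.0

print(f"mean duration          : {duration.mean():.1f} days")
print(f"P(duration > {deadline:.0f} days) : {(duration > deadline).mean():.1%}")
print(f"criticality of B       : {(B >= C).mean():.1%} (share of runs where B lies on the critical path)")
```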
|
304 |
Evaluation of strategies for data reconciliation and gross error detection / Avaliação de estratégias para reconciliação de dados e detecção de erros grosseiros. Farias, Andrea Cabral, January 2009 (has links)
Data reconciliation addresses a problem that has come with the evolution of measurement, data acquisition and data storage techniques. Its role is to guarantee the consistency of these data: it exploits the redundancy of the measured variables and a statistical model of the measurement to improve accuracy. The goal of the complete procedure is to make the data satisfy the conservation equations, treating the random errors inherent in the measurement process and detecting and correcting eventual gross errors. These last two tasks are the subjects of this work: the evaluation of data reconciliation techniques and of strategies for gross error detection. The objective is to compare different techniques through a complete data reconciliation and gross error detection study, based on deterministic and Monte Carlo simulations that verify the performance of the strategies as a function of the parameters that influence each step of the procedure. For data reconciliation, the influence of the topology and of data pre-treatment on the final quality of the estimates was evaluated. For the gross error detection step, seven different strategies were compared by means of a complete combinatorial study; the influence of the topology was assessed and the detection power curves were obtained. Based on these results, a criterion was chosen to tune the algorithms so that the comparison between them would be fair. After the tuning step, the use of data pre-treatment was evaluated. In addition to the traditional detection strategies, robust data reconciliation techniques were also applied, and their performance was compared with the results obtained in the previous steps. As a product of this complete study, a new gross error detection strategy based on robust statistics was proposed. Its development is presented, and it was validated by comparison with the results obtained in this work and with a case reported in the literature.
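For readers unfamiliar with the underlying machinery, the classical linear, steady-state reconciliation step adjusts the measurements x so that the mass balances A x_hat = 0 hold, via x_hat = x - V A' (A V A')^-1 A x, and the global test uses the balance residuals to flag gross errors. The sketch below illustrates this on a made-up three-stream splitter; the flowsheet, variances, and injected bias are illustrative assumptions, not data from the thesis.

```python
import numpy as np

# Hypothetical process: stream 1 splits into streams 2 and 3, so x1 - x2 - x3 = 0.
A = np.array([[1.0, -1.0, -1.0]])          # incidence (balance) matrix
true = np.array([100.0, 60.0, 40.0])       # true flows
sigma = np.array([2.0, 1.5, 1.0])          # measurement standard deviations
V = np.diag(sigma**2)                      # measurement covariance

rng = np.random.default_rng(1)
x = true + rng.normal(0, sigma)            # noisy measurements
x[1] += 8.0                                # inject a gross error (bias) in stream 2

# Weighted-least-squares reconciliation subject to A @ x_hat = 0.
r = A @ x                                  # balance residual
S = A @ V @ A.T                            # residual covariance
x_hat = x - V @ A.T @ np.linalg.solve(S, r)

# Global test: r' S^-1 r follows a chi-square (rank(A) dof) if no gross error is present.
gt = float(r @ np.linalg.solve(S, r))
chi2_crit_95 = 3.84                        # chi-square critical value, 1 dof, 95%

print("measured   :", np.round(x, 2))
print("reconciled :", np.round(x_hat, 2), " balance residual:", np.round(A @ x_hat, 10))
print(f"global test statistic = {gt:.2f} -> gross error suspected: {gt > chi2_crit_95}")
```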
|
305 |
Modelling Advection and Diffusion in Microchannels. Beutel, Dan, 01 June 2003 (has links)
This project investigates mixing in microchannels, specifically the advection and diffusion of a passive scalar, using a split-step Monte Carlo method. Numerically, the implementation of this method is well understood. The current experimental geometry is a rectangular pipe with grooves on one wall. Mixing results for straight walls agree closely with experiment. The velocity field over the grooves is also studied.
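A split-step Monte Carlo method of the kind named in the abstract advances tracer particles by alternating a deterministic advection step with a random-walk diffusion step. The sketch below does this for a passive scalar in a plane channel with a parabolic velocity profile; the channel dimensions, diffusivity, and time step are placeholder values, not the grooved geometry studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder parameters (not the thesis geometry).
H = 1.0                   # channel half-height
U0, D = 1.0, 1e-3         # centreline velocity, scalar diffusivity
dt, n_steps = 0.01, 2000
n_particles = 10_000

# Passive-scalar particles released at the inlet, spread across the channel.
x = np.zeros(n_particles)
y = rng.uniform(-H, H, n_particles)

for _ in range(n_steps):
    # Step 1 (advection): move with the local parabolic (Poiseuille) velocity.
    u = U0 * (1.0 - (y / H) ** 2)
    x += u * dt
    # Step 2 (diffusion): independent Gaussian random walk, std = sqrt(2 D dt).
    x += rng.normal(0.0, np.sqrt(2 * D * dt), n_particles)
    y += rng.normal(0.0, np.sqrt(2 * D * dt), n_particles)
    # Reflect particles off the walls (no-flux boundary condition).
    y = np.where(y > H, 2 * H - y, y)
    y = np.where(y < -H, -2 * H - y, y)

print(f"mean downstream position        : {x.mean():.2f}")
print(f"cross-channel std (mixing proxy): {y.std():.3f}")
```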
|
306 |
Optimizing light delivery for photoacoustic imaging using Monte Carlo simulations. Kolkin, Adam, January 2019 (has links)
Photoacoustic imaging rests on two foundations: light delivery and acoustic signal reception. For an acoustic signal to be received and processed into an image, photons must first penetrate the tissue. However, biological media strongly attenuate light, and the maximum imaging depth of photoacoustic images lies between 2 and 3 centimeters. Models and simulations are therefore integral to approaching this problem, since they make it easy to change imaging parameters and simulate various conditions. This study used a MATLAB Monte Carlo simulation algorithm to model a homogeneous placental tissue sample. The simulated data were compared with experimental ex vivo placental images taken under conditions identical to those of the simulation. These two data sets were used to gauge the simulation's accuracy in predicting fluence trends in tissue, and the results were then applied to a heterogeneous tissue model simulating in vivo placental imaging. It was found that, for maximizing fluence in the placenta during in vivo imaging, 808 nm and 950 nm each offer different benefits. This simulation toolbox can be used to determine which experimental setup maximizes fluence in photoacoustic imaging, resulting in high-quality, high-contrast images.
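For orientation, a photon-transport Monte Carlo simulation of this kind typically propagates weighted photon packets through free paths drawn from the tissue's interaction coefficients and tallies the deposited weight as absorption (and hence fluence) versus depth. The sketch below is a deliberately simplified, depth-only version with made-up optical properties; it is not the MATLAB toolbox used in the study, and Russian roulette is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up homogeneous optical properties (per cm); not placental values.
mu_a, mu_s, g = 0.1, 10.0, 0.9          # absorption, scattering, anisotropy
mu_t = mu_a + mu_s
n_photons, w_min = 2_000, 1e-2          # small counts keep the pure-Python loop quick
dz_bin = 0.05
bins = np.zeros(60)                     # absorbed weight per depth bin (0 to 3 cm)

for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0            # start at the surface, heading straight down
    while w > w_min:
        s = -np.log(rng.random()) / mu_t                 # free path length
        z += uz * s
        if z < 0.0:                                      # escaped back out of the tissue
            break
        # Deposit the absorbed fraction of the packet weight at this depth.
        k = min(int(z / dz_bin), len(bins) - 1)
        bins[k] += w * mu_a / mu_t
        w *= mu_s / mu_t
        # Scatter: new polar angle from the Henyey-Greenstein phase function.
        r = rng.random()
        cos_t = (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * r)) ** 2) / (2 * g)
        sin_t = np.sqrt(max(0.0, 1.0 - cos_t**2))
        phi = 2 * np.pi * rng.random()
        uz = float(np.clip(uz * cos_t + np.sqrt(max(0.0, 1.0 - uz**2)) * sin_t * np.cos(phi), -1.0, 1.0))

depth = (np.arange(len(bins)) + 0.5) * dz_bin
profile = bins / (n_photons * dz_bin)   # relative absorbed energy per unit depth
print("depth (cm) vs relative absorption:")
for d, a in zip(depth[::10], profile[::10]):
    print(f"  {d:4.2f}  {a:.4f}")
```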
|
307 |
A coupled modeling-experimental study of Li-air Batteries. Yin, Yinghui, 22 February 2018 (has links)
Due to their high theoretical capacity, Li-air batteries (LABs) have been considered promising energy storage devices since their invention. However, the high complexity of these devices has impeded their practical application. Moreover, the scattered experimental results and mechanistic theories reported in the literature make it difficult to develop a comprehensive understanding of their operating principles. The work accomplished in this thesis is an effort to disentangle the complexity of LABs by combining modeling approaches with experiments, with the focus on gaining a better understanding of how the mechanisms interact rather than pursuing a perfect quantitative match between simulation and experimental results. Based on a continuum approach, a discharge model has been developed that combines nucleation theory, reaction kinetics and mass transport. This model brings the impacts of current density, electrolyte properties and electrode surface properties on the discharge process of LABs together into a comprehensive theory. Furthermore, a charge model has been developed to address the important role of the Li2O2 particle size distribution in determining the shape of the recharge profile. In addition, to investigate the LAB system at the mesoscale, a kinetic Monte Carlo (KMC) model has been built, and the simulation results provide insight into the discharge process in confined environments at the local level.
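As background on the kinetic Monte Carlo technique mentioned at the end of the abstract, the standard rejection-free loop picks one event at random with probability proportional to its rate and advances time by an exponentially distributed increment. The toy sketch below uses invented rates for generic "adsorption", "desorption", and "reaction" events; it is not the thesis's Li-air model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy system state and invented event rates (arbitrary units).
state = {"adsorbed": 0, "product": 0}

def rates(s):
    return {
        "adsorb": 1.0,                     # constant supply from solution
        "desorb": 0.2 * s["adsorbed"],     # proportional to adsorbed species
        "react":  0.5 * s["adsorbed"],     # adsorbed species converts to product
    }

t, t_end = 0.0, 50.0
while t < t_end:
    r = rates(state)
    names, k = list(r), np.array(list(r.values()))
    k_tot = k.sum()
    if k_tot == 0.0:
        break
    # Advance time by an exponential waiting time with mean 1 / k_tot.
    t += rng.exponential(1.0 / k_tot)
    # Choose which event fires, with probability rate / total rate.
    event = rng.choice(names, p=k / k_tot)
    if event == "adsorb":
        state["adsorbed"] += 1
    elif event == "desorb":
        state["adsorbed"] -= 1
    else:  # react
        state["adsorbed"] -= 1
        state["product"] += 1

print(f"t = {t:.1f}, state = {state}")
```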
|
308 |
Atomic scale investigation of ageing in metals / Étude à l'échelle atomique du vieillissement dans les métaux. Waseda, Osamu, 13 December 2016 (has links)
According to the theory of Cottrell and Bilby, dislocations interact through their stress field with solute atoms, which aggregate at the core of and around the dislocations (Cottrell atmospheres). These atmospheres pin the dislocations and embrittle the material. The objective of this thesis was to understand the microscopic features at the origin of ageing in metals. The originality of this contribution was the combination of three complementary computational techniques: (1) Metropolis Monte Carlo (MMC), (2) Atomic Kinetic Monte Carlo (AKMC), and (3) Molecular Dynamics (MD). It consists of four main sections. Firstly, the ordering of carbon in bulk alpha-iron (Zener ordering) was studied via MMC and MD; various carbon contents and temperatures were investigated in order to obtain a "phase diagram". Secondly, the generation with the MMC technique of systems containing a dislocation interacting with many carbon atoms, namely a Cottrell atmosphere, is described; the equilibrium structure of the atmosphere and the stress field around it show that the stress field around the dislocation is affected, but not cancelled out, by the atmosphere. Thirdly, the kinetics of carbon migration and Cottrell atmosphere evolution were investigated via AKMC, with the activation energies for carbon migration calculated from the local stress field and the arrangement of the neighbouring carbon atoms. Lastly, an application of the combined use of MMC and MD to describe grain boundary segregation of solute atoms in fcc nickel is presented; grain growth was inhibited by the solute atoms in the grain boundaries, even at high temperature.
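For readers less familiar with the Metropolis Monte Carlo step used throughout this work, the acceptance rule is simply: always accept a trial move that lowers the energy, otherwise accept it with probability exp(-dE/kT). The sketch below applies the rule to a generic 2-D Ising-like lattice as a stand-in; the Hamiltonian, lattice, and parameters are illustrative and unrelated to the Fe-C and Ni systems studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative 2-D Ising model as a stand-in for a configurational Hamiltonian.
N, J, kT = 32, 1.0, 2.0
spins = rng.choice([-1, 1], size=(N, N))

def delta_E(s, i, j):
    """Energy change if spin (i, j) is flipped (periodic boundaries)."""
    nn = s[(i + 1) % N, j] + s[(i - 1) % N, j] + s[i, (j + 1) % N] + s[i, (j - 1) % N]
    return 2.0 * J * s[i, j] * nn

n_sweeps = 200
for _ in range(n_sweeps * N * N):
    i, j = rng.integers(0, N, size=2)
    dE = delta_E(spins, i, j)
    # Metropolis criterion: accept downhill moves always, uphill with exp(-dE/kT).
    if dE <= 0.0 or rng.random() < np.exp(-dE / kT):
        spins[i, j] *= -1

print(f"mean magnetisation per site after {n_sweeps} sweeps: {spins.mean():.3f}")
```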
|
309 |
New developments in the construction of lattice rules: applications of lattice rules to high-dimensional integration problems from mathematical finance. Waterhouse, Benjamin James, School of Mathematics, UNSW, January 2007 (has links)
There are many problems in mathematical finance which require the evaluation of a multivariate integral. Since these problems typically involve the discretisation of a continuous random variable, the dimension of the integrand can be in the thousands, tens of thousands or even more. For such problems the Monte Carlo method has been a powerful and popular technique. This is largely related to the fact that the performance of the method is independent of the number of dimensions. Traditional quasi-Monte Carlo techniques are typically not independent of the dimension and as such have not been suitable for high-dimensional problems. However, recent work has developed new types of quasi-Monte Carlo point sets which can be used in practically limitless dimension. Among these types of point sets are Sobol' sequences, Faure sequences, Niederreiter-Xing sequences, digital nets and lattice rules. In this thesis, we will concentrate on results concerning lattice rules. The typical setting for analysis of these new quasi-Monte Carlo point sets is the worst-case error in a weighted function space. There has been much work on constructing point sets with small worst-case errors in the weighted Korobov and Sobolev spaces. However, many of the integrands which arise in the area of mathematical finance do not lie in either of these spaces. One common problem is that the integrands are unbounded on the boundaries of the unit cube. In this thesis we construct function spaces which admit such integrands and present algorithms to construct lattice rules where the worst-case error in this new function space is small. Lattice rules differ from other quasi-Monte Carlo techniques in that the points can not be used sequentially. That is, the entire lattice is needed to keep the worst-case error small. It has been shown that there exist generating vectors for lattice rules which are good for many different numbers of points. This is a desirable property for a practitioner, as it allows them to keep increasing the number of points until some error criterion is met. In this thesis, we will develop fast algorithms to construct such generating vectors. Finally, we apply a similar technique to show how a particular type of generating vector known as the Korobov form can be made extensible in dimension.
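To make the lattice-rule construction concrete, a rank-1 lattice rule approximates an integral over the unit cube by averaging the integrand over the points x_k = frac(k z / n), k = 0, ..., n-1, for a generating vector z. The sketch below evaluates a smooth test integrand with a small, hand-picked generating vector; the vector and the integrand are illustrative only and are not the component-by-component vectors or the finance integrands discussed in the thesis.

```python
import numpy as np

def rank1_lattice(n, z):
    """Points x_k = frac(k * z / n) of a rank-1 lattice rule."""
    k = np.arange(n)[:, None]
    return (k * np.asarray(z)[None, :] / n) % 1.0

def f(x):
    """Smooth test integrand on [0,1]^d; each factor integrates to 1, so the exact integral is 1."""
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

d, n = 4, 1009                       # dimension and (prime) number of points
z = [1, 433, 177, 761]               # hand-picked generating vector (illustrative)

x = rank1_lattice(n, z)
qmc_estimate = f(x).mean()

rng = np.random.default_rng(13)
mc_estimate = f(rng.random((n, d))).mean()

print(f"lattice rule estimate: {qmc_estimate:.6f}  (exact = 1)")
print(f"plain MC estimate    : {mc_estimate:.6f}")
```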
|
310 |
Modelling the impact of treatment uncertainties in radiotherapy. Booth, Jeremy T, January 2002 (has links)
Uncertainties are inevitably part of the radiotherapy process. Uncertainty in the dose deposited in the tumour exists due to organ motion, patient positioning errors, fluctuations in machine output, delineation of regions of interest, the modality of imaging used, and treatment planning algorithm assumptions among others; there is uncertainty in the dose required to eradicate a tumour due to interpatient variations in patient-specific variables such as their sensitivity to radiation; and there is uncertainty in the dose-volume restraints that limit dose to normal tissue. This thesis involves three major streams of research including investigation of the actual dose delivered to target and normal tissue, the effect of dose uncertainty on radiobiological indices, and techniques to display the dose uncertainty in a treatment planning system. All of the analyses are performed with the dose distribution from a four-field box treatment using 6 MV photons. The treatment fields include uniform margins between the clinical target volume and planning target volume of 0.5 cm, 1.0 cm, and 1.5 cm. The major work is preceded by a thorough literature review on the size of setup and organ motion errors for various organs and setup techniques used in radiotherapy. A Monte Carlo (MC) code was written to simulate both the treatment planning and delivery phases of the radiotherapy treatment. Using MC, the mean and the variation in treatment dose are calculated for both an individual patient and across a population of patients. In particular, the possible discrepancy in tumour position located from a single CT scan and the magnitude of reduction in dose variation following multiple CT scans is investigated. A novel convolution kernel to include multiple pretreatment CT scans in the calculation of mean treatment dose is derived. Variations in dose deposited to prostate and rectal wall are assessed for each of the margins and for various magnitudes of systematic and random error, and penumbra gradients. The linear quadratic model is used to calculate prostate Tumour Control Probability (TCP) incorporating an actual (modelled) delivered prostate dose. The Kallman s-model is used to calculate the normal tissue complication probability (NTCP), incorporating actual (modelled) fraction dose in the deforming rectal wall. The impact of each treatment uncertainty on the variation in the radiobiological index is calculated for the margin sizes. / Thesis (Ph.D.)--Department of Physics and Mathematical Physics, 2002.
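As a schematic of how dose uncertainty feeds into a radiobiological index, the sketch below combines a Poisson tumour control probability with linear-quadratic cell kill, TCP = exp(-N0 * SF) with SF = exp(-n * (alpha*d + beta*d^2)), and perturbs the delivered fraction dose with a systematic error drawn once per "patient" and random errors drawn per fraction. The radiobiological parameters and error magnitudes are generic placeholders, not the values or the geometric dose model used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(17)

# Generic placeholder parameters (not the thesis's values).
alpha, beta = 0.15, 0.05             # LQ parameters (Gy^-1, Gy^-2)
N0 = 1e7                             # initial clonogen number
n_frac, d_presc = 35, 2.0            # fractions and prescribed dose per fraction (Gy)
sigma_sys, sigma_rand = 0.04, 0.03   # relative systematic / random dose errors

def tcp_from_fraction_doses(d):
    """Poisson TCP with LQ cell kill, given the dose delivered in each fraction."""
    surviving = N0 * np.exp(-(alpha * d + beta * d**2).sum())
    return np.exp(-surviving)

n_patients = 5000
tcps = np.empty(n_patients)
for i in range(n_patients):
    sys_err = rng.normal(0.0, sigma_sys)                  # one offset per patient
    rand_err = rng.normal(0.0, sigma_rand, size=n_frac)   # new offset every fraction
    d = d_presc * (1.0 + sys_err + rand_err)
    tcps[i] = tcp_from_fraction_doses(d)

d_nominal = np.full(n_frac, d_presc)
print(f"TCP at nominal dose     : {tcp_from_fraction_doses(d_nominal):.3f}")
print(f"population mean TCP     : {tcps.mean():.3f}")
print(f"5th-95th percentile TCP : {np.percentile(tcps, [5, 95]).round(3)}")
```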
|