121

Simulation of the micromaser dynamics beyond the RWA

García-Calderón Palomino, Leandro 09 May 2011 (has links)
Using Monte Carlo simulation techniques, and dispensing with the rotating wave approximation (RWA), measurable effects are predicted in the so-called "trapped states", a distinctly quantum feature of the micromaser, or single-atom maser. / Thesis
122

Particle filter using acceptance-rejection method with emphasis on the target tracking problem.

January 2006 (has links)
Tsang Yuk Fung. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. / Includes bibliographical references (leaves 59-62). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Sequential Monte Carlo --- p.5 / Chapter 2.1 --- Recursive Bayesian estimation --- p.7 / Chapter 2.2 --- Bayesian sequential importance sampling --- p.8 / Chapter 2.3 --- Selection of importance function --- p.10 / Chapter 2.4 --- Particle filter --- p.12 / Chapter 3 --- Target tracking and data association --- p.15 / Chapter 3.1 --- Target tracking and its applications --- p.16 / Chapter 3.2 --- Data association and JPDA method --- p.16 / Chapter 4 --- Particle filter using the acceptance-rejection method --- p.21 / Chapter 4.1 --- Particle filter using the acceptance-rejection method --- p.22 / Chapter 4.2 --- Modified acceptance-rejection algorithm --- p.24 / Chapter 4.3 --- Examples --- p.26 / Chapter 4.3.1 --- Example 1: One dimensional non-linear case --- p.26 / Chapter 4.3.2 --- Example 2: Bearings-only tracking example --- p.27 / Chapter 4.3.3 --- Example 3: Single-target tracking --- p.31 / Chapter 4.3.4 --- Example 4: Multi-target tracking --- p.33 / Chapter 4.4 --- A new importance weight for bearings-only tracking problem --- p.34 / Chapter 5 --- Conclusion --- p.41
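The chapter listing names the acceptance-rejection method as the sampling engine of the proposed particle filter. As a rough illustration only (this is textbook acceptance-rejection, not the modified algorithm of Chapter 4.2, and the normal target with Cauchy proposal is an assumed example), a Python sketch might look like this:

```python
import numpy as np
from scipy.stats import norm, cauchy

def acceptance_rejection(target_pdf, proposal_rvs, proposal_pdf, M, n, seed=0):
    """Draw n samples from target_pdf by acceptance-rejection.

    Requires target_pdf(x) <= M * proposal_pdf(x) for all x.
    Minimal sketch; not the thesis's modified algorithm.
    """
    rng = np.random.default_rng(seed)
    samples = []
    while len(samples) < n:
        x = proposal_rvs()                       # candidate from the proposal
        if rng.uniform() <= target_pdf(x) / (M * proposal_pdf(x)):
            samples.append(x)                    # accept with prob f(x)/(M g(x))
    return np.array(samples)

# Illustrative use: sample a standard normal via a standard Cauchy proposal;
# the envelope constant is M = sup f/g = sqrt(2*pi/e) ~ 1.52.
M = np.sqrt(2 * np.pi / np.e)
xs = acceptance_rejection(norm.pdf, cauchy.rvs, cauchy.pdf, M, 1000)
```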
123

Weighted Markov chain Monte Carlo and optimization. / CUHK electronic theses & dissertations collection

January 1997 (has links)
by Liang Fa Ming. / Thesis (Ph.D.)--Chinese University of Hong Kong, 1997. / Includes bibliographical references (p. 150-161). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Mode of access: World Wide Web.
124

Empirical investigation of the performance of Mplus for analyzing structural equation model with mixed continuous and ordered categorical variables.

January 2003 (has links)
Lam Ho-Suen Joffee. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. / Includes bibliographical references (leaf 40). / Abstracts in English and Chinese. / Chapter 1 --- Introduction --- p.1 / Chapter 2 --- Review of Mplus --- p.3 / Chapter 3 --- Design of the Simulation Study --- p.6 / Chapter 3.1 --- Simulation Design --- p.6 / Chapter 3.2 --- Covariance Structure Analysis and Mplus Restriction --- p.10 / Chapter 3.3 --- Implementation --- p.10 / Chapter 4 --- Method of Evaluation --- p.12 / Chapter 4.1 --- Accuracy of Parameter Estimates --- p.12 / Chapter 4.2 --- Distribution of the Goodness-of-fit Statistic --- p.13 / Chapter 4.3 --- Precision of Standard Errors --- p.14 / Chapter 4.4 --- Number of Replications --- p.15 / Chapter 5 --- Results of the Simulation Study --- p.17 / Chapter 5.1 --- Accuracy of the Parameter Estimates --- p.17 / Chapter 5.2 --- Distribution of the Goodness-of-fit Statistic --- p.18 / Chapter 5.3 --- Precision of the Standard Error --- p.19 / Chapter 5.4 --- Results when the Sample Size is Extremely Large --- p.20 / Chapter 5.5 --- Conclusion --- p.21 / Chapter 6 --- Additional Simulation Study --- p.27 / Chapter 6.1 --- Precision of Standard Error when the Model Consists of Only Continuous and Only Ordinal Variables --- p.28 / Chapter 6.2 --- Comparison of the Simulation Results of Mplus and LISREL --- p.29 / Chapter 6.3 --- Conclusion --- p.31 / Chapter 7 --- Conclusion and Discussion --- p.33 / Chapter A --- Mplus Sample Program (Condition C1 S2 N=500) --- p.36 / Chapter B --- PRELIS Sample Program (Condition C1 S1 N=500) --- p.37
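The data-generation step behind such a study is not spelled out in the record, but a common recipe, and only a plausible guess at what was done, is to simulate standardized indicators from a factor model and discretize some of them at fixed thresholds to obtain ordered categorical variables; the loadings and thresholds below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500                                    # sample size, as in the N=500 conditions

# One-factor model: x_j = lambda_j * xi + delta_j (hypothetical loadings).
loadings = np.array([0.8, 0.7, 0.6, 0.7])
xi = rng.standard_normal(n)
errors = rng.standard_normal((n, 4)) * np.sqrt(1 - loadings**2)
latent = xi[:, None] * loadings + errors   # four standardized indicators

# Keep the first two indicators continuous; discretize the last two at
# hypothetical thresholds to get ordered categories 0..3.
thresholds = np.array([-1.0, 0.0, 1.0])
continuous = latent[:, :2]
ordinal = np.digitize(latent[:, 2:], thresholds)  # counts thresholds exceeded
data = np.column_stack([continuous, ordinal])
```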
125

KINETIC MONTE CARLO SIMULATION OF BINARY ALLOYS

Marshall, Timothy Craig 01 January 2018 (has links)
There are many tools for simulating physical phenomena, and the appropriate technique is generally dictated by the size of the simulated region. Two well-known techniques for simulating atomic dynamics are kinetic Monte Carlo (kMC) and molecular dynamics (MD). In this work we simulate physical vapor deposition of binary metallic systems using the kMC technique. A sufficient quantity of atoms is deposited so that morphological features can be observed. Where kMC has fallen short, we have used MD to supplement our results.
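As a rough illustration of the kMC technique the abstract refers to, here is a minimal residence-time (Gillespie/BKL-style) kMC step in Python; the event rates are hypothetical, and this is not the deposition code developed in the thesis.

```python
import numpy as np

def kmc_step(rates, rng):
    """One kinetic Monte Carlo step (residence-time / BKL algorithm).

    rates: rate constants for every possible event (e.g. a deposition
    or a surface hop). Returns the chosen event index and the time
    increment. Minimal sketch with hypothetical rates.
    """
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)  # pick event ~ its rate
    dt = -np.log(rng.random()) / total               # exponential waiting time
    return event, dt

rng = np.random.default_rng(0)
rates = np.array([1.0, 0.5, 0.1])  # hypothetical rates for three event types
t = 0.0
for _ in range(5):
    event, dt = kmc_step(rates, rng)
    t += dt  # a real simulation would now execute the event and update the rate table
```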
126

First photon detection in transillumination imaging : a theoretical evaluation / Setayesh Behin-Ain.

Behin-Ain, Setayesh January 2003 (has links)
"February 2003" / Bibliography: p. 121-135. / xii, 135 p. : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis is a theoretical evaluation of the (single) first photon detection (FPD) technique as a limiting case of time-resolved transillumination (TI) for diagnostic purposes. / Thesis (Ph.D.)--University of Adelaide, Dept. of Physics and Mathematical Physics, 2003
127

Monte Carlo simulation techniques : The development of a general framework

Nilsson, Emma January 2009 (has links)
Algorithmica Research AB develops software applications for the financial markets. One of its products, Quantlab, is a tool for quantitative analysis. An effective method for valuing many financial instruments is Monte Carlo simulation, and since the method is so widely used, Algorithmica is interested in investigating whether a general Monte Carlo framework can be created.

Algorithmica requires the framework to be general, and this is the main problem to solve. It is difficult to build a generalized framework because financial derivatives take very different forms. To simplify the framework, the thesis is limited to European-style derivatives whose underlying asset follows a Geometric Brownian Motion.

The definition of the problem and its delimitations were settled gradually, in parallel with the literature review, in order to decide which purpose and which delimitations were reasonable to treat. Standard Monte Carlo requires a large number of trials and is therefore slow. To speed up the process there are various variance reduction techniques, as well as quasi-Monte Carlo simulation, in which deterministic numbers (low-discrepancy sequences) are used instead of random ones. The thesis investigated two variance reduction techniques, the control variate and antithetic variate techniques, and the low-discrepancy sequences of Sobol, Faure and Halton.

Three test instruments were chosen to exercise the framework: an Asian option and a Barrier option, used to determine which Monte Carlo method performs best, and a more complex structured product, Smart Start, used to verify that the framework can handle it.

To deepen the understanding of the theory, the Halton, Faure and Sobol sequences were implemented in Quantlab in parallel with the literature review. The Halton and Faure sequences appeared to perform worse than Sobol, so they were not analyzed further.

Developing the framework was an iterative process. The chosen solution designs a general framework around five function pointers: the path generator, the payoff function, the stop-criterion function, and the volatility and interest-rate functions. The user specifies these functions, subject to some obligatory input and output values. Function pointers are not a problem-free solution, and several conflicts and issues are identified; it is therefore not recommended to implement the framework as it is designed today.

In parallel with the development of the framework, several experiments on the Asian and Barrier options were performed, with varying results, and it is not possible to conclude which method is best. Sobol often seems to converge better and fluctuate less than standard Monte Carlo. The literature indicates that it is important for the user to understand the instrument being valued, the stochastic process it follows, and the advantages and disadvantages of the different Monte Carlo methods. It is recommended to evaluate the different methods experimentally before deciding which one to use when valuing a new derivative.
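The abstract names two of the framework's building blocks, Geometric Brownian Motion paths and the antithetic variate technique. As a rough illustration only (the thesis's framework lives in Quantlab, and all parameters below are hypothetical), a Python sketch of antithetic-variate pricing of a European call under GBM might look like this:

```python
import numpy as np

def price_european_call_antithetic(S0, K, r, sigma, T, n_paths, seed=0):
    """Monte Carlo price of a European call under GBM, using antithetic
    variates: each normal draw Z is paired with -Z, which reduces variance.
    Illustrative sketch, not the thesis's framework.
    """
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    drift = (r - 0.5 * sigma**2) * T
    # Terminal prices for each path and its antithetic mirror.
    ST_pos = S0 * np.exp(drift + sigma * np.sqrt(T) * Z)
    ST_neg = S0 * np.exp(drift - sigma * np.sqrt(T) * Z)
    payoff = 0.5 * (np.maximum(ST_pos - K, 0) + np.maximum(ST_neg - K, 0))
    return np.exp(-r * T) * payoff.mean()

# Hypothetical parameters: spot 100, strike 100, 5% rate, 20% vol, 1 year.
print(price_european_call_antithetic(100.0, 100.0, 0.05, 0.2, 1.0, 100_000))
```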
128

Improving confidence for IMRT and helical tomotherapy treatments using accurately benchmarked Monte Carlo simulations

Sterpin, Edmond 05 December 2008 (has links)
The radiotherapist's ultimate dream has always been to have technologies capable of delivering high doses to tumour volumes with perfect precision, without irradiating the surrounding healthy tissues. This dream will never become reality, but the combined efforts of physicists, physicians and industry must bring reality as close as possible to this ideal. Since the early 1960s and the advent of electron linear accelerators mounted on treatment units, photon-therapy technology has evolved enormously. Today, the most advanced treatments include Intensity-Modulated Radiation Therapy (IMRT) supported by highly sophisticated imaging tools. IMRT is a complex technique that requires excellent control of accuracy at every step of the treatment process. These steps can be summarized in three categories: 1) the calibration and stability of the treatment unit, 2) the positioning and quality of the patient-related data, and 3) the accuracy of the dose calculation performed during treatment planning. To improve the overall uncertainty of a given treatment, research efforts are needed in all three categories. This thesis focuses on the dose calculation process. Over the past decades, the complexity and accuracy of dose algorithms have increased considerably thanks to the enormous progress in computing. Nevertheless, the vast majority of algorithms use analytical methods that involve significant approximations in the physics of particle transport. It is, however, possible to design algorithms that avoid these approximations by relying on so-called Monte Carlo (MC) methods, which faithfully simulate the physical reality and are considered today the most accurate way of computing dose in human tissues. Unfortunately, until recently, MC simulations were too slow to be compatible with the time constraints of clinical routine. But continuing progress in computing power, combined with the introduction of appropriate simplifications in MC codes, now makes it possible to envisage the introduction of MC algorithms in clinical routine, as is already the case for several commercial treatment planning systems. The objective of this thesis was to evaluate the added value of MC compared with modern analytical algorithms for complex IMRT treatments of tumours surrounded by numerous density inhomogeneities. These evaluations were performed for two IMRT treatment techniques: "step-and-shoot" and helical tomotherapy. For step-and-shoot IMRT delivered by an Elekta SL25 treatment unit, MC simulations with BEAMnrc were compared with an algorithm recently commercialized by Varian, the Anisotropic Analytical Algorithm (AAA). For tomotherapy, a similar study was carried out for the code used in the system provided by Tomotherapy Incorporated, which is based on a convolution/superposition algorithm using the collapsed-cone approximation.
During this second study, MC modelling with the PENELOPE MC code was a very important aspect, since this was the first time that a complete MC code for tomotherapy was built with all the technical details of the machine provided by the manufacturer. Moreover, the MC model, called TomoPen, was designed with a view to future integration into the clinical system, so simulation speed was an important constraint. The simulation strategy adopted in TomoPen consists mainly in drastically simplifying photon transport in the multileaf collimator, and makes it possible to compute dose distributions for a bilateral head-and-neck tumour in about 10 hours on a 2 GHz processor, without significant loss of accuracy. Using the computer cluster supplied with each tomotherapy treatment unit, this simulation time can be reduced by a factor of 32, corresponding to the number of processors. Throughout this thesis, the agreement between analytical and MC algorithms was generally satisfactory for most of the clinical and experimental cases studied. However, differences were observed in critical situations, such as small lung tumours or ethmoid tumours. Even though these deviations were not dramatic, they clearly demonstrated the potential of MC algorithms in clinical practice to improve the overall quality and accuracy of treatments.
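As a toy illustration of the MC principle the abstract describes (sampling individual particle histories rather than solving an analytic transport model), the sketch below histograms photon first-interaction depths in a homogeneous 1-D slab using the exponential attenuation law; it bears no resemblance in scope to a full dose engine such as PENELOPE or BEAMnrc, and the numbers are illustrative.

```python
import numpy as np

def first_interaction_depths(mu, depth, n_photons, n_bins=50, seed=0):
    """Toy 1-D Monte Carlo: histogram the depths at which photons have
    their first interaction in a homogeneous slab.

    mu: linear attenuation coefficient (1/cm); free path lengths are
    sampled as s = -ln(u)/mu. A real dose engine would also transport
    scattered photons and secondary electrons; this sketch does not.
    """
    rng = np.random.default_rng(seed)
    s = -np.log(rng.random(n_photons)) / mu     # sampled free path lengths
    s = s[s < depth]                            # keep interactions inside the slab
    hist, edges = np.histogram(s, bins=n_bins, range=(0.0, depth))
    return hist / n_photons, edges

# Illustrative numbers: mu ~ 0.07 /cm (water at roughly 1 MeV), 30 cm slab.
frac, edges = first_interaction_depths(mu=0.07, depth=30.0, n_photons=100_000)
```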
129

The effects of anatomic resolution, respiratory variations and dose calculation methods on lung dosimetry

Babcock, Kerry Kent Ronald 14 January 2010
The goal of this thesis was to explore the effects of dose resolution, respiratory variation and dose calculation method on dose accuracy. To achieve this, two models of lung were created. The first model, called TISSUE, approximated the connective alveolar tissues of the lung. The second model, called BRANCH, approximated the lung's bronchial, arterial and venous branching networks. Both models were varied to represent the full inhalation, full exhalation and midbreath phases of the respiration cycle.

To explore the effects of dose resolution and respiratory variation on dose accuracy, each model was converted into a CT dataset and imported into a Monte Carlo simulation. The resulting dose distributions were compared and contrasted against dose distributions from Monte Carlo simulations which included the explicit model geometries. It was concluded that, regardless of respiratory phase, the exclusion of the connective tissue structures in the CT representation did not significantly affect the accuracy of dose calculations. However, the exclusion of the BRANCH structures resulted in dose underestimations as high as 14% local to the branching structures. As lung density decreased, the overall dose accuracy marginally decreased.

To explore the effects of dose calculation method on dose accuracy, CT representations of the lung models were imported into the Pinnacle³ treatment planning system. Dose distributions were calculated using the collapsed cone convolution (CCC) method and compared to those derived using the Monte Carlo method. For both lung models, it was concluded that the accuracy of the CCC algorithm decreased with decreasing density. At full inhalation lung density, the CCC algorithm underestimated dose by as much as 15%. The accuracy of the CCC method also decreased with decreasing field size.

Further work is needed to determine the source of the discrepancy.
