  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
241

Monte Carlo calculations on the helix-coil transition in polypeptides : a study of the kinetics and the effect of excluded volume /

Neves, Darrow Edward January 1976 (has links)
No description available.
242

Dosimetría Monte Carlo para campos colimados de fotones / Monte Carlo dosimetry for collimated photon fields

Rucci, José Alexis 25 March 2015 (has links)
Treatment planning in radiotherapy using Monte Carlo codes is rapidly becoming an alternative to traditional treatment planning systems, and Monte Carlo methods are regarded as a useful tool for independent dose verification within a quality management system. This is possible in part thanks to the computing power now available, which makes it feasible to obtain statistically satisfactory results in a short time. In this thesis, a simple virtual source model was developed to replace the detailed geometry of a radiotherapy treatment head while reproducing the same dosimetric results within the recommended confidence intervals. The model was built from phase-space files available in the IAEA database. In a first stage, a hybrid model was developed, consisting of an extended photon source with a simple geometric structure that sets the field size and generates electron contamination. Small field sizes (below 20 x 20 cm2) were then simulated, and the dose deposition in a water phantom was calculated and compared with experimental measurements. To extend the model to large fields, a second stage added a generic flattening filter of variable thickness to the geometry; a relatively simple calibration method was shown to determine the filter thickness. With this addition, both percentage depth dose curves and cross profiles were calculated for large fields (up to 40 x 40 cm2) and again compared with experimental measurements. The results showed good agreement within the recommended confidence intervals, suggesting that the model can be used for the verification of radiotherapy treatment planning systems.
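The virtual-source construction described in this abstract can be illustrated with a minimal, heavily simplified sketch. Everything below (the Gaussian source profile, field size, attenuation coefficient, and the crude depth tally) is an illustrative assumption, not the parametrization or phase-space data used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_virtual_source(n, field_cm=10.0, ssd_cm=100.0, sigma_cm=1.5):
    """Sample photon positions/directions from an extended planar source.

    Hypothetical parametrization: the source fluence is Gaussian in (x, y),
    and each photon is aimed at a point drawn uniformly over the collimated
    field at the phantom surface.
    """
    x = rng.normal(0.0, sigma_cm, n)                    # lateral position at source plane
    y = rng.normal(0.0, sigma_cm, n)
    xt = rng.uniform(-field_cm / 2, field_cm / 2, n)    # target point inside the field
    yt = rng.uniform(-field_cm / 2, field_cm / 2, n)
    d = np.stack([xt - x, yt - y, np.full(n, ssd_cm)], axis=1)
    d /= np.linalg.norm(d, axis=1, keepdims=True)       # unit direction vectors
    return np.stack([x, y, np.zeros(n)], axis=1), d

def score_depth_dose(n=100_000, mu_per_cm=0.05, depth_cm=30.0, nbins=60):
    """Crude depth-'dose' tally from exponentially distributed first-interaction depths."""
    _, d = sample_virtual_source(n)
    depth = rng.exponential(1.0 / mu_per_cm, n) * d[:, 2]
    hist, edges = np.histogram(depth, bins=nbins, range=(0.0, depth_cm))
    return edges[:-1], hist / hist.max()

z, dose = score_depth_dose()
print(dose[:5])
```

A real model would sample energy, position, and direction correlations from the IAEA phase-space files and transport the photons through the water phantom rather than using analytic distributions.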
243

Estimation of DSGE Models: A Monte Carlo Analysis

Motula, Paulo Fernando Nericke 18 June 2013 (has links)
We investigate the small-sample properties and robustness of the parameter estimates of DSGE models. Our test ground is the Smets and Wouters (2007) model, and the estimation procedures we evaluate are the Simulated Method of Moments (SMM) and Maximum Likelihood (ML). We examine the empirical distributions of the parameter estimates and their implications for impulse-response and variance-decomposition analysis under correct specification and under two types of misspecification. Our results indicate an overall poor performance of SMM and some patterns of bias in the impulse responses and variance decompositions for ML under the types of misspecification studied.
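As background for the comparison above, the SMM estimator minimizes a weighted distance between data moments and moments averaged over model-simulated series; a generic form (the notation here is standard and assumed, not quoted from the dissertation) is:

```latex
\hat{\theta}_{\mathrm{SMM}}
  \;=\; \arg\min_{\theta}\;
  \bigl[\hat{m}(y) - \tilde{m}\bigl(y^{\mathrm{sim}}(\theta)\bigr)\bigr]'
  \, W \,
  \bigl[\hat{m}(y) - \tilde{m}\bigl(y^{\mathrm{sim}}(\theta)\bigr)\bigr]
```

where $\hat{m}(y)$ collects the chosen moments of the observed data, $\tilde{m}(y^{\mathrm{sim}}(\theta))$ the same moments averaged over series simulated from the DSGE model at parameter vector $\theta$, and $W$ is a positive-definite weighting matrix.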
244

Avaliação da viabilidade de implementação de política de hedge de preços em empresas de mineração utilizando simulação de Monte Carlo / Assessment of the feasibility of implementing a price-hedging policy in mining companies using Monte Carlo simulation

Werneck, Fernando Vieira 04 April 2018 (has links)
The volatility of the market price of its products is one of the biggest sources of variability for a mining company, so reducing or eliminating this factor significantly changes its risk profile. This dissertation discusses the potential benefit for mining companies of adopting a price-hedging policy. A price hedge is positive when prices fall and especially important when they reach stress levels; consequently, a traditional deterministic approach, in which the model inputs are fixed, would not support an appropriate analysis. With Monte Carlo simulation, by contrast, it is possible to evaluate a wide range of possible outcomes and to incorporate tail risk into the analysis. This method was therefore chosen to compare the economic results of a company under two scenarios: letting prices fluctuate with the market, and operating under a fixed-price policy. A greenfield copper-mine project was chosen to represent the operational and financial complexity of the sector. The interaction between financial leverage, default risk, cost of debt, and cash-flow volatility is a key factor in determining whether a price-hedging policy is profitable. The results show that, under certain circumstances, it is possible to create economic value by fixing prices, namely with lower direct costs, lower volatility of production and costs, and higher commodity prices.
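The kind of Monte Carlo comparison described above can be sketched in a few lines, assuming, purely for illustration, that the copper price follows a geometric Brownian motion and that the hedge locks in a single fixed price; the drift, volatility, volume, and prices below are made-up values, not the dissertation's calibration:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_price_paths(s0=6500.0, mu=0.02, sigma=0.25, years=10, n_paths=10_000):
    """Simulate annual copper prices (USD/t) under geometric Brownian motion."""
    z = rng.standard_normal((n_paths, years))
    log_returns = (mu - 0.5 * sigma**2) + sigma * z
    return s0 * np.exp(np.cumsum(log_returns, axis=1))

def revenue_distribution(hedge_price=None, volume_t=100_000):
    """Total (undiscounted) revenue per path, with or without a fixed hedge price."""
    prices = simulate_price_paths()
    if hedge_price is not None:
        prices = np.full_like(prices, hedge_price)   # all output sold at the fixed price
    return (prices * volume_t).sum(axis=1)

floating = revenue_distribution()
hedged = revenue_distribution(hedge_price=6500.0)
for name, rev in [("floating", floating), ("hedged", hedged)]:
    print(f"{name:8s}  mean = {rev.mean():.3e}   5th pct = {np.percentile(rev, 5):.3e}")
```

A fuller model would discount cash flows, subtract the cost structure, and feed the resulting distributions into the leverage and default-risk analysis, as the dissertation does.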
245

Monte Carlo Simulations for Chemical Systems

Rönnby, Karl January 2016 (has links)
This thesis investigates different types of Monte Carlo estimators for use in the computation of chemical systems, mainly for calculating the surface growth and evolution of SiC. Monte Carlo methods are a class of algorithms that use random sampling to solve problems numerically and are applied in many settings. Three different types of Monte Carlo method are studied: a simple Monte Carlo estimator and two types of Markov chain Monte Carlo, the Metropolis algorithm and kinetic Monte Carlo. The mathematical background is given for all methods, and they are tested both on smaller systems with known results, to check their mathematical and chemical soundness, and on a larger surface system as an example of how they could be used.
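To make the distinction between the estimator types concrete, here is a minimal Metropolis Monte Carlo sketch for a toy lattice-gas adsorption energy; the Hamiltonian and parameter values are illustrative assumptions, not the SiC surface model used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def energy(lattice, eps=-0.3):
    """Toy nearest-neighbour adsorption energy on a periodic square lattice."""
    return eps * np.sum(lattice * (np.roll(lattice, 1, 0) + np.roll(lattice, 1, 1)))

def metropolis(lattice, beta=2.0, steps=50_000):
    """Metropolis algorithm: propose single-site flips, accept with min(1, exp(-beta*dE))."""
    e = energy(lattice)
    for _ in range(steps):
        i, j = rng.integers(0, lattice.shape[0], 2)
        lattice[i, j] ^= 1                     # flip occupation 0 <-> 1
        e_new = energy(lattice)
        if rng.random() < np.exp(-beta * (e_new - e)):
            e = e_new                          # accept the move
        else:
            lattice[i, j] ^= 1                 # reject: undo the flip
    return lattice

occ = metropolis(rng.integers(0, 2, (16, 16)))
print("coverage:", occ.mean())
```

Kinetic Monte Carlo differs in that each step selects a physical event (for example adsorption, diffusion, or desorption) with probability proportional to its rate and advances a physical clock, rather than sampling an equilibrium distribution.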
246

Anwendung der Monte-Carlo-Methoden zur Lösung spezieller Probleme des Photonentransports / Application of the Monte Carlo Methods to solve the special photon transport problems

Dang, Hieu-Trung 16 April 2002 (has links) (PDF)
Basic approaches were developed for solving special photon transport problems and implemented in a Monte Carlo transport code. Photon transport calculations were carried out for the simulation of light propagation in turbid media (tissue), for three-dimensional dose calculation around interstitial brachytherapy sources, for the determination of scattered-radiation distributions in PET scanners, and for the solution of a special problem in ambient dosimetry. The calculation methods developed are based on a purely probabilistic approach and can therefore be applied universally. Their performance with respect to accuracy, numerical efficiency, and memory requirements was optimized and confirmed by the calculations performed. Thanks to the modular, object-oriented structure of the program, modifications and extensions are comparatively easy to implement, which opens up new fields of application in physics and medicine.
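The core of any such photon transport code is the free-path/interaction loop. The following is a minimal sketch for a homogeneous slab with made-up coefficients; real calculations use tabulated, energy-dependent cross sections, anisotropic scattering, and full 3D geometry:

```python
import numpy as np

rng = np.random.default_rng(2)

def transport_photon(mu_total=0.2, albedo=0.6, slab_cm=5.0):
    """Track one photon through a homogeneous slab (illustrative coefficients).

    mu_total : total attenuation coefficient (1/cm)
    albedo   : scattering probability per interaction (the rest is absorption)
    """
    z, mu_z = 0.0, 1.0                                # depth and direction cosine
    while True:
        z += mu_z * rng.exponential(1.0 / mu_total)   # sample free path length
        if z < 0.0 or z > slab_cm:
            return "escaped"
        if rng.random() > albedo:
            return "absorbed"
        mu_z = rng.uniform(-1.0, 1.0)                 # isotropic re-direction

results = [transport_photon() for _ in range(20_000)]
print({k: results.count(k) / len(results) for k in set(results)})
```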
247

Shielding study against high-energy neutrons produced in a proton therapy facility by means of Monte Carlo codes and on-site measurements / Etude de blindages pour un faisceau de protons thérapeutique: simulations par les méthodes de Monte Carlo et mesures au centre de protonthérapie d'Essen

Vanaudenhove, Thibault 12 June 2014 (has links)
Over the last few decades, radiotherapy using high-energy proton beams in the range from 50 MeV to 250 MeV has been increasingly used and developed. It offers the possibility of focusing the dose in a very narrow region around the tumour cells: tumour control is improved compared with photon-beam radiotherapy, and because the range of charged particles is limited, the healthy tissue around the tumour is largely spared. However, nuclear reactions of the incident charged particles in the tissue produce secondary high-energy radiation, essentially photons and neutrons, which irradiates the treatment room.

As a consequence, thick concrete shielding walls are placed around the treatment room to keep the dose received by workers and other people as low as possible. Dose measurements are performed with specific dosemeters such as the WENDI-II, which gives a conservative estimate of the ambient dose equivalent up to 5 GeV. The dose in working areas may also be estimated numerically with particle-transport simulation codes such as the GEANT4, MCNPX, FLUKA, and PHITS Monte Carlo codes.

Secondary-particle yields calculated with Monte Carlo codes show discrepancies when different physical models are used, but are globally in good agreement with experimental data from the literature. Neutron and photon doses decrease exponentially through a concrete shielding wall, and the neutron dose is clearly the dominant component behind a wall of sufficient thickness. Shielding parameters, e.g. attenuation coefficients, vary with emission angle (relative to the incident beam direction), incident proton energy, and target material and composition.

The WENDI-II response functions computed with different hadronic models also show some discrepancies. The thermal treatment of hydrogen in the polyethylene composing the detector is likewise of great importance for calculating the correct response function and detector sensitivity.

Secondary-particle sources in a proton therapy facility are essentially due to losses in the cyclotron and to beam interactions in the energy selection system, in the treatment-nozzle components, and in the target (patient or phantom). Numerical and experimental dose results in the mazes agree well for most detection points, whereas they show large discrepancies in the control rooms: because the concrete walls there are very thick, statistical consistency is difficult to reach for both the experimental and the calculated results. / Doctorat en Sciences de l'ingénieur
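The exponential attenuation behaviour summarized above is commonly written as a point-source line-of-sight model; the following generic form (notation assumed here, not quoted from the thesis) makes the angle and energy dependence of the shielding parameters explicit:

```latex
H(d, \theta, E_p) \;=\; \frac{H_0(\theta, E_p)}{r^{2}}
  \,\exp\!\left(-\frac{d}{\lambda(\theta, E_p)}\right)
```

where $H$ is the ambient dose equivalent behind a shield of thickness $d$, $r$ is the distance from the source to the scoring point, $H_0$ is the angle- and energy-dependent source term, and $\lambda$ is the attenuation length in the shielding material.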
248

Improved Statistical Methods for Elliptic Stochastic Homogenization Problems : Application of Multi Level- and Multi Index Monte Carlo on Elliptic Stochastic Homogenization Problems

Daloul, Khalil January 2023 (has links)
In numerical multiscale methods, one relies on a coupling between a macroscopic model and a microscopic model. The macroscopic model does not include the microscopic properties that the microscopic model offers and that are vital for the desired solution. Such microscopic properties include parameters like material coefficients and fluxes, which may vary microscopically in the material. The effective values of these data can be computed by running local microscale simulations and averaging the microscopic data. One wants the effect of the microscopic coefficients on the macroscopic scale, and this can be obtained using classical homogenization theory. One method in homogenization theory is to solve local elliptic cell problems in order to compute the homogenized constants; this results in an error of order $\lambda/R$, where $\lambda$ is the wavelength of the microscopic variations and $R$ is the size of the simulation domain. However, the accuracy can be greatly improved by a slight modification of the elliptic homogenization PDE and by using a filter in the averaging process, giving much better orders of error. The modification relates the elliptic PDE to a parabolic one, which can be solved and integrated in time to obtain the solution of the elliptic PDE.

In this thesis I apply the modified elliptic cell homogenization method with a qth-order filter to compute the homogenized diffusion constant in a 2D Poisson equation on a rectangular domain. Two cases were simulated. The diffusion coefficient in the first case was a deterministic 2D matrix-valued function; in the second case a stochastic 2D matrix-valued function was used, resulting in a 2D stochastic differential equation (SDE). In the second case two methods were used to estimate the expected value of the homogenized constants: first multilevel Monte Carlo (MLMC), and second its generalization, multi-index Monte Carlo (MIMC). The performance of MLMC and MIMC in the homogenization process is then compared.

In the homogenization process, a 2D finite element discretization was used to approximate the solution of the Poisson equation. The spatial grid steps were varied using first-order differences in MLMC (square mesh) and first-order mixed differences in MIMC (which allows a rectangular mesh).
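The multilevel structure mentioned above can be sketched compactly. In the snippet below a toy level-dependent sampler stands in for the homogenized diffusion constant computed from a cell problem on a level-$\ell$ mesh; the fake $O(h^2)$ bias and the sample counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def level_sample(level, omega):
    """Toy stand-in for the quantity of interest computed on mesh level `level`.

    In the thesis this would be the homogenized diffusion constant from a cell
    problem solved on a grid of spacing h = 2**-level; here the discretization
    error is faked so that the multilevel telescoping structure is visible.
    """
    exact = np.sin(omega)                                     # "true" value for random input omega
    return exact + 2.0 ** (-2 * level) * np.cos(3 * omega)    # plus an O(h^2)-like bias

def mlmc_estimate(max_level=4, samples_per_level=(4000, 2000, 1000, 500, 250)):
    """Multilevel Monte Carlo: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        omega = rng.uniform(0.0, np.pi, n)                    # same random input on both levels
        fine = level_sample(level, omega)
        coarse = level_sample(level - 1, omega) if level > 0 else 0.0
        total += np.mean(fine - coarse)
    return total

print("MLMC estimate:", mlmc_estimate(), " reference E[sin]:", 2 / np.pi)
```

MIMC generalizes this by telescoping over a multi-index of discretization parameters (for example the two mesh directions of a rectangular grid) with mixed differences instead of a single level index.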
249

Kinetic Monte Carlo simulations of autocatalytic protein aggregation

Eden-Jones, Kym Denys January 2014 (has links)
The self-assembly of proteins into filamentous structures underpins many aspects of biology, from dynamic cell scaffolding proteins such as actin, to the amyloid plaques responsible for a number of degenerative diseases. Typically, these self-assembly processes have been treated as nucleated, reversible polymerisation reactions, where dynamic fluctuations in a population of monomers eventually overcome an energy barrier, forming a stable aggregate that can then grow and shrink by the addition and loss of more protein from its ends. The nucleated, reversible polymerisation framework is very successful in describing a variety of protein systems such as the cell scaffolds actin and tubulin, and the aggregation of haemoglobin. Historically, amyloid fibrils were also thought to be described by this model, but measurements of their aggregation kinetics failed to match the model's predictions. Instead, recent work indicates that autocatalytic polymerisation - a process by which the number of growth competent species is increased through secondary nucleation, in proportion to the amount already present - is better at describing their formation. In this thesis, I will extend the predictions made in this mean-field, autocatalytic polymerisation model through use of kinetic Monte Carlo simulations. The ubiquitous sigmoid-like growth curve of amyloid fibril formation often possesses a notable quiescent lag phase which has been variously attributed to primary and secondary nucleation processes. Substantial variability in the length of this lag phase is often seen in replicate experimental growth curves, and naively may be attributed to fluctuations in one or both of these nucleation processes. By comparing analytic waiting-time distributions, to those produced by kinetic Monte Carlo simulation of the processes thought to be involved, I will demonstrate that this cannot be the case in sample volumes comparable with typical laboratory experiments. Experimentally, the length of the lag phase, or "lag time", is often found to scale with the total protein concentration, according to a power law with exponent γ. The models of nucleated polymerisation and autocatalytic polymerisation predict different values for this scaling exponent, and these are sometimes used to identify which of the models best describes a given protein system. I show that this approach is likely to result in a misidentification of the dominant mechanisms under conditions where the lag phase is dominated by a different process to the rest of the growth curve. Furthermore, I demonstrate that a change of the dominant mechanism associated with total protein concentration will produce "kinks" in the scaling of lag time with total protein concentration, and that these may be used to greater effect in identifying the dominant mechanisms from experimental kinetic data. Experimental data for bovine insulin aggregation, which is well described by the autocatalytic polymerisation model for low total protein concentrations, displays an intriguing departure from the predicted behaviour at higher protein concentrations. Additionally, the protein concentration at which the transition occurs, appears to be affected by the presence of salt. Coincident with this, an apparent change in the fibril structure indicates that different aggregation mechanisms may operate at different total protein concentrations. 
I demonstrate that a transition whereby the self-assembly mechanisms change once a critical concentration of fibrils or fibrillar protein is reached can explain the observed behaviour, and that this predicts a substantially higher abundance of shorter filaments - which are thought to be pathogenic - at lower total protein concentrations than if self-assembly were consistently autocatalytic at all protein concentrations. Amyloid-like loops have been observed in electron and atomic-force micrographs, together with non-looped fibrils, for a number of different proteins including ovalbumin. This implies that fibrils formed from these proteins are able to grow by fibrillar end-joining, and not only by monomer addition as is more commonly assumed. I develop a simple analytic expression for polymerisation by monomer addition and fibrillar end-joining (without autocatalysis) and show that this is not sufficient to explain the growth curves obtained experimentally for ovalbumin. I then demonstrate that the same data can be explained by combining fibrillar end-joining and fragmentation. Through the use of an analytic expression, I estimate the kinetic rates from the experimental growth curves and, via simulation, investigate the distribution of filament and loop lengths. Together, my findings demonstrate the relative importance of different molecular mechanisms in amyloid fibril formation, how these might be affected by various environmental parameters, and the characteristic behaviour by which their involvement might be detected experimentally.
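The kinetic Monte Carlo approach used throughout the thesis is, at its core, a Gillespie-type event loop. The sketch below implements primary nucleation, elongation, and fragmentation with made-up rate constants and a dimer nucleus, neither of which are the fitted values or mechanisms discussed above:

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie_fibrils(m0=10_000, k_nuc=1e-8, k_plus=5e-4, k_frag=1e-4, t_end=200.0):
    """Kinetic Monte Carlo (Gillespie) for nucleation, elongation and fragmentation.

    State: free-monomer count m and a list of fibril lengths. Rate constants
    are illustrative per-event propensities.
    """
    t, m, fibrils, history = 0.0, m0, [], []
    while t < t_end:
        total_len = sum(fibrils)
        a = np.array([k_nuc * m * (m - 1),                          # primary nucleation (dimer)
                      k_plus * m * 2 * len(fibrils),                # elongation at both fibril ends
                      k_frag * max(total_len - len(fibrils), 0)])   # one rate per breakable bond
        a_sum = a.sum()
        if a_sum == 0.0:
            break
        t += rng.exponential(1.0 / a_sum)                 # waiting time to the next event
        event = rng.choice(3, p=a / a_sum)
        if event == 0:
            m -= 2; fibrils.append(2)
        elif event == 1:
            m -= 1; fibrils[rng.integers(len(fibrils))] += 1
        else:
            bonds = np.array(fibrils) - 1                 # pick a fibril weighted by its bonds
            i = rng.choice(len(fibrils), p=bonds / bonds.sum())
            cut = rng.integers(1, fibrils[i])             # split at a random bond
            fibrils.append(fibrils[i] - cut); fibrils[i] = cut
        history.append((t, 1.0 - m / m0))
    return history

print(gillespie_fibrils()[-1])                            # (time, fibrillar mass fraction)
```

Secondary nucleation or fibrillar end-joining would enter the same loop simply as additional propensities and event branches.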
250

Enhancement of thermionic cooling using Monte Carlo simulation

Stephen, Alexander January 2014 (has links)
Advances in the field of semiconductor physics have allowed rapid development of new, more powerful devices. New fabrication techniques allow reductions in device geometry, increasing the possible wafer packing density. The increased output power comes at the price of excessive heat generation, the removal of which proves problematic at such scales for conventional cooling systems. Consequently, there is a rising demand for new cooling systems, preferably ones that do not add a large amount of additional bulk to the system. One promising option is the thermoelectric (TE) cooler, which is small enough to be integrated onto the device wafer. Unlike more traditional gas and liquid coolers, TE coolers require no moving parts or external liquid reservoirs, relying only on the flow of electrons to transport heat energy away from the device. Although TE cooling provides a neat solution for extracting heat from micron-scale devices, it normally produces only small amounts of cooling, of 1-2 kelvin, limiting its application to low-power devices. This research aimed to find ways to enhance the performance of the TE cooler using detailed simulation analysis. For this, a self-consistent, semi-classical, ensemble Monte Carlo model was designed to investigate the operation of the TE cooler at a higher level of detail than would be possible with experimental measurements alone. As part of its development, the model was validated on a variety of devices, including a Gunn diode and two micro-cooler designs from the literature, one of which had previously been simulated and the other analysed experimentally. When applied to the TE cooler of interest, the model yielded novel operational data, and significant improvements in cooling power were found with only minor alterations to the device structure and without any increase in volume.
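The ensemble Monte Carlo method referred to above alternates free flight of the particle ensemble in the applied field with stochastic scattering events. The sketch below assumes a parabolic band and a single constant scattering rate, both illustrative simplifications rather than the full self-consistent model developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(5)

KB, Q, M_EFF = 1.380649e-23, 1.602176634e-19, 0.063 * 9.109e-31   # GaAs-like effective mass

def ensemble_monte_carlo(n_particles=5_000, e_field=1e5, dt=1e-15, steps=2_000,
                         gamma0=1e13):
    """Ensemble Monte Carlo: free flight under the field plus random isotropic scattering.

    gamma0 is a constant total scattering rate (1/s); real simulators use
    energy-dependent rates for each phonon and impurity mechanism.
    """
    v = rng.normal(0.0, np.sqrt(KB * 300.0 / M_EFF), (n_particles, 3))   # thermal start
    drift = []
    for _ in range(steps):
        v[:, 2] += -Q * e_field / M_EFF * dt              # acceleration along z
        scattered = rng.random(n_particles) < gamma0 * dt
        n_s = scattered.sum()
        if n_s:
            speed = np.linalg.norm(v[scattered], axis=1, keepdims=True)
            new_dir = rng.normal(size=(n_s, 3))
            new_dir /= np.linalg.norm(new_dir, axis=1, keepdims=True)
            v[scattered] = speed * new_dir                # randomize direction, keep energy
        drift.append(v[:, 2].mean())
    return np.mean(drift[steps // 2:])                    # average drift velocity (m/s)

print(f"drift velocity ~ {ensemble_monte_carlo():.3e} m/s")
```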
