  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Monte Carlo Group - Atomic Physics Department

Rossen Radev 06 June 1997 (has links)
No description available.
272

Adaptive stopping for fast particle smoothing

Taghavi, Ehsan, Lindsten, Fredrik, Svensson, Lennart, Schön, Thomas B. January 2013 (has links)
Particle smoothing is useful for offline state inference and parameter learning in nonlinear/non-Gaussian state-space models. However, many particle smoothers, such as the popular forward filter/backward simulator (FFBS), are plagued by a quadratic computational complexity in the number of particles. One approach to tackle this issue is to use rejection-sampling-based FFBS (RS-FFBS), which asymptotically reaches linear complexity. In practice, however, the constants can be quite large and the actual gain in computational time limited. In this contribution, we develop a hybrid method, governed by an adaptive stopping rule, in order to exploit the benefits, but avoid the drawbacks, of RS-FFBS. The resulting particle smoother is shown in a simulation study to be considerably more computationally efficient than both FFBS and RS-FFBS. / CNDM / CADICS
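The hybrid scheme described in this abstract can be sketched roughly as follows: attempt cheap rejection sampling first, and fall back to exact exhaustive backward sampling once too many proposals are rejected. This is a minimal illustration of the idea only, not the authors' adaptive stopping rule; the function name and the fixed `max_tries` threshold are hypothetical.

```python
import numpy as np

def hybrid_ffbs_step(particles, log_weights, log_trans, log_trans_bound,
                     x_next, rng, max_tries=10):
    """One backward-simulation step: try rejection sampling (O(1) expected
    cost per draw), then fall back to exact O(N) categorical sampling once
    max_tries proposals have been rejected."""
    n = len(particles)
    probs = np.exp(log_weights - np.logaddexp.reduce(log_weights))
    for _ in range(max_tries):                       # rejection-sampling phase
        j = rng.choice(n, p=probs)                   # propose from filter weights
        if np.log(rng.uniform()) < log_trans(particles[j], x_next) - log_trans_bound:
            return int(j)                            # proposal accepted
    # stopping rule triggered: evaluate all N transition densities exactly
    logp = log_weights + np.array([log_trans(p, x_next) for p in particles])
    probs = np.exp(logp - np.logaddexp.reduce(logp))
    return int(rng.choice(n, p=probs))
```

The bound `log_trans_bound` must dominate the log transition density, so the acceptance test is a valid rejection-sampling step; the fall-back guarantees the worst case stays at one exhaustive pass.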
273

Monte Carlo uncertainty reliability and isotope production calculations for a fast reactor

Miles, Todd L. 09 December 1991 (has links)
With the advent of more powerful, less expensive computing resources, more and more attention is being given to Monte Carlo techniques in design applications. In many circles, stochastic solutions are considered the next best thing to experimental data. Statistical uncertainties in Monte Carlo calculations are typically determined by the first and second moments of the tally. For certain types of calculations, there is concern that the uncertainty estimate is significantly non-conservative. This is typically seen in reactor eigenvalue problems, where the uncertainty estimate is aggravated by the generation-to-generation fission source. It has been speculated that optimization of the random walk, through biasing techniques, may increase the non-conservative nature of the uncertainty estimate. A series of calculations is documented here which quantifies the reliability of the Monte Carlo Neutron and Photon (MCNP) mean and uncertainty estimates by comparing these estimates to the true mean. These calculations were made with a liquid metal fast reactor model, but every effort was made to isolate the statistical nature of the uncertainty estimates so that the analysis of the reliability of the MCNP estimates should be relevant for small thermal reactors as well. Also, preliminary reactor physics calculations for two different special isotope production test assemblies for irradiation in the Fast Flux Test Facility (FFTF) were performed using MCNP and are documented here. The effect of a yttrium-hydride moderator in tailoring the neutron flux incident on the targets to maximize isotope production for different designs in different locations within the reactor is discussed. These calculations also demonstrate the useful application of MCNP in design iterations by utilizing many of the code's features. / Graduation date: 1992
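The reliability question this abstract raises can be illustrated with a toy experiment: run many independent Monte Carlo estimates, and compare the uncertainty each run *reports* (from the first and second moments of its tally) with the *actual* scatter of the estimates around the known true mean. This is a hedged stand-in for the thesis's MCNP study, with an exponential sample playing the role of a transport tally.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_tally(n):
    """Toy Monte Carlo tally: the sample mean together with the standard-error
    estimate built from the first and second moments, as codes like MCNP report."""
    x = rng.exponential(size=n)            # stand-in for a transport tally
    return x.mean(), x.std(ddof=1) / np.sqrt(n)

# Reliability check: compare the *reported* uncertainty against the *actual*
# scatter of many independent estimates around the known true mean (1.0 here).
runs = [mc_tally(1000) for _ in range(200)]
means = np.array([m for m, _ in runs])
sems = np.array([s for _, s in runs])
ratio = sems.mean() / means.std(ddof=1)    # ~1 when the estimate is reliable
```

In well-behaved problems the ratio is close to one; the non-conservatism the thesis investigates shows up as a ratio below one, where correlations (e.g. the generation-to-generation fission source) make the reported moments understate the true spread.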
274

Monte Carlo modeling of the sensitivity of x-ray photoconductors

Yunus, Mohammad 13 May 2005
The sensitivity reduction, or ghosting, mechanism of x-ray photoconductors is studied using Monte Carlo simulation techniques. We have calculated the sensitivity reduction for different detector operating conditions (applied electric field, x-ray spectrum, and photoconductor thickness) and for different levels of carrier trapping. We have analyzed the effect of photoconductor biasing (positive or negative) on ghosting. The following effects are taken into account in modeling the ghosting phenomenon: (i) recombination between trapped and oppositely charged drifting carriers, (ii) trap filling, (iii) nonuniform electric field, (iv) detrapping of trapped holes, and (v) x-ray-induced trap generation. Our calculation shows that ghosting in photoconductor-based x-ray image detectors arises not only from recombination between trapped and oppositely charged drifting carriers but also from x-ray-induced trap generation. Moreover, not all the trapped carriers take part in recombination; only a fraction of them are involved. The electric field also plays an important role in ghosting calculations, via the electron-hole pair generation mechanism, and trap filling has a non-trivial effect as well. The simulation results show that the amount of ghosting strongly depends on the applied electric field: ghosting increases with decreasing applied electric field, and vice versa. Ghosting is higher at high carrier trapping levels than at low trapping levels, and it is more pronounced in chest radiographic detectors than in mammographic detectors: in chest radiographic detectors, carrier trapping is high due to the greater photoconductor thickness, so recombination and electric field effects are more prominent. The dependence of ghosting on biasing is governed by the carrier mobility-lifetime product.
For positively biased detectors, ghosting is lower if the mobility-lifetime product of holes is higher than that of electrons, and vice versa for negatively biased detectors. It also appears that calculating ghosting from recombination alone, which some of the literature regards as the primary source of ghosting, leads to significant error.
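The central claim above (recombination alone understates ghosting; x-ray-induced traps and partial trap participation matter too) can be caricatured in a few lines. This is a deliberately crude toy model with made-up probabilities, not the thesis's Monte Carlo transport simulation.

```python
import random

def ghost_ratio(n_carriers=10000, trap_p=0.10, recomb_frac=0.3,
                new_trap_p=0.02, seed=0):
    """Toy ghosting estimate: a first exposure leaves trapped charge behind;
    a second exposure then loses signal to (i) the baseline trapping, (ii)
    recombination with only a *fraction* of the trapped carriers, and (iii)
    x-ray-induced new traps.  All probabilities are illustrative only."""
    rng = random.Random(seed)
    collected_first = sum(rng.random() > trap_p for _ in range(n_carriers))
    trapped = n_carriers - collected_first
    active_traps = recomb_frac * trapped             # partial participation
    loss_p = trap_p + new_trap_p + active_traps / n_carriers
    collected_second = sum(rng.random() > loss_p for _ in range(n_carriers))
    return collected_second / collected_first        # ghosting ratio (< 1)
```

Setting `new_trap_p = 0` in this sketch recovers the recombination-only picture, which predicts visibly less signal loss; that gap is the "significant error" the abstract refers to.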
275

Discrete Event Simulation in the Preliminary Estimation Phase of Mega Projects: A Case Study of the Central Waterfront Revitalization Project

Nahrvar, Shayan 27 July 2010 (has links)
The methodology of discrete-event simulation provides a promising alternative for analyzing complicated construction systems. Given the level of uncertainty that exists in the early estimation phase of mega-projects regarding cost and risk, project simulations have become a central part of decision-making and planning. In this paper, an attempt is made to compare the output generated by a model constructed under the Monte Carlo framework with that of discrete-event simulation, to determine the similarities and differences between the two methods. To achieve this, the Simphony discrete-event simulation (DES) modeling environment is used. The result is then compared to a Monte Carlo simulation conducted by Golder Associates.
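The kind of comparison described above can be sketched with a toy schedule: an event-driven model captures a precedence constraint (one activity waits for two parallel predecessors) that a plain cost roll-up cannot, and Monte Carlo repetition turns it into percentile estimates. The activity network and triangular estimates below are invented for illustration; they are not from the Waterfront case study.

```python
import random

def simulate_once(rng):
    """One event-driven pass over a toy schedule: activities A and B run in
    parallel, and C starts only when both have finished."""
    dur = lambda lo, mode, hi: rng.triangular(lo, hi, mode)
    finish_a = dur(2, 3, 6)
    finish_b = dur(1, 4, 7)
    start_c = max(finish_a, finish_b)          # precedence: wait for both
    return start_c + dur(2, 2.5, 4)

def duration_percentiles(n=5000, seed=1):
    """Monte Carlo over the event-driven model: repeat the pass n times and
    read off the P50 and P90 project durations."""
    rng = random.Random(seed)
    samples = sorted(simulate_once(rng) for _ in range(n))
    return samples[n // 2], samples[int(0.9 * n)]
```

A pure Monte Carlo cost model would sum the three triangular draws directly; the `max` in the event-driven version is exactly where the two outputs start to diverge.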
277

Incorporating substation and switching station related outages in composite system reliability evaluation

Nighot, Rajesh U 06 October 2003
This thesis presents the development of a new method for incorporating station-related outages in composite or bulk system reliability analysis. Station-related failures can cause multiple component outages that can propagate to other parts of the network, resulting in severe damage. In order to minimize the effects of station-related outages on composite system performance, it is necessary for the designer to assess them. This task can be achieved by including station-related outages in the composite system evaluation. Monte Carlo simulation is used in this research to assess composite system reliability. The new method described in this thesis is used to include station-related outages in the reliability evaluation of two composite test systems. This new method is relatively simple and can be used to consider multiple component outages due to station-related failures in composite system reliability evaluation. In this approach, the effects of station-related outages are combined with the connected terminal failure parameters. Reliability studies conducted on the two composite test systems demonstrate that station failures significantly affect system performance. System reliability can be improved by selecting appropriate station configurations, as illustrated by application to the two composite test systems.
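The effect the abstract describes can be sketched with a toy state-sampling model: a single station failure outages every unit connected to that station at once, which is the multiple-component, common-mode behavior that independent-outage models miss. The unit data, rates, and load level below are invented for illustration.

```python
import random

def sample_capacity(rng, units, station_p):
    """One Monte Carlo system state.  Each unit is (capacity_mw,
    forced_outage_rate, at_station); a station failure takes out every
    unit at that station simultaneously."""
    station_down = rng.random() < station_p
    cap = 0.0
    for mw, f_rate, at_station in units:
        if rng.random() < f_rate:
            continue                       # independent unit outage
        if station_down and at_station:
            continue                       # common-mode station outage
        cap += mw
    return cap

def lolp(units, load, station_p, n=20000, seed=0):
    """Loss-of-load probability estimated by Monte Carlo state sampling."""
    rng = random.Random(seed)
    short = sum(sample_capacity(rng, units, station_p) < load for _ in range(n))
    return short / n
```

Comparing `station_p = 0` against a small nonzero value shows how even a rare station failure measurably degrades the reliability index, which is why station configuration matters.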
279

A short-time dynamics study of Heisenberg non-collinear magnets

Zelli, Mirsaeed 14 September 2007 (has links)
A generalized model which describes a family of antiferromagnetic Heisenberg magnets on a three-dimensional stacked triangular lattice is introduced. The model contains a constraint parameter which changes the details of the interactions but not the symmetry of the model. We investigate the question of whether a first or second order phase transition occurs in these systems using a short time dynamics method. This method does not suffer from the problem of critical slowing down which occurs in the usual equilibrium Monte Carlo simulations. The effective critical exponents are determined as a function of the constraint parameter. Our results provide strong evidence that the phase transition is first order. In addition, for a particular value of the constraint parameter, the model corresponds to an antiferromagnet on a stacked Kagome lattice. In this case, our results are not inconsistent with the existence of a finite temperature first order phase transition. / October 2007
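The short-time dynamics method mentioned above can be illustrated on the simplest possible magnet: start from the fully ordered state and watch the early-time decay of the magnetization under Metropolis dynamics, well before equilibrium (and hence before critical slowing down bites). The sketch below uses a 2D Ising model at its known critical temperature purely as a stand-in; the thesis's stacked-triangular Heisenberg model is far more involved.

```python
import numpy as np

def short_time_m(L=32, T=2.269, sweeps=50, seed=0):
    """Record |M(t)| for the first few Metropolis sweeps of a 2D Ising model
    started fully ordered.  At a second-order point M(t) decays as a power
    law; qualitatively different early-time behavior signals a first-order
    transition, which is what short-time dynamics studies exploit."""
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)          # ordered initial condition
    m = []
    for _ in range(sweeps):
        for _ in range(L * L):              # one Metropolis sweep
            i, j = rng.integers(L, size=2)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
        m.append(abs(s.mean()))
    return m
```

Fitting the decay of `m` against sweep number gives the effective critical exponents; in the thesis this fit, repeated across values of the constraint parameter, is what discriminates first- from second-order behavior.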
280

Reactivity Analysis of Nuclear Fuel Storages: The Effect of 238U Nuclear Data Uncertainties

Östangård, Louise January 2013 (has links)
The aim of this master thesis work was to investigate how the uncertainties in nuclear data for 238U affect the uncertainty of keff in criticality simulations for nuclear fuel storages. This was performed using the Total Monte Carlo (TMC) method, which allows propagation of nuclear data uncertainties from basic nuclear physics to reactor parameters such as keff. The TMC approach relies on hundreds of keff calculations, each using a different random nuclear data library for 238U. The result is a probability distribution for keff whose standard deviation represents the spread in keff due to statistical and nuclear data uncertainties. Simulations were performed with MCNP for a nuclear fuel storage in two different cases: Normal Case and Worst Case. Normal Case represents a scenario during normal conditions, and Worst Case represents accident conditions where optimal moderation occurs. In order to validate the MCNP calculations and the libraries produced with TMC, criticality benchmarks were used. The calculated mean value of keff for the criticality benchmark simulations with random TMC libraries agreed well with the experimental keff values, indicating that the libraries used in this work were of good quality. The TMC method's drawback is its long calculation time, so the newer fast TMC method was also tested. Both fast TMC and original TMC were applied to the Normal Case. The two methods gave similar results, and fast TMC was significantly faster than original TMC in this work, making it a good option for reducing computation time. The 238U nuclear data uncertainty was found to be 209 pcm for the Normal Case, both for original and fast TMC. For the Worst Case simulation, the 238U nuclear data uncertainty was found to be 672 pcm with fast TMC.
These results show the importance of handling uncertainties in nuclear data in order to improve the knowledge about the uncertainties in criticality calculations of keff. / Nuclear data libraries contain all the information needed to simulate, for example, a reactor or a storage pool for nuclear fuel. These libraries are central to calculating the reactor parameters required for safe nuclear power production. One important reactor parameter is the multiplication constant (keff), which expresses the reactivity of a system. A critical system (keff = 1) means that a chain reaction of fissions can be sustained; this state is required in a reactor to make electricity production possible. In a storage pool where spent nuclear fuel is kept, it is important that the system is subcritical (keff < 1). Various reactor codes are used to perform these keff calculations, and the results are used when designing safe storage facilities for nuclear fuel. Today's nuclear data libraries contain uncertainties, which in turn stem from uncertainties in the model parameters used when the libraries were produced. These nuclear data uncertainties are often unknown, which leads to unknown uncertainties in calculated keff. Vattenfall Nuclear Fuel AB is currently investigating the possibility of increasing the enrichment of the fuel in order to reduce the number of fuel assemblies needed for a given amount of energy. Each assembly then becomes more reactive, which reduces the margin to criticality in the storage pool. Nuclear data uncertainties are therefore important in the process of determining the maximum permissible enrichment of the fuel. To investigate how large these uncertainties are, a relatively new method, TMC (Total Monte Carlo), was used, which propagates nuclear data uncertainties to various reactor parameters (e.g. keff) in a single simulation process.
The TMC method was used to investigate how the 238U nuclear data uncertainties affect calculations of keff for a storage pool with spent nuclear fuel. Calculations were performed for a pool under normal operating conditions and for an accident scenario in which optimal moderation occurs. The results showed that the standard deviation due to 238U nuclear data was 209 pcm under normal operating conditions and 672 pcm in the optimal-moderation case. The original TMC method is time-consuming, and a faster variant of TMC has recently been developed. This new method was also applied to the pool under normal operating conditions and the results were compared: both methods gave the same 238U nuclear data uncertainty, and the fast TMC method reduced the computation time considerably compared with the original TMC method.
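The core of the TMC procedure described above can be sketched in miniature: run the same criticality calculation many times, each with a different randomly perturbed nuclear data library, then subtract the known statistical (Monte Carlo counting) variance from the observed spread to isolate the nuclear data component. The numbers below (keff level, library spread, counting noise) are invented for illustration, not values from the thesis.

```python
import random
import statistics

def keff_run(library_bias, rng, mc_sigma=0.0005):
    """Stand-in for one MCNP criticality run: a 'true' keff shifted by the
    random library's bias, plus Monte Carlo counting noise."""
    return 0.95 + library_bias + rng.gauss(0.0, mc_sigma)

def tmc_uncertainty(n_libs=300, data_sigma=0.002, mc_sigma=0.0005, seed=0):
    """TMC in miniature: spread of keff over random libraries, with the
    statistical variance subtracted to leave the nuclear data component."""
    rng = random.Random(seed)
    biases = [rng.gauss(0.0, data_sigma) for _ in range(n_libs)]  # random libraries
    keffs = [keff_run(b, rng, mc_sigma) for b in biases]
    nd_var = statistics.variance(keffs) - mc_sigma ** 2   # remove counting noise
    return max(nd_var, 0.0) ** 0.5
```

The fast-TMC variant exploits the same decomposition but deliberately runs each library with fewer histories (larger `mc_sigma`), trading per-run precision for total wall-clock time, since the counting variance is subtracted out anyway.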
