1

Reactivity Analysis of Nuclear Fuel Storages: The Effect of 238U Nuclear Data Uncertainties

Östangård, Louise January 2013
The aim of this master thesis work was to investigate how the uncertainties in nuclear data for 238U affect the uncertainty of keff in criticality simulations for nuclear fuel storages. This was done using the Total Monte Carlo (TMC) method, which allows propagation of nuclear data uncertainties from basic nuclear physics to reactor parameters such as keff. The TMC approach relies on hundreds of keff calculations, each performed with a different random nuclear data library for 238U. The result is a probability distribution for keff whose standard deviation represents the spread in keff due to statistical and nuclear data uncertainties. Simulations were performed with MCNP for a nuclear fuel storage representing two different cases: Normal Case and Worst Case. Normal Case represents a scenario during normal conditions and Worst Case represents accident conditions where optimal moderation occurs. Criticality benchmarks were used to validate the MCNP calculations and the libraries produced with TMC. The calculated mean value of keff for the benchmark simulations with the random libraries agreed well with the experimental keff, indicating that the libraries used in this work were of good quality. The drawback of the TMC method is its long calculation time; therefore the newer variant, fast TMC, was also tested. Both fast TMC and original TMC were applied to the Normal Case. The two methods gave similar results, indicating that fast TMC is a good option for reducing the computational time; in this work it was found to be significantly faster than original TMC. The 238U nuclear data uncertainty in keff was 209 pcm for the Normal Case, with both original and fast TMC, and 672 pcm for the Worst Case with fast TMC. These results show the importance of propagating nuclear data uncertainties in order to better quantify the uncertainty of criticality calculations of keff. / Nuclear data libraries contain all the information needed to simulate, for example, a reactor or a storage pool for nuclear fuel. These libraries are central to the calculation of the reactor parameters required for safe nuclear power production. One important reactor parameter is the multiplication factor (keff), which describes the reactivity of a system. A critical system (keff = 1) means that a chain reaction of fissions can be sustained; this state is required in a reactor to make electricity production possible. In a storage pool where spent nuclear fuel is kept, it is important that the system remains subcritical (keff < 1). Various reactor codes are used to perform these keff calculations, and their results are used in the design of safe storages for nuclear fuel. Today's nuclear data libraries contain uncertainties that in turn stem from uncertainties in the model parameters used when producing the libraries. These nuclear data uncertainties are often unknown, which gives rise to unknown uncertainties in the calculated keff. Vattenfall Nuclear Fuel AB is currently investigating the possibility of increasing the enrichment of the fuel in order to reduce the number of fuel assemblies needed for a given amount of energy. Each fuel assembly then becomes more reactive, which reduces the margin to criticality in the storage pool.
The nuclear data uncertainties are therefore important when calculating the maximum allowed enrichment of the fuel. To investigate how large these uncertainties are, a relatively new method, TMC (Total Monte Carlo), was used; it propagates nuclear data uncertainties to different reactor parameters (e.g. keff) in a single simulation process. The TMC method was used to study how the nuclear data uncertainties for 238U affect keff calculations for a storage pool with spent nuclear fuel. Calculations were performed for a storage pool under normal operating conditions and for an accident scenario with optimal moderation. The results showed that the standard deviation due to 238U nuclear data was 209 pcm under normal operating conditions and 672 pcm for the case with optimal moderation. The original TMC method is time-consuming, and a faster variant of TMC has recently been developed. This new method was also applied to the storage pool under normal operating conditions and the results were compared. Both methods gave the same 238U nuclear data uncertainty, and using the fast TMC method reduced the computation time considerably compared with the original TMC method.
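As a numerical illustration of the TMC bookkeeping described above (a minimal sketch with invented numbers, not the thesis results): one keff is obtained per random 238U library, and the nuclear data component of the spread follows by subtracting the average statistical (Monte Carlo) variance from the observed variance of the keff distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for the TMC output: one k_eff per random 238U library.
# Values are placeholders for illustration only, not the thesis results.
n_libs = 300
sigma_stat = np.full(n_libs, 5.0e-4)                        # per-run MCNP statistical sigma
keff_nd = rng.normal(loc=0.95, scale=2.0e-3, size=n_libs)   # spread from 238U data only
keff = keff_nd + rng.normal(0.0, sigma_stat)                # add per-run statistical noise

sigma_obs = keff.std(ddof=1)                                # total observed spread
sigma_nd = np.sqrt(sigma_obs**2 - np.mean(sigma_stat**2))   # nuclear data component

print(f"observed spread       : {1e5 * sigma_obs:.0f} pcm")
print(f"238U nuclear data part: {1e5 * sigma_nd:.0f} pcm")
```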
2

Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor: Using integral experiments for improved accuracy

Alhassan, Erwin January 2015
For the successful deployment of advanced nuclear systems and optimization of current reactor designs, high quality nuclear data are required. Before nuclear data can be used in applications they must first be evaluated, tested and validated against a set of integral experiments, and then converted into formats usable for applications. In the past, the evaluation process was usually based on differential experimental data complemented with nuclear model calculations. This trend is changing fast due to the increase in computational power and tremendous improvements in nuclear reaction models over the last decade. Since these models have uncertain inputs, they are normally calibrated using experimental data. However, these experiments are themselves not exact, and therefore the quantities calculated with model codes, such as cross sections and angular distributions, contain uncertainties. Since nuclear data are used as input to reactor transport codes, the output of those codes contains uncertainties due to the data as well. Quantifying these uncertainties is important for setting safety margins, for providing confidence in the interpretation of results, and for deciding where additional effort is needed to reduce them. Also, regulatory bodies are moving away from conservative evaluations towards best-estimate calculations accompanied by uncertainty evaluations. In this work, the Total Monte Carlo (TMC) method was applied to study the impact of nuclear data uncertainties from basic physics to macroscopic reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). As part of the work, nuclear data uncertainties of the actinides in the fuel, the lead isotopes in the coolant, and some structural materials were investigated. In the case of the lead coolant it was observed that the uncertainties in keff and in the coolant void worth (except for 204Pb) were large, with the most significant contribution coming from 208Pb. New 208Pb and 206Pb random nuclear data libraries with realistic central values were produced as part of this work. A correlation-based sensitivity method was also used to determine correlations between reactor parameters and cross sections for different isotopes and energy groups. Furthermore, an accept/reject method and a method of assigning file weights based on the likelihood function are proposed for uncertainty reduction using criticality benchmark experiments within the TMC method; a significant reduction in nuclear data uncertainty was obtained for some isotopes for ELECTRA after incorporating the integral benchmark information. As a further objective of this thesis, a method for selecting benchmarks for code validation for specific reactor applications was developed and applied to the ELECTRA reactor. Finally, a method for combining differential experiments and integral benchmark data for nuclear data adjustment is proposed and applied to the adjustment of neutron-induced 208Pb nuclear data in the fast energy region.
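The likelihood-based file weighting mentioned above can be sketched in a few lines (invented numbers, not the thesis data or its actual implementation): each random nuclear data file is weighted by how well it reproduces a criticality benchmark, and the weighted spread of the application-case keff shrinks accordingly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented example: each random nuclear data file predicts k_eff for the application
# case and for one criticality benchmark (the two are correlated through the data).
n_files = 500
keff_app = rng.normal(0.95, 2.0e-3, n_files)
keff_bench = 1.0 + 0.8 * (keff_app - 0.95) + rng.normal(0.0, 1.0e-3, n_files)

E_bench, sigma_bench = 1.0000, 1.2e-3        # measured benchmark k_eff and uncertainty

# Likelihood-based weights: files that reproduce the benchmark well count more.
chi2 = ((keff_bench - E_bench) / sigma_bench) ** 2
weights = np.exp(-0.5 * chi2)
weights /= weights.sum()

mean_post = np.sum(weights * keff_app)
sigma_post = np.sqrt(np.sum(weights * (keff_app - mean_post) ** 2))

print(f"prior spread    : {1e5 * keff_app.std(ddof=1):.0f} pcm")
print(f"weighted spread : {1e5 * sigma_post:.0f} pcm")
```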
3

A new unresolved resonance region methodology

Holcomb, Andrew Michael 07 January 2016
A new method for constructing probability tables in the Unresolved Resonance Region (URR) has been developed. This new methodology is an extensive modification of the Single-Level Breit-Wigner (SLBW) resonance-pair sequence method commonly used to generate probability tables in the URR. Using a Monte Carlo process, many resonance-pair sequences are generated by sampling the average resonance parameter data for the unresolved resonance region from the ENDF data file. The resonance parameters are then converted to the Reich-Moore representation to take advantage of the more robust R-Matrix Limited (RML) format. For each sampled set of resonance-pair sequences, the temperature-dependent cross sections are calculated on a small grid around the reference energy using the RML formalism and the Leal-Hwang Doppler broadening methodology. The effective cross sections calculated at the reference energy are then used to construct probability tables in the unresolved resonance region. The RML cross section reconstruction algorithm has been rigorously tested for a variety of isotopes, including O-16, F-19, Cl-35, Fe-56, Cu-63, and Cu-65. The new URR method also produced normalized cross-section factor probability tables for U-238 that were found to be in agreement with current standards. The modified U-238 probability tables were shown to produce k-eff results in excellent agreement with several standard benchmarks, including IEU-MET-FAST-007, IEU-MET-FAST-003, and IEU-COMP-FAST-004.
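The probability-table construction itself can be illustrated compactly: once many cross-section values at the reference energy are available (one per sampled resonance-pair sequence), they are sorted into equiprobable bands whose probabilities and band-averaged cross sections form the table. The sketch below uses an invented lognormal sample in place of real URR reconstructions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in for the sampled total cross sections at one reference energy,
# one value per sampled resonance-pair sequence (not real URR data).
xs = rng.lognormal(mean=np.log(12.0), sigma=0.35, size=20000)   # barns

n_bands = 8
edges = np.quantile(xs, np.linspace(0.0, 1.0, n_bands + 1))     # equiprobable band edges
band = np.clip(np.searchsorted(edges, xs, side="right") - 1, 0, n_bands - 1)

for b in range(n_bands):
    sel = band == b
    print(f"band {b}: P = {sel.mean():.3f}   <sigma_t> = {xs[sel].mean():6.2f} b")
```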
4

Generation of high fidelity covariance data sets for the natural molybdenum isotopes including a series of molybdenum sensitive critical experiment designs

Van der Hoeven, Christopher Ainslie 15 October 2013
Quantification of uncertainty in computational models of nuclear systems is required for assessing margins of safety for both design and operation of those systems. The largest source of uncertainty in computational models of nuclear systems derives from the nuclear cross section data used for modeling. There are two parts to cross section uncertainty data: the relative uncertainty in the cross section at a particular energy, and how that uncertainty is correlated with the uncertainty at all other energies. This cross section uncertainty and uncertainty correlation is compiled as covariance data. High fidelity covariance data exist for a few key isotopes; however, the covariance data available for many structural materials are considered low fidelity, being derived primarily from integral measurements with little meaningful correlation between energy regions. Low fidelity covariance data are acceptable for materials to which the operating characteristics of the modeled nuclear system are insensitive. In some cases, however, nuclear systems can be sensitive to isotopes with only low fidelity covariance data. Such is the case for the new U(19.5%)-10Moly foil fuel form to be produced at the Y-12 National Security Complex for use in research and test reactors. This fuel is ten weight percent molybdenum, the isotopes of which have only low fidelity covariance data. Improvements to the molybdenum isotope covariance data would benefit the modeling of systems using the new fuel form. This dissertation provides a framework for deriving high fidelity molybdenum isotope covariance data from a set of elemental molybdenum experimental cross section results. Additionally, a series of critical experiments featuring the new Y-12 fuel form was designed to address deficiencies in the critical experiment library with respect to the molybdenum isotopes. Along with existing molybdenum-sensitive critical experiments, these proposed experiments were used as a basis to compare the performance of the new high fidelity molybdenum covariance data set with the existing low fidelity data set using the nuclear modeling code SCALE. The use of the high fidelity covariance data was found to result in reduced overall bias, reduced bias due to the molybdenum isotopes, and improved goodness-of-fit of computational results to experimental results.
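The two ingredients of covariance data described above combine in a simple way: with u_i the relative standard deviation of the cross section in energy group i and rho_ij the correlation between groups i and j, the relative covariance matrix is C_ij = rho_ij u_i u_j. A small invented example (not the actual molybdenum evaluation) is sketched below.

```python
import numpy as np

# Invented 5-group example (not the molybdenum evaluation itself).
rel_unc = np.array([0.08, 0.06, 0.05, 0.04, 0.07])   # relative standard deviation per group

# Smooth correlation decaying with group distance; low-fidelity data would instead
# be nearly diagonal, i.e. little meaningful correlation between energy regions.
g = np.arange(rel_unc.size)
corr = np.exp(-0.5 * np.abs(g[:, None] - g[None, :]))

cov = corr * np.outer(rel_unc, rel_unc)   # relative covariance: C_ij = rho_ij * u_i * u_j

print(np.round(cov, 4))
```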
5

Modelo híbrido de banco de dados relacional, de alto desempenho e capacidade de armazenamento, para aplicações voltadas à engenharia nuclear / Relational database hybrid model, of high performance and storaging capacity, for nuclear engineering applications

GOMES NETO, JOSE 09 October 2014
The objective of this work is to present the relational database, named FALCAO, which was created and implemented to store the variables monitored in the IEA-R1 research reactor, located at the Instituto de Pesquisas Energéticas e Nucleares, IPEN-CNEN/SP. The logical data model and its direct influence on the integrity of the information provided are carefully considered. The concepts and steps of normalization and denormalization are presented, including the entities and relationships of the logical data model. The influence of the data model's relationships and rules on the processes of acquisition, loading and delivery of the final information is also examined from a performance point of view, since these processes occur in batches and within short time intervals. The SACD application, through its functionalities, presents the information stored in the FALCAO database in a practical and optimized way. The implementation of the FALCAO database was as successful as expected and has proven indispensable to the daily work of the researchers involved, owing to the substantial improvement in the processes and in their associated reliability.
6

Mesures de perturbations sur le réacteur CALIBAN : interprétation en terme de qualification des données nucléaires / Reactivity worth measurements on the CALIBAN reactor: interpretation of integral experiments for nuclear data validation

Richard, Benoit 19 December 2012
A good knowledge of nuclear data, the input parameters of neutron transport calculation codes, is necessary to support the advances of the nuclear industry. The purpose of this work is to provide pertinent information for the integral validation of nuclear data. Reactivity worth measurements have been performed on the Caliban reactor; they concern four materials of interest for the nuclear industry: gold, lutetium, plutonium and uranium-238. Experiments conducted to improve the characterization of the core are also described and discussed, as they are necessary for a proper interpretation of the reactivity worth measurements. The experimental procedures are described together with their associated uncertainties, and the measurements are then compared to numerical results. The methods used in the numerical calculations are reported, especially the generation of multigroup cross sections for deterministic codes. The modeling of the experiments is presented along with the associated uncertainties. This comparison led to an interpretation in terms of the qualification of nuclear data libraries. The observed discrepancies are reported, discussed, and justify the need for such experiments.
7

Nuclear data uncertainty propagation and uncertainty quantification in nuclear codes

Fiorito, Luca 03 October 2016
Uncertainties in nuclear model responses must be quantified to define safety limits, minimize costs and define operational conditions in design. Response uncertainties can also be used to provide feedback on the quality and reliability of parameter evaluations, such as nuclear data. The uncertainties of the predictive model responses stem from several sources, e.g. nuclear data, model approximations, numerical solvers, and the influence of random variables. It has been shown that the largest quantifiable sources of uncertainty in nuclear models, such as neutronics and burnup calculations, are the nuclear data, which are provided as evaluated best estimates and uncertainties/covariances in data libraries. Nuclear data uncertainties and/or covariances must be propagated to the model responses with dedicated uncertainty propagation tools. However, most nuclear codes for neutronics and burnup models do not have these capabilities and produce best-estimate results without uncertainties. In this work, nuclear data uncertainty propagation was concentrated on the SCK•CEN burnup code ALEPH-2 and the Monte Carlo N-Particle code MCNP. Two sensitivity analysis procedures, FSAP and ASAP, based on linear perturbation theory, were implemented in ALEPH-2. These routines can propagate nuclear data uncertainties in pure decay models. ASAP and ALEPH-2 were tested and validated on decay heat and its uncertainty quantification for several fission pulses and for the MYRRHA subcritical system. The decay heat uncertainty is needed to assess the reliability of the decay heat removal systems and to prevent overheating and mechanical failure of the reactor components. It was shown that the propagation of independent fission yield and decay data uncertainties can also be carried out with ASAP in neutron irradiation models. Because of the limitations of ASAP, the Monte Carlo sampling solver NUDUNA was used to propagate cross section covariances. The applicability constraints of ASAP drove our studies towards the development of a tool that could propagate the uncertainty of any nuclear datum and operate with multiple nuclear codes and systems, including non-linear models. The Monte Carlo sampling code SANDY was therefore developed. SANDY is independent of the predictive model, as it only interacts with the nuclear data given in input. Nuclear data are sampled from multivariate probability density functions and propagated through the model according to Monte Carlo sampling theory. Not only can SANDY propagate nuclear data uncertainties and covariances to the model responses, it can also identify the impact of each uncertainty contributor by decomposing the response variance. SANDY was extensively tested against integral parameters and was used to quantify the neutron multiplication factor uncertainty of the VENUS-F reactor. Further uncertainty propagation studies were carried out for the burnup models of light water reactor benchmarks. These studies identified fission yields as the largest source of uncertainty for the nuclide density evolution curves of several fission products. However, the current data libraries provide evaluated fission yields and uncertainties devoid of covariance matrices. The lack of fission yield covariance information does not comply with the conservation equations that apply to a fission model, and generates inconsistency in the nuclear data.
In this work, we generated fission yield covariance matrices using a generalised least-squares method and a set of physical constraints. The fission yield covariance matrices resolve the inconsistency in the nuclear data libraries and reduce the role of the fission yields in the uncertainty quantification of burnup model responses.
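The variance decomposition attributed to the Monte Carlo sampling approach above can be sketched with a toy response in place of a real burnup calculation (invented values; this is not SANDY's interface): sample all inputs together to get the total response variance, then vary one input group at a time to estimate its share.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy response standing in for a burnup code output: the activity of one fission
# product after irradiation, driven by a fission yield y and a decay constant lam.
def response(y, lam, t=1.0e5):
    return y * (1.0 - np.exp(-lam * t))

# Assumed best estimates and independent relative uncertainties (invented values).
y0, lam0 = 0.06, 3.0e-5
u_y, u_lam = 0.04, 0.10

n = 10_000
y_s = rng.normal(y0, u_y * y0, n)
lam_s = rng.normal(lam0, u_lam * lam0, n)

total_var = response(y_s, lam_s).var(ddof=1)
var_y = response(y_s, lam0).var(ddof=1)      # only the fission yield varied
var_lam = response(y0, lam_s).var(ddof=1)    # only the decay constant varied

print(f"fission yield share : {100 * var_y / total_var:.0f} %")
print(f"decay constant share: {100 * var_lam / total_var:.0f} %")
```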
8

Avaliação de dados nucleares para dosimetria de nêutrons / Evaluation of nuclear data for neutron dosimetry

Tardelli, Tiago Cardoso 01 November 2013
Absorbed doses and effective doses can be calculated using radiation transport computer codes. The quality of these calculations depends on the nuclear data used; however, there is little information about the differences in dose caused by the use of different libraries. The objective of this study is to compare the dose values (absorbed and effective) obtained with different nuclear data libraries for an external neutron source in the energy range from 10^-11 to 20 MeV. The nuclear data libraries used are JENDL 4.0, JEFF 3.1.1 and ENDF/B-VII.0. The dose calculations were carried out with the MCNPX code using the anthropomorphic ICRP 110 reference model. The differences in absorbed dose values between the JEFF 3.1.1 and ENDF/B-VII.0 libraries are small, around 1%, but the results obtained with JENDL 4.0 show differences of up to 85% compared with the ENDF/B-VII.0 and JEFF 3.1.1 results. The differences in effective dose are around 1.5% between ENDF/B-VII.0 and JEFF 3.1.1, and 11% between ENDF/B-VII.0 and JENDL 4.0.
9

Développement et validation de schémas de calcul dédiés à l'interprétation des mesures par oscillation pour l'amélioration des données nucléaires / Development and validation of calculation schemes dedicated to the interpretation of small reactivity effects for nuclear data improvement

Gruel, Adrien 24 October 2011
Reactivity measurements by the oscillation technique, such as those performed in the Minerve reactor, give access to various neutronic parameters of materials, fuels or specific isotopes. The expected reactivity effects are usually small, about ten pcm at most, so the modeling of these experiments must be very precise in order to obtain reliable feedback on the targeted parameters. In particular, calculation biases must be clearly identified, quantified and reduced to obtain accurate information on nuclear data. The goal of this thesis is to develop a reference calculation scheme, with well-quantified uncertainties, for the interpretation of in-pile oscillation experiments. Several methods for calculating these small reactivity effects are presented, based on deterministic and/or stochastic calculation codes. The methods are compared on a numerical benchmark against a reference calculation. Three applications are presented in detail: a purely deterministic calculation using the exact perturbation theory formalism, used for the experimental validation of fission product cross sections in the frame of reactivity-loss studies for irradiated fuel; a hybrid method, based on a stochastic calculation and exact perturbation theory, used to obtain precise feedback on the basic nuclear data of specific isotopes, here 241Am; and a third method, based on a perturbative Monte Carlo calculation, used in a design study.
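For reference, one common form of the exact perturbation expression used in such schemes is sketched below, written under the convention rho = 1 - 1/k, with reference quantities (transport operator A, fission operator F, flux phi, adjoint flux phi†, eigenvalue k) and perturbed quantities primed; the thesis may use a different but equivalent formulation.

```latex
\Delta\rho = \frac{1}{k} - \frac{1}{k'}
           = \frac{\left\langle \phi^{\dagger},\,\left(\tfrac{1}{k}\,\Delta F - \Delta A\right)\phi' \right\rangle}
                  {\left\langle \phi^{\dagger},\, F'\,\phi' \right\rangle},
\qquad \Delta A = A' - A, \quad \Delta F = F' - F .
```

Only the reference adjoint flux and the perturbed direct flux enter, and the expression is exact, which makes it well suited to small reactivity effects that would otherwise be computed as the difference of two nearly equal eigenvalues.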
