  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
391

Measurement of neutron flux spectra in a Tungsten Benchmark by neutron foil activation method / Messung der Neutronenflussspektren in einem Wolfram-Benchmark mit der Multifolien-Neutronenaktivierungstechnik

Negoita, Cezar Ciprian 16 August 2004 (has links) (PDF)
The nuclear design of fusion devices such as ITER (International Thermonuclear Experimental Reactor), an experimental fusion reactor based on the "tokamak" concept, relies on the results of neutron-physics calculations. These depend on knowledge of the neutron and photon flux spectra, which is particularly important because it allows the response of the whole structure to phenomena such as nuclear heating, tritium breeding, atomic displacements, radiation shielding, power generation and material activation to be anticipated. The flux spectra can be calculated with transport codes, but validating measurements are also required. An important constituent of structural materials and divertor areas of fusion reactors is tungsten. This thesis deals with the measurement of the neutron fluence and neutron energy spectrum in a tungsten assembly by means of the multiple-foil neutron activation technique. In order to check and qualify the experimental tools and the codes to be used in the tungsten benchmark experiment, test measurements in the D-T and D-D neutron fields of the neutron generator at Technische Universität Dresden were performed. The characteristics of the D-D and D-T reactions used to produce monoenergetic neutrons, the selection of activation reactions suitable for fusion applications, and details of the activation measurements are presented. Corrections related to the neutron irradiation process and to the sample counting process are also discussed. The neutron fluence and its energy distribution in a tungsten benchmark, irradiated at the Frascati Neutron Generator with 14 MeV neutrons produced by the T(d, n)4He reaction, are then derived from measurements of the neutron-induced γ-ray activity in the foils using the STAYNL unfolding code, based on the linear least-squares method, together with the IRDF-90.2 (International Reactor Dosimetry File) cross-section library. The differences between the neutron flux spectra measured by neutron foil activation and those obtained in the same assembly with an NE213 liquid-scintillation spectrometer were studied. The comparison of the measured neutron spectra with spectra calculated with the MCNP-4B (Monte Carlo neutron and photon transport) code, which provides a crucial test of the evaluated nuclear data used in fusion reactor design, is also discussed. In conclusion, this thesis demonstrates the applicability of the neutron foil activation technique for measuring neutron flux spectra inside a thick tungsten assembly irradiated with 14 MeV neutrons from a D-T generator. / Die Konstruktion von Fusionsreaktoren wie ITER (International Thermonuclear Experimental Reactor), der ein experimenteller Fusionsreaktor ist und auf dem "Tokamak"-Konzept beruht, basiert unter neutronenphysikalischen Gesichtspunkten auf den Ergebnissen von umfangreichen Simulationsrechnungen. Diese setzen die Kenntnis der Spektren des Neutronen- und Photonenflusses voraus, die besonders wichtig ist, weil sie erlaubt, die möglichen Antworten der ganzen Struktur auf physikalische Prozesse vorauszuberechnen, wie z.B. Heizen durch nukleare Prozesse, Tritium-Brüten, Atomverschiebung, Abschirmung von Strahlung, Leistungserzeugung und Materialaktivierung. Die Flußspektren können mittels Transportcodes berechnet werden, aber es werden auch Messungen zu ihrer Bestätigung benötigt. Ein wichtiger Bestandteil des Strukturmaterials und der Divertor-Flächen der Fusionsreaktoren ist Wolfram.
Diese Dissertation behandelt die Messungen der Neutronenspektren und -fluenz in einer Wolfram-Anordnung mittels der Multifolien-Neutronenaktivierungstechnik. Um die anzuwendenden experimentellen Geräte und die Codes, die im Wolfram-Benchmark-Experiment eingesetzt werden, zu überprüfen und zu bestimmen, wurden Testmessungen in den D-T und D-D Neutronenfeldern des Neutronengenerators der Technischen Universität Dresden durchgeführt. Die Eigenschaften der D-T und D-D Reaktionen, die für die Erzeugung von monoenergetischen Neutronen verwendet werden, sowie die Auswahl der Aktivierungsreaktionen, die für Fusionsanwendungen geeignet sind, und die Aktivierungsmessungen werden detailliert vorgestellt. Korrekturen, die sich auf den Neutronen-Bestrahlungsprozess und auf den Probenzählungsprozess beziehen, werden ebenfalls besprochen. Die Neutronenfluenz und ihre Energieverteilung in einem Wolfram-Benchmark, bestrahlt am Frascati Neutronen Generator mit 14 MeV-Neutronen aus der T(d, n)4He Reaktion, werden aus den Messungen der γ-Strahlenaktivität, die von Neutronen in den Folien induziert ist, durch den STAYNL Entfaltungscode, der auf der Methode der kleinsten Fehlerquadrate basiert, zusammen mit der IRDF-90.2 Wirkungsquerschnitt-Bibliothek abgeleitet. Die Unterschiede zwischen den Neutronenflußspektren, die mit Hilfe der Multifolien-Neutronenaktivierung ermittelt wurden, und den Neutronenflußspektren, gemessen im selben Aufbau mit einem NE-213 Flüssigszintillator, wurden untersucht. Die gemessenen Neutronenspektren werden den aus MCNP-4B Rechnungen (Monte Carlo neutron and photon transport) ermittelten Spektren gegenübergestellt. Der Vergleich stellt einen wichtigen Test der evaluierten Kerndaten für Fusionsreaktorkonzepte dar. Zusammenfassend zeigt diese Arbeit die Anwendbarkeit der Multifolien-Neutronenaktivierungstechnik bei Messungen der Neutronenflussspektren innerhalb eines massiven Wolframblocks bei Bestrahlung mit schnellen Neutronen aus D-T Generatoren.
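The multiple-foil method described in this record reduces, at its core, to a small linear inverse problem: each foil's measured activity is a cross-section-weighted sum of the unknown group fluxes, which an unfolding code solves by least squares. The sketch below illustrates only that core idea; the group structure, cross sections, activities and noise level are invented for illustration and do not reproduce STAYNL or the IRDF-90.2 data.

```python
import numpy as np

# Minimal few-group unfolding sketch: the saturation activity of foil i is
# modelled as A_i = sum_j sigma_ij * phi_j, where sigma_ij is the group-averaged
# activation cross section of reaction i in energy group j and phi_j is the
# unknown group flux. All numbers are invented for illustration only.
sigma = np.array([
    [0.02, 0.10, 0.60, 1.20, 1.50],   # threshold reaction, sensitive to high energies
    [0.00, 0.05, 0.40, 1.00, 1.30],
    [1.80, 0.90, 0.30, 0.10, 0.05],   # capture reaction, sensitive to low energies
    [0.50, 0.70, 0.80, 0.60, 0.40],
    [0.01, 0.02, 0.20, 0.90, 1.40],
])

phi_true = np.array([2.0, 1.5, 1.0, 0.8, 0.5])          # "true" group fluxes
activities = sigma @ phi_true                            # idealised measured activities
activities += np.random.default_rng(1).normal(0.0, 0.01, activities.shape)  # counting noise

# Weighted linear least squares: minimise ||W (sigma @ phi - A)||^2.
# (A real unfolding code also uses an a-priori spectrum and full covariances.)
W = np.diag(1.0 / (0.01 * activities + 1e-9))
phi_unfolded, *_ = np.linalg.lstsq(W @ sigma, W @ activities, rcond=None)
print("unfolded group fluxes:", np.round(phi_unfolded, 3))
```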
392

Online-instrumentering på avloppsreningsverk : status idag och effekter av givarfel på reningsprocessen / Online sensors in wastewater treatment plants : status today and the effects of sensor faults on the treatment process

Ahlström, Marcus January 2018 (has links)
Effektiviteten av automatiserade reningsprocesser inom avloppsreningsverk beror ytterst på kvaliteten av de mätdata som fås från installerade instrument. Givarfel påverkar verkens styrning och är ofta anledningen till att olika reglerstrategier fallerar. Idag saknas standardiserade riktlinjer för hur instrumenteringsarbetet på svenska reningsverk bör organiseras, vilket ger begränsade förutsättningar för reningsverken att resurseffektivt nå sina utsläppskrav. Mycket forskning har gjorts på att optimera olika reglerstrategier men instrumentens roll i verkens effektivitet har inte givits samma uppmärksamhet. Syftet med detta examensarbete har varit att undersöka hur instrumentering på reningsverk kan organiseras och struktureras för att säkerställa mätdata av god kvalitet och att undersöka effekter av givarfel på reningsprocessen. Inom arbetet genomfördes en litteraturstudie där instrumentering på reningsverk undersöktes. Effekter av givarfel på reningsprocessen undersöktes genom att simulera en fördenitrifikationsprocess i Benchmark Simulation Model no. 2 där bias och drift implementerades i olika givare. Simuleringar visade att positiva bias (0,10–0,50 mg/l) i en ammoniumgivare inom en kaskadreglering bidrar till att öka luftförbrukningen med cirka 4–25 %. Vidare resulterade alla typer av fel i DO-givare i den sista aeroba bassängen i en markant större påverkan på reningsprocessen än samma fel i DO-givare i någon av de tidigare aeroba bassängerna. Om den sista aeroba bassängen är designad för att hålla lägre syrehalter är DO-givaren i den bassängen den viktigaste DO-givaren att underhålla. Positiva bias (200–1 000 mg/l) i TSS-givare som används för att styra uttaget av överskottsslam bidrog till kraftiga ökningar av mängden ammonium med cirka 29–464 % i utgående vatten. Negativ drift i DO-givare visade att stora besparingar i luftningsenergi, cirka 4 %, var möjliga genom ett mer frekvent underhåll av DO-givarna. Huruvida ett instrument lider av ett positivt eller negativt givarfel, bias eller drift, kommer att påverka hur mycket och i vilken mån reningsprocessen påverkas. Studien av givarfel visade att effekten av ett positivt eller ett negativt fel varierade och att effekten på reningsprocessen inte var linjär. Effekten av givarfel på reningsprocessen kommer i slutändan att bero på den implementerade reglerstrategin, inställningar i regulatorerna och på den styrda processen. / The effectiveness of automated treatment processes within wastewater treatment plants ultimately depends on the quality of the measurement data provided by the installed sensors. Sensor faults affect the control of the treatment plants and are often the reason different control strategies fail. Today there is a lack of standardized guidelines for how to organize and work with online sensors at Swedish wastewater treatment plants, which limits the opportunities for treatment plants to reach their effluent criteria in a resource-efficient manner. Much research has been done on ways to optimize control strategies but the role of sensors in the efficiency of the treatment plants has not been given the same level of attention. The purpose of this thesis has been to examine how instrumentation at wastewater treatment plants can be organized and structured to ensure good quality measurement data and to examine how sensor faults affect the treatment process. Within the thesis, a literature study was conducted in which instrumentation at wastewater treatment plants was examined.
The effects of sensor faults were examined by simulating a pre-denitrification process in Benchmark Simulation Model no. 2, where off-sets (biases) and drift were added to measurements from different implemented sensors. The simulations showed that positive off-sets (0.10–0.50 mg/l) in an ammonium sensor within a cascaded feedback loop increase the energy consumption used for aeration by roughly 4–25%. It could further be shown that all types of faults in a DO sensor in the last aerated basin had a significantly larger effect on the treatment process than the same fault in any of the other DO sensors in the preceding basins. If the last aerated basin is designed to have low DO concentrations, the DO sensor in that basin is the most important DO sensor to maintain. Positive off-sets (200–1 000 mg TSS/l) in suspended solids sensors used for control of the waste activated sludge flow contributed to large increases of ammonia, by roughly 29–464%, in the effluent. Negative drift in DO sensors showed that significant savings in aeration energy, roughly 4%, were possible to achieve with more frequent maintenance. Whether a sensor is affected by a positive or a negative fault, be it off-set or drift, will affect how much and in what way the treatment process is affected. The study of sensor faults showed that the effect of a positive or a negative fault varied and that the effect on the treatment process was not linear. The effect of a sensor fault on the treatment process will ultimately depend on the implemented control strategy, the settings in the controllers and the controlled process.
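The two fault types studied here are simple to state: a bias adds a constant off-set to the ideal measurement, while drift adds an error that grows with time since the last calibration. The sketch below shows the mechanism by which a positive ammonium bias inflates aeration in a cascade loop; the signal shape, fault magnitudes and the crude proportional set-point rule are illustrative assumptions, not the BSM2 implementation.

```python
import numpy as np

# Bias and drift superimposed on an ideal ammonium signal before it reaches the
# controller. All values are invented for illustration only.
t = np.linspace(0.0, 14.0, 14 * 96)            # 14 days, 15-minute samples
true_nh4 = 1.0 + 0.5 * np.sin(2 * np.pi * t)   # idealised NH4 concentration [mg/l]

bias = 0.25                                     # constant off-set [mg/l]
drift_rate = 0.05                               # drift [mg/l per day since calibration]
measured_bias = true_nh4 + bias
measured_drift = true_nh4 + drift_rate * t

# A positively biased NH4 reading makes the outer loop of a cascade controller
# request a higher DO set-point, hence more aeration; a crude proportional rule
# is enough to show the direction of the effect.
do_sp_ideal = np.clip(1.0 + 0.8 * (true_nh4 - 1.0), 0.5, 2.0)
do_sp_biased = np.clip(1.0 + 0.8 * (measured_bias - 1.0), 0.5, 2.0)

rel_increase = (do_sp_biased.mean() - do_sp_ideal.mean()) / do_sp_ideal.mean()
print(f"relative increase in average DO set-point: {rel_increase:.1%}")
```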
393

Agrégation de classements avec égalités : algorithmes, guides à l'utilisateur et applications aux données biologiques / Rank aggregation with ties: algorithms, user guidance and applications to biological data

Brancotte, Bryan 25 September 2015 (has links)
L'agrégation de classements consiste à établir un consensus entre un ensemble de classements (éléments ordonnés). Bien que ce problème ait de très nombreuses applications (consensus entre les votes d'utilisateurs, consensus entre des résultats ordonnés différemment par divers moteurs de recherche...), calculer un consensus exact est rarement faisable dans les cas d'applications réels (problème NP-difficile). De nombreux algorithmes d'approximation et heuristiques ont donc été conçus. Néanmoins, leurs performances (en temps et en qualité de résultat produit) sont très différentes et dépendent des jeux de données à agréger. Plusieurs études ont cherché à comparer ces algorithmes mais celles-ci n'ont généralement pas considéré le cas (pourtant courant dans les jeux de données réels) des égalités entre éléments dans les classements (éléments classés au même rang). Choisir un algorithme de consensus adéquat vis-à-vis d'un jeu de données est donc un problème particulièrement important à étudier (grand nombre d'applications) et c'est un problème ouvert au sens où aucune des études existantes ne permet d'y répondre. Plus formellement, un consensus de classements est un classement qui minimise la somme des distances entre ce consensus et chacun des classements en entrée. Nous avons considéré (comme une grande partie de l'état de l'art) la distance de Kendall-Tau généralisée, ainsi que des variantes, dans nos études. Plus précisément, cette thèse comporte trois contributions. Premièrement, nous proposons de nouveaux résultats de complexité associés aux cas que l'on rencontre dans les données réelles où les classements peuvent être incomplets et où plusieurs éléments peuvent être classés à égalité. Nous isolons les différents « paramètres » qui peuvent expliquer les variations au niveau des résultats produits par les algorithmes d'agrégation (par exemple, utilisation de la distance de Kendall-Tau généralisée ou de variantes, d'un pré-traitement des jeux de données par unification ou projection). Nous proposons un guide pour caractériser le contexte et le besoin d'un utilisateur afin de le guider dans le choix à la fois d'un pré-traitement de ses données mais aussi de la distance à choisir pour calculer le consensus. Nous proposons finalement une adaptation des algorithmes existants à ce nouveau contexte. Deuxièmement, nous évaluons ces algorithmes sur un ensemble important et varié de jeux de données à la fois réels et synthétiques reproduisant des caractéristiques réelles telles que la similarité entre classements, la présence d'égalités, et différents pré-traitements. Cette large évaluation passe par la proposition d'une nouvelle méthode pour générer des données synthétiques avec similarités, basée sur une modélisation en chaîne de Markov. Cette évaluation a permis d'isoler les caractéristiques des jeux de données ayant un impact sur les performances des algorithmes d'agrégation et de concevoir un guide pour caractériser le besoin d'un utilisateur et le conseiller dans le choix de l'algorithme à privilégier. Une plateforme web permettant de reproduire et d'étendre les analyses effectuées est disponible (rank-aggregation-with-ties.lri.fr). Enfin, nous démontrons l'intérêt d'utiliser l'approche d'agrégation de classements dans deux cas d'utilisation.
Nous proposons un outil reformulant à la volée des requêtes textuelles d'utilisateur grâce à des terminologies biomédicales, pour ensuite interroger des bases de données biologiques, et finalement produire un consensus des résultats obtenus pour chaque reformulation (conqur-bio.lri.fr). Nous comparons l'outil à la plateforme de référence et montrons une amélioration nette de la qualité des résultats. Nous calculons aussi des consensus entre des listes de workflows établies par des experts dans le contexte de la similarité entre workflows scientifiques. Nous observons que les consensus calculés sont très en accord avec les utilisateurs dans une large proportion de cas. / The rank aggregation problem is to build a consensus among a set of rankings (ordered elements). Although this problem has numerous applications (consensus among user votes, consensus between results ordered differently by different search engines...), computing an optimal consensus is rarely feasible in real applications (the problem is NP-hard). Many approximation algorithms and heuristics have therefore been designed. However, their performance (running time and quality of the produced consensus) varies widely and depends on the datasets to be aggregated. Several studies have compared these algorithms, but they have generally not considered the case, though common in real datasets, where elements are tied in the rankings (elements at the same rank). Choosing a consensus algorithm for a given dataset is therefore a particularly important issue to study (many applications), and it is an open problem in the sense that none of the existing studies addresses it. More formally, a consensus ranking is a ranking that minimizes the sum of the distances between this consensus and the input rankings. Like much of the state of the art, we considered the generalized Kendall-Tau distance, and variants, in our studies. Specifically, this thesis makes three contributions. First, we propose new complexity results for the cases encountered in real data, where rankings may be incomplete and where multiple elements can be ranked equally (ties). We isolate the different "features" that can explain variations in the results produced by the aggregation algorithms (for example, using the generalized Kendall-Tau distance or variants, or pre-processing the datasets with unification or projection). We propose a guide to characterize the context and needs of a user in order to guide the choice of both a pre-processing of the datasets and the distance to use to compute the consensus. We finally adapt existing algorithms to this new context. Second, we evaluate these algorithms on a large and varied set of datasets, both real and synthetic, reproducing real features such as similarity between rankings, the presence of ties and different pre-processings. This large evaluation comes with the proposal of a new method to generate synthetic data with similarities, based on a Markov chain model. This evaluation allowed us to isolate the dataset features that impact the performance of the aggregation algorithms, and to design a guide to characterize the needs of a user and advise on the choice of algorithm. A web platform to replicate and extend these analyses is available (rank-aggregation-with-ties.lri.fr). Finally, we demonstrate the value of the rank aggregation approach in two use cases.
We provide a tool that reformulates user text queries on the fly using biomedical terminologies, queries biological databases, and produces a consensus of the results obtained for each reformulation (conqur-bio.lri.fr). We compare the tool to the reference platform and show a clear improvement in result quality. We also compute consensus rankings between lists of workflows established by experts in the context of scientific workflow similarity. We observe that the computed consensus agrees with the experts in a very large majority of cases.
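The objective minimized in this work is a sum, over all input rankings, of pairwise disagreement counts. A minimal sketch of one common variant of the generalized Kendall-Tau distance with ties is given below; the bucket encoding and the penalty p = 0.5 for pairs tied in exactly one ranking are illustrative conventions, not necessarily the exact definition used in the thesis.

```python
from itertools import combinations

def generalized_kendall_tau(r1, r2, p=0.5):
    """Generalized Kendall-Tau distance between two rankings with ties.

    Each ranking maps an element to its bucket index (0 = best); elements with
    the same bucket index are tied. A pair of elements counts 1 if the two
    rankings order it in opposite directions, and p if it is tied in exactly
    one of the two rankings (one common convention; other variants exist).
    """
    elements = set(r1) & set(r2)          # restrict to elements ranked in both
    dist = 0.0
    for a, b in combinations(sorted(elements), 2):
        d1 = r1[a] - r1[b]
        d2 = r2[a] - r2[b]
        if d1 * d2 < 0:                   # strictly opposite orders
            dist += 1.0
        elif (d1 == 0) != (d2 == 0):      # tied in exactly one ranking
            dist += p
    return dist

# Toy example: rankings over elements a..d, bucket index from best to worst.
r1 = {"a": 0, "b": 0, "c": 1, "d": 2}     # a and b tied at the top
r2 = {"a": 0, "b": 1, "c": 1, "d": 2}
print(generalized_kendall_tau(r1, r2))    # 0.5 + 0.5 = 1.0
```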
394

Garbage Collected CRDTs on the Web : Studying the Memory Efficiency of CRDTs in a Web Context

Rehn, Michael January 2020 (has links)
In today's connected society, where it is common to have several connected devices per capita, it is more important than ever that the data you need is omnipresent, i.e. it is available when you need it, no matter where you are. We identify one key technology and platform that could be the future: peer-to-peer communication and the Web. Unfortunately, guaranteeing consistency and availability between users in a peer-to-peer network, where network partitions are bound to happen, can be a challenging problem to solve. To solve these problems, we turned to a promising category of data types called CRDTs (Conflict-free Replicated Data Types). By following the scientific tradition of reproduction, we build upon previous research on a CRDT framework and adjust it to work in a peer-to-peer Web environment, i.e. it runs in a Web browser. CRDTs make use of meta-data to ensure consistency, and it is imperative to remove this meta-data once it no longer has any use; if not, memory usage grows without bound, making the CRDT impractical for real-world use. There are different garbage collection techniques that can be applied to remove this meta-data. To investigate whether the CRDT framework and the different garbage collection techniques are suitable for the Web, we try to reproduce previous findings by running our implementation through a series of benchmarks. We test whether our implementation works correctly on the Web, as well as comparing the memory efficiency of different garbage collection techniques. In doing this, we also proved the correctness of one of these techniques. The results from our experiments showed that the CRDT framework was well adjusted to the Web environment and worked correctly. However, while we could observe behaviour similar to previous research across the different garbage collection techniques, we achieved lower relative memory savings than expected. An additional insight was that for long-running systems that often reset their shared state, it might be more efficient not to apply any garbage collection technique at all. There is still much work to be done to allow for omnipresent data on the Web, but we believe that this research contains two main takeaways. The first is that the general CRDT framework is well suited for the Web and that in practice it might be more efficient to choose different garbage collection techniques depending on your use case. The second takeaway is that by reproducing previous research, we can still advance the current state of the field and generate novel knowledge; indeed, by combining previous ideas in a novel environment, we are now one step closer to a future with omnipresent data. / I dagens samhälle är vi mer uppkopplade än någonsin. Tack vare det faktum att vi nu ofta har fler än en uppkopplad enhet per person, så är det viktigare än någonsin att ens data är tillgänglig på alla ens enheter, oavsett var en befinner sig. Två tekniker som kan möjliggöra denna "allnärvaro" av data är Webben, alltså kod som körs på en Webbläsare, tillsammans med peer-to-peer-kommunikation; men att säkerställa att distribuerad data både är tillgänglig och likadan för alla enheter är svårt, speciellt när enhetens internetanslutning kan brytas när som helst. Conflict-free replicated data-types (CRDT:er) är en lovande klass av datatyper som löser just dessa typer av problem i distribuerade system; genom att använda sig av meta-data, så kan CRDT:er fortsätta fungera trots att internetanslutningen brutits.
Dessutom är de garanterade att konvergera till samma sluttillstånd när anslutningen upprättas igen. Däremot lider CRDT:er av ett speciellt problem: denna meta-data tar upp mycket minne trots att den inte har någon användning efter en stund. För att göra datatypen mer minneseffektiv så kan meta-datan rensas bort i en process som kallas för skräpsamling. Vår idé var därför att reproducera tidigare forskning om ett ramverk för CRDT:er och försöka anpassa denna till att fungera på Webben. Vi reproducerar dessutom olika metoder för skräpsamling för att undersöka om de, för det första fungerar på Webben, och för det andra är lika effektiva i denna nya miljö som den tidigare forskningen pekar på. Resultaten från våra experiment visade att CRDT-ramverket och dess olika skräpsamlingsmetoder kunde anpassas till att fungera på Webben. Däremot så noterade vi något högre relativ minnesanvändning än vad vi hade förväntat oss, trots att beteendet i stort var detsamma som i den tidigare forskningen. En ytterligare upptäckt var att i vissa specifika fall så kan det vara mer effektivt att inte applicera någon skräpsamling alls. Trots att det är mycket arbete kvar för att använda CRDT:er peer-to-peer på Webben för att möjliggöra "allnärvarande" data, så innehåller denna uppsats två huvudsakliga punkter. För det första så fungerar det att anpassa CRDT-ramverket och dess olika skräpsamlingsmetoder till Webben, men ibland är det faktiskt bättre att inte applicera någon skräpsamling alls. För det andra så visas vikten av att reproducera tidigare forskning: inte bara visar uppsatsen att tidigare CRDT-forskning kan appliceras i andra miljöer, dessutom kan ny kunskap hämtas ur en sådan reproducering.
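To make the meta-data problem concrete, the sketch below implements a tiny observed-remove set, a textbook CRDT in which removals leave behind tombstones; those tombstones are exactly the kind of meta-data that a garbage-collection step discards once every replica is known to have seen the removal. This is a generic illustration under that assumption, not the framework or the collection techniques studied in the thesis.

```python
import uuid

class ORSet:
    """Tiny observed-remove set CRDT sketch.

    Every add is tagged with a unique id; a remove stores the observed tags as
    tombstones. The tombstones are the meta-data that grows without bound and
    that garbage collection can drop once all replicas have seen the removes
    (here simulated by an explicit call at a known stability point).
    """
    def __init__(self):
        self.adds = {}        # element -> set of unique add tags
        self.tombstones = {}  # element -> set of removed tags

    def add(self, element):
        self.adds.setdefault(element, set()).add(uuid.uuid4().hex)

    def remove(self, element):
        observed = self.adds.get(element, set())
        self.tombstones.setdefault(element, set()).update(observed)

    def contains(self, element):
        live = self.adds.get(element, set()) - self.tombstones.get(element, set())
        return bool(live)

    def merge(self, other):
        for e, tags in other.adds.items():
            self.adds.setdefault(e, set()).update(tags)
        for e, tags in other.tombstones.items():
            self.tombstones.setdefault(e, set()).update(tags)

    def garbage_collect(self):
        """Drop tombstoned tags once every replica is known to have seen them."""
        for e, dead in self.tombstones.items():
            self.adds[e] = self.adds.get(e, set()) - dead
        self.tombstones.clear()

a, b = ORSet(), ORSet()
a.add("x"); b.add("y"); a.remove("x")
a.merge(b); b.merge(a)
print(b.contains("x"), b.contains("y"))   # False True
b.garbage_collect()                        # tombstone meta-data is discarded
```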
395

Measurement of neutron flux spectra in a Tungsten Benchmark by neutron foil activation method

Negoita, Cezar Ciprian 19 August 2004 (has links)
The nuclear design of fusion devices such as ITER (International Thermonuclear Experimental Reactor), an experimental fusion reactor based on the "tokamak" concept, relies on the results of neutron-physics calculations. These depend on knowledge of the neutron and photon flux spectra, which is particularly important because it allows the response of the whole structure to phenomena such as nuclear heating, tritium breeding, atomic displacements, radiation shielding, power generation and material activation to be anticipated. The flux spectra can be calculated with transport codes, but validating measurements are also required. An important constituent of structural materials and divertor areas of fusion reactors is tungsten. This thesis deals with the measurement of the neutron fluence and neutron energy spectrum in a tungsten assembly by means of the multiple-foil neutron activation technique. In order to check and qualify the experimental tools and the codes to be used in the tungsten benchmark experiment, test measurements in the D-T and D-D neutron fields of the neutron generator at Technische Universität Dresden were performed. The characteristics of the D-D and D-T reactions used to produce monoenergetic neutrons, the selection of activation reactions suitable for fusion applications, and details of the activation measurements are presented. Corrections related to the neutron irradiation process and to the sample counting process are also discussed. The neutron fluence and its energy distribution in a tungsten benchmark, irradiated at the Frascati Neutron Generator with 14 MeV neutrons produced by the T(d, n)4He reaction, are then derived from measurements of the neutron-induced γ-ray activity in the foils using the STAYNL unfolding code, based on the linear least-squares method, together with the IRDF-90.2 (International Reactor Dosimetry File) cross-section library. The differences between the neutron flux spectra measured by neutron foil activation and those obtained in the same assembly with an NE213 liquid-scintillation spectrometer were studied. The comparison of the measured neutron spectra with spectra calculated with the MCNP-4B (Monte Carlo neutron and photon transport) code, which provides a crucial test of the evaluated nuclear data used in fusion reactor design, is also discussed. In conclusion, this thesis demonstrates the applicability of the neutron foil activation technique for measuring neutron flux spectra inside a thick tungsten assembly irradiated with 14 MeV neutrons from a D-T generator. / Die Konstruktion von Fusionsreaktoren wie ITER (International Thermonuclear Experimental Reactor), der ein experimenteller Fusionsreaktor ist und auf dem "Tokamak"-Konzept beruht, basiert unter neutronenphysikalischen Gesichtspunkten auf den Ergebnissen von umfangreichen Simulationsrechnungen. Diese setzen die Kenntnis der Spektren des Neutronen- und Photonenflusses voraus, die besonders wichtig ist, weil sie erlaubt, die möglichen Antworten der ganzen Struktur auf physikalische Prozesse vorauszuberechnen, wie z.B. Heizen durch nukleare Prozesse, Tritium-Brüten, Atomverschiebung, Abschirmung von Strahlung, Leistungserzeugung und Materialaktivierung. Die Flußspektren können mittels Transportcodes berechnet werden, aber es werden auch Messungen zu ihrer Bestätigung benötigt. Ein wichtiger Bestandteil des Strukturmaterials und der Divertor-Flächen der Fusionsreaktoren ist Wolfram.
Diese Dissertation behandelt die Messungen der Neutronenspektren und -fluenz in einer Wolfram-Anordnung mittels der Multifolien-Neutronenaktivierungstechnik. Um die anzuwendenden experimentellen Geräte und die Codes, die im Wolfram-Benchmark-Experiment eingesetzt werden, zu überprüfen und zu bestimmen, wurden Testmessungen in den D-T und D-D Neutronenfeldern des Neutronengenerators der Technischen Universität Dresden durchgeführt. Die Eigenschaften der D-T und D-D Reaktionen, die für die Erzeugung von monoenergetischen Neutronen verwendet werden, sowie die Auswahl der Aktivierungsreaktionen, die für Fusionsanwendungen geeignet sind, und die Aktivierungsmessungen werden detailliert vorgestellt. Korrekturen, die sich auf den Neutronen-Bestrahlungsprozess und auf den Probenzählungsprozess beziehen, werden ebenfalls besprochen. Die Neutronenfluenz und ihre Energieverteilung in einem Wolfram-Benchmark, bestrahlt am Frascati Neutronen Generator mit 14 MeV-Neutronen aus der T(d, n)4He Reaktion, werden aus den Messungen der γ-Strahlenaktivität, die von Neutronen in den Folien induziert ist, durch den STAYNL Entfaltungscode, der auf der Methode der kleinsten Fehlerquadrate basiert, zusammen mit der IRDF-90.2 Wirkungsquerschnitt-Bibliothek abgeleitet. Die Unterschiede zwischen den Neutronenflußspektren, die mit Hilfe der Multifolien-Neutronenaktivierung ermittelt wurden, und den Neutronenflußspektren, gemessen im selben Aufbau mit einem NE-213 Flüssigszintillator, wurden untersucht. Die gemessenen Neutronenspektren werden den aus MCNP-4B Rechnungen (Monte Carlo neutron and photon transport) ermittelten Spektren gegenübergestellt. Der Vergleich stellt einen wichtigen Test der evaluierten Kerndaten für Fusionsreaktorkonzepte dar. Zusammenfassend zeigt diese Arbeit die Anwendbarkeit der Multifolien-Neutronenaktivierungstechnik bei Messungen der Neutronenflussspektren innerhalb eines massiven Wolframblocks bei Bestrahlung mit schnellen Neutronen aus D-T Generatoren.
396

Variable-Density Flow Processes in Porous Media On Small, Medium and Regional Scales

Walther, Marc 03 November 2014 (has links) (PDF)
Nowadays, society strongly depends on its available resources and on the long-term stability of the surrounding ecosystem. Numerical modelling has become a general standard for evaluating past, current or future system states in a large number of applications, supporting decision makers in proper management. To ensure the correct representation of the investigated processes and results of a simulation, verification examples (benchmarks) based on observation data or analytical solutions are used to evaluate the numerical modelling tool. In many parts of the world, groundwater is an important source of freshwater. Not only is it limited in quantity; subsurface water bodies are also often in danger of contamination from various natural or anthropogenic sources. Especially in arid regions, marine saltwater intrusion poses a major threat to groundwater aquifers, which are mostly the exclusive source of freshwater in these dry climates. In contrast to common numerical groundwater modelling, density-driven flow and mass transport have to be considered as vital processes in the system and in scenario simulations of fresh-saltwater interactions. At the beginning of this thesis, the capabilities of the modelling tool OpenGeoSys are verified with selected benchmarks to confirm that the relevant non-linear process coupling is represented. Afterwards, variable-density application and process studies on different scales are presented. The application studies comprise regional groundwater modelling of a coastal aquifer system extensively used for agricultural irrigation, as well as hydro-geological model development and parametrization. In two process studies, firstly, a novel method to model gelation of a solute in porous media is developed and verified against small-scale laboratory observation data, and secondly, investigations of thermohaline double-diffusive Rayleigh regimes on the medium scale are carried out. With the growing world population and thus increasing pressure on non-renewable resources, intelligent management strategies intensify the demand for powerful simulation tools and the development of novel methods. In that way, this thesis highlights not only the potential of OpenGeoSys for density-dependent process modelling, but also the broader importance of variable-density flow and transport processes, connecting cutting-edge scientific research with real-world application challenges.
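For readers unfamiliar with the non-linear coupling referred to here, the compact form below shows how density-driven flow and solute transport are linked: the Darcy flux carries a buoyancy term through a concentration-dependent density, and that flux in turn advects the solute. The linear density law is a common simplification shown for illustration, not necessarily the exact constitutive relation used in the thesis.

```latex
% Darcy flux with buoyancy coupling; fluid density depends on solute mass fraction \omega
\mathbf{q} = -\frac{k}{\mu}\left(\nabla p - \rho(\omega)\,\mathbf{g}\right),
\qquad
\rho(\omega) = \rho_0\left(1 + \beta_\omega\,\omega\right)

% Solute mass balance: storage, advection by q, and hydrodynamic dispersion D
\frac{\partial\left(\phi\,\rho\,\omega\right)}{\partial t}
  + \nabla\cdot\left(\rho\,\mathbf{q}\,\omega\right)
  - \nabla\cdot\left(\rho\,\phi\,\mathbf{D}\,\nabla\omega\right) = Q_\omega
```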
397

Model palivového souboru tlakovodního reaktoru západní koncepce / PWR fuel assembly model

Cekl, Jakub January 2018 (has links)
PWR, fuel assembly, benchmark, burnup, lattice, SCALE, Polaris, validation, reactivity
398

Variable-Density Flow Processes in Porous Media On Small, Medium and Regional Scales

Walther, Marc 07 May 2014 (has links)
Nowadays, society strongly depends on its available resources and on the long-term stability of the surrounding ecosystem. Numerical modelling has become a general standard for evaluating past, current or future system states in a large number of applications, supporting decision makers in proper management. To ensure the correct representation of the investigated processes and results of a simulation, verification examples (benchmarks) based on observation data or analytical solutions are used to evaluate the numerical modelling tool. In many parts of the world, groundwater is an important source of freshwater. Not only is it limited in quantity; subsurface water bodies are also often in danger of contamination from various natural or anthropogenic sources. Especially in arid regions, marine saltwater intrusion poses a major threat to groundwater aquifers, which are mostly the exclusive source of freshwater in these dry climates. In contrast to common numerical groundwater modelling, density-driven flow and mass transport have to be considered as vital processes in the system and in scenario simulations of fresh-saltwater interactions. At the beginning of this thesis, the capabilities of the modelling tool OpenGeoSys are verified with selected benchmarks to confirm that the relevant non-linear process coupling is represented. Afterwards, variable-density application and process studies on different scales are presented. The application studies comprise regional groundwater modelling of a coastal aquifer system extensively used for agricultural irrigation, as well as hydro-geological model development and parametrization. In two process studies, firstly, a novel method to model gelation of a solute in porous media is developed and verified against small-scale laboratory observation data, and secondly, investigations of thermohaline double-diffusive Rayleigh regimes on the medium scale are carried out. With the growing world population and thus increasing pressure on non-renewable resources, intelligent management strategies intensify the demand for powerful simulation tools and the development of novel methods.
In that way, this thesis highlights not only the potential of OpenGeoSys for density-dependent process modelling, but also the broader importance of variable-density flow and transport processes, connecting cutting-edge scientific research with real-world application challenges.
Contents: Abstract; Zusammenfassung; Nomenclature; List of Figures; List of Tables. Part I, Background and Fundamentals: 1 Introduction (1.1 Motivation; 1.2 Structure of the Thesis; 1.3 Variable-Density Flow in Literature); 2 Theory and Methods (2.1 Governing Equations; 2.2 Fluid Properties; 2.3 Modelling and Visualization Tools); 3 Benchmarks (3.1 Steady-state Unconfined Groundwater Table; 3.2 Theis Transient Pumping Test; 3.3 Transient Saltwater Intrusion; 3.4 Development of a Freshwater Lens). Part II, Applications: 4 Extended Inverse Distance Weighting Interpolation (4.1 Motivation; 4.2 Extension of IDW Method; 4.3 Artificial Test and Regional Scale Application; 4.4 Summary and Conclusions); 5 Modelling Transient Saltwater Intrusion (5.1 Background and Motivation; 5.2 Methods and Model Setup; 5.3 Simulation Results and Discussion; 5.4 Summary, Conclusion and Outlook); 6 Gelation of a Dense Fluid (6.1 Motivation; 6.2 Methods and Model Setup; 6.3 Results and Conclusions); 7 Delineating Double-Diffusive Rayleigh Regimes (7.1 Background and Motivation; 7.2 Methods and Model Setup; 7.3 Results; 7.4 Conclusions and Outlook). Part III, Summary and Conclusions: 8 Important Achievements; 9 Conclusions and Outlook. Bibliography; Publications; Acknowledgements; Appendix.
399

An analytical research into the price risk management of the soft commodities futures markets

Rossouw, Werner 30 November 2007 (has links)
Agriculture is of inestimable value to South Africa because it is a major source of job creation and plays a key role in earning foreign exchange. The most significant contribution of agriculture, and in particular maize, is its ability to provide food for the nation. For a number of decades, government legislation determined prices; now that grain is traded on the futures exchange, market participants must adapt to a volatile environment. The research focuses on the ability of market participants to effectively mitigate price volatility on the futures exchange through the use of derivative instruments, and on the possibility of developing risk management strategies that outperform the return offered by the market. The study shows that market participants are unable to use derivative instruments in such a way that price volatility is minimised. The findings of the study also indicate that the development of derivative risk management strategies could result in better returns than those offered by the market, mainly by exploiting trends on the futures market. / Financial Accounting / M. Comm. (Business Management)
400

Contesting the efficient market hypothesis for the Chicago Board of Trade corn futures contract through the application of a derivative methodology

Rossouw, Werner 11 1900 (has links)
Corn production is scattered geographically over various continents, but most of it is grown in the United States. As such, the world price of corn futures contracts is largely dominated by North American corn prices as traded on the Chicago Board of Trade. In recent years, this market has been characterised by an increase in price volatility and in the magnitude of price movements as a result of decreasing stock levels. The development and implementation of an effective and successful derivative price risk management strategy based on the Chicago Board of Trade corn futures contract will therefore be of inestimable value to market stakeholders worldwide. The research focused on the efficient market hypothesis and the possibility of contesting this phenomenon through the application of a derivative price risk management methodology. The methodology is based on a combination of market trend analysis and technical oscillators, with the objective of generating returns superior to those of a market benchmark. The study found that market participants are currently unable to exploit price movement in a manner which results in returns that contest the notion of efficient markets. The methodology proposed, however, does allow the user to consistently achieve returns superior to those of a predetermined market benchmark. The benchmark price for the purposes of this study was the average price offered by the market over the contract lifetime, and as such, the efficient market hypothesis was successfully contested. / Business Management / D. Com. (Business Management)
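As a rough illustration of the kind of rule such a methodology combines, the sketch below pairs a moving-average trend filter with an RSI-style oscillator and compares the average transaction price it selects with the average market price over the period. The price series, window lengths and thresholds are invented for illustration and are not the thesis's actual methodology or parameters.

```python
import numpy as np

# Toy illustration: combine a trend indicator (moving-average crossover) with a
# momentum oscillator (RSI) to time transactions against the average market
# price over a contract's lifetime. All parameters are illustrative only.
rng = np.random.default_rng(7)
prices = 300 + np.cumsum(rng.normal(0, 2, 250))   # synthetic daily futures prices

def sma(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

def rsi(x, n=14):
    delta = np.diff(x)
    gains = np.where(delta > 0, delta, 0.0)
    losses = np.where(delta < 0, -delta, 0.0)
    rs = sma(gains, n) / np.where(sma(losses, n) == 0, 1e-9, sma(losses, n))
    return 100 - 100 / (1 + rs)

fast, slow, momentum = sma(prices, 10), sma(prices, 40), rsi(prices)

# Align the indicator arrays to the same (latest) dates.
n = min(len(fast), len(slow), len(momentum))
fast, slow, momentum, px = fast[-n:], slow[-n:], momentum[-n:], prices[-n:]

# Transact when the trend filter is positive and the oscillator is not overbought.
signal = (fast > slow) & (momentum < 70)
benchmark = px.mean()                              # average price over the period
strategy_price = px[signal].mean() if signal.any() else float("nan")
print(f"benchmark average price:        {benchmark:.1f}")
print(f"average price under the rule:   {strategy_price:.1f}")
```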
