About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1191

Some Contributions on Probabilistic Interpretation for Nonlinear Stochastic PDEs

Sabbagh, Wissal 08 December 2014 (has links)
The objective of this thesis is to study the probabilistic representation (Feynman-Kac formula) of different classes of stochastic nonlinear PDEs (semilinear, fully nonlinear, reflected in a domain) by means of backward doubly stochastic differential equations (BDSDEs). The thesis contains four parts. In the first part we deal with second-order BDSDEs (2BDSDEs). We show the existence and uniqueness of solutions of 2BDSDEs using quasi-sure stochastic control techniques. The main motivation of this study is the probabilistic representation of solutions of fully nonlinear SPDEs. First, under regularity assumptions on the coefficients, we give a Feynman-Kac formula for classical solutions of fully nonlinear SPDEs and generalize the work of Soner, Touzi and Zhang (2010-2012) on deterministic fully nonlinear PDEs. Then, under weaker assumptions on the coefficients, we prove the probabilistic representation for stochastic viscosity solutions of fully nonlinear SPDEs. In the second part, we study Sobolev solutions of the obstacle problem for partial integro-differential equations (PIDEs). Specifically, we show the Feynman-Kac formula for PIDEs via reflected backward stochastic differential equations with jumps (RBSDEs), and we establish the existence and uniqueness of the solution of the obstacle problem, regarded as a pair consisting of the solution and the measure of reflection. The approach is based on the stochastic flow techniques developed in Bally and Matoussi (2001), but the proofs are more technical. In the third part, we discuss existence and uniqueness for reflected BDSDEs (RBDSDEs) in a convex domain D without any regularity condition on the boundary. In addition, using the stochastic flow approach, we provide the probabilistic interpretation of the Sobolev solution of a class of reflected SPDEs in a convex domain via RBDSDEs. Finally, we are interested in the numerical solution of BDSDEs with random terminal time. The main motivation is to give a probabilistic representation of Sobolev solutions of semilinear SPDEs with a null Dirichlet boundary condition. In this part, we study the strong approximation of this class of BDSDEs when the random terminal time is the first exit time of an SDE from a cylindrical domain, and we give bounds for the discrete-time approximation error. We conclude with numerical tests showing that this approach is effective.
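The discrete-time approximation in the last part lends itself to a short illustration. The sketch below applies a least-squares Monte Carlo backward Euler scheme to a plain BSDE with a fixed terminal time (not the doubly stochastic, random-terminal-time setting actually treated in the thesis); the driver, the polynomial regression basis and the path counts are illustrative choices, not taken from the thesis.

```python
import numpy as np

def bsde_backward_euler(g, f, T=1.0, n_steps=20, n_paths=50_000, seed=0):
    """Least-squares Monte Carlo scheme for the toy BSDE
        Y_t = g(W_T) + int_t^T f(Y_s) ds - int_t^T Z_s dW_s,
    driven by a 1-D Brownian motion W.  Conditional expectations in the
    backward Euler step are approximated by regressing on a quadratic
    polynomial of the current state W_{t_n}."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = np.cumsum(dW, axis=1)                # column n-1 holds W_{t_n}
    Y = g(W[:, -1])                          # terminal condition at t_N
    for n in range(n_steps - 1, 0, -1):
        target = Y + f(Y) * dt               # backward Euler target
        coef = np.polyfit(W[:, n - 1], target, deg=2)
        Y = np.polyval(coef, W[:, n - 1])    # approx. E[target | W_{t_n}]
    return np.mean(Y + f(Y) * dt)            # Y_0 is deterministic

# Sanity check with a linear case: f = 0, g(x) = x**2, so Y_0 = E[W_T**2] = T.
y0 = bsde_backward_euler(g=lambda x: x**2, f=lambda y: 0.0 * y, T=1.0)
```

For this linear case the exact conditional expectation is itself quadratic in W, so the regression basis is exact and only Monte Carlo noise remains.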
1192

Observation error model selection by information criteria vs. normality testing

Lehmann, Rüdiger 17 October 2016 (has links)
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, usually the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson-Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is carried out by a Monte Carlo approach. It turns out that model selection by AIC has some advantages over the AD test.
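The AIC-based comparison of error models can be illustrated on the paper's own test problem, the fit of a straight line. The sketch below compares only two candidate models (Gaussian vs Laplace errors) and uses an IRLS approximation to the Laplace maximum-likelihood fit; the data and parameter values are made up for illustration.

```python
import numpy as np

def fit_line_aic(x, y):
    """Fit a straight line under two candidate observation-error models
    (Gaussian vs Laplace) and score them by AIC = 2k - 2 log L.
    A minimal sketch of the model-selection idea; the full study also
    covers further error models and the Anderson-Darling test."""
    n = len(x)
    A = np.vstack([np.ones(n), x]).T
    # Gaussian model: least-squares fit with the ML variance estimate.
    beta = np.linalg.lstsq(A, y, rcond=None)[0]
    r = y - A @ beta
    sigma2 = np.mean(r**2)
    loglik_gauss = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    # Laplace model: L1 fit via iteratively reweighted least squares.
    b = beta.copy()
    for _ in range(50):
        sw = np.sqrt(1.0 / np.maximum(np.abs(y - A @ b), 1e-8))
        b = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    s = np.mean(np.abs(y - A @ b))           # ML scale of the Laplace model
    loglik_lap = -n * (np.log(2 * s) + 1)
    k = 3                                    # intercept, slope, scale
    return {"gauss": 2 * k - 2 * loglik_gauss,
            "laplace": 2 * k - 2 * loglik_lap}

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, size=200)  # truly Gaussian errors
aic = fit_line_aic(x, y)                          # lower AIC = preferred model
```

With truly Gaussian data, the Gaussian model attains the lower AIC, which is exactly the kind of decision the Monte Carlo study quantifies.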
1193

Neutronic study of the mono-recycling of americium in PWR and of the core conversion of MNSR using the MURE code

Sogbadji, Robert 11 July 2012 (has links)
The MURE code is based on the coupling of a static Monte Carlo code with a calculation of the evolution of the fuel during irradiation and cooling periods. It is used here to analyse two different questions: the mono-recycling of americium in the present French pressurized water reactors (PWRs), and the conversion of the Miniature Neutron Source Reactor (MNSR) in Ghana from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, motivated by proliferation resistance. In both cases a detailed comparison is made of burn-up and of the induced radiotoxicity of the waste or spent fuel. The UOX fuel assembly, as in the open cycle, was designed to reach burn-ups of 46 GWd/t and 68 GWd/t. The spent UOX was reprocessed to fabricate MOX assemblies by extracting the plutonium and adding depleted uranium, again targeting burn-ups of 46 GWd/t and 68 GWd/t, and taking into account various cooling times of the spent UOX assembly in the repository. The effect of the cooling time on burn-up and radiotoxicity was then ascertained: after 30 years of cooling, a higher plutonium concentration is required to fabricate the MOX fuel, owing to the decay of fissile Pu-241 (half-life 14.3 years). Americium-241, with a half-life of 432 years, has a high radiotoxicity, contributes strongly to the mid-term residual heat of vitrified waste, and is a precursor of other long-lived isotopes; it is the principal candidate for transmutation. An innovative strategy therefore consists of reprocessing not only the plutonium from the spent UOX fuel but also the americium isotopes, which dominate the radiotoxicity of present waste. The mono-recycling of americium is not a definitive solution, since a single once-through MOX cycle in a PWR cannot destroy all of it; the main objective is rather a "waiting strategy" that keeps both Am and Pu in the spent MOX fuel, available for later transmutation in fast reactors. A MOXAm fuel (MOX plus americium isotopes) was designed to assess the effect of americium on burn-up, neutronic behaviour and radiotoxicity, and it showed relatively good indicators for both. A 68 GWd/t MOX assembly produced from a reprocessed spent 46 GWd/t UOX assembly showed a decrease in radiotoxicity compared to the open cycle. All fuel types studied in the PWR cycle showed good inherent safety features, with the exception of some MOXAm assemblies that have a positive void coefficient in specific configurations. Finally, the core lifetimes of the currently operating 90.2%-enriched HEU UAl fuel and the proposed 12.5%-enriched LEU UOX fuel of the MNSR were investigated with the MURE code. Even though the LEU core has a longer life, owing to its higher core loading and low rate of uranium consumption, it requires its first beryllium top-up, which compensates for reactivity loss, earlier than the HEU core. The HEU and LEU cores exhibit similar neutron fluxes in the irradiation channels and similar negative temperature and void feedback coefficients, but the spent LEU fuel is more radiotoxic after fission-product decay, owing to the larger actinide inventory at the end of the core lifetime.
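One input to the cooling-time discussion above is the decay of fissile Pu-241 (half-life 14.3 years) during interim storage. A one-line decay computation shows why a 30-year-cooled UOX assembly needs a higher plutonium content in the MOX than a 5-year-cooled one:

```python
def remaining_fraction(half_life_years, cooling_years):
    """Fraction of a nuclide left after a given cooling time,
    from the exponential decay law N/N0 = 2**(-t / T_half)."""
    return 2.0 ** (-cooling_years / half_life_years)

# Pu-241 (T_half = 14.3 a): fraction surviving 5 vs 30 years of cooling.
f5 = remaining_fraction(14.3, 5.0)    # roughly 78 % remains
f30 = remaining_fraction(14.3, 30.0)  # roughly 23 % remains
```

Losing three quarters of the Pu-241 is what drives up the total Pu concentration required to reach the same MOX burn-up.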
1194

Specifics of the Ansys Fluent solver settings for low pressures in ESEM

Šimík, Marcel January 2017 (has links)
This thesis focuses on electron microscopy, whose principles are discussed at the beginning of the work. The main attention is devoted to the environmental scanning electron microscope, in particular its differentially pumped chamber. An experimental chamber for the analysis of shock waves is currently being built, so the main goal of this thesis was to analyse the flow field in that chamber. Using the Ansys Fluent program, simulations were performed of the characteristic flow that arises from the pumping of the vacuum chambers, namely supersonic flow at low pressures, and the most suitable turbulence model and degree of discretization were selected. The final analysis of the flow field focuses primarily on localizing the shock wave, whose experimental confirmation by a shadow optical method is planned as part of the new chamber concept. The chamber geometry used for the simulations was taken over from Dr. Danilatos, with whose results the computations were compared.
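For orientation, the shock wave being localized obeys the classical normal-shock (Rankine-Hugoniot) relations for a perfect gas. The small sketch below evaluates them for an illustrative Mach number (gamma = 1.4 for air), independently of the Ansys Fluent model:

```python
def normal_shock_pressure_ratio(mach, gamma=1.4):
    """Static pressure ratio p2/p1 across a normal shock
    (Rankine-Hugoniot relation for a perfect gas)."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (mach**2 - 1.0)

def normal_shock_downstream_mach(mach, gamma=1.4):
    """Mach number immediately downstream of a normal shock."""
    num = 1.0 + 0.5 * (gamma - 1.0) * mach**2
    den = gamma * mach**2 - 0.5 * (gamma - 1.0)
    return (num / den) ** 0.5

p_ratio = normal_shock_pressure_ratio(2.0)  # 4.5 for M = 2 in air
m2 = normal_shock_downstream_mach(2.0)      # subsonic, about 0.577
```

The abrupt pressure jump predicted by these relations is precisely the feature that both the CFD post-processing and the shadowgraph method try to localize.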
1195

Determination of Measurement Accuracy in Nanometrology

Šrámek, Jan January 2019 (has links)
The presented doctoral thesis deals with the measurement of extremely small dimensions in nanometrology using a touch probe that forms part of a three-coordinate measuring system. It presents a newly developed method for exact touch-probe measurements in nanometrology. The aim of this work was to expand the measurement capabilities of the device and to propose a methodology for measuring small parts, including the determination of the measurement accuracy of the device when used in nanometrology. The work includes a new methodology for calculating measurement uncertainty, which is the keystone in determining the measurement accuracy of a precision three-coordinate measuring system (hereinafter nano-CMM). The first part of the thesis analyses the current state of the art in evaluating the accuracy of very precise length measurements. It defines and describes the individual methods used to determine measurement accuracy on the nano-CMM instrument. Great emphasis is placed on the methodology of measurement uncertainty, which draws on the author's experience as a metrologist working in the laboratories of the Department of Primary Nanometrology and Technical Length of the Czech Metrology Institute in Brno (hereinafter CMI Brno). The second part focuses on the determination of the accuracy of length measurement in nanometrology using a large set of measurements carried out under reproducibility and repeatability conditions. A model procedure using the Monte Carlo method to simulate the nano-CMM measuring system is also described and tested, in order to extend the newly created touch-probe uncertainty methodology on the nano-CMM instrument. A substantial part of the thesis provides a detailed evaluation of the results of experiments executed under repeatability and reproducibility conditions, especially for the purposes of determining the uncertainty of measurement; in this thesis, measurement uncertainty is the quantity chosen to express the measurement accuracy of the nano-CMM instrument. The final part summarizes and evaluates the knowledge obtained during the research. For the methodology used to determine measurement accuracy in nanometrology, it also outlines future development, including practical use in metrological traceability and in extremely accurate measurements for customers, and it considers the possible use of other scanning systems compatible with the nano-CMM instrument.
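The Monte Carlo evaluation of measurement uncertainty mentioned above can be sketched in the spirit of GUM Supplement 1. The measurement model below (a length reading corrected for thermal expansion) and all of its input distributions are hypothetical illustrations, not the nano-CMM model from the thesis:

```python
import numpy as np

def mc_uncertainty(n_draws=200_000, seed=0):
    """Monte Carlo propagation of measurement uncertainty (GUM
    Supplement 1 style) for a hypothetical model: a measured length
    corrected for thermal expansion,
        L = L_ind * (1 - alpha * (T - 20)),
    with the indicated length, expansion coefficient and lab
    temperature treated as random inputs.  Numbers are illustrative."""
    rng = np.random.default_rng(seed)
    L_ind = rng.normal(10.000, 0.00005, n_draws)   # mm, probe reading
    alpha = rng.normal(11.5e-6, 1.0e-6, n_draws)   # 1/K, assumed steel
    T = rng.uniform(19.8, 20.2, n_draws)           # deg C, lab temperature
    L = L_ind * (1.0 - alpha * (T - 20.0))
    mean = L.mean()
    u = L.std(ddof=1)                        # standard uncertainty
    lo, hi = np.percentile(L, [2.5, 97.5])   # 95 % coverage interval
    return mean, u, lo, hi

mean, u, lo, hi = mc_uncertainty()
```

The attraction of the Monte Carlo route, in contrast to the analytic GUM budget, is that it yields the full output distribution and coverage interval even when the model is nonlinear.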
1196

Determination of the functional volumes of the reservoir considering input data uncertainties

Paseka, Stanislav Unknown Date (has links)
Damaging changes and interventions in the water cycle of our landscape, caused mainly in the last century, together with the uncertainties of climate change, are the cause of more frequent hydrological extremes. The most urgent hydrological problem is that long-term mean flows in rivers and the yield of groundwater sources are decreasing, while at the same time the problem of extreme floods cannot be overlooked. In this context, developing methods and tools for the uncertainty analysis of reservoir yield and of reservoir flood protection is important, useful and desirable. The main aim was to determine the functional volumes of a reservoir while considering the measurement uncertainties of the input data, to quantify these volumes, and to explain how the uncertainty is reflected in the results. The active storage capacity was determined from a historical series of monthly flows affected by uncertainties; uncertainties were also applied to water evaporation, seepage losses through the dam, and the morphological volume-area curves. A simulation-optimization reservoir model was developed, with temporal reliability applied as the measure of reservoir yield performance; this model will extend the existing UNCE_RESERVOIR software. The flood storage capacity was determined from random variations of flood waves, obtained by repeatedly generating uncertainty on the flood hydrograph, and software based on a modified Klemes method was developed to transform the flood waves through the reservoir. In both programs, the measurement uncertainties of the data inputs were generated using the Monte Carlo method. By coupling the two programs, the functional volumes of the reservoir under conditions of measurement uncertainty were determined in a comprehensive way. The case study was applied to a real water reservoir in the Morava River basin; the result indicates whether the dam is resistant under current conditions, or what the optimal design of the functional reservoir volumes under uncertainty would be.
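The combination of a mass-balance reservoir simulation, a temporal reliability measure and Monte Carlo perturbation of the inputs can be sketched as follows. The inflow series, the 5 % measurement-error level and all reservoir parameters below are invented for illustration and are not taken from the case study:

```python
import numpy as np

def temporal_reliability(inflows, yield_demand, capacity):
    """Simulate a single reservoir with a simple monthly mass balance
    and return the time-based reliability: the share of months in
    which the full yield could be delivered."""
    storage, ok = capacity, 0
    for q in inflows:
        storage = min(storage + q - yield_demand, capacity)
        if storage >= 0.0:
            ok += 1
        else:
            storage = 0.0            # failure month: reservoir empties
    return ok / len(inflows)

def reliability_under_uncertainty(inflows, yield_demand, capacity,
                                  rel_error=0.05, n_mc=1000, seed=0):
    """Monte Carlo wrapper: perturb the measured monthly inflows with a
    multiplicative Gaussian measurement error (hypothetical 5 % sigma)
    and return the 5/50/95 percentiles of the reliability."""
    rng = np.random.default_rng(seed)
    rels = [temporal_reliability(
                inflows * rng.normal(1.0, rel_error, len(inflows)),
                yield_demand, capacity)
            for _ in range(n_mc)]
    return np.percentile(rels, [5, 50, 95])

rng = np.random.default_rng(42)
flows = rng.gamma(2.0, 5.0, 600)     # 50 years of toy monthly flows
p5, p50, p95 = reliability_under_uncertainty(flows, yield_demand=9.0,
                                             capacity=40.0)
```

Reporting reliability as an interval rather than a single number is the essential payoff of propagating the input-data uncertainty.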
1197

Depletion calculation of VVER 1000 reactor fuel using the KENO code

Janošek, Radek January 2016 (has links)
This Master's thesis begins with an introduction to operational nuclear reactors, focusing on the light-water pressurized reactor VVER 1000. It covers the basic technology of the VVER 1000 reactor, with emphasis on the reactor core and the TVSA-T nuclear fuel. A significant part of the thesis deals with the basic concepts and methods of nuclear safety. The main goal is to create a model of the VVER 1000 reactor that can be used in fuel burn-up calculations with the KENO code; a part of the thesis therefore explains the statistical Monte Carlo method and the KENO code itself.
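Burn-up calculations of the kind performed with KENO ultimately integrate Bateman-type depletion equations for the nuclide inventory. The sketch below solves the analytic two-step chain A -> B -> C with invented decay constants; a real burn-up code adds flux-dependent transmutation terms and tracks many more nuclides:

```python
import math

def bateman_two_step(n0, lam_a, lam_b, t):
    """Analytic Bateman solution for the decay chain A -> B -> C:
    returns (N_A, N_B, N_C) at time t for an initially pure A
    inventory with decay constants lam_a and lam_b (lam_a != lam_b)."""
    na = n0 * math.exp(-lam_a * t)
    nb = n0 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t)
                                         - math.exp(-lam_b * t))
    nc = n0 - na - nb                # conservation of nuclei
    return na, nb, nc

# Illustrative constants (not real nuclide data):
na, nb, nc = bateman_two_step(n0=1.0, lam_a=0.1, lam_b=0.5, t=5.0)
```

In a coupled scheme, the Monte Carlo transport step supplies the reaction rates that play the role of the decay constants here, and the depletion step updates the material compositions between transport runs.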
1198

On the formulation of the alternative hypothesis for geodetic outlier detection

Lehmann, Rüdiger January 2013 (has links)
The concept of outlier detection by statistical hypothesis testing in geodesy is briefly reviewed. The performance of such tests can only be measured or optimized with respect to a proper alternative hypothesis. Firstly, we discuss the important question of whether gross errors should be treated as non-random quantities or as random variables. In the first case, the alternative hypothesis must be based on the common mean-shift model, while in the second case the variance-inflation model is appropriate. Secondly, we review possible formulations of alternative hypotheses (inherent, deterministic, slippage, mixture) and discuss their implications. As measures of the optimality of an outlier detection, we propose the premium and protection, which are briefly reviewed. Finally, we work out a practical example, the fit of a straight line, which demonstrates the impact of the choice of alternative hypothesis on outlier detection.
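The mean-shift alternative discussed above can be illustrated on the paper's straight-line example: each observation in turn is hypothesized to carry a deterministic bias, and its studentized least-squares residual is compared with a normal critical value. The sketch assumes a known, equal standard deviation for all observations and an illustrative critical value; it shows the model, not the paper's full test procedure:

```python
import numpy as np

def mean_shift_outlier_test(x, y, crit=3.29):
    """Screen a straight-line fit for gross errors under the mean-shift
    model.  Each observation's studentized residual is compared with a
    standard-normal critical value (3.29 corresponds to roughly the
    0.1 % level); sigma = 1 is assumed known a priori."""
    n = len(x)
    A = np.vstack([np.ones(n), x]).T
    # Hat matrix of the ordinary least-squares fit, and its residuals.
    Q = A @ np.linalg.inv(A.T @ A) @ A.T
    r = y - Q @ y
    redundancy = 1.0 - np.diag(Q)            # Var(r_i) = sigma^2 * (1 - h_ii)
    sigma = 1.0                              # assumed known observation sigma
    w = np.abs(r) / (sigma * np.sqrt(redundancy))
    return np.flatnonzero(w > crit)          # indices flagged as outliers

rng = np.random.default_rng(7)
x = np.arange(20.0)
y = 1.0 + 0.5 * x + rng.normal(0, 1.0, 20)
y[8] += 8.0                                  # inject one gross error (8 sigma)
flagged = mean_shift_outlier_test(x, y)
```

Under the variance-inflation alternative the same residuals would instead be judged against an inflated stochastic model, which is exactly the modelling choice the paper contrasts.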
1199

Observation error model selection by information criteria vs. normality testing

Lehmann, Rüdiger January 2015 (has links)
To extract the best possible information from geodetic and geophysical observations, it is necessary to select a model of the observation errors, usually the family of Gaussian normal distributions. However, there are alternatives, typically chosen in the framework of robust M-estimation. We give a synopsis of well-known and less well-known models for observation errors and propose to select a model based on information criteria. In this contribution we compare the Akaike information criterion (AIC) and the Anderson-Darling (AD) test and apply them to the test problem of fitting a straight line. The comparison is carried out by a Monte Carlo approach. It turns out that model selection by AIC has some advantages over the AD test.
1200

Interactive pattern mining of neuroscience data

Waranashiwar, Shruti Dilip 29 January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Text mining is the process of extracting knowledge from unstructured text documents. Huge volumes of text exist in digital form, and it is impossible to extract knowledge from them manually; text mining therefore finds useful information in text through the identification and exploration of interesting patterns. The objective of this thesis is to find compact but high-quality frequent patterns in text documents from the field of neuroscience. We show that an interactive sampling algorithm is more time-efficient than exhaustive methods such as FP-Growth, run in the RapidMiner tool. Instead of mining all frequent patterns, not all of which may interest the user, interactively mining only the desired patterns is a far better use of resources, especially with a large number of keywords. In interactive pattern mining, the user gives feedback on whether a pattern is interesting, and frequent patterns are then generated interactively with a Markov chain Monte Carlo (MCMC) sampling method. The thesis discusses the interactive extraction of patterns between keywords related to common neurological disorders, using the PubMed database and keywords related to schizophrenia and alcoholism as inputs. It reveals many associations between terms that would otherwise be difficult to discover by reading articles or journals manually; the Graphviz tool is used to visualize the associations.
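The MCMC sampling idea can be sketched as a Metropolis-Hastings walk over itemsets whose stationary distribution is tilted by user feedback. Everything below, including the scoring rule, the toy keyword sets and the parameter values, is an invented illustration, not the thesis's actual sampler:

```python
import random

def interactive_pattern_sampler(transactions, items, n_steps=5000,
                                feedback=None, seed=0):
    """Metropolis-Hastings sampler over itemsets (a sketch of MCMC
    interactive pattern mining).  The chain flips one item in or out
    of the current pattern; a pattern's score is its support times a
    user-feedback weight (default 1.0), so patterns the user marked
    interesting are visited more often."""
    rng = random.Random(seed)
    feedback = feedback or {}

    def score(pattern):
        if not pattern:
            return 1e-9
        support = sum(pattern <= t for t in transactions)
        return support * feedback.get(frozenset(pattern), 1.0)

    current, visited = set(), {}
    for _ in range(n_steps):
        proposal = set(current)
        proposal ^= {rng.choice(items)}       # flip one random item
        if rng.random() < score(proposal) / max(score(current), 1e-9):
            current = proposal
        if current:
            key = frozenset(current)
            visited[key] = visited.get(key, 0) + 1
    return sorted(visited, key=visited.get, reverse=True)

# Toy corpus of keyword sets (hypothetical PubMed-style annotations):
docs = [frozenset(d) for d in
        [{"schizophrenia", "dopamine"}, {"schizophrenia", "dopamine"},
         {"alcoholism", "gaba"}, {"schizophrenia", "dopamine", "gaba"}]]
top = interactive_pattern_sampler(docs, ["schizophrenia", "dopamine",
                                         "alcoholism", "gaba"])
```

In an interactive session the `feedback` dictionary would be updated between sampling rounds from the user's ratings, steering the chain toward patterns the user finds interesting without ever enumerating the full pattern lattice as FP-Growth does.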
