161

The differentiation between variability uncertainty and knowledge uncertainty in life cycle assessment: A product carbon footprint of bath powder “Blaue Traube”

Budzinski, Maik January 2012
This thesis deals with methods to increase the reliability of results in life cycle assessment (LCA). It is divided into two parts. The first part sets out the typologies and sources of uncertainty in LCA and summarises the existing methods for dealing with them; the methods are critically discussed and their pros and cons contrasted. In the second part a case study is carried out, calculating the carbon footprint of a cosmetic product of Li-iL GmbH: the whole life cycle of the powder bath Blaue Traube is analysed. To increase the reliability of the result, a procedure derived from the first part is applied, and recommendations to enhance the product's sustainability are given to the decision-makers of the company. Finally, the applied procedure for dealing with uncertainty in LCA is evaluated. The aims of the thesis are to contribute to the understanding of uncertainty in life cycle assessment and to handle it in a more consistent manner. In addition, the carbon footprint of the powder bath is to be based on appropriate assumptions and to account for the uncertainties that arise. Based on the problems discussed, a method is introduced that avoids the problematic merging of variability uncertainty and data uncertainty when generating probability distributions. The introduced uncertainty importance analysis allows a consistent differentiation of these two types of uncertainty and also makes an assessment of the data used in LCA studies possible. The method is applied to a PCF study of the bath powder Blaue Traube of Li-iL GmbH, with the analysis carried out over the whole life cycle (cradle-to-grave) as well as cradle-to-gate. The study gives the company a practical example of determining the carbon footprint of its products, and it meets the requirements of the ISO guidelines for publishing the study and comparing it with other products. Within the PCF study, the introduced method allows variability uncertainty and knowledge uncertainty to be differentiated. The included uncertainty importance analysis supports the assessment of each aggregated unit process within the analysed product system, and can finally provide a basis for collecting additional, more reliable data for critical processes.
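The record does not reproduce the thesis' uncertainty importance analysis itself. As a rough sketch of the general idea, the Python snippet below ranks unit processes by their share of the Monte Carlo variance of a product carbon footprint; all process names, distributions and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical aggregated unit processes of a powder-bath PCF (kg CO2e per
# functional unit); distributions and numbers are invented for illustration.
surfactants = rng.lognormal(mean=np.log(0.80), sigma=0.20, size=n)
packaging   = rng.lognormal(mean=np.log(0.35), sigma=0.10, size=n)
transport   = rng.lognormal(mean=np.log(0.15), sigma=0.30, size=n)

pcf = surfactants + packaging + transport  # simple additive footprint model

# For an additive model with independent inputs, the squared correlation of an
# input with the output is its share of the output variance -> importance rank.
for name, x in (("surfactants", surfactants),
                ("packaging", packaging),
                ("transport", transport)):
    rho = np.corrcoef(x, pcf)[0, 1]
    print(f"{name:12s} {rho**2:6.1%} of PCF variance")
```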
162

Uncertainty Assessment of Hydrogeological Models Based on Information Theory

De Aguinaga, José Guillermo 03 December 2010
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty, since the information contained in the observations is distributed across all of the parameters. The present study proposes a new option to reduce this uncertainty: select a model which provides good performance with as few calibrated parameters as possible (a parsimonious model) and calibrate it using many sources of information. Akaike's Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical criterion based on information theory which allows us to select a parsimonious model. AIC formulates parsimonious model selection as an optimization problem across a set of proposed conceptual models. The AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. In this dissertation, important findings in the application of AIC to hydrogeological modeling using different sources of observations are discussed. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. The impact of the following factors is analyzed: the number of observations, the types of observations, and the order of the calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of the calibrated parameters was properly considered, meaning that parameters which provide bigger improvements in model fit should be considered first. The approach of obtaining a parsimonious model by applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. With this additional independent model assessment it was possible to underpin the general validity of the AIC approach.
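For a least-squares calibration, AIC can be computed directly from the residual sum of squares, and the parsimonious model is the candidate with the lowest score. A minimal sketch with invented RSS values and parameter counts for three hypothetical conceptual models:

```python
import numpy as np

def aic(rss, n_obs, k_params):
    # least-squares form of Akaike's Information Criterion
    return n_obs * np.log(rss / n_obs) + 2 * k_params

n_obs = 60  # e.g. heads + conductivities + tracer concentrations combined
# hypothetical candidates: name -> (residual sum of squares, parameter count)
candidates = {"homogeneous": (12.4, 2),
              "two-zone": (8.1, 4),
              "zoned anisotropic": (7.9, 9)}

scores = {m: aic(rss, n_obs, k) for m, (rss, k) in candidates.items()}
best = min(scores, key=scores.get)  # extra parameters must "pay" via the 2k term
print(best, scores)
```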
163

DEVELOPMENT OF DROPWISE ADDITIVE MANUFACTURING WITH NON-BROWNIAN SUSPENSIONS: APPLICATIONS OF COMPUTER VISION AND BAYESIAN MODELING TO PROCESS DESIGN, MONITORING AND CONTROL

Andrew J. Radcliffe (9080312) 24 July 2020
In the past two decades, the pharmaceutical industry has been engaged in modernization of its drug development and manufacturing strategies, spurred onward by changing market pressures, regulatory encouragement, and technological advancement. Concomitant with these changes has been a shift toward new modalities of manufacturing in support of patient-centric medicine and on-demand production. To achieve these objectives requires manufacturing platforms which are both flexible and scalable, hence the interest in development of small-scale, continuous processes for synthesis, purification and drug product production. Traditionally, the downstream steps begin with a crystalline drug powder – the effluent of the final purification steps – and convert this to tablets or capsules through a series of batch unit operations reliant on powder processing. As an alternative, additive manufacturing technologies provide the means to circumvent difficulties associated with dry powder rheology, while being inherently capable of flexible production.

Through the combination of physical knowledge, experimental work, and data-driven methods, a framework was developed for ink formulation and process operation in drop-on-demand manufacturing with non-Brownian suspensions. Motivated by the challenges at hand, application of novel computational image analysis techniques yielded insight into the effects of non-Brownian particles and fluid properties on rheology. Furthermore, the extraction of modal and statistical information provided insight into the stochastic events which appear to play a notable role in drop formation from such suspensions. These computer vision algorithms can readily be applied by other researchers interested in the physics of drop coalescence and breakup in order to further modeling efforts.

Returning to the realm of process development to deal with challenges of monitoring and quality control initiated by suspension-based manufacturing, these machine vision algorithms were combined with Bayesian modeling to enact a probabilistic control strategy at the level of each dosage unit by utilizing the real-time image data acquired by an online process image sensor. Drawing upon a large historical database which spanned a wide range of conditions, a hierarchical modeling approach was used to incorporate the various sources of uncertainty inherent to the manufacturing process and monitoring technology, therefore providing more reliable predictions for future data at in-sample and out-of-sample conditions.

This thesis thus contributes advances in three closely linked areas: additive manufacturing of solid oral drug products, computer vision methods for event recognition in drop formation, and Bayesian hierarchical modeling to predict the probability that each dosage unit produced is within specifications.
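As a toy illustration of the per-unit probabilistic control idea (not the thesis' actual model), the sketch below pushes hypothetical posterior draws for a batch's mean dose and within-batch spread through a posterior predictive check against specification limits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for one batch: mean dose and within-batch
# standard deviation (mg), standing in for the image-informed hierarchical
# posterior described in the thesis.
mu_draws    = rng.normal(50.0, 0.4, size=5000)
sigma_draws = np.abs(rng.normal(1.0, 0.1, size=5000))

# Posterior predictive mass of the next dosage unit dispensed.
pred = rng.normal(mu_draws, sigma_draws)

lo, hi = 47.5, 52.5  # +/- 5 % specification limits (placeholder values)
p_in_spec = np.mean((pred >= lo) & (pred <= hi))
print(f"P(unit within spec) ~ {p_in_spec:.3f}")
```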
164

Improving Reconstructive Surgery through Computational Modeling of Skin Mechanics

Taeksang Lee (9183377) 30 July 2020
Excessive deformation and stress of skin following reconstructive surgery play a crucial role in wound healing, often leading to complications. Yet, despite this concern, surgeries are still planned and executed based on each surgeon's training and experience rather than on quantitative engineering tools. The limitations of current treatment planning and execution stem in part from the difficulty of predicting the mechanical behavior of skin, the challenge of directly measuring stress in the operating room, and the inability to predict the long-term adaptation of skin following reconstructive surgery. Computational modeling of soft tissue mechanics has emerged as an ideal candidate for determining stress contours over sizable skin regions in realistic situations. Virtual surgeries with computational mechanics tools will help surgeons explore different surgeries preoperatively, predict stress contours, and eventually aid in planning for optimal wound healing. While there has been significant progress in computational modeling of both reconstructive surgery and the mechanical and mechanobiological behavior of skin, major gaps remain that prevent computational mechanics from being widely used in the clinical setting. At the preoperative stage, better calibration of skin mechanical properties for individual patients, based on minimally invasive mechanical tests, is still needed. One key challenge is that skin is not stress-free in vivo. In many applications requiring large skin flaps, skin is further grown with the tissue expansion technique, so a better understanding of skin growth and the resulting stress-free state is required. The other significant challenge is the inherent variability of the mechanical properties and biological response of living systems. Skin properties and adaptation to mechanical cues change with patient demographics, anatomical location, and from one individual to another; the precise model parameters can never be known exactly, even if some measurements are available. Rather than expecting to know the exact model describing a patient, a probabilistic approach is therefore needed. To bridge these gaps, this dissertation aims to advance skin biomechanics and computational mechanics tools in order to make virtual surgery for clinical use a reality in the near future. In this spirit, the dissertation has three parts: skin growth and its incompatibility; acquisition of patient-specific geometry and skin mechanical properties; and uncertainty analysis of virtual surgery scenarios.

Skin growth induced by tissue expansion has been widely used to gain extra skin before reconstructive surgery. Within continuum mechanics, growth can be described with a split of the deformation gradient akin to plasticity. We propose a probabilistic framework for uncertainty analysis of growth and remodeling of skin in tissue expansion. Our approach relies on surrogate modeling through multi-fidelity Gaussian process regression and is being used to calibrate the computational model against animal-model data; details of the animal model and the type of data obtained are also covered in the thesis. One important aspect of the growth and remodeling process is that it leads to residual stress, which arises from the nonhomogeneous growth deformation. In this dissertation we characterize the geometry of incompatibility of the growth field, borrowing concepts originally developed in the study of crystal plasticity. We show that growth produces unique incompatibility fields that increase our understanding of the development of residual stress and the stress-free configuration of tissues, paying particular attention to the case of skin growth in tissue expansion.

Patient-specific geometry and material properties are the focus of the second part of the thesis. Minimally invasive mechanical tests based on suction have been developed that can be used in vivo, but these tests offer only a limited characterization of an individual's skin mechanics. Current methods have the following limitations: only isotropic behavior can be measured; the calibration problem is solved with inverse finite element methods or simple analytical calculations, which are inaccurate; the calibration yields a single deterministic set of parameters; and the process ignores any prior information about the mechanical properties to be expected for a patient. To overcome these limitations, we recast the calibration problem in a Bayesian framework. To sample from the posterior distribution of the parameters for a patient given a suction test, the method relies on an inexpensive Gaussian process surrogate. For the patient-specific geometry, techniques such as magnetic resonance imaging or computed tomography scans can be used; such approaches, however, require specialized equipment and setup and are not affordable in many scenarios. We propose instead to use multi-view stereo (MVS) to capture patient-specific geometry.

The last part of the dissertation focuses on uncertainty analysis of the reconstructive procedure itself. To achieve uncertainty analysis in the clinical setting we propose to create surrogate and reduced-order models, in particular principal component analysis and Gaussian process regression. We first characterize stress profiles under uncertainty for the three most common flap designs, using idealized geometries. The probabilistic surrogates enable not only tasks such as fast prediction and uncertainty quantification, but also optimization. Based on a global sensitivity analysis we show that the direction of anisotropy of the skin with respect to the flap geometry is the most important parameter controlled by the surgeon, and we show how to optimize the flap in this idealized setting. We conclude with the application of the probabilistic surrogates to uncertainty analysis in patient-specific geometries. In summary, this dissertation addresses some of the fundamental challenges that must be solved to make virtual surgery models ready for clinical use. We anticipate that our results will continue to shape the way computational models are incorporated into reconstructive surgery plans.
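A minimal sketch of the surrogate idea, assuming scikit-learn is available: an inexpensive Gaussian process is trained on a handful of invented "simulation" points and then used for cheap Monte Carlo propagation of parameter uncertainty. The stiffness-stress relation and all numbers are placeholders, not the dissertation's calibrated model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.default_rng(1)

# Invented "simulation" data: a normalized skin stiffness parameter mapped to
# peak flap stress (kPa); a stand-in for expensive finite element runs.
X = rng.uniform(0.5, 2.0, size=(30, 1))
y = 40.0 * X[:, 0] ** 1.5 + rng.normal(0.0, 1.0, 30)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# Cheap Monte Carlo: push a stiffness posterior through the surrogate.
theta = rng.lognormal(np.log(1.2), 0.15, size=(2000, 1))
stress = gp.sample_y(theta, n_samples=1).ravel()
print(np.percentile(stress, [5, 50, 95]))  # predictive stress band (kPa)
```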
165

Development of a best-estimate method with uncertainty analysis for PWR accident analyses, based on the accident analysis code TRACE (original title: Entwicklung einer Best-Estimate-Methode mit Unsicherheitsanalyse für DWR-Störfalluntersuchungen, basierend auf dem Störfallanalyseprogramm TRACE)

Sporn, Michael 07 August 2019
Deterministic safety analyses use computational models to simulate design-basis accidents in nuclear power plants, in order to verify the functionality of each plant's installed safety systems. Such accident analyses are, however, always subject to uncertainties that can strongly influence the computed accident progression. For example, manufacturing tolerances produce variations in geometry and material data that lead to uncertainties in the computational model. Further uncertainties can be traced back to the physical models of an accident analysis code; in particular, the empirical correlations carry uncertainties, since they were derived from experimental data. In the present work, the empirical correlations of the TRACE code were therefore analyzed and their uncertainties quantified. The "Dynamic Best-Estimate Safety Analysis" (DYBESA) method developed here makes it possible to account for this code-specific uncertainty in accident analyses. Thirteen "Correlation, Identification and Ranking Tables" (CIRT) were compiled, categorizing the relevant uncertainties for the various design-basis accidents of pressurized water reactors. On this basis, uncertainty analyses can be carried out using the statistical procedure of S. S. Wilks. In the end, the safety-relevant results are obtained realistically and, above all, with high reliability compared with a conventional conservative calculation method.
Contents: 1 Introduction; 2 Safety requirements for nuclear power plants: 2.1 Confinement of radioactive substances and shielding of ionizing radiation, 2.2 Accident categories, 2.3 Thermal-hydraulic acceptance criteria; 3 Analysis techniques for accident studies: 3.1 The accident analysis code TRACE, 3.2 Identification of uncertainties in accident studies, 3.3 Conservative method and best-estimate method with uncertainty analysis, 3.4 State of the art of the main best-estimate methods (3.4.1 CSAU, 3.4.2 UMAE, 3.4.3 CIAU, 3.4.4 GRS, 3.4.5 ASTRU); 4 Development of the DYBESA method for PWR accident studies: 4.1 Thermal-hydraulic phenomena during accident progression, 4.2 Regression methods for experimental data analysis (4.2.1 Confidence and prediction regions, 4.2.2 Statistical tolerance limits), 4.3 Identification and assessment of empirical correlations, 4.4 Generation and combination of suitable samples, 4.5 Program for preparing and evaluating uncertainty analyses, 4.6 Modification of TRACE to account for code-specific uncertainty, 4.7 Verification of the DYBESA method; 5 Results: 5.1 FEBA (model verification, CIRT verification), 5.2 Marviken test station (model verification, CIRT verification), 5.3 Pressurized water reactor (5.3.1 Intermediate break of a main coolant line, 5.3.2 Loss of offsite power); 6 Discussion and outlook; 7 Summary; 8 References; 9 Appendices
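The abstract refers to the statistical procedure of S. S. Wilks; a small sketch of the standard sample-size calculation behind such one-sided tolerance limits (not code from the thesis) is:

```python
from scipy.stats import binom

def wilks_runs(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of code runs N such that the `order`-th largest
    result bounds the `coverage` quantile with `confidence` (Wilks)."""
    n = order
    while binom.cdf(n - order, n, coverage) < confidence:
        n += 1
    return n

print(wilks_runs())          # 59 runs for the classic 95 % / 95 % criterion
print(wilks_runs(order=2))   # 93 runs when the 2nd-largest value is used
```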
166

Dynamic Probabilistic Risk Assessment of Nuclear Power Generation Stations

Elsefy, Mohamed HM January 2021
Risk assessment is essential for nuclear power plants (NPPs) due to the complex dynamic nature of such systems-of-systems, as well as the devastating impacts of nuclear accidents on the environment, public health, and economy. Lessons learned from the Fukushima nuclear accident demonstrated the importance of enhancing current risk assessment methodologies and developing efficient early warning decision support tools. Static probabilistic risk assessment (PRA) techniques (e.g., event and fault tree analysis) have been extensively adopted in nuclear applications to ensure NPPs comply with safety regulations. However, numerous studies have highlighted the limitations of static PRA methods, such as the failure to consider the dynamic hardware/software/operator interactions inside the NPP and the timing and sequence of events. In response, several dynamic probabilistic risk assessment (DPRA) methodologies have been developed and continuously evolved over the past four decades to overcome these limitations. DPRA presents a comprehensive approach to assessing the risks associated with complex, dynamic systems. However, current DPRA approaches face challenges associated with the intra- and interdependence within and between different NPP complex systems and with the massive amount of data that needs to be analyzed and rapidly acted upon. In response to these limitations of previous work, the main objective of this dissertation is to develop a physics-based DPRA platform and an intelligent data-driven prediction tool for NPP safety enhancement under normal and abnormal operating conditions. The results of this dissertation demonstrate that the developed DPRA platform is capable of simulating the dynamic interaction between different NPP systems and estimating the temporal probability of core damage under different transients, with significant advantages in both computational time and data storage. The developed platform can also explicitly account for the effects of uncertainties in the NPP's physical parameters and operating conditions on the plant's response and its probability of core damage. Furthermore, an intelligent decision support tool, developed based on artificial neural networks (ANN), can significantly improve the safety of NPPs by providing plant operators with fast and accurate predictions that are specific to the plant. Such rapid prediction minimizes the need to resort to idealized physics-based simulators to predict the underlying complex physical interactions. Moving forward, the developed ANN model can be trained on plant operational data, plant operating experience databases, and data from rare-event simulations to account for, e.g., plant ageing over time, operational transients, and rare events in predicting plant behavior. Such an intelligent tool can be key for NPP operators and managers to take rapid and reliable actions under abnormal conditions. / Thesis / Doctor of Philosophy (PhD)
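As a schematic of the ANN surrogate idea (the actual plant model, inputs, and training data are not given in this record), one might train a small network on simulator output and query it in milliseconds:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Invented training set standing in for physics-based transient simulations:
# inputs  = initial power fraction, relative break size, injection delay (s)
# output  = peak fuel-sheath temperature (K)
X = rng.uniform([0.5, 0.0, 0.0], [1.0, 1.0, 600.0], size=(2000, 3))
y = 600.0 + 400.0 * X[:, 1] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0.0, 10.0, 2000)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

# Near-instant operator query, instead of re-running the full simulator.
print(ann.predict([[0.9, 0.4, 120.0]]))
```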
167

Analysis of uncertainties in fatigue load assessment: a study on one Kaplan hydro turbine during start operation

Gustafsson, Annica January 2019
In the future, hydropower plants are expected to operate more flexibly. This will lead to more varied operation of the turbine and the generator, such as more starts and stops to stabilise the frequency in the grid. Studies show that these transient operations are more costly in terms of fatigue degradation, i.e. consumption of fatigue life. Vattenfall has developed a methodology to analyse the fatigue loads acting on the runner and the rotor of hydropower units during operation. With a numerical model, the loads are assessed from input data gathered from measurements together with given data on several parameters, such as bearing structure stiffness, bearing oil properties, and points of action of forces. Several of these input parameters are subject to a degree of uncertainty, which affects the fatigue load determined with the methodology. This study focuses on one fatigue force component acting on the runner and aims to answer the following research questions: (i) Which input parameters that are subject to a degree of uncertainty contribute the most to the combined standard uncertainty in the assessed fatigue force? (ii) How large is the combined standard uncertainty in the assessed fatigue force? (iii) How does the uncertainty in the assessed fatigue force affect the fatigue damage? The combined standard uncertainty in the fatigue force is determined with methods of uncertainty propagation. To evaluate the effect of the uncertainty in the fatigue load on the fatigue damage, a statistical analysis is conducted of the ratio between the fatigue damage associated with a probability of exceedance and the expected fatigue damage. The results show that the governing uncertainty parameter is the offset of the shaft displacement signal, which accounts for 40 % of the combined standard uncertainty in the fatigue force. Of the nine analysed uncertainty parameters, three are bearing-property parameters, i.e. the bearing clearance, the oil film temperature and the point of action of the bearing forces, which together account for 47.5 % of the combined standard uncertainty in the fatigue force. To decrease the uncertainties, focus should therefore be kept on the bearing properties. Given each parameter's uncertainty, the ratio between the combined standard uncertainty in the fatigue force and the expected fatigue force amounts to 7 %. This corresponds to a ratio between the standard uncertainty in the fatigue damage and the expected fatigue damage of 35 %, because the index of the S-N curve is five. Given the standard uncertainty in the fatigue force together with the index of the S-N curve, the ratio between the fatigue force associated with a probability of exceedance and the expected fatigue force, i.e. the fatigue force ratio, can be assessed. The fatigue force ratio amounts to 1.32 for an exceedance probability of 0.0032 %, 1.09 for a probability of 10 % and 1.04 for a probability of 30 %. These probabilities correspond to fatigue damage ratios, i.e. ratios between the fatigue damage associated with a probability of exceedance and the expected fatigue damage, of 4, 1.56 and 1.20. The uncertainty in the fatigue force can thereby greatly affect the uncertainty in the fatigue damage, depending on the value of the index of the S-N curve. The results of this study underline the importance of considering uncertainties in fatigue load assessments. They provide support for assessing load levels for runner dimensioning and, finally, for deriving a correct margin of safety, so as not to underestimate fatigue damage and thereby to decrease the risk of unexpected fatigue failure.
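The quoted numbers are mutually consistent under a simple model in which damage scales with the force raised to the S-N index, D ∝ F^m with m = 5, and the force uncertainty is treated as lognormal. A sketch under those assumptions (ours, not necessarily the thesis' exact model):

```python
import numpy as np
from scipy.stats import norm

m = 5          # index (exponent) of the S-N curve
cov_F = 0.07   # standard uncertainty / expected value of the fatigue force

# Damage scales as D ~ F**m, so a small relative force uncertainty is
# amplified roughly m-fold: 5 * 7 % = 35 %, as quoted in the abstract.
print(f"relative damage uncertainty ~ {m * cov_F:.0%}")

for p_exceed in (3.2e-5, 0.10, 0.30):
    z = norm.ppf(1.0 - p_exceed)
    force_ratio = np.exp(z * cov_F)    # lognormal-force assumption (ours)
    damage_ratio = force_ratio ** m
    print(f"p = {p_exceed:.4%}: force ratio {force_ratio:.2f}, "
          f"damage ratio {damage_ratio:.2f}")
```

With these assumptions the script reproduces the abstract's ratios (about 1.32/4, 1.09/1.56 and 1.04/1.20).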
168

A Statistical Framework for Distinguishing Between Aleatory and Epistemic Uncertainties in the Best-Estimate Plus Uncertainty (BEPU) Nuclear Safety Analyses

Pun-Quach, Dan 11 1900
In 1988, the US Nuclear Regulatory Commission approved an amendment that allowed the use of best-estimate methods. This led to increased development and application of Best Estimate Plus Uncertainty (BEPU) safety analyses. However, a greater burden was placed on the licensee to justify all uncertainty estimates. A review of the current state of BEPU methods indicates a number of significant criticisms, which limit the BEPU methods from reaching their full potential as a comprehensive licensing basis. The most significant criticism relates to the lack of a formal framework for distinguishing between aleatory and epistemic uncertainties, which has led to a prevalent belief that such separation of uncertainties is done for convenience rather than out of necessity. In this thesis, we address these concerns by developing a statistically rigorous framework to characterize the different uncertainty types. The framework is grounded in philosophical concepts of knowledge. Considering the Plato problem, we explore the use of probability as a means to gain knowledge, which allows us to relate the inherent distinctions in knowledge to the different uncertainty types for any complex physical system. The framework is demonstrated on nuclear analysis problems, and we show through the use of structural models that the separation of these uncertainties leads to more accurate tolerance limits than existing BEPU methods provide. In existing BEPU methods, where such a distinction is not applied, the total uncertainty is essentially treated as aleatory uncertainty, so the resulting estimated percentile is much larger than the actual (true) percentile of the system's response. Our results support the premise that the separation of these two distinct uncertainty types is necessary and leads to more accurate estimates of the reactor safety margins. / Thesis / Doctor of Philosophy (PhD)
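The separation the thesis argues for is commonly implemented as a two-loop ("nested") Monte Carlo: epistemic parameters are fixed in an outer loop while aleatory variability is sampled inside. A toy sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
n_epistemic, n_aleatory = 200, 1000

# Epistemic: a fixed-but-unknown model parameter (e.g. a heat-transfer
# multiplier). Aleatory: run-to-run variability of the transient response.
# All distributions and numbers are invented for illustration.
p95_per_world = []
for _ in range(n_epistemic):
    bias = rng.normal(1.0, 0.05)                        # one epistemic "world"
    response = 800.0 * bias + rng.normal(0.0, 25.0, n_aleatory)
    p95_per_world.append(np.percentile(response, 95))   # aleatory percentile

# Epistemic confidence on the aleatory 95th percentile (a "95/95"-style limit),
# instead of pooling both uncertainty types into a single distribution.
print(f"95/95-style bound: {np.percentile(p95_per_world, 95):.0f}")
```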
169

Application of Modular Uncertainty Techniques to Engineering Systems

Long, William C 04 May 2018
Uncertainty analysis is crucial to any thorough analysis of an engineering system. Traditional uncertainty analysis can be a tedious task involving numerous steps that are error-prone if conducted by hand and computationally expensive if conducted with the aid of a computer. In either case, the process is quite rigid: if a parameter of the system is modified or the system configuration is changed, the entire uncertainty analysis must be conducted again, giving more opportunities for calculation errors or computation time. Modular uncertainty analysis provides a method to overcome these obstacles. The modular technique is well suited for computation by a computer, which makes the process somewhat automatic after the initial setup and reduces calculation errors. The technique implements matrix operations to conduct the analysis; this in turn makes the process more efficient than traditional methods, because computers are well suited for matrix operations, and makes the method adaptable to modifications of system parameters or configuration. The modular technique also lends itself to quickly calculating other uncertainty analysis quantities such as the uncertainty magnification factor and the uncertainty percent contribution. This dissertation focuses on the modular technique, its extension in the form of the uncertainty magnification factor and uncertainty percent contribution, and its application to different types of energy systems. The modular technique is applied to an internal combustion engine with a bottoming organic Rankine cycle, a combined heat and power system, and a heating, ventilation, and air conditioning system. The results show that the modular technique is well suited to evaluating complex engineering systems and performs well when system parameters or configurations are modified.
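In matrix form, the core of such a propagation is a sensitivity (Jacobian) matrix sandwich. A generic NumPy sketch (not the dissertation's specific formulation) that also returns the percent contribution of each input:

```python
import numpy as np

def propagate(jac, u_x):
    """jac[j, i] = dy_j/dx_i at the operating point; u_x[i] = standard
    uncertainty of input x_i (inputs assumed independent)."""
    cov_y = jac @ np.diag(u_x**2) @ jac.T
    u_y = np.sqrt(np.diag(cov_y))
    # percent contribution of each input to each output's variance
    contrib = (jac * u_x) ** 2 / np.diag(cov_y)[:, None]
    return u_y, contrib

# Two outputs, three inputs -- all numbers are placeholders. Reconfiguring the
# system only means editing jac or u_x and re-running, hence the flexibility.
jac = np.array([[2.0, -1.0, 0.5],
                [0.0,  3.0, 1.0]])
u_y, contrib = propagate(jac, np.array([0.10, 0.05, 0.20]))
print(u_y, contrib.round(2))
```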
170

INVESTIGATION OF LATTICE PHYSICS PHENOMENA WITH UNCERTAINTY ANALYSIS AND SENSITIVITY STUDY OF ENERGY GROUP DISCRETIZATION FOR THE CANADIAN PRESSURE TUBE SUPERCRITICAL WATER-COOLED REACTOR

Moghrabi, Ahmad January 2018
The Generation IV International Forum (GIF) has initiated an international collaboration for the research and development of Generation IV future nuclear energy systems. The Canadian PT-SCWR is Canada's contribution to the GIF as a GEN-IV advanced energy system. The PT-SCWR is a pressure tube reactor type, considered an evolution of the conventional CANDU reactor, and is characterized by bi-directional coolant flow through the High Efficiency Re-entrant Channel (HERC). The Canadian SCWR is a unique design involving high-pressure, high-temperature coolant, a light water moderator, and a thorium-plutonium fuel, and is unlike any operating or conceptual reactor at this time. The SCWR does share some features with the BWR configuration (direct cycle, control blades, etc.), CANDU (separate low-temperature moderator), and the HTGR/HTR (coolant with a high propensity to up-scatter), so it represents a hybrid of many concepts. Because of this hybrid nature, subtle feedback effects have been reported in the literature which have not been fully analyzed and which depend strongly on these unique core characteristics. Given the significant isotopic changes in the fuel, it is also necessary to understand how the feedback mechanisms evolve with fuel depletion. Finally, given the spectral differences from both CANDU and HTR reactors, further study of few-energy-group homogenization is needed. The three papers in this thesis address each of these issues identified in the literature. Models were created using the SCALE (Standardized Computer Analysis for Licensing Evaluation) code package. Through this work, it was found that the lattice is affected by more than one large individual phenomenon, but that these phenomena cancel one another, leaving a small net change. These phenomena are strongly affected by the coolant properties, which play a major role in the neutron thermalization process, since the PT-SCWR is characterized by a tight lattice pitch. It was observed that fresh and depleted fuel behave almost identically, with small differences due to Pu depletion and the production of minor actinides, 233U and xenon. It was also found that a higher thermal energy barrier is recommended for the two-energy-group structure, since the PT-SCWR is characterized by a high coolant temperature compared to conventional water-cooled thermal reactors. Optimum two-, three- and four-energy-group structure homogenizations were determined based on the behaviour of the neutron multiplication factor and other reactivity feedback coefficients. Robust numerical computations and experience in the physics of the problem were used in the few-energy-group optimization methodology. The results show that the accuracy of the expected solution becomes largely independent of the number of energy groups once more than four energy groups are used. / Thesis / Doctor of Philosophy (PhD)
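Few-group homogenization of the kind studied here rests on flux-weighted condensation of fine-group constants; a generic sketch (not the SCALE implementation) is:

```python
import numpy as np

def collapse(sigma_fine, flux_fine, group_edges):
    """Flux-weighted collapse of fine-group cross sections into few groups.
    group_edges: list of (lo, hi) fine-group index ranges per coarse group."""
    return np.array([
        np.sum(sigma_fine[lo:hi] * flux_fine[lo:hi]) / np.sum(flux_fine[lo:hi])
        for lo, hi in group_edges
    ])

# e.g. collapse 8 fine groups into 2, with a hypothetical thermal-energy
# barrier placed at fine-group index 5 (all numbers are placeholders)
sigma = np.array([1.2, 1.1, 1.0, 0.9, 0.8, 2.0, 2.4, 3.0])
flux  = np.array([0.5, 0.8, 1.0, 1.1, 0.9, 1.5, 1.3, 0.7])
print(collapse(sigma, flux, [(0, 5), (5, 8)]))
```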
