1

Three material decomposition in dual energy CT for brachytherapy using the iterative image reconstruction algorithm DIRA : Performance of the method for an anthropomorphic phantom

Westin, Robin January 2013 (has links)
Brachytherapy is radiation therapy performed by placing a radiation source near or inside a tumor. Doses calculated with the current water-based brachytherapy dose formalism (TG-43) and with new model-based dose calculation algorithms (MBDCAs) can differ by more than a factor of 10. There is therefore a need for voxel-by-voxel cross-section assignment: ideally, both the tissue composition and the mass density of every voxel should be known for the individual patient. A method for determining tissue composition via three-material decomposition (3MD) from dual energy CT scans was developed at Linköping University. The method, named DIRA, is a model-based iterative reconstruction algorithm that uses two photon energies for image reconstruction and 3MD for quantitative tissue classification of the reconstructed volumetric dataset. This thesis investigated the accuracy of the 3MD method applied to prostate tissue in an anthropomorphic phantom when using two different approximations of soft tissues in DIRA. The distributions of CT numbers for soft tissues in a contemporary dual energy CT scanner were also determined, and it was investigated whether these distributions can be used for classification of soft tissues via thresholding. It was found that the relative errors of the mass energy absorption coefficient (MEAC) and the linear attenuation coefficient (LAC) of the approximated mixture, as functions of photon energy, were less than 6% in the energy region from 1 keV to 1 MeV. This showed that DIRA performed well for the selected anthropomorphic phantom and that it was relatively insensitive to the choice of base materials for the approximation of soft tissues. The distributions of CT numbers of liver, muscle and kidney tissues overlapped; for example, a voxel containing muscle could be misclassified as liver in 42 cases out of 100. This suggests that pure thresholding is insufficient for tissue classification of soft tissues and that more advanced methods should be used.
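For illustration, a minimal sketch of the linear-system form of a three-material decomposition for a single voxel is given below, assuming the voxel is a mixture of three base materials whose volume fractions sum to one; the attenuation values and base materials are illustrative placeholders, not values taken from DIRA or the thesis.

```python
import numpy as np

# Minimal sketch of a three-material decomposition (3MD) for one voxel.
# Assumption: the voxel's linear attenuation coefficient (LAC) at each of the
# two scan energies is a volume-fraction-weighted sum of three base-material
# LACs, and the volume fractions sum to one. The numerical values below are
# illustrative placeholders, not values from DIRA or the thesis.

# LACs (1/cm) of the three base materials at the low and high energy.
mu_low  = np.array([0.227, 0.210, 0.260])   # e.g. water-, lipid-, protein-like
mu_high = np.array([0.171, 0.160, 0.190])

# Measured (reconstructed) voxel LACs at the two energies.
mu_voxel_low, mu_voxel_high = 0.225, 0.170

# Linear system: two attenuation equations plus the sum-to-one constraint.
A = np.vstack([mu_low, mu_high, np.ones(3)])
b = np.array([mu_voxel_low, mu_voxel_high, 1.0])

fractions = np.linalg.solve(A, b)
print("base material fractions:", fractions)   # may fall slightly outside [0, 1] due to noise
```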
2

Nonlinear Uncertainty Quantification, Sensitivity Analysis, and Uncertainty Propagation of a Dynamic Electrical Circuit

Doty, Austin January 2012 (has links)
No description available.
3

Propagation of Unit Location Uncertainty in Dense Storage Environments

Reilly, Patrick 01 January 2015 (has links)
Effective space utilization is an important consideration in logistics systems and is especially important in dense storage environments. Dense storage systems provide high space utilization; however, because not all items are immediately accessible, storage and retrieval operations often require shifting other stored items in order to access the desired item, which results in item location uncertainty when asset tracking is insufficient. Given initial certainty in item location, we use Markovian principles to quantify the growth of uncertainty as a function of retrieval requests and find that the steady-state probability distribution for any communicating class of storage locations approaches the uniform distribution. Using this result, an expected search-time model is developed and applied to the systems analyzed. We also develop metrics that quantify and characterize uncertainty in item location to aid in understanding the nature of that uncertainty. By incorporating uncertainty into our logistics model and conducting numerical experiments, we gain valuable insights into the uncertainty problem, such as the benefit of multiple item copies in reducing expected search time and the varied response to different retrieval policies in otherwise identical systems.
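As an illustration of the Markovian argument, the sketch below uses a toy random-shift model (not the thesis's storage model) in which each retrieval request may shuffle an item to an adjacent slot; because the assumed transition matrix is doubly stochastic, the belief over the item's location converges to the uniform distribution.

```python
import numpy as np

# Toy random-shift model (not the thesis's storage model): an item sits in one
# of n dense-storage slots, and each retrieval request moves it to an adjacent
# slot with probability p per side, otherwise it stays put. The transition
# matrix is symmetric (doubly stochastic), so the belief over the item's
# location converges to uniform over the communicating class.

n, p = 6, 0.25
P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = p
    if i < n - 1:
        P[i, i + 1] = p
    P[i, i] = 1.0 - P[i].sum()            # remaining probability: stay put

belief = np.zeros(n)
belief[0] = 1.0                            # initially certain: item is in slot 0
for k in (1, 5, 20, 100):
    print(k, np.round(belief @ np.linalg.matrix_power(P, k), 4))
# As k grows the belief approaches 1/n everywhere, so the expected search
# effort approaches that of an uninformed search over the whole class.
```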
4

Orbit uncertainty propagation through sparse grids

Nevels, Matthew David 06 August 2011 (has links)
Sparse grids of quadrature points were used to propagate uncertainty forward in time through orbital mechanics simulations. Propagation of initial uncertainty through a nonlinear dynamic model is examined with regard to the uncertainty of orbit estimation. The necessary fundamentals of orbital mechanics, probability, and nonlinear estimation theory are reviewed to allow greater understanding of the problem. The sparse grid method itself and its implementation are covered in detail, along with its key properties and how best to tailor it to a given problem based on the inputs and desired outputs. Three test cases were run: a restricted two-body problem, a perturbed two-body problem, and a three-body problem in which the orbiting body is positioned at a Lagrange point. It is shown that the sparse grid method achieves sufficient accuracy for all mean calculations in the given problems and that higher accuracy levels allow accurate estimation of higher moments such as the covariance.
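The sketch below illustrates quadrature-point propagation of orbit uncertainty in the spirit described above, but uses a small tensor-product Gauss-Hermite grid as a dense stand-in for a true sparse grid; the planar two-body setup, canonical units and uncertainty magnitudes are illustrative assumptions, not cases from the thesis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from scipy.integrate import solve_ivp

# Quadrature-point propagation of orbit uncertainty, sketched with a small
# tensor-product Gauss-Hermite grid as a dense stand-in for a true sparse grid.
# Planar two-body dynamics in canonical units (mu = 1); for brevity only the
# radial position and tangential velocity are treated as uncertain.

mu = 1.0
def two_body(t, s):
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -mu * x / r3, -mu * y / r3]

mean = np.array([1.0, 0.0, 0.0, 1.0])      # circular orbit of radius 1
sig  = np.array([1e-3, 1e-3])              # std dev of x and vy

nodes, weights = hermegauss(3)             # 3 nodes per uncertain dimension
weights = weights / weights.sum()          # normalize to standard normal weights

finals, w = [], []
for i, ni in enumerate(nodes):
    for j, nj in enumerate(nodes):
        s0 = mean.copy()
        s0[0] += sig[0] * ni               # perturb radial position x
        s0[3] += sig[1] * nj               # perturb tangential velocity vy
        sol = solve_ivp(two_body, (0.0, 10.0), s0, rtol=1e-9, atol=1e-12)
        finals.append(sol.y[:, -1])
        w.append(weights[i] * weights[j])

finals, w = np.array(finals), np.array(w)
mean_f = w @ finals                        # quadrature estimate of the mean state
cov_f  = (finals - mean_f).T @ (np.diag(w) @ (finals - mean_f))
print("propagated mean state:", mean_f)
print("propagated covariance diagonal:", np.diag(cov_f))
```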
5

Propagation of Imprecise Probabilities through Black Box Models

Bruns, Morgan Chase 12 April 2006 (has links)
From the decision-based design perspective, decision making is the critical element of the design process. All practical decision making occurs under some degree of uncertainty. Subjective expected utility theory is a well-established method for decision making under uncertainty; however, it assumes that the decision maker can express his or her beliefs as precise probability distributions. For many reasons, both practical and theoretical, it can be beneficial to relax this assumption of precision. One possible means of avoiding this assumption is the use of imprecise probabilities. Imprecise probabilities are more expressive of uncertainty than precise probabilities, but they are also more computationally cumbersome. Probability Bounds Analysis (PBA) is a compromise between the expressivity of imprecise probabilities and the computational ease of modeling beliefs with precise probabilities. In order for PBA to be implemented in engineering design, it is necessary to develop appropriate computational methods for propagating probability boxes (p-boxes) through black box engineering models. This thesis examines the range of applicability of current methods for p-box propagation and proposes three alternative methods. These methods are applied to the solution of three increasingly complex numerical examples.
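A minimal sketch of one common p-box propagation strategy is shown below, assuming the input p-box is bounded by two normal CDFs and the black-box model is monotonically increasing; both assumptions are illustrative and are not taken from the thesis, which proposes its own methods.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of propagating a probability box (p-box) through a monotone
# black-box model by slicing the CDF bounds. The input p-box and the model are
# illustrative assumptions, not examples from the thesis.

# Input p-box: the true CDF is only known to lie between two normal CDFs
# (same shape, uncertain mean between 9 and 11).
lower_cdf_inv = lambda u: norm.ppf(u, loc=11.0, scale=1.0)   # right envelope
upper_cdf_inv = lambda u: norm.ppf(u, loc=9.0,  scale=1.0)   # left envelope

def model(x):
    """Black-box response, assumed monotonically increasing in x."""
    return 0.5 * x ** 2 + 3.0

# Slice the probability axis; each slice maps to an interval of x values,
# which the monotone model maps to an interval of responses.
u = np.linspace(0.005, 0.995, 200)
y_lo = model(upper_cdf_inv(u))   # left envelope of the output p-box
y_hi = model(lower_cdf_inv(u))   # right envelope of the output p-box

print("output p-box bounds at the median probability:", y_lo[len(u) // 2], y_hi[len(u) // 2])
# (y_lo, u) and (y_hi, u) trace the bounding CDFs of the response.
```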
6

Efficient Computational Methods for Structural Reliability and Global Sensitivity Analyses

Zhang, Xufang 25 April 2013 (has links)
Uncertainty analysis of a system response is an important part of engineering probabilistic analysis. It includes (a) evaluating moments of the response, (b) performing reliability analysis of the system, (c) assessing the complete probability distribution of the response, and (d) conducting parametric sensitivity analysis of the output. The actual model of the system response is usually a high-dimensional function of the input variables. Although Monte Carlo simulation is a quite general approach for this purpose, it may require an inordinate amount of resources to achieve an acceptable level of accuracy. Development of a computationally efficient method is therefore of great importance. First, the study proposed a moment method for uncertainty quantification of structural systems. A key departure is the use of fractional moments of the response function, as opposed to the integer moments used so far in the literature. The advantage of using fractional moments over integer moments was illustrated by the relation between a single fractional moment and several integer moments. With a small number of samples to compute the fractional moments, the system output distribution was estimated using the principle of maximum entropy (MaxEnt) in conjunction with constraints specified in terms of fractional moments. Compared to the classical MaxEnt, a novel feature of the proposed method is that the fractional exponents of the MaxEnt distribution are determined through the entropy maximization process, instead of being assigned a priori by the analyst. To further reduce the computational cost of the simulation-based entropy method, a multiplicative dimensional reduction method (M-DRM) was proposed to compute the fractional (integer) moments of a generic function with multiple input variables. The M-DRM can accurately approximate a high-dimensional function as the product of a series of low-dimensional functions. Together with the principle of maximum entropy, a novel computational approach was proposed to assess the complete probability distribution of a system output. The accuracy and efficiency of the proposed method for structural reliability analysis were verified against crude Monte Carlo simulation for several examples. Application of the M-DRM was further extended to the variance-based global sensitivity analysis of a system. Compared to local sensitivity analysis, the variance-based sensitivity index provides information about the significance of an input random variable. Since each component variance is defined as a conditional expectation with respect to the system model function, the separable nature of the M-DRM approximation can simplify the high-dimensional integrations in sensitivity analysis. Several examples were presented to illustrate the numerical accuracy and efficiency of the proposed method in comparison to the Monte Carlo simulation method. The last contribution of this study is the development of a computationally efficient method for polynomial chaos expansion (PCE) of a system's response. This PCE model can later be used for uncertainty analysis. However, evaluating the coefficients of a PCE meta-model is a computationally demanding task due to the high-dimensional integrations involved. With the proposed M-DRM, this computational cost can be markedly reduced compared to the classical methods in the literature (simulation or tensor Gauss quadrature).
Accuracy and efficiency of the proposed method for polynomial chaos expansion were verified by considering several practical examples.
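The sketch below illustrates the multiplicative dimensional reduction idea for fractional moments, approximating E[h(X)^a] as a product of one-dimensional Gauss-Hermite expectations around a cut point; the response function, dimension and fractional order are illustrative assumptions, not examples from the thesis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Sketch of the multiplicative dimensional reduction idea for moments:
# approximate E[h(X)^a] of a positive response h with independent standard
# normal inputs by a product of one-dimensional expectations. The response
# function and fractional order below are illustrative assumptions.

def h(x):
    """Toy positive response function of 4 inputs (not from the thesis)."""
    return 1.0 + np.exp(0.3 * x[0] - 0.2 * x[1]) + 0.1 * x[2] ** 2 + 0.05 * x[3] ** 2

n, a = 4, 0.7                      # input dimension, fractional moment order
c = np.zeros(n)                    # cut point: the input mean
nodes, w = hermegauss(7)           # 7-point Gauss-Hermite rule per dimension
w = w / w.sum()                    # normalize for the standard normal weight

# M-DRM: h(x) ~ h(c)^(1-n) * prod_i h(c with x_i varied), so
# E[h^a] ~ h(c)^(a*(1-n)) * prod_i E[h_i(X_i)^a], each factor a 1-D quadrature.
frac_moment = h(c) ** (a * (1 - n))
for i in range(n):
    xi = np.tile(c, (len(nodes), 1)).T
    xi[i] = nodes
    frac_moment *= w @ h(xi) ** a  # one-dimensional Gauss-Hermite expectation

# Crude Monte Carlo reference (the expensive approach M-DRM tries to avoid).
mc = np.mean(h(np.random.default_rng(0).standard_normal((n, 200_000))) ** a)
print("M-DRM fractional moment:", frac_moment, "   Monte Carlo:", mc)
```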
7

Nuclear data uncertainty quantification and data assimilation for a lead-cooled fast reactor : Using integral experiments for improved accuracy

Alhassan, Erwin January 2015 (has links)
For the successful deployment of advanced nuclear systems and the optimization of current reactor designs, high-quality nuclear data are required. Before nuclear data can be used in applications they must first be evaluated, tested and validated against a set of integral experiments, and then converted into formats usable for applications. In the past, the evaluation process was usually based on differential experimental data complemented with nuclear model calculations. This trend is changing fast due to the increase in computational power and the tremendous improvements in nuclear reaction models over the last decade. Since these models have uncertain inputs, they are normally calibrated using experimental data. However, these experiments are themselves not exact. Therefore, the quantities calculated by model codes, such as cross sections and angular distributions, contain uncertainties. Since nuclear data are used as input to reactor transport codes, the output of these codes contains uncertainties due to the data as well. Quantifying these uncertainties is important for setting safety margins, for providing confidence in the interpretation of results, and for deciding where additional effort is needed to reduce them. Also, regulatory bodies are now moving away from conservative evaluations towards best-estimate calculations that are accompanied by uncertainty evaluations. In this work, the Total Monte Carlo (TMC) method was applied to study the impact of nuclear data uncertainties from basic physics to macroscopic reactor parameters for the European Lead Cooled Training Reactor (ELECTRA). As part of the work, nuclear data uncertainties of actinides in the fuel, lead isotopes within the coolant, and some structural materials have been investigated. In the case of the lead coolant it was observed that the uncertainties in keff and the coolant void worth (except in the case of 204Pb) were large, with the most significant contribution coming from 208Pb. New 208Pb and 206Pb random nuclear data libraries with realistic central values have been produced as part of this work. A correlation-based sensitivity method was also used to determine correlations between reactor parameters and cross sections for different isotopes and energy groups. Furthermore, an accept/reject method and a method of assigning file weights based on the likelihood function are proposed for uncertainty reduction using criticality benchmark experiments within the TMC method. It was observed that a significant reduction in nuclear data uncertainty was obtained for some isotopes for ELECTRA after incorporating integral benchmark information. As a further objective of this thesis, a method for selecting benchmarks for code validation for specific reactor applications was developed and applied to the ELECTRA reactor. Finally, a method for combining differential experiments and integral benchmark data for nuclear data adjustment is proposed and applied to the adjustment of neutron-induced 208Pb nuclear data in the fast energy region.
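A minimal sketch of likelihood-based file weighting within a TMC-style workflow is given below; the spread of calculated keff values, the benchmark value and its uncertainty are illustrative numbers, not results from the thesis.

```python
import numpy as np

# Sketch of likelihood-based file weighting in a Total Monte Carlo setting:
# each random nuclear data file k gives a calculated benchmark value C_k, and
# the file is weighted by how well C_k reproduces the measured benchmark E
# with experimental uncertainty sigma_E. Numbers below are illustrative only.

rng = np.random.default_rng(1)
C = rng.normal(1.000, 0.004, size=500)   # calculated keff from 500 random files
E, sigma_E = 0.9985, 0.0015              # benchmark measurement and its 1-sigma

weights = np.exp(-0.5 * ((C - E) / sigma_E) ** 2)
weights /= weights.sum()

prior_mean, prior_std = C.mean(), C.std(ddof=1)
post_mean = weights @ C
post_std = np.sqrt(weights @ (C - post_mean) ** 2)

print(f"prior:    {prior_mean:.5f} +/- {prior_std:.5f}")
print(f"weighted: {post_mean:.5f} +/- {post_std:.5f}   (reduced spread)")
```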
8

Nuclear data uncertainty propagation and uncertainty quantification in nuclear codes

Fiorito, Luca 03 October 2016 (has links)
Uncertainties in nuclear model responses must be quantified to define safety limits, minimize costs and define operational conditions in design. Response uncertainties can also be used to provide feedback on the quality and reliability of parameter evaluations, such as nuclear data. The uncertainties of the predictive model responses stem from several sources, e.g. nuclear data, model approximations, numerical solvers and the influence of random variables. It has been shown that the largest quantifiable sources of uncertainty in nuclear models, such as neutronics and burnup calculations, are the nuclear data, which are provided as evaluated best estimates and uncertainties/covariances in data libraries. Nuclear data uncertainties and/or covariances must be propagated to the model responses with dedicated uncertainty propagation tools. However, most nuclear codes for neutronics and burnup models do not have these capabilities and produce best-estimate results without uncertainties. In this work, nuclear data uncertainty propagation focused on the SCK•CEN burnup code ALEPH-2 and the Monte Carlo N-Particle code MCNP. Two sensitivity analysis procedures, FSAP and ASAP, based on linear perturbation theory, were implemented in ALEPH-2. These routines can propagate nuclear data uncertainties in pure decay models. ASAP and ALEPH-2 were tested and validated on decay heat uncertainty quantification for several fission pulses and for the MYRRHA subcritical system. The decay heat uncertainty is needed to assess the reliability of the decay heat removal systems and to prevent overheating and mechanical failure of reactor components. It was shown that the propagation of independent fission yield and decay data uncertainties can also be carried out with ASAP in neutron irradiation models. Because of the ASAP limitations, the Monte Carlo sampling solver NUDUNA was used to propagate cross section covariances. The applicability constraints of ASAP drove our studies towards the development of a tool that could propagate the uncertainty of any nuclear datum. In addition, the uncertainty propagation tool was required to operate with multiple nuclear codes and systems, including non-linear models. To this end, the Monte Carlo sampling code SANDY was developed. SANDY is independent of the predictive model, as it only interacts with the nuclear data given as input. Nuclear data are sampled from multivariate probability density functions and propagated through the model according to Monte Carlo sampling theory. Not only can SANDY propagate nuclear data uncertainties and covariances to the model responses, it can also identify the impact of each uncertainty contributor by decomposing the response variance. SANDY was extensively tested against integral parameters and was used to quantify the neutron multiplication factor uncertainty of the VENUS-F reactor. Further uncertainty propagation studies were carried out for the burnup models of light water reactor benchmarks. These studies identified fission yields as the largest source of uncertainty for the nuclide density evolution curves of several fission products. However, the current data libraries provide evaluated fission yields and uncertainties devoid of covariance matrices. The lack of fission yield covariance information does not comply with the conservation equations that apply to a fission model, and generates inconsistency in the nuclear data.
In this work, we generated fission yield covariance matrices using a generalised least-squares method and a set of physical constraints. The fission yield covariance matrices resolve the inconsistency in the nuclear data libraries and reduce the role of the fission yields in the uncertainty quantification of burnup model responses. / Doctorat en Sciences de l'ingénieur et technologie
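The sketch below illustrates the generic Monte Carlo sampling approach that a tool like SANDY automates: sample nuclear data from a multivariate normal defined by a covariance matrix, propagate each sample through the model, and inspect the response spread. The covariance matrix, toy response model and per-parameter variance-share heuristic are illustrative assumptions, not SANDY's actual implementation.

```python
import numpy as np

# Sketch of Monte Carlo sampling for nuclear data uncertainty propagation:
# perturb a set of nuclear data parameters according to a multivariate normal
# with a given covariance matrix, run the model for each sample, and read the
# response uncertainty from the output spread. Illustrative stand-ins only.

rng = np.random.default_rng(42)

# Best-estimate nuclear data (e.g. a few group-wise cross sections, barns)
# and their relative covariance matrix (with correlations between groups).
x0 = np.array([2.1, 1.8, 0.9])
rel_cov = np.array([[0.0025, 0.0010, 0.0],
                    [0.0010, 0.0016, 0.0],
                    [0.0,    0.0,    0.0049]])
cov = rel_cov * np.outer(x0, x0)

def model(x):
    """Toy reactor response (placeholder for a neutronics/burnup code run)."""
    return 1.0 + 0.05 * x[0] - 0.03 * x[1] + 0.02 * x[2] ** 2

samples = rng.multivariate_normal(x0, cov, size=5000)
responses = np.apply_along_axis(model, 1, samples)

print("response mean :", responses.mean())
print("response stdev:", responses.std(ddof=1))

# Rough variance decomposition: re-sample one parameter at a time with the
# others fixed at their best estimates to gauge each contributor's share.
for i in range(len(x0)):
    xi = np.tile(x0, (5000, 1))
    xi[:, i] = rng.normal(x0[i], np.sqrt(cov[i, i]), size=5000)
    ri = np.apply_along_axis(model, 1, xi)
    print(f"variance share of parameter {i}: {ri.var(ddof=1) / responses.var(ddof=1):.2%}")
```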
9

The Role of Constitutive Model in Traumatic Brain Injury Prediction

Kacker, Shubhra 28 October 2019 (has links)
No description available.
10

Power Station Thermal Efficiency Performance Method Evaluation

Heerlall, Heeran 16 February 2022 (has links)
Due to global warming, there is an escalated need to move towards cleaner energy solutions. Almost 85% of South Africa's electric energy is provided by Eskom's conventional coal-fired power plants. Globally, coal-fired power plants have a significant share in the power generation energy mix, and this will remain the case over the next 20 years. A study aligned with the aspiration of improving the thermal efficiency of coal-fired power plants was initiated, with a focus on the accuracy of energy accounting. The goal is that, if efficiency losses can be quantified accurately, effort can be prioritized to resolve the inefficiencies. Eskom's thermal accounting tool, the STEP model, was reviewed against relevant industry standards (BS 2885, BS EN 12952-15, IEC 60953-0/Ed1) to evaluate the model uncertainty for losses computed via standard correlations. Relatively large deviations were noted for the boiler radiation, turbine deterioration and make-up water losses. A specific review of OEM (Original Equipment Manufacturer) heat rate correction curves was carried out for the determination of turbine plant losses, as these curves were suspected to have high uncertainty, especially when extrapolated to points of significant deviation from design values. For an evaluated case study, the final feed water correction curves were adjusted based on an analysis done with power plant thermodynamic modelling tools, namely EtaPro Virtual Plant® and Steam Pro®. A Python® based computer model was developed to separately propagate systematic (instrument) and combined uncertainties (including temporal) through the STEP model using a numerical technique called sequential perturbation. The study revealed that the uncertainties associated with thermal efficiency, heat rate and individual thermal losses are very specific to the state of operations, as demonstrated by individual unit performance and the power plant's specific design baseline performance curves. Whilst the uncertainties cannot be generalized, a methodology has been developed to evaluate any case. A 3600 MWe wet-cooled power plant (6 x 600 MWe units) situated in Mpumalanga was selected to study the impact of uncertainties on the STEP model outputs. The case study showed that the thermal efficiency computed by the “direct method” had an instrument uncertainty of 0.756% absolute (abs), versus 0.201% abs for the indirect method, when computed at station level for a 95% confidence interval. For an individual unit, the indirect efficiency uncertainty was as high as 0.581% abs. A study was conducted to find an optimal resolution (segment size) at which the thermal performance metrics should be computed, by discretizing the monthly data into smaller segment sizes and studying the movement of the mean STEP model outputs and the temporal uncertainty. It was found that the 3-hour segment size is optimal, as it gives the maximum movement of the mean of the performance metrics without resulting in large temporal uncertainties. When considering the combined uncertainty (temporal and instrument) at a data resolution of 1 minute and a segment size of 3 hours, the “direct method” had a combined thermal efficiency uncertainty of 0.768% abs versus 0.218% abs for the indirect method, computed at station level for a 95% confidence interval. This means that the temporal uncertainty contributes 2.915% of the combined uncertainty for the “direct method” and 14.919% for the “indirect method”.
The term “STEP Factor” can be used synonymously with effectiveness (the percentage of the actual efficiency relative to the target efficiency). For the case evaluated, the mean “indirect method” STEP Factor at station level moved from 86.698% (using monthly aggregated process data) to 86.135% (when discretized into 3-hour segments), which is roughly a 0.189% abs change in the station's thermal efficiency. This appears fairly small relative to the station's overall efficiency, but it had a significant impact on the evaluation of the STEP Factor losses and on the cost attributed to the change in plant efficiency; for example, the final feed water STEP Factor loss at unit level moved from 2.6% abs to 3.5% abs, which is significant for diagnostic and business-case purposes. The discrepancy between the direct and indirect STEP Factors was subsequently investigated, as the uncertainty bands did not overlap as expected. The re-evaluation of the baseline component performance data resulted in the final feed water and condenser back-pressure heat rate correction curves being adjusted. The exercise revealed that there could potentially be significant baseline performance data uncertainty. The corrected indirect STEP Factor instrument uncertainty was found to be 0.468% abs, which translates to 0.164% abs in overall efficiency. The combined uncertainty was corrected to 0.485% abs at a 3-hour segment size, which translates to 0.171% abs in overall efficiency. The figures stated above are case-specific; however, the models have been developed to analyse any coal-fired power plant at various operating conditions. Furthermore, the uncertainty propagation module can be used to propagate uncertainty through any other discontinuous function or computer model. Various recommendations have been made to improve the STEP model uncertainty, data acquisition, systematic uncertainty, temporal uncertainty and baseline data uncertainty.
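A minimal sketch of sequential perturbation, the numerical technique named above, is given below; the toy “direct method” efficiency function and the measurement uncertainties are illustrative assumptions, not the STEP model.

```python
import numpy as np

# Sketch of sequential perturbation uncertainty propagation: perturb each
# measured input by its standard uncertainty, one at a time, re-run the
# performance calculation, and combine the output deviations in quadrature.
# The efficiency model and numbers below are illustrative only.

def direct_efficiency(inputs):
    """Toy 'direct method' efficiency: electrical output over heat input (%)."""
    power_mw, coal_flow_kg_s, cv_mj_kg = inputs
    return 100.0 * power_mw / (coal_flow_kg_s * cv_mj_kg)

x = np.array([600.0, 75.0, 22.0])        # measured MW, kg/s, MJ/kg
u = np.array([3.0, 1.1, 0.25])           # standard (instrument) uncertainties

base = direct_efficiency(x)
deltas = []
for i in range(len(x)):
    xp = x.copy()
    xp[i] += u[i]                         # perturb one input by its uncertainty
    deltas.append(direct_efficiency(xp) - base)

u_eff = np.sqrt(np.sum(np.square(deltas)))
print(f"efficiency = {base:.3f}% +/- {u_eff:.3f}% (1 sigma, instrument only)")
```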
