181 |
Design and Characterization of a Miniaturized Fluorescence Analysis System for Measurement of Cell-Free DNA / Bondi, Parker / 30 November 2018
Sepsis is a dysregulated systemic response to infection and is one of the leading causes of in-hospital mortality in Canada. Accurate distinction between survivors and non-survivors of sepsis has recently been demonstrated through quantification of cell-free DNA (cfDNA) concentration in blood. In an analysis of 80 septic patients, non-survivors of sepsis had significantly higher cfDNA concentrations than survivors or healthy patients. Real-time separation of cfDNA from contaminants in blood has also been demonstrated using a cross-channel microfluidic device. Current methods for DNA quantification rely on time-consuming and complicated laboratory equipment and are therefore not suitable for real-time bedside testing. Thus, a handheld cfDNA fluorescence device, coined the Sepsis Check, was designed that can perform DNA characterization in a reservoir device and DNA detection in a microfluidic cross-channel device. The goal is to use this system, along with the cross-channel devices, to distinguish survivors and healthy donors from non-survivors among patients with sepsis.
The design consists of a 470 nm light emitting diode (LED) with 170 mW of optical power (LED470L – ThorLabs), an aspherical uncoated lens with a focal length of 15 mm (LA1540-ML – ThorLabs), a 488 nm bandpass filter with a 3 nm full width at half maximum (FWHM) (FL05488-3 – ThorLabs), an aspherical uncoated lens with a focal length of 25 mm (LA1560-ML – ThorLabs), an aspherical uncoated lens with a focal length of 35 mm (LA1027-ML – ThorLabs), a 525 nm longpass filter with an optical density >4.0 (F84744 – Edmund Optics), and a Raspberry Pi Camera V2 (Raspberry Pi Foundation). The Sepsis Check is designed to excite the dsDNA-specific PicoGreen fluorophore, which has a peak absorbance at 502 nm and a peak emission at 523 nm. In summary, the Sepsis Check presented in this thesis is capable of calibrating dsDNA concentration from 1 μg/mL to 10 μg/mL and of detecting DNA accumulation of 5 μg/mL and 10 μg/mL in the cross-channel device. This tool can be a valuable addition to the ICU for rapidly assessing the severity of sepsis and supporting informed decision making. / Thesis / Master of Applied Science (MASc)
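As an illustration of the calibration step described above, the following is a minimal sketch of fitting a linear calibration curve relating background-corrected fluorescence intensity to dsDNA concentration over the 1–10 μg/mL range and inverting it for an unknown sample; the intensity values and function names are hypothetical and do not come from the thesis.

```python
import numpy as np

# Hypothetical calibration data: dsDNA standards (ug/mL) and the mean
# background-corrected pixel intensity recorded by the camera (arbitrary units).
concentrations = np.array([1.0, 2.5, 5.0, 7.5, 10.0])
intensities = np.array([14.2, 33.9, 66.1, 97.8, 131.5])

# PicoGreen response is approximately linear over this range, so fit I = a*C + b.
a, b = np.polyfit(concentrations, intensities, deg=1)

def estimate_concentration(intensity):
    """Invert the calibration curve to estimate dsDNA concentration (ug/mL)."""
    return (intensity - b) / a

# Example: an unknown sample measured in the reservoir device.
print(f"Estimated cfDNA concentration: {estimate_concentration(80.0):.2f} ug/mL")
```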
|
182 |
High-resolution Sequence Stratigraphy, Facies Analysis, and Sediment Quantification of the Cretaceous Gallup System, New Mexico, U.S.A. / Lin, Wen / January 2018
The quantification of sediment budget in a well-defined ancient source-to-sink (S2S) system is vital to understanding Earth history and basin evolution. Fulcrum analysis is an effective approach to estimating the sediment volumes of depositional systems, assuming total mass balance from source areas to basins. The key to this approach is to quantify sediment in a closed S2S system with time controls. We analyzed Allomember E of the Cretaceous Dunvegan Alloformation in the Western Canadian Sedimentary Basin to test this sediment estimation approach. The results indicate that the sediment transported by the trunk river generally matches the sediment estimated to be deposited in the basin. The upper-range estimate may suggest mud dispersal southward by geostrophic currents.
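A simple way to picture the mass-balance check behind fulcrum analysis is to compare the sediment mass delivered by the trunk river over the depositional duration with the mass preserved in the basin. The sketch below uses entirely hypothetical numbers, not the values estimated in the dissertation.

```python
# Hypothetical fulcrum-style mass-balance check (all numbers are illustrative).
annual_sediment_load_t = 5.0e6      # sediment carried by the trunk river, t/yr
duration_yr = 2.0e5                 # depositional duration of the allomember, yr
supplied_mass_t = annual_sediment_load_t * duration_yr

deposited_volume_m3 = 6.0e11        # sediment volume preserved in the basin, m^3
dry_bulk_density_t_per_m3 = 1.6     # assumed dry bulk density of the deposits
deposited_mass_t = deposited_volume_m3 * dry_bulk_density_t_per_m3

# If supply and deposit roughly balance, the closed source-to-sink budget holds.
print(f"supplied : deposited = {supplied_mass_t / deposited_mass_t:.2f}")
```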
Deciphering the relationships between traditional lithostratigraphy and sequence stratigraphy is the key to correctly understanding time-stratigraphic relationships. High-resolution sequence stratigraphic analysis of the Cretaceous Gallup system documents high-frequency depositional cyclicity using detailed facies analysis of extensively exposed outcrops in northwestern New Mexico, US. We identified thirteen stratigraphic sequences, comprising twenty-six parasequence sets and sixty-one parasequences. Shoreline trajectories are evaluated based on the geometry of the parasequences. The results show that the previously identified sandstone tongues are equivalent to high-frequency sequence sets. The depositional duration estimates of the respective sequence stratigraphic units, together with the estimated changes in relative sea level, imply that Milankovitch-cycle-dominated glacio-eustasy may be the predominant control on the high-frequency sequence stratigraphy.
Shoreline processes are more dynamic and complicated where mixed energy regimes dominate. Re-evaluation of the depositional environments of the Gallup system and reconstruction of the paleogeography with temporal controls help to examine the depositional evolution in space and time. Paleogeographic reconstructions at parasequence scale allow for the documentation of process-based lateral facies variations and the depositional evolution. A distinction between different wave-dominated facies associations is proposed based on this process-based facies analysis. / Dissertation / Doctor of Philosophy (PhD)
|
183 |
A Variational Approach to Estimating Uncertain Parameters in Elliptic Systems / van Wyk, Hans-Werner / 25 May 2012
As simulation plays an increasingly central role in modern science and engineering research, by supplementing experiments, aiding in the prototyping of engineering systems, or informing decisions on safety and reliability, the need to quantify uncertainty in model outputs due to uncertainties in the model parameters becomes critical. However, the statistical characterization of the model parameters is rarely known. In this thesis, we propose a variational approach to solve the stochastic inverse problem of obtaining a statistical description of the diffusion coefficient in an elliptic partial differential equation, based on noisy measurements of the model output. We formulate the parameter identification problem as an infinite-dimensional constrained optimization problem for which we establish existence of minimizers as well as first-order necessary conditions. A spectral approximation of the uncertain observations (via a truncated Karhunen-Loève expansion) allows us to approximate the infinite-dimensional problem by a smooth, albeit high-dimensional, deterministic optimization problem, the so-called 'finite noise' problem, in the space of functions with bounded mixed derivatives. We prove convergence of 'finite noise' minimizers to the appropriate infinite-dimensional ones, and devise both a gradient-based and a sampling-based strategy for locating these numerically. Lastly, we illustrate our methods by means of numerical examples. / Ph. D.
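To make the 'finite noise' idea concrete, here is a minimal sketch of a truncated Karhunen-Loève expansion of a one-dimensional random field, using an exponential covariance kernel chosen purely for illustration; the kernel, grid, and truncation level are assumptions and not the setup used in the thesis.

```python
import numpy as np

# Grid on [0, 1] and an exponential covariance kernel (illustrative choice).
n, corr_len, n_terms = 200, 0.2, 10
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Eigen-decomposition of the covariance matrix approximates the KL eigenpairs.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1][:n_terms]          # keep the leading modes
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def sample_field(rng):
    """Draw one realization of the truncated ('finite noise') field."""
    xi = rng.standard_normal(n_terms)                # uncorrelated random coefficients
    return eigvecs @ (np.sqrt(eigvals) * xi)

rng = np.random.default_rng(0)
realization = sample_field(rng)
print(f"field values at the first five grid points: {np.round(realization[:5], 3)}")
```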
|
184 |
Sequential learning, large-scale calibration, and uncertainty quantification / Huang, Jiangeng / 23 July 2019
With remarkable advances in computing power, computer experiments continue to expand the boundaries and drive down the cost of various scientific discoveries. New challenges keep arising from designing, analyzing, modeling, calibrating, optimizing, and predicting in computer experiments. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For heteroskedastic computer experiments, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from input-dependent noise. Motivated by challenges in both large data size and model fidelity arising from ever larger modern computer experiments, highly accurate and computationally efficient divide-and-conquer calibration methods based on on-site experimental design and surrogate modeling for large-scale computer models are developed in this dissertation. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This on-site surrogate calibration method is further extended to multiple-output calibration problems. / Doctor of Philosophy / With remarkable advances in computing power, complex physical systems today can be simulated comparatively cheaply and to high accuracy through computer experiments. Computer experiments continue to expand the boundaries and drive down the cost of various scientific investigations, including biological, business, engineering, industrial, management, health-related, physical, and social sciences. This dissertation consists of six chapters, exploring statistical methodologies in sequential learning, model calibration, and uncertainty quantification for heteroskedastic computer experiments and large-scale computer experiments. For computer experiments with a changing signal-to-noise ratio, an optimal lookahead-based sequential learning strategy is presented, balancing replication and exploration to facilitate separating signal from a complex noise structure. In order to effectively extract key information from massive amounts of simulation output and make better predictions about the real world, highly accurate and computationally efficient divide-and-conquer calibration methods for large-scale computer models are developed in this dissertation, addressing challenges in both large data size and model fidelity arising from ever larger modern computer experiments. The proposed methodology is applied to calibrate a real computer experiment from the gas and oil industry. This large-scale calibration method is further extended to solve multiple-output calibration problems.
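The role of replication in a heteroskedastic computer experiment can be illustrated with a much-simplified sketch (this is not the lookahead criterion developed in the dissertation): with replicates at each design site, the sample mean estimates the signal while the sample variance estimates the input-dependent noise. The simulator and design below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heteroskedastic simulator: smooth signal plus input-dependent noise.
def simulator(x):
    noise_sd = 0.05 + 0.3 * x                    # noise level grows with x
    return np.sin(2 * np.pi * x) + noise_sd * rng.standard_normal()

# Design: a handful of unique sites, each replicated several times.
sites = np.linspace(0.0, 1.0, 8)
n_reps = 20
runs = np.array([[simulator(s) for _ in range(n_reps)] for s in sites])

signal_hat = runs.mean(axis=1)                   # estimate of the underlying signal
noise_var_hat = runs.var(axis=1, ddof=1)         # estimate of input-dependent noise variance

for s, m, v in zip(sites, signal_hat, noise_var_hat):
    print(f"x = {s:.2f}   signal ~ {m:+.3f}   noise variance ~ {v:.4f}")
```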
|
185 |
On Methodology for Verification, Validation and Uncertainty Quantification in Power Electronic Converters Modeling / Rashidi Mehrabadi, Niloofar / 18 September 2014
This thesis provides insight into quantitative accuracy assessment of the modeling and simulation of power electronic converters. Verification, Validation, and Uncertainty Quantification (VV&UQ) provides a means to quantify the disagreement between computational simulation results and experimental results, so that comparisons are quantitative rather than qualitative. Due to the broad application of modeling and simulation in power electronics, VV&UQ is used to evaluate the credibility of modeling and simulation results. The topic of VV&UQ therefore needs to be studied specifically for power electronic converters. To carry out this work, the formal procedure for VV&UQ of power electronic converters is presented. Definitions of the fundamental terms in the proposed framework are also provided.
The accuracy of the switching model of a three-phase Voltage Source Inverter (VSI) is quantitatively assessed following the proposed procedure. Accordingly, this thesis describes the hardware design and development of the switching model of the three-phase VSI. / Master of Science
|
186 |
Data-driven Methods in Mechanical Model Calibration and Prediction for Mesostructured Materials / Kim, Jee Yun / 01 October 2018
Mesoscale design involving control of the material distribution pattern can create a statistically heterogeneous material system, which has shown increased adaptability to complex mechanical environments involving highly non-uniform stress fields. Advances in multi-material additive manufacturing can aid in this mesoscale design by providing voxel-level control of material properties. This vast freedom in design space also unlocks possibilities for optimization of the material distribution pattern. The optimization problem can be divided into a forward problem focusing on accurate prediction and an inverse problem focusing on efficient search of the optimal design. In the forward problem, the physical behavior of the material can be modeled based on fundamental mechanics laws and simulated through finite element analysis (FEA). A major limitation in modeling is the unknown parameters in the constitutive equations that describe the constituent materials; determining these parameters via conventional single-material testing has proven to be insufficient, which necessitates novel and effective approaches to calibration.
A calibration framework based on Bayesian inference, which integrates data from simulations and physical experiments, has been applied to a study involving a mesostructured material fabricated by fused deposition modeling. Calibration results provide insight into the values to which these parameters converge, as well as the material parameters on which the model output depends most strongly, while accounting for sources of uncertainty introduced during the modeling process. Additionally, this statistical formulation is able to provide quick predictions of the physical system by implementing a surrogate and a discrepancy model. The surrogate model is meant to be a statistical representation of the simulation results, circumventing issues arising from computational load, while the discrepancy model is meant to account for the difference between the simulation output and physical experiments. In this thesis, this Bayesian calibration framework is applied to a material bending problem, where in-situ mechanical characterization data and FEA simulations based on constitutive modeling are combined to produce updated values of the unknown material parameters with uncertainty. / Master of Science / A material system obtained by applying a pattern of multiple materials has proven its adaptability to complex practical conditions. The layer-by-layer manufacturing process of additive manufacturing allows for this type of design because of its control over where material is deposited. This possibility then raises the question of how a multi-material system can be optimized in its design for a given application. In this research, we focus mainly on the problem of accurately predicting the response of the material when subjected to stimuli. Conventionally, simulations aided by finite element analysis (FEA) were relied upon for prediction; however, this presents issues such as long run times and uncertainty in context-specific inputs of the simulation. We instead have adopted a framework using advanced statistical methodology able to combine both experimental and simulation data to significantly reduce run times as well as quantify the various uncertainties associated with running simulations.
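A heavily simplified sketch of the Bayesian calibration idea is given below: a toy simulator stands in for the FEA model, synthetic 'experiments' include a structural discrepancy, and a grid-based posterior is computed for a single unknown parameter with the discrepancy crudely absorbed into an inflated noise variance. The model, parameter, and data are hypothetical and do not reproduce the surrogate-plus-discrepancy formulation used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'simulator' standing in for the FEA model: the response depends on one
# unknown material parameter theta (hypothetical, for illustration only).
def simulator(x, theta):
    return theta * x + 0.1 * x**2

# Synthetic physical experiments: generated with a 'true' theta, a structural
# model discrepancy the simulator cannot capture, and measurement noise.
x_obs = np.linspace(0.1, 1.0, 10)
true_theta, sigma = 2.0, 0.05
y_obs = (simulator(x_obs, true_theta)
         + 0.2 * np.sin(3 * x_obs)                       # model discrepancy
         + sigma * rng.standard_normal(x_obs.size))      # measurement noise

# Grid-based Bayesian calibration: flat prior on theta, Gaussian likelihood whose
# variance inflates sigma^2 to crudely budget for the unmodeled discrepancy.
theta_grid = np.linspace(0.0, 4.0, 401)
inflated_var = sigma**2 + 0.02
log_post = np.array([-0.5 * np.sum((y_obs - simulator(x_obs, t))**2) / inflated_var
                     for t in theta_grid])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta_grid)

post_mean = np.trapz(theta_grid * post, theta_grid)
post_sd = np.sqrt(np.trapz((theta_grid - post_mean)**2 * post, theta_grid))
print(f"posterior for theta: mean = {post_mean:.3f}, sd = {post_sd:.3f}")
```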
|
187 |
Uncertainty quantification in dynamical models. An application to cocaine consumption in Spain / Rubio Monzó, María / 13 October 2015
The present Ph.D. Thesis considers epidemiological mathematical models based on ordinary differential equations and shows their application to understanding the cocaine consumption epidemic in Spain. Three mathematical models are presented to predict the evolution of the epidemic in the near future, in order to select the model that best reflects the data. According to the results obtained for the selected model, if there are no changes in cocaine consumption policies or in the economic environment, cocaine consumption in Spain will increase over the next few years. Furthermore, we use different techniques to estimate 95% confidence intervals and, consequently, quantify the uncertainty in the predictions. In addition, using several techniques, we conducted a model sensitivity analysis to determine which parameters most influence cocaine consumption in Spain. These analyses reveal that prevention actions targeting the cocaine-consuming population may be the most effective strategy to control this trend. / Rubio Monzó, M. (2015). Uncertainty quantification in dynamical models. An application to cocaine consumption in Spain [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/55844
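The sensitivity analysis described above can be sketched with a toy compartmental model. The two-compartment structure, parameter values, and quantity of interest below are hypothetical illustrations, not the models developed in the thesis; the sketch simply shows how elasticity-type sensitivities of a quantity of interest to each parameter can be computed by finite differences.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-compartment 'social contagion' model: N = non-consumers,
# C = consumers. Consumption spreads through social contact at rate beta and
# consumers quit (returning to N) at rate gamma.
def rhs(t, y, beta, gamma):
    N, C = y
    new_consumers = beta * N * C
    quitters = gamma * C
    return [-new_consumers + quitters, new_consumers - quitters]

def consumers_at_horizon(beta, gamma, horizon=10.0):
    sol = solve_ivp(rhs, (0.0, horizon), [0.95, 0.05], args=(beta, gamma),
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]               # proportion of consumers at the horizon

# Normalized (elasticity-type) sensitivities via central finite differences.
params = {"beta": 0.8, "gamma": 0.3}
baseline = consumers_at_horizon(**params)
for name, value in params.items():
    h = 1e-4 * value
    up = consumers_at_horizon(**{**params, name: value + h})
    down = consumers_at_horizon(**{**params, name: value - h})
    sensitivity = (up - down) / (2 * h) * value / baseline
    print(f"elasticity of consumers with respect to {name}: {sensitivity:+.3f}")
```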
|
188 |
On the importance of blind testing in archaeological science: the example from lithic functional studies / Evans, Adrian A. / January 2014
Blind-testing is an important tool that should be used by all analytical fields as an approach to validating methods. Several fields outside archaeological science do this well. It is unfortunate that many applied methods lack the strong underpinning that should come from blind-testing. Historically, lithic microwear analysis has been subjected to such testing, the results of which stirred considerable debate. However, putting this aside, it is argued here that the tests have not been adequately exploited. Too much attention has been focused on basic results and their implications, rather than on using the tests as a powerful tool to improve the method. Here the tests are revisited and reviewed in a new light. This approach is used to highlight specific areas of methodological weakness that can be targeted by developmental research. It illustrates the value of having a large dataset of consistently designed blind-tests in method evaluation and suggests that fields such as lithic microwear analysis would greatly benefit from such testing. The opportunity is also taken to discuss recent developments in quantitative methods within lithic functional studies and how such techniques might integrate with current practices.
|
189 |
SIR-models and uncertainty quantification / Jakobsson, Per Henrik; Wärnberg, Anton / January 2024
This thesis applies the theory of uncertainty quantification and sensitivity analysis to the SIR and SEIR models for the spread of diseases. We attempt to determine whether this theory can be applied to estimate the model parameters to an acceptable degree of accuracy. Using sensitivity analysis, we determine which parameters of the models are the most significant for a given quantity of interest. We apply forward uncertainty quantification to determine how the uncertainty in the model parameters propagates to the quantities of interest. Lastly, we apply uncertainty quantification based on the maximum likelihood method to estimate the model parameters. To easily verify the results, we use synthetic data when estimating the parameters. After applying these methods, we see that the importance of the model parameters depends heavily on the choice of quantity of interest. We also note that the estimation method reduces the uncertainty in the quantities of interest, although many sources of error still need to be considered.
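As a minimal illustration of the forward uncertainty quantification step, the sketch below samples uncertain SIR parameters from assumed ranges, propagates each sample through the ODE model, and summarizes a quantity of interest (the peak infected fraction) with a 95% interval. The parameter ranges and initial conditions are assumptions for illustration, not the values used in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard SIR dynamics for fractions S, I, R of a closed population.
def sir_rhs(t, y, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def peak_infected(beta, gamma):
    sol = solve_ivp(sir_rhs, (0.0, 200.0), [0.99, 0.01, 0.0],
                    args=(beta, gamma), max_step=0.5)
    return sol.y[1].max()             # quantity of interest: peak infected fraction

# Forward UQ by Monte Carlo: sample the uncertain parameters and propagate.
rng = np.random.default_rng(0)
n_samples = 200
betas = rng.uniform(0.25, 0.45, n_samples)    # uncertain transmission rate
gammas = rng.uniform(0.08, 0.15, n_samples)   # uncertain recovery rate

qoi = np.array([peak_infected(b, g) for b, g in zip(betas, gammas)])
low, high = np.percentile(qoi, [2.5, 97.5])
print(f"peak infected fraction: mean = {qoi.mean():.3f}, "
      f"95% interval = ({low:.3f}, {high:.3f})")
```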
|
190 |
Évaluation de la perte du volume cérébral en IRM comme marqueur individuel de neurodégénérescence des patients atteints de sclérose en plaques / Evaluation of brain volume loss on MRI as an individual marker of neurodegeneration in multiple sclerosis / Durand-Dubief, Françoise / 20 December 2011
Brain volume loss is currently an MRI marker of neurodegeneration in MS. The available algorithms quantify it either directly, by measuring the volume change between two scans, or indirectly, from the brain volume measured on each scan. Their reliability remains difficult to assess, especially since there is no gold-standard technique. This work consisted, first, of a reproducibility study performed on nine patients' biannual MRI acquisitions (three time points), acquired on two different MRI systems. Post-processing was applied using seven algorithms: BBSI, FreeSurfer, Jacobian Integration, KNBSI, an algorithm based on segmentation/classification, SIENA and SIENAX. Second, a longitudinal and prospective study was performed in 90 MS patients. The study of inter-technique and inter-site variabilities showed that the indirect measurement techniques (segmentation/classification, FreeSurfer) and SIENAX provided heterogeneous atrophy values. In contrast, direct measurement algorithms such as BBSI, KNBSI, Jacobian Integration and, to a lesser extent, SIENA obtained reproducible results. However, BBSI, KNBSI and Jacobian Integration yielded low percentages, suggesting a possible underestimation of atrophy. The evaluation of brain volume loss by Jacobian Integration showed an atrophy of 1.21% over the 2½ years of follow-up for the 90 patients, and of 1.55%, 1.51%, 0.84% and 1.21% for CIS, RR, SP and PP patients respectively. Jacobian Integration showed its importance for individual monitoring. In the future, assessing brain volume loss will require overcoming technical challenges in order to improve the reliability of the currently available algorithms.
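The Jacobian Integration principle mentioned above can be illustrated with a toy sketch: for a deformation mapping the baseline scan onto the follow-up scan, the determinant of its Jacobian gives the local volume change, and averaging it over a brain mask gives the global change. The displacement field and mask below are synthetic, not derived from a real registration.

```python
import numpy as np

# Synthetic toy data: a small volume and a displacement field representing a
# 1% uniform contraction toward the volume centre (stand-in for a registration).
shape = (32, 32, 32)
zz, yy, xx = np.meshgrid(*[np.arange(s, dtype=float) for s in shape], indexing="ij")
centre = (np.array(shape) - 1) / 2.0
disp = np.stack([-0.01 * (zz - centre[0]),
                 -0.01 * (yy - centre[1]),
                 -0.01 * (xx - centre[2])])                   # (3, Z, Y, X)

# Jacobian of the transform phi(x) = x + u(x): J = I + grad(u), per voxel.
grad_u = np.array([np.gradient(disp[i]) for i in range(3)])    # (3, 3, Z, Y, X)
jac = np.moveaxis(grad_u, [0, 1], [-2, -1]) + np.eye(3)        # (Z, Y, X, 3, 3)
det = np.linalg.det(jac)                                       # local volume ratio

mask = np.ones(shape, dtype=bool)                              # stand-in brain mask
volume_change_pct = 100.0 * (det[mask].mean() - 1.0)
print(f"estimated brain volume change: {volume_change_pct:+.2f}%")
```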
|