481 |
Biofilm in urinary catheters : impacts on health care and methods for quantification / Biofilm i urinkatetrar : inverkan på sjukvård och metoder för kvantifiering. Lönn, Gustaf; Kalmaru, Edvin, January 2014 (has links)
Biofilm is a growing problem in health care and has long been associated with nosocomial urinary tract infections in urinary catheters. In 2002 alone, these infections caused 13,000 deaths in the US, and the annual costs have been estimated at over $400 million; these figures are, however, most likely underestimates. The analysis of biofilm is important to support the work on increasing patient safety and reducing the financial implications. A literature study was conducted in order to recommend a quantification method that is fast, accurate, and versatile. Methods used for biofilm quantification are primarily based on light absorption, light scattering, and changes in electrical impedance; methods utilizing these properties include spectrophotometry, flow cytometry, and Coulter counters. Samples of biofilm are usually collected via traditional scraping with a sterile blade or with sonication (ultrasound). Flow cytometry was considered the superior method for quantification, together with sonication for sample collection. The survey therefore concluded that biofilm samples should be collected with sonication and analyzed with flow cytometry.
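As a concrete illustration of the light-absorption route (a hedged sketch, not code from the thesis): the Beer-Lambert law converts a spectrophotometer absorbance reading into a concentration estimate. The function name and all numbers below are purely illustrative.

```python
# Beer-Lambert law, A = epsilon * l * c, solved for concentration.
# All values are illustrative assumptions, not assay data from the thesis.
def concentration_from_absorbance(absorbance, epsilon, path_length_cm=1.0):
    """Return concentration in mol/L from a measured absorbance.

    absorbance     -- optical density reading (dimensionless)
    epsilon        -- molar absorptivity, L/(mol cm), assay-specific
    path_length_cm -- cuvette path length in cm
    """
    return absorbance / (epsilon * path_length_cm)

# e.g. a crystal-violet-stained biofilm suspension (illustrative values):
c = concentration_from_absorbance(absorbance=0.52, epsilon=8.7e4)
print(f"Estimated chromophore concentration: {c:.2e} mol/L")
```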
|
482 |
Efficient Uncertainty Quantification with High Dimensionality. Jianhua Yin (12456819), 25 April 2022 (has links)
Uncertainty exists everywhere in scientific and engineering applications. To avoid potential risk, it is critical to understand the impact of uncertainty on a system by performing uncertainty quantification (UQ) and reliability analysis (RA). However, the computational cost of current UQ methods may be unaffordable for high-dimensional input. Moreover, current UQ methods are not applicable when numerical data and image data coexist.
To bring the computational cost down to an affordable level and enable UQ with special high-dimensional data (e.g., images), this dissertation develops three UQ methodologies for high-dimensional input spaces. The first two methods focus on high-dimensional numerical input. The core strategy of Methodology 1 is to fix the unimportant variables at their first-step most probable point (MPP) so that the dimensionality is reduced; an accurate RA method is then used in the reduced space, and the final reliability is obtained by accounting for the contributions of both important and unimportant variables. Methodology 2 addresses the case in which the dimensionality cannot be reduced because most of the variables are important or all variables contribute roughly equally to the system. It develops an efficient surrogate modeling method for high-dimensional UQ using Generalized Sliced Inverse Regression (GSIR), Gaussian Process (GP)-based active learning, and importance sampling: a cost-efficient GP model is built in the latent space obtained by GSIR, and the failure boundary is identified through active learning that iteratively adds optimal training points. In Methodology 3, a Convolutional Neural Network (CNN)-based surrogate model (CNN-GP) is constructed to handle mixed numerical and image data. The numerical data are first converted into images, which are then merged with the existing image data and fed to the CNN for training. The latent variables of the CNN model are then used to integrate the CNN with a GP, quantifying the model error as epistemic uncertainty. Both epistemic and aleatory uncertainty are considered in uncertainty propagation.
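A minimal sketch of the GP-based active-learning step described above, assuming GSIR has already produced a low-dimensional latent space; the toy limit-state function, the pool size, and the standard U learning function (from AK-MCS-style methods) are illustrative assumptions, not the dissertation's code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def limit_state(x):
    # Toy limit-state function g(x); failure corresponds to g(x) < 0.
    return 3.0 - x[:, 0] ** 2 - x[:, 1]

pool = rng.normal(size=(5000, 2))                  # Monte Carlo candidate pool
idx = rng.choice(len(pool), size=12, replace=False)
X, y = pool[idx], limit_state(pool[idx])           # small initial design

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-6, normalize_y=True)
for _ in range(30):                                # active-learning iterations
    gp.fit(X, y)
    mu, sigma = gp.predict(pool, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)      # U learning function
    best = int(np.argmin(U))                       # most ambiguous point near g = 0
    X = np.vstack([X, pool[best]])
    y = np.append(y, limit_state(pool[best:best + 1]))

pf = np.mean(gp.predict(pool) < 0.0)               # crude failure-probability estimate
print(f"Estimated probability of failure: {pf:.4f}")
```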
The simulation results indicate that the first two methodologies not only improve efficiency but also maintain adequate accuracy for problems with high-dimensional numerical input. GSIR with active learning can handle situations in which the dimensionality cannot be reduced because most of the variables are important or their importances are comparable. The two methodologies can be combined into a two-stage dimension reduction for high-dimensional numerical input. The third method, CNN-GP, is capable of dealing with special high-dimensional input, mixed numerical and image data, with satisfactory regression accuracy, and provides an estimate of the model error. Uncertainty propagation considering both epistemic and aleatory uncertainty yields better accuracy. The proposed methods could potentially be applied to engineering design and decision making.
|
483 |
Subgrid models for heat transfer in multiphase flows with immersed geometry. Lane, William, 21 June 2016 (has links)
Multiphase flows are ubiquitous across engineering disciplines: water-sediment river flows in civil engineering; oil-water-sand transportation flows in petroleum engineering; and sorbent-flue gas reactor flows in chemical engineering. These multiphase flows can include a combination of momentum, heat, and mass transfer. Studying and understanding the behavior of multiphase, multiphysics flow configurations can be crucial for safe and efficient engineering design.
In this work, a framework for the development, verification, validation, and uncertainty quantification (VVUQ) of subgrid models for heat transfer in multiphase flows is presented. The framework is developed for a carbon capture reactor; however, the concepts and methods described in this dissertation can be generalized and applied broadly to multiphase/multiphysics problems. When combined with VVUQ methods, these tools can provide accurate results at many length scales, enabling large upscaling problems to be simulated accurately and with calculable errors.
The system of interest is a post-combustion, solid-sorbent carbon capture reactor featuring a sorbent bed that is fluidized with post-combustion flue gas. As the flue gas passes through the bed, the carbon dioxide is exothermically adsorbed onto the sorbent particles' surfaces, and the clean gas is passed on to further processing. To prevent overheating and degradation of the sorbent material, cooling cylinders are immersed in the flow to regulate temperatures.
Simulating a full-scale, gas-particle reactor using traditional methods is computationally intractable due to the long time scale and the wide range of length scales: reactor, O(10 m); cylinders, O(1 cm); and sorbent particles, O(100 μm). This research developed an efficient subgrid method for simulating such a system. A constitutive model was derived to predict the effective suspension-cylinder Nusselt number based on the local flow and material properties and the cylinder geometry, analogous to single-phase Nusselt number correlations. This model was implemented in an open-source computational fluid dynamics code, MFIX, and has undergone VVUQ. Verification and validation showed excellent agreement with comparable highly resolved simulations, with speedups of up to 10,000 times. Our model is currently being used to simulate a 1 MW solid-sorbent carbon capture unit and is outperforming previous methods in both speed and physical accuracy.
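As a sketch of what such a constitutive model looks like in code: a single-phase-style forced-convection correlation with a solids-loading correction. The functional form and every coefficient below are placeholders, not the dissertation's fitted model.

```python
# Hypothetical correlation-style model for an effective suspension-cylinder
# Nusselt number; c, m, and k are placeholder constants, not fitted values.
def effective_nusselt(re_cyl, pr, solids_fraction, c=0.6, m=0.5, k=2.0):
    """Effective Nusselt number for a cylinder immersed in a gas-particle flow.

    re_cyl          -- Reynolds number based on cylinder diameter
    pr              -- Prandtl number of the gas phase
    solids_fraction -- local particle volume fraction (0 = pure gas)
    """
    single_phase = c * re_cyl ** m * pr ** (1.0 / 3.0)   # Nu ~ C Re^m Pr^(1/3)
    return single_phase * (1.0 + k * solids_fraction)    # particles enhance transfer

nu = effective_nusselt(re_cyl=200.0, pr=0.7, solids_fraction=0.3)
print(f"Effective Nusselt number: {nu:.1f}")
```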
|
484 |
Uncertainty Quantification of Photothermal Radiometry Measurements Using Monte Carlo Simulation and Experimental Repeatability. Fleming, Austin, 01 May 2014 (has links)
Photothermal Radiometry is a common thermal property measurement technique used to measure the properties of layered materials. It uses a modulated laser to heat a sample; the thermal response can then be used to determine the thermal properties of the sample's layers. The motivation for this work is to provide a better understanding of the accuracy and repeatability of the Photothermal Radiometry measurement technique. This work determines the sensitivity of the results to input uncertainties and, using Monte Carlo simulations, the overall uncertainty of a theoretical measurement.
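A minimal sketch of Monte Carlo uncertainty propagation of this kind; the stand-in model (the thermal diffusion time of a single layer) and the input uncertainties are illustrative assumptions in place of the thesis's full layered thermal model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo samples

# Inputs with assumed (illustrative) uncertainties for a coated sample.
thickness    = rng.normal(1.0e-6, 0.02e-6, N)   # layer thickness, m (+/- 2%)
conductivity = rng.normal(20.0, 1.0, N)         # W/(m K) (+/- 5%)
vol_heat_cap = rng.normal(2.0e6, 0.05e6, N)     # J/(m^3 K) (+/- 2.5%)

diffusivity  = conductivity / vol_heat_cap      # thermal diffusivity, m^2/s
transit_time = thickness ** 2 / diffusivity     # thermal diffusion time, s

mean, std = transit_time.mean(), transit_time.std(ddof=1)
print(f"diffusion time = {mean:.3e} s +/- {100 * std / mean:.1f}% (1 sigma)")
```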
The repeatability of Photothermal Radiometry measurements is tested using a proton-irradiated zirconium carbide sample. Due to the proton irradiation, the sample contains two layers with a thermal resistance between them. The sample has been independently measured by three different researchers in three different countries, and the results are compared to determine the repeatability of Photothermal Radiometry measurements. Finally, based on the sensitivity and uncertainty analyses, experimental procedures and suggestions are provided to reduce the uncertainty in experimentally measured results.
|
485 |
Faire preuve par le chiffre ? Le cas des expérimentations aléatoires en économie / Evidence by numbers? The case of randomized controlled trials. Jatteau, Arthur, 05 December 2016 (links)
With Esther Duflo and her lab (the J-PAL), randomized controlled trials (RCTs) became trendy in economics from the 2000s onward and are presented by their advocates as the most robust method for impact evaluation. Relying on mixed methods, this thesis investigates the social construction of experimental evidence and contributes to a social and historical epistemology of RCTs and to the socio-economy of quantification. The first part develops a socio-history of this method. The origins of RCTs are multidisciplinary and precede their extensive use in medicine from the 1940s and in economics from the 1960s onward. This allows us to gain a deeper understanding of the current use of RCTs. In the second part, we examine the stakeholders of this method, chiefly J-PAL researchers. Our prosopographical analysis, supplemented by a network analysis, demonstrates that their high level of academic capital and the presence of leaders allow for the control and diffusion of RCTs. In the last part, we scrutinize the production of experimental evidence. By examining RCTs in operation, we show that both their internal and external validity are in many cases compromised. Finally, we explore the convoluted links between RCTs, policy, and politics.
|
486 |
Chasing individuation : mathematical description of physical systems / A la poursuite de l'individuation : description mathématique des systèmes physiques. Zalamea, Federico, 23 November 2016 (links)
This work is a conceptual analysis of certain recent developments in the mathematical foundations of Classical and Quantum Mechanics which have made it possible to formulate both theories in a common language. From the algebraic point of view, the set of observables of a physical system, be it classical or quantum, is described by a Jordan-Lie algebra. From the geometric point of view, the space of states of any system is described by a uniform Poisson space with transition probability. Both structures are here interpreted as formal expressions of the fundamental twofold role of properties in Mechanics: they are at the same time quantities and transformations. The question then becomes to understand the precise articulation between these two roles. The analysis shows that Quantum Mechanics can be thought of as distinguishing itself from Classical Mechanics by a compatibility condition between properties-as-quantities and properties-as-transformations. Moreover, this dissertation shows the existence of a tension between a certain "abstract way" of conceiving mathematical structures, used in the practice of mathematical physics, and the necessary capacity to specify particular states or observables. It then becomes important to understand how, within the formalism, one can construct a labelling scheme. The "Chase for Individuation" is the analysis of different mathematical techniques seen as attempts to overcome this tension. In particular, we discuss how group theory furnishes a partial solution.
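The compatibility condition mentioned above can be stated precisely. The following sketch gives the Jordan-Lie axioms in the conventions of standard references (e.g., Landsman); the thesis may normalize the constants differently.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Observables carry a commutative Jordan product $a \circ b$ (properties as
quantities) and an antisymmetric bracket $\{a, b\}$ (properties as
transformations). The bracket is a derivation of the Jordan product
(Leibniz rule):
\[
  \{a,\, b \circ c\} = \{a, b\} \circ c + b \circ \{a, c\},
\]
and the two products are linked by the associator identity
\[
  (a \circ b) \circ c - a \circ (b \circ c)
  = \frac{\hbar^{2}}{4}\, \{\{a, c\},\, b\}.
\]
Classical mechanics is the case $\hbar = 0$, where the Jordan product is
associative; $\hbar \neq 0$ is precisely a compatibility condition between
the two roles of properties.
\end{document}
```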
|
487 |
Modélisation et commande de l'anesthésie en milieu clinique / On the modelling and control of anesthesia in clinical settings. Zabi, Saïd, 01 December 2016 (links)
This thesis deals with the modelling and control of anesthesia from a theoretical and formal angle. During an operation, general anesthesia requires the anesthesiologist to control the hypnotic and analgesic state of the patient (avoiding over- or under-dosing) by adjusting the infusion of analgesic and/or hypnotic substances based on clinical indicators such as the BIS for hypnosis or pupillary surface variation for analgesia. This manuscript consists of three parts. The first defines the concepts and key words used in the field of anesthesia, presents an introduction to the modelling and control of anesthesia from a control systems engineering viewpoint, recalls the characteristics and constraints of controlling anesthesia systems, and establishes a state of the art of the literature. The second part concerns the control of hypnosis, which is performed in two phases: induction and maintenance. In the first phase (induction), a minimum-time control is computed to bring the patient from the awake state to a neighbourhood of a target equilibrium corresponding to a BIS objective. Once the patient's state is close to the target state, the second phase (maintenance) consists in ensuring that the patient's state remains in an invariant set in which the BIS is guaranteed to stay between 40 and 60 and possibly follows constant references. In this phase, the synthesis of the control laws (state feedback and dynamic output feedback) takes into account the saturation of the control, the positivity of the system, the variability of the patients, and related constraints. In the third part, we begin by proposing a novel indicator for the depth of analgesia and a model of the variation of the pupillary surface. Taking into account the quantization of the measurements of this indicator, we propose the synthesis of a dynamic output feedback controller. Then, a stability analysis is carried out taking into account the sampling of the measurements.
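For illustration, the standard sigmoid (Hill) pharmacodynamic model relates the effect-site drug concentration to the BIS; the sketch below uses purely illustrative parameter values, not the thesis's patient models or control laws.

```python
# Hill-function BIS model with a first-order effect-site compartment.
# E0, Emax, EC50, gamma, ke0, and cp are illustrative, not patient-specific.
E0, Emax = 100.0, 100.0    # awake BIS and maximal drug effect
EC50, gamma = 4.0, 2.5     # ug/mL at half effect, and Hill slope
ke0 = 0.46                 # 1/min, effect-site equilibration rate

def bis(ce):
    """BIS as a sigmoid function of effect-site concentration ce (ug/mL)."""
    return E0 - Emax * ce**gamma / (ce**gamma + EC50**gamma)

dt, cp, ce = 0.01, 4.5, 0.0                 # min, plasma conc., effect-site conc.
for _ in range(int(10.0 / dt)):             # 10 minutes of induction
    ce += dt * ke0 * (cp - ce)              # dCe/dt = ke0 * (Cp - Ce)

print(f"Ce after 10 min: {ce:.2f} ug/mL, BIS: {bis(ce):.0f}")
# A maintenance controller would adjust the infusion to keep BIS in [40, 60].
```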
|
488 |
Statistical analysis of river discharge change in the Indochinese Peninsula using large ensemble future climate projections / 多数アンサンブル将来気候予測情報を用いたインドシナ半島での河川流量変化の統計的分析. Hanittinan, Patinya, 25 September 2017 (links)
Kyoto University / 0048 / New system, doctorate by coursework / Doctor of Philosophy (Engineering) / Degree No. Kō 20677 / Engineering Doctorate No. 4374 / Call number: 新制||工||1680 (University Library) / Kyoto University Graduate School of Engineering, Department of Civil and Earth Resources Engineering / (Chief examiner) Professor Yasuto Tachikawa, Professor Eiichi Nakakita, Associate Professor Nobuhito Mori / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
|
489 |
How Much for Joint Action? Assessing the Cost of Working Together. Mayr, Riley C., January 2019 (links)
No description available.
|
490 |
Effective Field Theory Truncation Errors and Why They Matter. Melendez, Jordan Andrew, 09 July 2020 (links)
No description available.
|