About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Têmpera e partição de ferros fundidos nodulares: microestrutura e cinética. / Quenching and partitioning of ductile cast irons: microstructure and kinetics.

Nishikawa, Arthur Seiji 01 October 2018 (has links)
This work is part of a larger project investigating the technical feasibility of a relatively new heat-treatment concept, Quenching and Partitioning (Q&P), as an alternative route for processing high-strength ductile cast irons. The Q&P process aims to produce multiphase microstructures consisting of martensite and carbon-enriched retained austenite. Martensite provides high strength, while austenite provides ductility. In the Q&P process, after full or partial austenitization, the material is quenched to a quenching temperature TQ between the Ms and Mf temperatures to produce a controlled mixture of martensite and austenite. In the subsequent partitioning step, the material is held isothermally at the same or a higher temperature (the partitioning temperature TP) to allow carbon to partition from the martensite to the austenite. Carbon in solid solution lowers the Ms temperature of the austenite, stabilizing it at room temperature. The present work studies phase-transformation aspects of the Q&P treatment -- with emphasis on microstructural evolution and reaction kinetics -- applied to a ductile cast iron alloy (Fe-3.47%C-2.47%Si-0.2%Mn). Heat treatments consisted of austenitization at 880 °C for 30 min, followed by quenching at 140, 170, or 200 °C and partitioning at 300, 375, or 450 °C for up to 2 h. Microstructural characterization was carried out by optical microscopy (OM), scanning electron microscopy (SEM), electron backscatter diffraction (EBSD), and electron probe microanalysis (EPMA). The kinetic analysis was based on high-resolution dilatometry and in situ X-ray diffraction using synchrotron radiation.
Results show that competitive reactions -- the bainite reaction and carbide precipitation in martensite -- are unavoidable when the Q&P treatment is applied to this ductile cast iron. The kinetics of the bainite reaction is accelerated by the martensite formed in the quenching step. At low temperatures the bainite reaction occurs without carbide precipitation and contributes to the carbon enrichment, and hence stabilization, of the austenite. Because carbides precipitate in the martensite, the growth of bainitic ferrite is the main mechanism of carbon enrichment of the austenite. Microsegregation inherited from solidification persists in the heat-treated material and affects both the distribution of the martensite formed on quenching and the kinetics of the bainite reaction: regions corresponding to eutectic cell boundaries show less martensite and a slower bainite reaction. The final microstructure produced by the Q&P treatment consists of tempered martensite with carbides, bainitic ferrite, and carbon-enriched, stabilized austenite. Additionally, a computational model was developed to calculate the local carbon redistribution during the partitioning step, accounting for the effects of carbide precipitation and the growth of bainitic ferrite plates from the austenite. The model shows that carbon partitioning from martensite to austenite is slower when the precipitated carbides are more stable and that, when the carbides' free energy is sufficiently low, carbon flows from the austenite to the martensite. The model is not limited to the conditions studied here and can be applied to the design of Q&P treatments for steels.
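The thesis's partitioning model itself is not reproduced here, but its core idea -- carbon diffusing from supersaturated martensite into adjacent austenite until the couple homogenizes -- can be sketched with a minimal explicit 1D finite-difference scheme. All values below (carbon contents, phase fraction, diffusivity, grid) are illustrative assumptions; the actual model additionally accounts for carbide precipitation and bainitic ferrite growth.

```python
import numpy as np

def partition_carbon(c_mart=0.6, c_aust=0.2, n=100, f_mart=0.25,
                     d=1.0e-2, dx=1.0, steps=2000):
    """Explicit 1D diffusion of carbon across a martensite/austenite couple.

    c_mart, c_aust : initial carbon contents (wt.%) -- illustrative values
    f_mart         : martensite fraction of the 1D domain
    d              : diffusivity (dx**2 per time unit) -- assumed uniform
    """
    c = np.where(np.arange(n) < int(f_mart * n), c_mart, c_aust)
    dt = 0.4 * dx ** 2 / d                     # stable explicit time step
    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2
        lap[0] = (c[1] - c[0]) / dx ** 2       # zero-flux boundaries
        lap[-1] = (c[-2] - c[-1]) / dx ** 2
        c = c + d * dt * lap
    return c

c = partition_carbon()
# Carbon is conserved overall, and the austenite side is enriched
# above its initial content as carbon partitions out of the martensite.
print(round(c.mean(), 3), c[-1] > 0.2)
```

The zero-flux boundaries make total carbon conserved, so the mean composition stays at its initial value while the profile relaxes toward homogeneity.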
32

Development of a highly resolved 3-D computational model for applications in water quality and ecosystems

Hernandez Murcia, Oscar Eduardo 01 July 2014 (has links)
This dissertation presents the development and application of a computational model called BioChemFOAM, built on the computational fluid dynamics software OpenFOAM (Open source Field Operation And Manipulation). BioChemFOAM is a three-dimensional, incompressible, unsteady-flow model coupled with a water-quality model via the Reynolds-Averaged Navier-Stokes (RANS) equations. It was developed to model nutrient dynamics in inland riverine aquatic ecosystems: it solves the RANS equations for the hydrodynamics with an available OpenFOAM library and implements a new library for coupled systems of species transport equations with reactions. Flow and multicomponent reactive transport are studied in detail for fundamental numerical experiments as well as for a real application in a backwater area of the Mississippi River. BioChemFOAM is a robust model that enables flexible parameterization of the processes of the nitrogen cycle; the main components studied are algae, organic carbon, phosphorus, nitrogen, and dissolved oxygen. The research has three phases. The first identifies the common processes that influence nitrogen removal. The second covers the development and validation of the model, which uses common parameterizations to simulate the main features of an aquatic ecosystem. The main processes considered and implemented in BioChemFOAM are: fully resolved hydraulics (velocity and pressure), temperature variation, the influence of light on the ecosystem, nutrient dynamics, algae growth and death, advection and diffusion of species, and isotropic turbulence (using a two-equation k-epsilon model).
The final phase covers the application and analysis of the model and is divided into two stages: 1) a qualitative comparison of the main processes in the model (validation against exact solutions of different model components under different degrees of complexity) and 2) quantification of the main processes affecting nitrate removal in a backwater floodplain lake (Round Lake) in Pool 8 of the Mississippi River near La Crosse, WI. BioChemFOAM was able to reproduce different levels of complexity in an aquatic ecosystem and expose several features that help explain nutrient dynamics. The validation with fabricated numerical experiments, discussed in Chapter 4, not only provides a detailed evaluation of the equations and processes but also introduces a step-by-step method of validating the model for a given level of complexity and parameterization when modeling nutrient dynamics in aquatic ecosystems. The test cases keep the coefficients and characteristic concentration values fixed in order to compare the influence of increasing or decreasing model complexity. Chapter 4 demonstrates that, with characteristic concentrations and coefficients, some processes have little influence on nutrient dynamics for algae. Chapters 5 and 6 describe the application of BioChemFOAM to an actual field case in the Mississippi River, showing the model's ability to reproduce real-world conditions when nitrate samples are available and other concentrations are taken from typical monitored values. The model reproduced the main processes affecting nutrient dynamics in the proposed scenarios and in previous studies in the literature. First, the model was adapted to simulate a single species, nitrate, and its predicted concentration was comparable to measured data. Second, the model was tested under different initial conditions.
The model proved independent of initial conditions once a steady mass flow rate for nitrate was reached. Finally, a sensitivity analysis was performed using all eleven species in the model. The analysis, based on the influence of individual processes on nitrate fate and transport, defined eight scenarios. Under the present parameterization, green algae as modeled did not significantly improve nitrate spatial distributions or the percentage of nitrate removal (PNR). On the other hand, the reaction rates for denitrification at the bed and nitrification in the water showed an important influence on the nitrate spatial distribution and the PNR. One physical solution, from the broad range of scenarios defined in the sensitivity analysis, was selected as most closely reproducing the backwater natural system. The selection was based on published values of the PNR, nitrate spatial concentrations, total nitrogen spatial concentrations, and mass loading rate balances. The scenario identified as a physically valid solution has a reaction rate for nitrification and for denitrification at the bed of 2.37 × 10⁻⁵ s⁻¹. The PNR was found to be 39% once the species transport reached a steady solution; denitrification at the bed accounted for about 6.7% of the input nitrate mass loading rate and nitrification for about 7.7%. The present research and model development highlight the need for additional detailed field measurements to reduce the uncertainty of common processes included in advanced models (see Chapter 2 for a review of models and Chapter 3 for the proposed model). The application presented in Chapter 6 uses only spatial variations of nitrate and total nitrogen to validate the model, which limits the validation of the remaining species.
Although some species are not known a priori, the numerical experiments serve as a guide to how the aquatic ecosystem responds under different initial and boundary conditions. In addition, the PNR curves presented in this research proved useful for defining realistic removal rates in a backwater area. BioChemFOAM's ability to formulate scenarios under different driving forces makes it invaluable for understanding potential connections between species concentrations and flow variables. In general, the case study shows trends in the spatial and temporal distributions of non-sampled species that were comparable to measured data.
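The percentage of nitrate removal (PNR) reported above can be illustrated with simple first-order reactor arithmetic. The rate constant is the one quoted in the abstract; the residence time and both reactor idealizations are illustrative assumptions, not the dissertation's 3-D calculation.

```python
import math

def pnr_cstr(k, tau):
    """PNR (%) for a steady continuously stirred reactor with first-order
    removal: C_out / C_in = 1 / (1 + k * tau)."""
    return 100.0 * (1.0 - 1.0 / (1.0 + k * tau))

def pnr_plug_flow(k, tau):
    """PNR (%) for an idealized plug-flow reach: C_out / C_in = exp(-k * tau)."""
    return 100.0 * (1.0 - math.exp(-k * tau))

k = 2.37e-5          # first-order rate from the abstract, s^-1
tau = 6.0 * 3600.0   # assumed residence time of 6 h (illustrative)

# The two idealizations bracket plausible removal for the same k and tau;
# plug flow always removes more than a fully mixed reactor.
print(pnr_cstr(k, tau), pnr_plug_flow(k, tau))
```

Varying `tau` over realistic backwater residence times shows how strongly PNR depends on hydraulics, which is why the fully resolved flow field matters.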
33

DETERMINATION OF OPTIMAL PARAMETER ESTIMATES FOR MEDICAL INTERVENTIONS IN HUMAN METABOLISM AND INFLAMMATION

Torres, Marcella 01 January 2019 (has links)
In this work we develop three ordinary differential equation models of biological systems: body mass change in response to exercise, the immune system response to a general inflammatory stimulus, and the immune system response in atherosclerosis. The purpose of developing such computational tools is to test hypotheses about the underlying biological processes that drive system outcomes, as well as possible medical interventions. We therefore focus our analysis on understanding key interactions between model parameters and outcomes, to deepen our understanding of these complex processes as a means to developing effective treatments for obesity, sarcopenia, and inflammatory diseases. We develop a model of the dynamics of muscle hypertrophy in response to resistance exercise and show that the parameters controlling the response differ between male and female group means in an elderly population. We explore this individual variability further by fitting the model to data from a clinical obesity study. We then apply logistic regression and classification-tree methods to analyze between- and within-group differences in underlying physiology that lead to different long-term body composition outcomes following a diet or exercise program, and we explore dieting strategies using optimal control methods. Next, we extend an existing model of inflammation to include different macrophage phenotypes. Complications with this phenotype switch can result in the accumulation of too many cells of either type, leading to chronic wounds or disease. With this model we reproduce the expected timing of the sequential influx of immune cells and mediators in a general inflammatory setting, and we calibrate the base model for the sequential immune cell response with peritoneal cavity data from mice.
Next, we develop a model for plaque formation in atherosclerosis by adapting the current inflammation model to capture the progression of macrophages to inflammatory foam cells in response to cholesterol consumption. The purpose of this work is ultimately to explore points of intervention that can lead to homeostasis.
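The "sequential influx" behaviour that such inflammation models are calibrated to can be sketched with a toy ODE cascade: a stimulus recruits a pro-inflammatory macrophage pool (call it M1) that switches to a resolving pool (M2). The equations and every rate constant below are illustrative assumptions, not the thesis's fitted model.

```python
import numpy as np

def simulate(t_max=40.0, dt=0.01, k_act=1.0, k_switch=0.4,
             k_clear=0.2, d_s=0.5):
    """Toy ODE cascade: stimulus S recruits M1 macrophages, which switch
    to a resolving M2 phenotype; M1 clears the stimulus. Forward Euler."""
    n = int(t_max / dt)
    t = np.linspace(0.0, t_max, n + 1)
    s = np.empty(n + 1); m1 = np.empty(n + 1); m2 = np.empty(n + 1)
    s[0], m1[0], m2[0] = 1.0, 0.0, 0.0
    for i in range(n):
        ds = -d_s * m1[i] * s[i]                 # stimulus cleared by M1
        dm1 = k_act * s[i] - k_switch * m1[i]    # activation, then switching
        dm2 = k_switch * m1[i] - k_clear * m2[i] # M2 accumulates, resolves
        s[i + 1] = s[i] + dt * ds
        m1[i + 1] = m1[i] + dt * dm1
        m2[i + 1] = m2[i] + dt * dm2
    return t, s, m1, m2

t, s, m1, m2 = simulate()
# Qualitative check of sequential influx: M1 peaks before M2,
# and the stimulus is eventually cleared.
print(t[np.argmax(m1)] < t[np.argmax(m2)], s[-1] < 0.05)
```

Calibration, as in the thesis, would then amount to fitting the rate constants so the simulated peak times and magnitudes match measured cell counts.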
34

A Bayesian belief network computational model of social capital in virtual communities

Daniel Motidyang, Ben Kei 31 July 2007
The notion of social capital (SC) is increasingly used as a framework for describing social issues in terrestrial communities. For more than a decade, researchers have used the term to mean the set of trust, institutions, social norms, social networks, and organizations that shape the interactions of actors within a society and that are considered useful assets for communities to prosper both economically and socially. Despite the growing popularity of social capital, especially among researchers in the social sciences and the humanities, the concept remains ill-defined, and its operation and benefits have been limited to terrestrial communities. In addition, proponents of social capital often use different approaches to analyze it, and each approach has its own limitations.

This thesis examines social capital within the context of technology-mediated communities, also known as virtual communities. It presents a computational model of social capital, which serves as a first step toward understanding, formalizing, computing, and discussing social capital in virtual communities. The thesis employs an eclectic set of approaches and procedures to explore, analyze, understand, and model social capital in two types of virtual communities: virtual learning communities (VLCs) and distributed communities of practice (DCoPs).

There is an intentional flow to the analysis and the combination of methods described in the thesis. The analysis includes understanding what constitutes social capital in the literature, identifying and isolating variables relevant to the context of virtual communities, conducting a series of studies to further empirically examine the components of social capital identified in the virtual communities studied, and building a computational model.
A sensitivity analysis is conducted to examine the statistical variability of the individual variables in the model and their effects on the overall level of social capital, and a series of evidence-based scenarios is developed to test and update the model. The results of the model predictions are then used as input to a final empirical study aimed at verifying the model.

Key findings from the various studies in the thesis indicate that SC is a multi-layered, multivariate, multidimensional, imprecise and ill-defined construct that has emerged from a rather murky swamp of terminology, but it is still useful for exploring and understanding social networking issues that can influence our understanding of collaboration and learning in virtual communities. Further, the model predictions and sensitivity analysis suggest that variables such as trust, different forms of awareness, social protocols, and the type of virtual community are all important in discussions of SC in virtual communities, but each variable has a different level of sensitivity to social capital.

The major contributions of the thesis are the detailed exploration of social capital in virtual communities and the use of an integrated set of approaches to studying and modelling it. Further, the Bayesian belief network approach applied in the thesis can be extended to model other similar complex online social systems.
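The Bayesian belief network mechanics behind such a model can be sketched in miniature: two parent variables (say, trust and awareness) conditioning a social-capital node, with marginal and conditional beliefs computed by enumeration. The structure and all conditional probabilities below are illustrative assumptions, not the thesis's fitted network.

```python
from itertools import product

# Toy two-parent belief network: P(SC = high | Trust, Awareness).
p_trust = {True: 0.7, False: 0.3}   # prior P(Trust)
p_aware = {True: 0.6, False: 0.4}   # prior P(Awareness)
p_sc = {                            # CPT: P(SC = high | trust, awareness)
    (True, True): 0.9, (True, False): 0.6,
    (False, True): 0.5, (False, False): 0.1,
}

def p_sc_high(evidence=None):
    """Marginal (or conditional, given evidence) probability of high
    social capital, by enumeration over the parent variables."""
    evidence = evidence or {}
    num = den = 0.0
    for trust, aware in product([True, False], repeat=2):
        if "trust" in evidence and trust != evidence["trust"]:
            continue
        if "aware" in evidence and aware != evidence["aware"]:
            continue
        w = p_trust[trust] * p_aware[aware]
        num += w * p_sc[(trust, aware)]
        den += w
    return num / den

print(p_sc_high())                  # prior belief in high SC
print(p_sc_high({"trust": True}))   # updated belief after observing trust
```

A sensitivity analysis in this framework is simply the change in the SC belief as each parent's evidence or CPT entries are perturbed, which is how variables such as trust can be ranked.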
36

A comprehensive Model of the Spatio-Temporal Stem Cell and Tissue Organisation in the Intestinal Crypt

Buske, Peter 30 May 2012 (has links) (PDF)
We introduce a novel dynamic model of stem cell and tissue organisation in murine intestinal crypts. Integrating the molecular, cellular and tissue levels of description, the model links a broad spectrum of experimental observations encompassing spatially confined cell proliferation, directed cell migration, multiple cell lineage decisions and clonal competition. Using computational simulations we demonstrate that the model is capable of quantitatively describing and predicting the dynamic behaviour of the intestinal tissue during steady state as well as after cell damage and following selective gain- or loss-of-function manipulations affecting Wnt and Notch signalling. Our simulation results suggest that reversibility and flexibility of cellular decisions are key elements of robust tissue organisation in the intestine. We predict that the tissue should be able to fully recover after complete elimination of cellular subpopulations, including subpopulations deemed to be functional stem cells. This challenges current views of tissue stem cell organisation.
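The recovery prediction above can be illustrated with a far simpler sketch than the authors' spatial, individual cell-based model: a stochastic birth process with crowding-limited division, showing a crypt repopulating from a small surviving seed. Population size, division rate, and the recovery threshold are all illustrative assumptions.

```python
import random

def recover(n_target=100, n0=5, b=0.3, seed=1, t_max=500):
    """Stochastic logistic-growth sketch of crypt repopulation after
    near-complete elimination of a cellular subpopulation.
    Division probability falls with crowding: p_div = b * (1 - n/n_target).
    Returns the number of steps until the crypt is repopulated (>= 95%
    of target), or None if it never recovers within t_max steps."""
    random.seed(seed)
    n = n0
    for t in range(t_max):
        births = sum(random.random() < b * max(0.0, 1 - n / n_target)
                     for _ in range(n))
        n += births
        if n >= 0.95 * n_target:
            return t
    return None

print(recover())   # steps needed for the seed population to refill the niche
```

In this caricature, recovery is guaranteed as long as any seed survives; the interesting question the real model addresses is whether lineage flexibility lets *other* subpopulations supply that seed when the nominal stem cells are eliminated.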
37

Ανάπτυξη υπολογιστικού μοντέλου προσωμοίωσης φθοριζόντων υλικών ανιχνευτών ιατρικής απεικόνισης με τεχνικές Monte Carlo / Development of computerized simulation model on phosphor materials detectors of medical imaging by Monte Carlo methods

Λιαπαρίνος, Παναγιώτης Φ. 23 October 2007 (has links)
The intrinsic properties of phosphor materials play a very important role in the performance of the intensifying screens used in medical imaging systems. In previous analytical and Monte Carlo studies of granular phosphor materials, the values of the optical parameters and the light interaction cross sections were obtained by fitting to experimental data, and these values were then employed to assess the imaging performance of phosphor screens. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a given phosphor material varied within a wide range of values; for example, different optical scattering cross sections had been published for the same material thickness. In this study, x-ray and light transport within granular phosphor materials were studied by developing a computational model using Monte Carlo methods. The model is based solely on the intrinsic physical properties of the phosphor, and the input values it requires can be obtained from tabulated data. Using Mie scattering theory and the complex refractive index of the materials, microscopic probabilities for light interactions were produced. The model was validated by comparing its results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray-to-light conversion process, number of emitted light photons, spatial distribution of the output light) with published experimental data for the Gd2O2S:Tb phosphor (Kodak Min-R screen).
Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and packing density (the number of grains per unit volume). It was predicted that granular Gd2O2S:Tb screens with high packing density and small grain size may exhibit considerably better resolution and light-emission properties than conventional Gd2O2S:Tb screens under similar conditions (incident x-ray energy, screen thickness).
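The skeleton of such a Monte Carlo photon-transport calculation can be sketched in one dimension: exponentially distributed free paths, a scatter-or-absorb decision at each interaction, and tallying of photons that escape at the output face. The interaction coefficients and geometry below are illustrative assumptions, not the Mie-derived values for Gd2O2S:Tb used in the thesis.

```python
import math
import random

def transport_photons(n_photons=20000, thickness=10.0, mu_s=0.2, mu_a=0.02,
                      seed=0):
    """Minimal 1D Monte Carlo sketch of optical photon transport through a
    phosphor layer. mu_s, mu_a are scattering/absorption coefficients per
    unit depth (illustrative). Returns the escaping fraction at the output."""
    random.seed(seed)
    mu_t = mu_s + mu_a
    escaped = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                  # depth and direction cosine
        while True:
            # exponential free path; 1 - random() avoids log(0)
            step = -math.log(1.0 - random.random()) / mu_t
            z += mu * step
            if z >= thickness:            # emitted at the output face
                escaped += 1
                break
            if z < 0:                     # lost back through the entrance
                break
            if random.random() < mu_a / mu_t:
                break                     # absorbed at this interaction
            mu = random.uniform(-1.0, 1.0)  # isotropic re-scatter
    return escaped / n_photons

print(transport_photons())   # escaping fraction for the assumed layer
```

Re-running with a larger `thickness` (or larger `mu_s`) shows the trade-off the thesis quantifies: thicker or more strongly scattering layers emit fewer, more spread-out photons.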
38

Uncertainty in the Bifurcation Diagram of a Model of Heart Rhythm Dynamics

Ring, Caroline January 2014 (has links)
<p>To understand the underlying mechanisms of cardiac arrhythmias, computational models are used to study heart rhythm dynamics. The parameters of these models carry inherent uncertainty. Therefore, to interpret the results of these models, uncertainty quantification (UQ) and sensitivity analysis (SA) are important. Polynomial chaos (PC) is a computationally efficient method for UQ and SA in which a model output Y, dependent on some independent uncertain parameters represented by a random vector &xi;, is approximated as a spectral expansion in multidimensional orthogonal polynomials in &xi;. The expansion can then be used to characterize the uncertainty in Y.</p><p>PC methods were applied to UQ and SA of the dynamics of a two-dimensional return-map model of cardiac action potential duration (APD) restitution in a paced single cell. Uncertainty was considered in four parameters of the model: three time constants and the pacing stimulus strength. The basic cycle length (BCL) (the period between stimuli) was treated as the control parameter. Model dynamics was characterized with bifurcation analysis, which determines the APD and stability of fixed points of the model at a range of BCLs, and the BCLs at which bifurcations occur. These quantities can be plotted in a bifurcation diagram, which summarizes the dynamics of the model. PC UQ and SA were performed for these quantities. UQ results were summarized in a novel probabilistic bifurcation diagram that visualizes the APD and stability of fixed points as uncertain quantities.</p><p>Classical PC methods assume that model outputs exist and reasonably smooth over the full domain of &xi;. Because models of heart rhythm often exhibit bifurcations and discontinuities, their outputs may not obey the existence and smoothness assumptions on the full domain, but only on some subdomains which may be irregularly shaped. On these subdomains, the random variables representing the parameters may no longer be independent. 
PC methods therefore must be modified for analysis of these discontinuous quantities. The Rosenblatt transformation maps the variables on the subdomain onto a rectangular domain; the transformed variables are independent and uniformly distributed. A new numerical estimation of the Rosenblatt transformation was developed that improves accuracy and computational efficiency compared to existing kernel density estimation methods. PC representations of the outputs in the transformed variables were then constructed. Coefficients of the PC expansions were estimated using Bayesian inference methods. For discontinuous model outputs, SA was performed using a sampling-based variance-reduction method, with the PC estimation used as an efficient proxy for the full model.</p><p>To evaluate the accuracy of the PC methods, PC UQ and SA results were compared to large-sample Monte Carlo UQ and SA results. PC UQ and SA of the fixed point APDs, and of the probability that a stable fixed point existed at each BCL, was very close to MC UQ results for those quantities. However, PC UQ and SA of the bifurcation BCLs was less accurate compared to MC results.</p><p>The computational time required for PC and Monte Carlo methods was also compared. PC analysis (including Rosenblatt transformation and Bayesian inference) required less than 10 total hours of computational time, of which approximately 30 minutes was devoted to model evaluations, compared to approximately 65 hours required for Monte Carlo sampling of the model outputs at 1 &times; 10<super>6</super> &xi; points.</p><p>PC methods provide a useful framework for efficient UQ and SA of the bifurcation diagram of a model of cardiac APD dynamics. Model outputs with bifurcations and discontinuities can be analyzed using modified PC methods. The methods applied and developed in this study may be extended to other models of heart rhythm dynamics. 
These methods have potential for use in uncertainty and sensitivity analysis for many applications of these models, including simulation studies of heart rate variability, cardiac pathologies, and interventions.</p> / Dissertation
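The spectral expansion at the core of the abstract above can be sketched in a few lines for a single uniform parameter: the model output is projected onto Legendre polynomials, and the mean and variance then follow directly from the PC coefficients. The model function, expansion order, and quadrature below are illustrative assumptions, not the dissertation's actual four-parameter restitution setup.

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical scalar model output Y(xi), with xi uniform on [-1, 1].
# (The dissertation uses four uncertain parameters; one suffices here.)
def model(xi):
    return np.exp(0.5 * xi) + 0.1 * xi**2

order = 6
nodes, weights = legendre.leggauss(order + 1)  # Gauss-Legendre quadrature
y = model(nodes)

# Project Y onto each Legendre polynomial P_k: c_k = <Y, P_k> / <P_k, P_k>.
coeffs = []
for k in range(order + 1):
    pk = legendre.legval(nodes, [0.0] * k + [1.0])
    coeffs.append(np.sum(weights * y * pk) / np.sum(weights * pk * pk))

# With the uniform density 1/2 on [-1, 1]: E[Y] = c_0, and
# Var[Y] = sum_{k>=1} c_k^2 * <P_k, P_k> / 2, with <P_k, P_k> = 2/(2k+1).
mean = coeffs[0]
variance = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
```

Because the statistics come straight from the coefficients, no further sampling of the model is needed once the expansion is built, which is the source of the efficiency advantage over Monte Carlo reported above.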
39

A theoretical and experimental study of automotive catalytic converters

Clarkson, Rory John January 1995 (has links)
In response to the increasingly widespread use of catalytic converters for meeting automotive exhaust emission regulations, considerable attention is currently being directed towards improving their performance. Experimental analysis is costly and time consuming. A desirable alternative is computational modelling. This thesis describes the development of a fully integrated computational model for simulating monolith type automotive catalytic converters. Two commercial CFD codes, PHOENICS and STAR-CD, were utilised to implement established techniques for modelling the flow field in catalyst assemblies. To appraise the accuracy of the flow field predictions an isothermal steady flow rig was designed and developed. A selection of axisymmetric inlet diffusers and 180° expansions were tested, with the velocity profile across the monolith, the wall static pressure distribution along the inlet section and the total pressure drop across the assembly being measured. These data sets were compared with predictions using a variety of turbulence models and solution algorithms. The closest agreement was achieved with a two-layer near wall approach, coupled to the fully turbulent version of the RNG k-ε model, and a nominally second order differencing scheme. Even with these approaches the predicted velocity profiles were too flat, the maximum velocity being as much as 17.5% too low. Agreement on pressure drops was better, the error being consistently less than 10%. These results illustrate that present modelling techniques are insufficiently reliable for accurate predictions. It is suggested that the major reason for the relatively poor performance of these techniques is the neglect of channel entrance effects in the monolith pressure drop term. Despite these weaknesses it was possible to show that the model reproduces the correct trends, and magnitude of change, in pressure drop and velocity distributions as the catalyst geometry changes.
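The monolith pressure drop term at the centre of this argument can be sketched as a laminar channel loss. The friction constant f·Re ≈ 56.9 (Darcy) for a square duct is standard; the entrance-loss coefficient and the operating values in the usage lines are illustrative assumptions, not figures from the thesis.

```python
# Laminar pressure drop in a single square monolith channel.
# Fully developed term: dp = (f*Re) * mu * L * u / (2 * d_h**2),
# with f*Re ~= 56.9 (Darcy friction factor) for a square duct.
# The second term is the developing-flow (entrance) loss that the
# thesis argues is commonly neglected; K is an assumed coefficient.
def channel_pressure_drop(u, L, d_h, mu, rho=None, f_re=56.9, K=1.4):
    dp = f_re * mu * L * u / (2.0 * d_h**2)
    if rho is not None:
        dp += K * rho * u**2 / 2.0  # entrance-effect loss
    return dp

# Representative hot-exhaust values (assumed): 2 m/s channel velocity,
# 150 mm monolith, 1 mm hydraulic diameter, mu = 3e-5 Pa.s.
dp_developed = channel_pressure_drop(2.0, 0.15, 1e-3, 3e-5)
dp_with_entrance = channel_pressure_drop(2.0, 0.15, 1e-3, 3e-5, rho=0.5)
```

Including the entrance term raises the predicted channel loss, which is the direction needed to explain the too-flat velocity profiles noted above: a larger monolith resistance redistributes more flow towards the periphery.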
The PHOENICS flow field model was extended to include the heat transfer, mass transfer and chemical reactions associated with catalysts. The methodology is based on an equivalent continuum approach. The result is a reacting model capable of simulating the three-dimensional distribution of solid and gas temperatures, species concentrations and flow field variables throughout the monolith mat, and the effects that moisture has on the transient warm-up of the monolith. To assess the reacting model’s accuracy, use was made of published light-off data from a catalyst connected to a test bed engine. Comparison with predicted results showed that the model was capable of reproducing the correct type, and time scales, of temperature and conversion efficiency behaviour during the warm-up cycle. From these predictions it was possible to show that the flow distribution across the monolith can significantly change during light-off. Following the identification, and subsequent modelling, of the condensation and evaporation of water during the warm-up process it was possible to show that, under the catalyst conditions tested, these moisture effects do not affect light-off times. Conditions under which moisture might affect light-off have been suggested. Although the general level of model accuracy may be acceptable for studying many catalyst phenomena, known deficiencies in the reaction kinetics used, errors in the flow field predictions, uncertainty over many of the physical constants and necessary model simplifications mean that accurate quantitative predictions are still lacking. Improving the level of accuracy will require a systematic experimental approach followed by model refinements.
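A minimal lumped-parameter sketch of the warm-up behaviour described above: the exhaust gas heats the monolith solid convectively, and exothermic reaction heat switches on through a sigmoidal conversion-efficiency curve, producing the characteristic light-off. All rate constants, temperatures, and the 50%-conversion light-off criterion are illustrative assumptions, not values from the thesis, and the moisture (condensation/evaporation) terms of the full model are omitted.

```python
import math

def conversion(t_solid, t_lightoff=550.0, width=15.0):
    # Sigmoidal conversion-efficiency curve (assumed shape and values, K).
    return 1.0 / (1.0 + math.exp(-(t_solid - t_lightoff) / width))

def light_off_time(t_gas=600.0, t0=300.0, h=0.05, q_max=20.0,
                   dt=0.01, t_end=200.0):
    """Time (s) for conversion to reach 50%, or None if it never does."""
    t_solid, t = t0, 0.0
    while t < t_end:
        if conversion(t_solid) >= 0.5:
            return t
        # Lumped energy balance: convective heating from the gas plus
        # exothermic reaction heat, integrated by forward Euler.
        dT = h * (t_gas - t_solid) + q_max * conversion(t_solid)
        t_solid += dT * dt
        t += dt
    return None
```

Raising the inlet gas temperature shortens the predicted light-off time, reproducing the qualitative warm-up trend the thesis validates against test-bed data.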
40

Characterization of Evoked Potentials During Deep Brain Stimulation in the Thalamus

Kent, Alexander Rafael January 2013 (has links)
<p>Deep brain stimulation (DBS) is an established surgical therapy for movement disorders. The mechanisms of action of DBS remain unclear, and selection of stimulation parameters is a clinical challenge and can result in sub-optimal outcomes. Closed-loop DBS systems would use a feedback control signal for automatic adjustment of DBS parameters and improved therapeutic effectiveness. We hypothesized that evoked compound action potentials (ECAPs), generated by activated neurons in the vicinity of the stimulating electrode, would reveal the type and spatial extent of neural activation, as well as provide signatures of clinical effectiveness. The objective of this dissertation was to record and characterize the ECAP during DBS to determine its suitability as a feedback signal in closed-loop systems. The ECAP was investigated using computer simulation and <italic>in vivo</italic> experiments, including the first preclinical and clinical ECAP recordings made from the same DBS electrode implanted for stimulation. </p><p>First, we developed DBS-ECAP recording instrumentation to reduce the stimulus artifact and enable high fidelity measurements of the ECAP at short latency. <italic>In vitro</italic> and <italic>in vivo</italic> validation experiments demonstrated the capability of the instrumentation to suppress the stimulus artifact, increase amplifier gain, and reduce distortion of short latency ECAP signals.</p><p>Second, we characterized ECAPs measured during thalamic DBS across stimulation parameters in anesthetized cats, and determined the neural origin of the ECAP using pharmacological interventions and a computer-based biophysical model of a thalamic network. This model simulated the ECAP response generated by a population of thalamic neurons, calculated ECAPs similar to experimental recordings, and indicated the relative contribution from different types of neural elements to the composite ECAP. 
Signal energy of the ECAP increased with DBS amplitude or pulse width, reflecting an increased extent of activation. Shorter latency, primary ECAP phases were generated by direct excitation of neural elements, whereas longer latency, secondary phases were generated by post-synaptic activation.</p><p>Third, intraoperative studies were conducted in human subjects with thalamic DBS for tremor, and the ECAP and tremor responses were measured across stimulation parameters. ECAP recording was technically challenging due to the presence of a wide range of stimulus artifact magnitudes across subjects, and an electrical circuit equivalent model and finite element method model both suggested that glial encapsulation around the DBS electrode increased the artifact size. Nevertheless, high fidelity ECAPs were recorded from acutely and chronically implanted DBS electrodes, and the energy of ECAP phases was correlated with changes in tremor. </p><p>Fourth, we used a computational model to understand how electrode design parameters influenced neural recording. Reducing the diameter or length of recording contacts increased the magnitude of single-unit responses, led to greater spatial sensitivity, and changed the relative contribution from local cells or passing axons. The effect of diameter or contact length varied across phases of population ECAPs, but ECAP signal energy increased with greater contact spacing, due to changes in the spatial sensitivity of the contacts. In addition, the signal increased with glial encapsulation in the peri-electrode space, decreased with local edema, and was unaffected by the physical presence of the highly conductive recording contacts.</p><p>It is feasible to record ECAP signals during DBS, and the correlation between ECAP characteristics and tremor suggests that this signal could be used in closed-loop DBS. 
This was demonstrated by implementation in simulation of a closed-loop system, in which a proportional-integral-derivative (PID) controller automatically adjusted DBS parameters to obtain a target ECAP energy value, and modified parameters in response to disturbances. The ECAP also provided insight into neural activation during DBS, with the dominant contribution to clinical ECAPs derived from excited cerebellothalamic fibers, suggesting that activation of these fibers is critical for DBS therapy.</p> / Dissertation
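The closed-loop scheme described above can be sketched as a discrete PID loop that drives a stimulation amplitude toward a target ECAP energy. The plant model (a monotonic amplitude-to-energy response) and all gains are hypothetical placeholders, not the dissertation's actual controller or data.

```python
def ecap_energy(amplitude):
    # Hypothetical monotonic ECAP-energy response to DBS amplitude.
    return 0.8 * amplitude ** 1.5

def run_pid(target, n_steps=200, kp=0.3, ki=0.05, kd=0.05, dt=1.0):
    """Return the amplitude the PID loop settles on for a target energy."""
    amplitude, integral, prev_err = 1.0, 0.0, 0.0
    for _ in range(n_steps):
        err = target - ecap_energy(amplitude)
        integral += err * dt
        deriv = (err - prev_err) / dt
        # Adjust the stimulation parameter by the PID control action.
        amplitude += kp * err + ki * integral + kd * deriv
        amplitude = max(amplitude, 0.0)  # stimulation cannot go negative
        prev_err = err
    return amplitude
```

In the dissertation's simulation the controller also had to reject disturbances; here the loop simply settles at the amplitude whose ECAP energy matches the target.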
