151

Application of Bayesian Inference Techniques for Calibrating Eutrophication Models

Zhang, Weitao, 26 February 2009
This research integrates mathematical water quality models with Bayesian inference techniques to obtain effective model calibration and a rigorous assessment of the uncertainty underlying model predictions. The first part of my work combines a Bayesian calibration framework with a complex biogeochemical model to reproduce oligo-, meso- and eutrophic lake conditions. The model accurately describes the observed patterns and also provides realistic estimates of predictive uncertainty for water quality variables. The Bayesian estimates are also used to appraise the exceedance frequency of, and the confidence of compliance with, different water quality criteria. The second part introduces a Bayesian hierarchical framework (BHF) for calibrating eutrophication models across multiple systems (or multiple sites of the same system). The models calibrated under the BHF provided accurate system representations for all the scenarios examined. The BHF overcomes the problem of insufficient local data by "borrowing strength" from well-studied sites. Both frameworks can facilitate environmental management decisions.
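The kind of Bayesian calibration described above can be illustrated with a minimal random-walk Metropolis sketch. The one-parameter saturating model, the prior, and the water quality criterion below are illustrative stand-ins, not the biogeochemical model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "water quality model": chlorophyll as a saturating function of
# nutrient loading, with one uncertain parameter k (half-saturation).
def model(k, load):
    return 10.0 * load / (k + load)

# Synthetic observations standing in for lake monitoring data.
load_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y_obs = model(2.0, load_obs) + rng.normal(0.0, 0.3, load_obs.size)

def log_posterior(k, sigma=0.3):
    if k <= 0:
        return -np.inf                                  # prior support: k > 0
    resid = y_obs - model(k, load_obs)
    log_lik = -0.5 * np.sum((resid / sigma) ** 2)
    log_prior = -0.5 * (np.log(k) - np.log(2.0)) ** 2   # lognormal prior on k
    return log_lik + log_prior

# Random-walk Metropolis sampler for the posterior of k.
k, lp, chain = 1.0, log_posterior(1.0), []
for _ in range(20_000):
    k_new = k + rng.normal(0.0, 0.2)
    lp_new = log_posterior(k_new)
    if np.log(rng.uniform()) < lp_new - lp:             # accept/reject step
        k, lp = k_new, lp_new
    chain.append(k)
post = np.array(chain[5_000:])                          # discard burn-in

print(f"posterior k: {post.mean():.2f} "
      f"[{np.quantile(post, 0.025):.2f}, {np.quantile(post, 0.975):.2f}]")
# Exceedance frequency of an (illustrative) criterion at a given loading,
# mirroring the compliance assessment mentioned in the abstract:
print("P(prediction > 5.0 at load 1.5) =", np.mean(model(post, 1.5) > 5.0))
```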
152

Uncertainty Assessment of Hydrogeological Models Based on Information Theory

De Aguinaga, José Guillermo, 17 August 2011
There is a great deal of uncertainty in hydrogeological modeling. Overparametrized models increase uncertainty because the information contained in the observations is spread across all of the parameters. The present study proposes a new option to reduce this uncertainty: select a model that performs well with as few calibrated parameters as possible (a parsimonious model) and calibrate it using many sources of information. Akaike's Information Criterion (AIC), proposed by Hirotugu Akaike in 1973, is a statistical criterion based on information theory that allows us to select a parsimonious model. AIC formulates parsimonious model selection as an optimization problem across a set of proposed conceptual models. AIC assessment is relatively new in groundwater modeling, and applying it with different sources of observations presents a challenge. This dissertation discusses important findings from the application of AIC to hydrogeological modeling using different sources of observations. AIC is tested on groundwater models using three sets of synthetic data: hydraulic pressure, horizontal hydraulic conductivity, and tracer concentration. The impact of the following factors is analyzed: the number of observations, the types of observations, and the order of the calibrated parameters. These analyses reveal not only that the number of observations determines how complex a model can be, but also that their diversity allows for further complexity in the parsimonious model. However, a truly parsimonious model was only achieved when the order of the calibrated parameters was properly considered, meaning that parameters which provide larger improvements in model fit should be considered first. The approach of obtaining a parsimonious model by applying AIC with different types of information was successfully applied to an unbiased lysimeter model using two different types of real data: evapotranspiration and seepage water. This additional independent model assessment underpinned the general validity of the AIC approach.
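As a concrete illustration of AIC-based parsimony (not the groundwater models of the thesis), the sketch below scores least-squares polynomial fits of increasing complexity with AIC = n ln(RSS/n) + 2k, the standard form for i.i.d. Gaussian errors; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "observations" from a mildly nonlinear truth.
x = np.linspace(0, 10, 40)
y = 2.0 + 0.8 * x + 0.05 * x**2 + rng.normal(0, 0.5, x.size)

def aic_for_polynomial(degree):
    """AIC for a least-squares polynomial fit under Gaussian errors:
    AIC = n*ln(RSS/n) + 2k, with k = degree + 1 fitted parameters."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + 2 * k

# Compare candidate model structures of increasing complexity.
for d in range(1, 6):
    print(f"degree {d}: AIC = {aic_for_polynomial(d):.1f}")
# The parsimonious model (lowest AIC) is typically degree 2 here: extra
# parameters improve the fit by less than the 2k penalty they cost.
```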
153

Investigations on Supersonic Flow in Miniature Shock Tubes

Subburaj, Janardhanraj, January 2015
The emerging paradigms of shockwave research have opened up new horizons for interdisciplinary applications. This has inevitably driven research towards studying the propagation of shockwaves in miniature shock tubes (tube diameters typically in the range of 1–10 mm). Studies have revealed that operating in this diameter range at low initial pressures (typically p1 < 100 mbar), and hence at low characteristic Reynolds numbers (typically Re′ < 23,000 m⁻¹), results in the boundary layer playing a major role in shockwave attenuation. But there are very few studies addressing shockwave attenuation when shock tubes are operated at higher Reynolds numbers. Pressure measurements and visualization studies in shock tubes of these length scales are also seldom attempted, due to practical difficulties. Given that premise, in the present work the shockwave attenuation due to wall effects and non-ideal diaphragm rupture in shock tubes of hydraulic diameters 2 mm, 6 mm and 10 mm has been investigated at ambient initial driven section conditions (T1 = 300 K and p1 = 1 atm, giving Reynolds numbers in the range 70,212 m⁻¹ – 888,627 m⁻¹). In this study, pressure measurements and high-speed visualization have been carried out to find the effect of the pressure ratio, temperature ratio and molecular weight of the driver gas on the shock attenuation process. In order to study the effects of the driver/driven gas temperature ratio on shock attenuation, a new in-situ oxyhydrogen (hydrogen and oxygen gases in the ratio 2:1) generator has been developed. Using this innovative device, the miniature shock tubes are also run in detonation mode (forward-facing detonation wave). The results obtained using helium and nitrogen driver gases reveal that as the hydraulic diameter of the shock tube is reduced, a larger diaphragm pressure ratio is required to obtain a particular shockwave strength. The attenuation of the shockwave is found to be a function of the driver gas properties, namely the specific heat ratio (γ4), molecular weight (W4) and temperature (T4), as well as the diaphragm opening time of the shock tube, in addition to the parameters Re′, P21, L/D and p1 already suggested in previous reports. The visualization studies reveal that the diaphragm opening time, which leads to longer shock formation distances, appears to influence the shockwave attenuation process at these shock tube diameters. Further, the strength of the shockwave is found to reduce when the ratio W4/W1 is higher. It is also seen that the length of the driven section must be less than twice the length of the driver section to reduce attenuation. Based on this understanding of the nature of supersonic flow in miniature shock tubes, a novel shock/blast wave device has been developed for innovative biotechnology applications such as needleless vaccine delivery and cell transformation. The new device has an internal diameter of 6 mm, and by varying the length of the driver/driven sections, either shock or blast waves of the requisite strength and impulse can be generated at the open end of the tube. In the shock tube mode of operation, shockwaves with a steady time duration of up to 30 μs have been generated. In the blast tube mode of operation, where the entire tube is filled with the oxyhydrogen mixture, shockwaves with peak pressures of up to 550 kPa have been obtained with good repeatability. An attempt to power this device using solar energy has also given successful results.
Visualization of the open end of the detonation-driven shock tube reveals features typical of flow from the open end of shock tubes and has helped in quantifying the density field. The subsequent instants of the flow resemble the precursor flow in gun muzzle blast and flash. Typical energy levels of the shock/blast waves coming out of this device are found to be about 34 J for an oxyhydrogen fill pressure of 5.1 bar in the shock tube operation mode. Transformation of E. coli, Salmonella Typhimurium and Pseudomonas aeruginosa bacterial strains using the device, by introducing plasmid DNA through their cell walls, has been successfully carried out. There is a more than twofold increase in transformation efficiency using the device as compared to conventional methods. Using the same device, needleless vaccine delivery in mice using Salmonella has also been demonstrated successfully. Overall, the present thesis proposes a novel method for generating shockwaves in a repeatable and controllable manner at miniature scales for interdisciplinary applications. It is also the first time that experiments with miniature shock tubes of different diameters have been carried out to demonstrate the attenuation of shockwaves as the hydraulic diameter of the shock tube decreases. Future research will focus on quantitative measurement of the particle velocity behind the shockwaves, and on the nature of the boundary layers, to further resolve the complex flow physics associated with supersonic flows in these miniature shock tubes.
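For context, even the ideal one-dimensional shock tube relation shows how the driver gas properties γ4, W4 and T4 set the diaphragm pressure ratio p4/p1 needed for a given shock strength; real miniature tubes, with the attenuation the thesis measures, need still larger ratios. The sketch below evaluates that textbook relation (Anderson's form) with illustrative gas choices; it is not the thesis's attenuation model.

```python
import numpy as np

R_UNIV = 8.314  # J/(mol K)

def sound_speed(gamma, molar_mass_kg, T):
    return np.sqrt(gamma * R_UNIV / molar_mass_kg * T)

def diaphragm_pressure_ratio(Ms, gamma1, W1, T1, gamma4, W4, T4):
    """Ideal 1-D shock tube relation: p4/p1 required to drive a shock
    of Mach number Ms into driven gas 1 with driver gas 4."""
    a1 = sound_speed(gamma1, W1, T1)
    a4 = sound_speed(gamma4, W4, T4)
    P21 = 1 + 2 * gamma1 / (gamma1 + 1) * (Ms**2 - 1)   # shock strength p2/p1
    term = ((gamma4 - 1) * (a1 / a4) * (P21 - 1)
            / np.sqrt(2 * gamma1 * (2 * gamma1 + (gamma1 + 1) * (P21 - 1))))
    return P21 * (1 - term) ** (-2 * gamma4 / (gamma4 - 1))

# Helium vs nitrogen driver into ambient air, target shock Mach 2.0:
for name, g4, W4 in [("helium", 1.667, 4.0e-3), ("nitrogen", 1.4, 28.0e-3)]:
    p41 = diaphragm_pressure_ratio(Ms=2.0, gamma1=1.4, W1=28.97e-3, T1=300.0,
                                   gamma4=g4, W4=W4, T4=300.0)
    print(f"{name} driver: p4/p1 = {p41:.1f} for Ms = 2.0")
# The light, high-sound-speed helium driver needs a far smaller p4/p1
# than nitrogen, consistent with the driver-gas trends reported above.
```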
154

Investigation on uncertainty and sensitivity analysis of complex systems

Zhu, Yueying, 23 October 2017
By means of a Taylor series expansion, a general analytic formula is derived to characterize uncertainty propagation from the input variables to the model response, assuming input independence. Using power-law and exponential functions, it is shown that the widely used approximation considering only the first-order contribution of input uncertainty is sufficiently good only when the input uncertainty is negligible or the underlying model is almost linear. The method is applied to a power grid system and to the EOQ model. The method is also extended to the correlated case. With the extended method, it is straightforward to identify the importance of input correlations in the model response, which allows one to determine whether or not input correlations should be considered in practical applications. Numerical examples demonstrate the effectiveness and validity of the method for general models, as well as for specific ones such as the deterministic HIV model. The method is then compared to Sobol's, implemented with a sampling-based strategy. The results show that Sobol's method may overvalue the roles of individual input factors but underestimate those of their interaction effects when there are nonlinear coupling terms between input factors. A modification is then introduced, helping to understand the difference between the two methods. Finally, a numerical model is designed based on a virtual gambling mechanism, regarding the formation of opinion dynamics. Theoretical analysis is carried out with the one-at-a-time method, while a sampling-based method provides a global analysis of output uncertainty and sensitivity.
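A minimal sketch of the first-order (Taylor) uncertainty propagation discussed above, checked against Monte Carlo on a power-law-times-exponential toy model; the model and the input uncertainties are illustrative, not those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(x1, x2):
    return x1 ** 2.5 * np.exp(0.5 * x2)   # power-law x exponential test model

mu = np.array([2.0, 1.0])       # input means
sigma = np.array([0.05, 0.05])  # input standard deviations (independent)

# First-order Taylor approximation: var(y) ~ sum_i (df/dx_i)^2 var(x_i).
eps = 1e-6
grad = np.array([
    (f(mu[0] + eps, mu[1]) - f(mu[0] - eps, mu[1])) / (2 * eps),
    (f(mu[0], mu[1] + eps) - f(mu[0], mu[1] - eps)) / (2 * eps),
])
sd_taylor = np.sqrt(np.sum(grad ** 2 * sigma ** 2))

# Monte Carlo reference.
x = rng.normal(mu, sigma, size=(200_000, 2))
sd_mc = f(x[:, 0], x[:, 1]).std()

print(f"first-order Taylor: {sd_taylor:.4f}, Monte Carlo: {sd_mc:.4f}")
# Increase sigma (e.g. to 0.5) and the first-order estimate degrades,
# as the abstract notes for strongly nonlinear models / large uncertainty.
```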
155

Uncertainty analysis: towards more accurate predictions for the synthesis of superheavy nuclei

Cauchois, Bartholome, 25 June 2018
The nuclear reaction theories describing the synthesis of superheavy nuclei are not firmly established. Although the basic qualitative features of fusion-evaporation are broadly agreed upon, the quantitative predictions of the available models are still unsatisfactory. The production cross-section is the product of the capture cross-section, the formation probability and the survival probability. Previous studies have shown that the dominant part of the remaining discrepancies comes from our inability to properly constrain the formation probability. The main goal of this thesis is to constrain this quantity theoretically. This is achieved by examining the uncertainties in the capture cross-section and the survival probability using regression analysis. The fission barrier being the most influential factor in survival probability calculations, it is assumed to be the only source of its uncertainties. Since the fission barrier is the difference between the ground-state and saddle-point masses, we started by investigating the uncertainties in the liquid drop model. Based on this analysis, we refined a method to constrain the shell correction energies. To determine the uncertainties in the fission barriers, a simplified phenomenological macroscopic-microscopic model was used.
The uncertainties in the capture step were determined using a model based on a parametrization of the barrier distribution. From the propagation of the uncertainties in the capture cross-section and the fission barrier, the constraints on the formation probability were determined. Separately, the effects of inertia on the formation probability were investigated using perturbation theory, and a new mechanism reducing fusion hindrance was described as a shift of the initial condition within the Smoluchowski approximation. Additionally, based on this approach, an explanation for the phenomenological energy-dependent parametrization of the injection point was found.
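The factorization σ_prod = σ_capture × P_form × P_surv lets one back out constraints on the formation probability by Monte Carlo propagation of the other uncertainties, which is the spirit of the analysis above. The sketch below uses entirely made-up magnitudes and log-normal error bars, not the thesis's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Illustrative magnitudes for sigma_prod = sigma_cap * P_form * P_surv.
sigma_prod = rng.lognormal(np.log(1e-12), 0.3, n)  # measured production xsec [b]
sigma_cap = rng.lognormal(np.log(1e-2), 0.2, n)    # capture xsec from model [b]
p_surv = rng.lognormal(np.log(1e-8), 0.8, n)       # survival prob. (fission barrier)

p_form = sigma_prod / (sigma_cap * p_surv)         # back out formation probability
p_form = p_form[p_form <= 1.0]                     # discard unphysical samples

lo, med, hi = np.quantile(p_form, [0.16, 0.5, 0.84])
print(f"P_form ~ {med:.2e} (68% band {lo:.2e} .. {hi:.2e})")
# The band is dominated by the survival-probability (fission-barrier)
# uncertainty, which is why constraining the barrier comes first.
```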
156

Bayesian model averaging using different precipitation series applied to rainfall-runoff simulation in the Ribeirão da Onça Basin

Antônio Alves Meira Neto, 11 July 2013
This study proposes an approach to the hydrological modeling of the Ribeirão da Onça Basin (B.R.O.) based on automatic calibration and uncertainty analysis methods, together with model averaging. The Soil and Water Assessment Tool (SWAT) was used due to its distributed nature and physical description of hydrologic processes. An ensemble composed of five precipitation schemes, based on different sources and spatial interpolation methods, was used. The Sequential Uncertainty Fitting ver. 2 (SUFI-2) procedure was used for automatic calibration and uncertainty analysis of the SWAT model parameters, together with the generation of streamflow simulations with uncertainty intervals. Bayesian Model Averaging (BMA) was then used to merge the different responses into a single probabilistic forecast. The results of the uncertainty analysis for the SWAT parameters show that the Soil Conservation Service (SCS) model for surface runoff prediction may not be suitable for the B.R.O., and that further investigation of the soil physical properties of the basin is recommended. An analysis of the accuracy and precision of the simulations produced by the precipitation ensemble members against the BMA simulation supports the latter as a suitable framework for streamflow simulations at the B.R.O.
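A minimal sketch of model averaging in the spirit of BMA: ensemble members are weighted by how well they reproduce observed streamflow, and the weighted mixture gives a single probabilistic forecast. For brevity the weights here are simple likelihood weights rather than the EM-estimated weights of the full BMA procedure, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: observed streamflow and three "model" simulations of it
# (stand-ins for SWAT runs driven by different precipitation series).
q_obs = 10 + rng.normal(0, 1, 200)
sims = np.stack([q_obs + rng.normal(b, s, 200)          # biased/noisy members
                 for b, s in [(0.5, 1.0), (-1.0, 2.0), (0.2, 1.5)]])

def log_lik(sim, obs):
    """Gaussian log-likelihood of a member on the calibration window."""
    resid = obs - sim
    s2 = resid.var()
    return -0.5 * np.sum(resid**2 / s2 + np.log(2 * np.pi * s2))

ll = np.array([log_lik(s, q_obs) for s in sims])
w = np.exp(ll - ll.max())
w /= w.sum()                                            # model weights
print("weights:", np.round(w, 3))

# Combined predictive mean; variance = between-model + within-model spread.
mean = w @ sims
var = w @ (sims - mean) ** 2 + w @ np.array([(q_obs - s).var() for s in sims])
print(f"combined RMSE: {np.sqrt(np.mean((mean - q_obs) ** 2)):.2f}, "
      f"mean predictive std: {np.sqrt(var).mean():.2f}")
```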
157

Uncertainty mitigation through integration with production history matching

Becerra, Gustavo Gabriel, 12 July 2007
Advisors: Denis Jose Schiozer, Celio Maschio / Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecanica e Instituto de Geociencias / The lack of reliable data, or data with a high degree of uncertainty, adds risk to production forecasting, making history matching, the calibration of the model against recorded field production, indispensable. History matching is an inverse problem and, in general, different combinations of reservoir attributes can lead to acceptable solutions, especially when these attributes carry a high degree of uncertainty. Integrating history matching with a probabilistic analysis of representative models yields a way to detect matched models inside an acceptance interval, providing a more efficient framework for predictions.
It is necessary to consider dependences between global and local attributes. The scope of this work is to present a methodology that integrates uncertainty analysis with the history matching process in complex models. This procedure helps to detect critical subsurface attributes and their possible variation, in order to estimate a representative range of the additional reserves to be developed. The objective is not to obtain the best deterministic model, but to mitigate uncertainties by using observed data, improving the methodology initiated by Moura Filho (2006), which was applied to a simple model. The methodology presented in this work is applied to two study cases of similar complexity. First, the methodology is verified and validated, on a global scale, on the Namorado Field. Then, at the application stage, a synthetic reservoir model is used, built from real outcrop data from Brazil and incorporating information from analog fields with turbiditic systems deposited in deep water. The methodology allows the redefinition of the probabilities and levels of the dynamic and static attributes in order to: (1) reduce the set of possible history matches, obtaining more realistic models; (2) identify the existing uncertainty as a function of the observed data; (3) decrease the uncertainty ranges of critical reservoir parameters; (4) increase confidence in the production forecast. One contribution of this work is a quantitative approach to increase the reliability of reservoir simulation as an auxiliary tool in decision processes. Another is a procedure to reduce the time consumed in handling multiple uncertain attributes during history matching.
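The acceptance-interval idea above can be sketched with a rejection scheme: sample attribute combinations from their prior ranges, keep only those whose misfit against the production history is within a tolerance tied to the noise level, and read off the narrowed attribute ranges. The two-attribute proxy model below is invented for illustration and stands in for a reservoir simulator.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy reservoir proxy: cumulative production as a function of two uncertain
# attributes (e.g. a permeability multiplier kx and aquifer strength aq).
def simulate(kx, aq, t):
    return 100 * kx * (1 - np.exp(-t / (5 + 10 * aq)))

t = np.arange(1, 11)                        # 10 years of "history"
truth = simulate(1.2, 0.4, t)
history = truth + rng.normal(0, 2, t.size)  # observed production data

# Prior scenarios: wide uncertainty on both attributes.
kx = rng.uniform(0.5, 2.0, 5000)
aq = rng.uniform(0.0, 1.0, 5000)
misfit = ((simulate(kx[:, None], aq[:, None], t) - history) ** 2).mean(axis=1)

# Acceptance interval: keep models whose misfit is near the noise level.
ok = misfit < 8.0                           # ~2x the observation noise variance
print(f"accepted {ok.sum()} of {ok.size} scenarios")
print(f"kx range: prior [0.50, 2.00] -> posterior "
      f"[{kx[ok].min():.2f}, {kx[ok].max():.2f}]")   # mitigated uncertainty
```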
158

Characterization of hydro-climatic systems at the local scale in the Nepalese Himalayas

Eeckman, Judith, 30 October 2017
The central part of the Hindukush-Himalaya region presents tremendous heterogeneity, in particular in terms of topography and climatology. The representation of hydro-climatic processes in Himalayan catchments is limited by a lack of knowledge of their hydrological behavior, and local variability is difficult to characterize with modeling studies done at the regional scale. The proposed approach is to characterize hydro-climatic systems at the local scale in order to reduce the uncertainties associated with environmental heterogeneity. The integration of locally reliable data is tested for modeling sparsely instrumented, highly heterogeneous catchments. Two sub-catchments of the Dudh Koshi River basin (Nepal) are used as representative samples of high- and mid-mountain environments, with no glacier contribution. The ISBA surface scheme is applied to simulate the hydrological responses of the surface types described from in-situ observations. Measurements of the physical properties of soils are integrated to refine the surface parametrization in the model. The necessary climatic data are interpolated from the available in-situ measurements. A non-deterministic approach is applied to quantify the uncertainties associated with the effect of topography on precipitation and their propagation through the modeling chain. Finally, uncertainties associated with model structure are estimated at the local scale by comparing the parametrizations and simulation results obtained, on the one hand, with the ISBA surface scheme coupled with a reservoir routing module and, on the other hand, with the J2000 hydrological model.
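One way to make the precipitation step concrete: interpolate gauge observations to an ungauged point with inverse-distance weighting after an elevation correction, then treat that correction (a lapse rate) as uncertain and propagate it by Monte Carlo, in the spirit of the non-deterministic approach above. The gauge positions, values and lapse-rate prior below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Gauge network: (x, y, z) positions [km, km, m] and measured precipitation.
gauges = np.array([[0.0, 0.0, 2000.0], [10.0, 5.0, 2800.0], [4.0, 9.0, 3500.0]])
p_obs = np.array([12.0, 9.0, 6.5])               # mm/day (illustrative)

def interpolate(target, lapse):
    """Inverse-distance interpolation after adjusting gauges to the target
    elevation with an (uncertain) precipitation lapse rate [mm/day per km]."""
    d = np.linalg.norm(gauges[:, :2] - target[:2], axis=1) + 1e-9
    w = 1.0 / d**2
    p_adj = p_obs + lapse * (target[2] - gauges[:, 2]) / 1000.0
    return w @ p_adj / w.sum()

target = np.array([6.0, 4.0, 4200.0])            # ungauged point at 4200 m

# Non-deterministic treatment: the lapse rate itself is uncertain.
lapse_samples = rng.normal(-1.5, 0.5, 10_000)    # assumed prior, mm/day per km
p_samples = np.array([interpolate(target, l) for l in lapse_samples])
print(f"precipitation at target: {p_samples.mean():.1f} "
      f"+/- {p_samples.std():.1f} mm/day")
```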
159

Towards Individualized Transcranial Electric Stimulation Therapy through Computer Simulation

Kalloch, Benjamin, 29 November 2021
Transcranial electric current stimulation (tES) denotes a group of brain stimulation techniques that apply a weak electric current over two or more non-invasive, head-mounted electrodes. When employing a direct current, this method is denoted transcranial direct current stimulation (tDCS). The general aim of all tES techniques is the modulation of brain function by an up- or downregulation of brain activity. Among these, transcranial direct current stimulation is investigated as an adjuvant tool to promote processes of microscopic reorganization of the brain as a consequence of learning and, more specifically, of rehabilitation therapy after a stroke. Current challenges of this research are the high variability of the achieved stimulation effects across subjects and an incomplete understanding of the interplay between the underlying mechanisms. A key component in understanding the stimulation mechanism is the electric field, which is exerted by the electrodes and distributes within the subject's head. A principal concept assumes that brain areas exposed to a higher electric field strength likewise experience a higher stimulation effect. This gives the positioning of the electrodes a decisive role in the stimulation. However, the electric field distributes non-uniformly across a subject's brain due to the heterogeneous electrical conductivity profile of the human head, and the distribution pattern varies between subjects due to their individual anatomy. A trivial estimate of the distribution of the electric field based solely on the position of the stimulating electrodes is therefore not precise enough for a well-targeted stimulation.
Computer-based biophysical simulations of transcranial electric stimulation enable an individual approximation of the distribution pattern of the electric field in a subject based on their medical imaging data. They are thus increasingly employed for the planning and verification of tDCS applications and constitute an essential tool on the way to individualized stroke rehabilitation therapy. Software pipelines facilitating the underlying individualized processing for a wide range of researchers have been developed for use in healthy adults over the past years, but, to date, the simulation of patients with abnormal brain tissue and structure-disrupting lesions remains a non-trivial task. The presented project was therefore dedicated to establishing and practically applying a tES simulation workflow, with the processing of medical imaging data of neurological patients with abnormal brain tissue as a central requirement. The basic simulation workflow was first designed and validated for healthy adults. This comprised compiling medical image processing algorithms into a comprehensive workflow to identify and extract the electrically relevant physiological structures of the human head and upper torso from magnetic resonance images. The identified structures had to be converted into computational models, and the underlying physical problem of electric volume conduction in biological tissue was solved by means of numerical simulation.
Over the course of normal aging, the brain is subjected to structural alterations, among which a loss of brain volume and the development of microscopic alterations of its fiber structure are the most relevant. In a second step, the workflow was thus extended to incorporate these phenomena of normal aging. The main challenge in this subproject was the biophysical modeling of the altered brain microstructure, as the resulting alterations of the conductivity profile of the brain had not yet been quantified in the literature. The augmentation of the workflow therefore most notably included the modeling of uncertain electrical properties, with which the influence of the imprecisely known electrical conductivity of the biological structures of the human head on the electric field could be assessed. In a simulation study including imaging data of 88 subjects, the influence of the altered brain fiber structure on the electric field was then systematically investigated. These tissue alterations were found to have a highly localized and generally low impact.
Finally, in a third step, tDCS simulations of stroke patients were conducted. Their large, structure-disrupting lesions were modeled in more detail than in previous stroke simulation studies and, again, with uncertain electrical conductivities, resulting in uncertain electric field estimates. The individually simulated electric fields were related to the brain activation of 18 patients, taking the inherently uncertain electric field estimates into account. The goal was to clarify whether the stimulation exerts a positive influence on brain function in the context of rehabilitation therapy, supporting brain reorganization following a stroke.
While a weak correlation could be established, further investigation will be necessary to answer this research question.
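The core physical problem above, electric volume conduction div(σ∇φ) = 0 with electrode boundary conditions, can be sketched in two dimensions with a finite-difference Jacobi iteration. A real tES pipeline solves the 3D problem on finite-element meshes segmented from MRI; the grid, conductivities, boundary handling and "lesion" patch below are purely illustrative.

```python
import numpy as np

# Illustrative 2D finite-difference solution of div(sigma grad phi) = 0.
n = 60
sigma = np.full((n, n), 0.33)            # "brain" conductivity [S/m]
sigma[:6, :] = 0.01                      # low-conductivity "skull" layer on top
sigma[30:40, 25:35] = 0.10               # illustrative "lesion" patch

# Face conductivities (arithmetic mean of the two adjacent cells).
sN = 0.5 * (sigma + np.roll(sigma, 1, 0))
sS = 0.5 * (sigma + np.roll(sigma, -1, 0))
sW = 0.5 * (sigma + np.roll(sigma, 1, 1))
sE = 0.5 * (sigma + np.roll(sigma, -1, 1))

phi = np.zeros((n, n))
anode, cathode = (0, slice(10, 20)), (0, slice(40, 50))  # electrode patches

for _ in range(5000):                    # Jacobi iteration
    phi = ((sN * np.roll(phi, 1, 0) + sS * np.roll(phi, -1, 0)
            + sW * np.roll(phi, 1, 1) + sE * np.roll(phi, -1, 1))
           / (sN + sS + sW + sE))
    # Crude no-flux outer boundaries, then fixed electrode potentials.
    phi[0, :], phi[-1, :] = phi[1, :], phi[-2, :]
    phi[:, 0], phi[:, -1] = phi[:, 1], phi[:, -2]
    phi[anode], phi[cathode] = 1.0, -1.0

gy, gx = np.gradient(-phi)               # E = -grad(phi)
E = np.hypot(gx, gy)
print(f"median |E| below the skull layer: {np.median(E[6:, :]):.4f} (arb. units)")
# The heterogeneous sigma (skull, lesion) visibly redistributes the field,
# which is the reason trivial electrode-position heuristics fall short.
```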
160

Dynamics of multibody systems with an uncertainty model of the mutual coupling

Svobodová, Miriam, January 2020
This diploma thesis evaluates the impact of uncertain stiffness on tool deviation during the grooving process. Insufficient stiffness in individual parts of the machine gives rise to mechanical vibration during cutting, which may damage the surface of the workpiece, the tool, or the machine itself. The stiffness changes as a result of tool wear, the chosen cutting conditions, and many other factors. The first part gives a theoretical introduction to the field of uncertainty and selects suitable solution methods: Monte Carlo and polynomial chaos expansion, both implemented in MATLAB. Both methods are first tested on simple systems with uncertain stiffness inputs; these systems stand in for the stiffness characteristics of the individual support parts. A model of turning during grooving with three degrees of freedom is then defined, and uncertainty and sensitivity analyses for the uncertain stiffness inputs are carried out with both methods. Finally, the two methods are compared in terms of computation time and precision. The gathered data show that the change in stiffness has a significant impact on vibration in all degrees of freedom of the analysed model. As an example, the maximum and minimum tool deviations over the uncertain workpiece stiffness were calculated via the Monte Carlo method. The stiffness of the ball screw has the largest impact on the resulting tool vibration. The solution was developed to support a more stable cutting process.
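A minimal Monte Carlo sketch of the kind of analysis described: a one-degree-of-freedom tool model with uncertain stiffness, propagated to the steady-state deviation amplitude. The thesis works in MATLAB with a three-degree-of-freedom model; the Python toy below, with invented parameters, only illustrates the principle.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1-DOF cutting-tool model: steady-state amplitude under a harmonic
# cutting force, with uncertain stiffness k (all numbers illustrative).
m, c = 2.0, 40.0             # mass [kg], damping [N s/m]
F0, omega = 100.0, 300.0     # force amplitude [N], excitation [rad/s]

def amplitude(k):
    return F0 / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

k_nom = 2.0e6                                  # nominal stiffness [N/m]
k = rng.normal(k_nom, 0.1 * k_nom, 100_000)    # 10% uncertain stiffness

a = amplitude(k) * 1e6                         # deviation in micrometers
print(f"tool deviation: mean {a.mean():.2f} um, "
      f"min {a.min():.2f} um, max {a.max():.2f} um")
# A polynomial chaos expansion would approximate amplitude(k) by a short
# Hermite-polynomial series in the standardized k, recovering similar
# statistics from far fewer model evaluations.
```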
