  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Probabilistic Control: Implications For The Development Of Upper Limb Neuroprosthetics

Anderson, Chad January 2007 (has links)
Functional electrical stimulation (FES) involves artificial activation of paralyzed muscles via implanted electrodes. FES has been successfully used to improve the ability of tetraplegics to perform upper limb movements important for daily activities. The variety of movements that can be generated by FES is, however, limited to a few movements such as hand grasp and release. Ideally, a user of an FES system would have effortless command over all of the degrees of freedom associated with upper limb movement. One reason that a broader range of movements has not been implemented is the substantial challenge of identifying the patterns of muscle stimulation needed to elicit additional movements. The first part of this dissertation addresses this challenge by using a probabilistic algorithm to estimate the patterns of muscle activity associated with a wide range of upper limb movements. A neuroprosthetic involves the control of an external device via brain activity. Neuroprosthetics have been successfully used to improve the ability of tetraplegics to perform tasks important for interfacing with the world around them. The variety of mechanisms that they can control is, however, limited to a few devices such as special computer typing programs. Because motor areas of the cerebral cortex are known to represent and regulate voluntary arm movements, it might be possible to sense this activity with electrodes and decipher this information in terms of a moment-by-moment representation of arm trajectory. Indeed, several methods for decoding neural activity have been described, but these approaches are encumbered by technical difficulties.
The second part of this dissertation addresses this challenge by using similar probabilistic methods to extract arm trajectory information from electroencephalography (EEG) electrodes that are already chronically deployed and widely used in human subjects. Ultimately, the two approaches developed as part of this dissertation might serve as a flexible controller for interfacing brain activity with functional electrical stimulation systems to realize a brain-controlled upper-limb neuroprosthetic system capable of eliciting natural movements. Such a system would effectively bypass the injured region of the spinal cord and reanimate the arm, greatly increasing movement capability and independence in paralyzed individuals.
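As a rough illustration of the trajectory-decoding idea this abstract describes (not the dissertation's actual algorithm), the sketch below fits a ridge-regularised linear decoder that maps simulated multi-channel signals to 2-D arm velocity; all dimensions, noise levels, and the linear encoding model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 32 sensor channels whose activity is (noisily)
# linearly related to 2-D arm velocity -- a stand-in for the kind of
# moment-by-moment trajectory decoding described above.
n_samples, n_channels = 2000, 32
true_W = rng.normal(size=(n_channels, 2))        # unknown encoding weights
velocity = rng.normal(size=(n_samples, 2))       # target arm velocity
signals = velocity @ true_W.T + 0.5 * rng.normal(size=(n_samples, n_channels))

# Split into training and held-out data.
X_train, X_test = signals[:1500], signals[1500:]
y_train, y_test = velocity[:1500], velocity[1500:]

# Ridge-regularised least-squares decoder: W = (X'X + lam*I)^{-1} X'y
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_channels),
                    X_train.T @ y_train)
y_pred = X_test @ W

# Correlation between decoded and true velocity on held-out data.
corr = np.corrcoef(y_pred[:, 0], y_test[:, 0])[0, 1]
```

Real decoders face non-linearities, non-stationarity, and far lower signal-to-noise than this toy linear model assumes.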
12

Variational inference for Gaussian-jump processes with application in gene regulation

Ocone, Andrea January 2013 (has links)
In the last decades, the explosion of data from quantitative techniques has revolutionised our understanding of biological processes. In this scenario, advanced statistical methods and algorithms are becoming fundamental to deciphering the dynamics of biochemical mechanisms such as those involved in the regulation of gene expression. Here we develop mechanistic models and approximate inference techniques to reverse engineer the dynamics of gene regulation from mRNA and/or protein time series data. We start from an existing variational framework for statistical inference in transcriptional networks. The framework is based on a continuous-time description of the mRNA dynamics in terms of stochastic differential equations, which are governed by latent switching variables representing the on/off activity of regulating transcription factors. The main contributions of this work are the following. We sped up the variational inference algorithm by developing a method to compute an approximate posterior distribution over the latent variables using a constrained optimisation algorithm. In addition to computational benefits, this method enabled the extension to statistical inference in networks with a combinatorial model of regulation. A limitation of this framework is that inference is possible only in transcriptional networks with a single-layer architecture (where single transcription factors, or couples of them, directly regulate an arbitrary number of target genes). The second main contribution of this work is the extension of the inference framework to hierarchical structures, such as the feed-forward loop. In the last contribution we define a general structure for transcription-translation networks. This work is important because it provides a general statistical framework to model complex dynamics in gene regulatory networks.
The framework is modular and scalable to realistically large systems with general architecture, thus representing a valuable alternative to traditional differential equation models. All models are embedded in a Bayesian framework; inference is performed using a variational approach and compared to exact inference where possible. We apply the models to the study of different biological systems, from the metabolism in E. coli to the circadian clock in the picoalga O. tauri.
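To make the model class concrete, here is a forward simulation of the kind of Gaussian-jump process the abstract describes: a latent two-state (telegraph) transcription-factor switch driving mRNA abundance through a stochastic differential equation. All rate constants are illustrative, not values from the thesis, and this is simulation only, not the variational inference machinery itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent on/off switch mu(t) driving mRNA level x(t) through
#   dx = (b + A*mu(t) - lam*x) dt + sigma dW
# simulated with an Euler-Maruyama scheme. Parameters are made up.
dt, T = 0.01, 20.0
n = int(T / dt)
k_on, k_off = 0.5, 0.5          # switching rates of the telegraph process
b, A, lam, sigma = 0.2, 2.0, 1.0, 0.1

mu = np.zeros(n, dtype=int)     # latent on/off state
x = np.zeros(n)                 # mRNA level
for t in range(1, n):
    # Jump of the latent switch with the appropriate hazard.
    rate = k_off if mu[t - 1] == 1 else k_on
    flip = rng.random() < rate * dt
    mu[t] = 1 - mu[t - 1] if flip else mu[t - 1]
    # Euler-Maruyama step for the conditionally Gaussian mRNA dynamics.
    drift = b + A * mu[t] - lam * x[t - 1]
    x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
```

Inference in the thesis runs the other way: given noisy observations of x(t), recover a posterior over the latent switch trajectory mu(t) and the kinetic parameters.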
13

Input-output transformations in the awake mouse brain using whole-cell recordings and probabilistic analysis

Puggioni, Paolo January 2015 (has links)
The activity of cortical neurons in awake brains changes dynamically as a function of the behavioural and attentional state. The primary motor cortex (M1) plays a central role in regulating complex motor behaviours. Despite growing knowledge of its connectivity and spiking patterns, little is known about the intracellular mechanisms and rhythms underlying motor-command generation. In the last decade, whole-cell recordings in awake animals have become a powerful tool for characterising both sub- and supra-threshold activity during behaviour. Seminal in vivo studies have shown that changes in input structure and sub-threshold regime determine spike output during behaviour (input-output transformations). In this thesis I make use of computational and experimental techniques to better understand (i) how the brain regulates the sub-threshold activity of neurons during movement and (ii) how this is reflected in their input-output transformation properties. In the first part of this work I present a novel probabilistic technique to infer input statistics from in vivo voltage-clamp traces. This approach, based on Bayesian belief networks, outperforms current methods and allows estimation of synaptic input (i) kinetic properties, (ii) frequency, and (iii) weight distribution. I first validate the model on simulated data, then apply it to voltage-clamp recordings of cerebellar interneurons in awake mice. I found that synaptic weight distributions have long tails, which on average do not change during movement. Interestingly, the increase in synaptic current observed during movement is a consequence of the increase in input frequency only. In the second part, I study how the brain regulates the activity of pyramidal neurons in the M1 of awake mice during movement. I performed whole-cell recordings of pyramidal neurons in layer 5B (L5B), which represent one of the main descending output channels from motor cortex.
I found that slow large-amplitude membrane potential fluctuations, typical of quiet periods, were suppressed in all L5B pyramidal neurons during movement, which by itself reduced membrane potential (Vm) variability, input sensitivity and output firing rates. However, a sub-population of L5B neurons (~50%) concurrently experienced an increase in excitatory drive that depolarized mean Vm, enhanced input sensitivity and elevated firing rates. Thus, movement-related bidirectional modulation in L5B neurons is mediated by two opposing mechanisms: 1) a global reduction in network-driven Vm variability and 2) a coincident, targeted increase in excitatory drive to a subpopulation of L5B neurons.
14

Ponderação Bayesiana de modelos em regressão linear clássica / Bayesian model averaging in classic linear regression models

Nunes, Hélio Rubens de Carvalho 07 October 2005 (has links)
This work aims to introduce the Bayesian Model Averaging (BMA) methodology to researchers in the agronomic sciences and to discuss its advantages and limitations. BMA makes it possible to combine the results of different models concerning a given quantity of interest, and thus offers an alternative to the usual model selection methods such as the Coefficient of Multiple Determination (R²), the Adjusted Coefficient of Multiple Determination (adjusted R²), Mallows' Cp statistic, and the Prediction Sum of Squares (PRESS). Several studies have recently compared the performance of BMA against model selection methods, but many situations remain to be explored before a general conclusion about this methodology can be reached. In this work, BMA was applied to a data set from an agronomic experiment. Its predictive performance was then compared with that of the selection methods cited above in a simulation study varying the degree of multicollinearity (measured by the condition number of the standardized X'X matrix) and the sample size. In each setting, 1000 samples were generated from descriptive statistics of real agronomic data sets. Predictive performance was measured by the Logarithm of the Predictive Score (LEP). The empirical results indicate that BMA performs similarly to the usual model selection methods in the multicollinearity settings explored in this work.
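A minimal sketch of the BMA idea discussed above, using a BIC approximation to the model evidence (one common shortcut, not necessarily the thesis's exact implementation); the data-generating process and all coefficients are invented for illustration:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Synthetic regression with 3 candidate predictors; only x0 and x1
# truly enter the response.
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

def bic_log_evidence(Xs, y):
    """BIC-style approximate log model evidence for an OLS fit."""
    Xd = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    sigma2 = resid @ resid / len(y)
    k = Xd.shape[1]
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * k * np.log(len(y))

# Enumerate all non-empty predictor subsets and weight them.
models = [c for r in (1, 2, 3) for c in combinations(range(3), r)]
scores = np.array([bic_log_evidence(X[:, list(m)], y) for m in models])
weights = np.exp(scores - scores.max())
weights /= weights.sum()            # posterior model probabilities

best = models[int(np.argmax(weights))]
```

A BMA prediction would then average each model's prediction with these weights instead of committing to `best` alone.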
15

Scalable Gaussian process inference using variational methods

Matthews, Alexander Graeme de Garis January 2017 (has links)
Gaussian processes can be used as priors on functions. The need for a flexible, principled, probabilistic model of functional relations is common in practice, and such an approach is demonstrably useful in a large variety of applications. Two challenges are often encountered in Gaussian process modelling: the adverse scaling with the number of data points, and the lack of closed-form posteriors when the likelihood is non-Gaussian. In this thesis, we study variational inference as a framework for meeting these challenges. An introductory chapter motivates the use of stochastic processes as priors, with a particular focus on Gaussian process modelling. A section on variational inference reviews the general definition of Kullback-Leibler divergence. The concept of prior conditional matching, used throughout the thesis, is contrasted with classical approaches to obtaining tractable variational approximating families. Various theoretical issues arising from the application of variational inference to the infinite-dimensional Gaussian process setting are settled decisively. From this theory we are able to give a new argument for existing approaches to variational regression that settles debate about their applicability. This view of these methods justifies the principled extensions found in the rest of the work. The case of scalable Gaussian process classification is studied, both on its own merits and as a case study for non-Gaussian likelihoods in general. Using the resulting algorithms we find credible results on datasets of a scale and complexity that was not possible before our work. An extension to include Bayesian priors on model hyperparameters is studied alongside a new inference method that combines the benefits of variational sparsity and MCMC methods. The utility of such an approach is shown on a variety of example modelling tasks. We describe GPflow, a new Gaussian process software library that uses TensorFlow.
Implementations of the variational algorithms discussed in the rest of the thesis are included as part of the software. We discuss the benefits of GPflow when compared to other similar software. Increased computational speed is demonstrated in relevant, timed, experimental comparisons.
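To make the modelling setting concrete, here is the closed-form exact GP regression posterior with an RBF kernel on toy data. This is the baseline whose O(n³) cost motivates the thesis; the sparse variational approximations (and GPflow's implementations) are precisely what this sketch does *not* attempt. Kernel hyperparameters are fixed by hand:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(a, b, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Noisy observations of a smooth function.
X = rng.uniform(-3, 3, size=30)
y = np.sin(X) + 0.1 * rng.normal(size=30)
noise = 0.01                                  # observation noise variance

Xs = np.linspace(-3, 3, 100)                  # test inputs
K = rbf(X, X) + noise * np.eye(30)
Ks = rbf(Xs, X)

# Standard GP regression equations: posterior mean and covariance.
alpha = np.linalg.solve(K, y)
mean = Ks @ alpha
cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The solve against the full n×n matrix K is the bottleneck; sparse variational methods replace it with computations on m ≪ n inducing points while bounding the KL divergence to this exact posterior.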
16

Etude de la variabilité hémodynamique chez l’enfant et l’adulte sains en IRMf / Study of hemodynamic variability in healthy children and adults in fMRI

Badillo, Solveig 18 November 2013 (has links)
In fMRI, the conclusions drawn from experimental paradigms remain open to question insofar as they assume a priori knowledge of neurovascular coupling, that is, of the hemodynamic response function that models the link between stimulation and the measured signal. To better understand the neuronal and vascular changes induced by performing a cognitive task in fMRI, an in-depth study of the characteristics of the hemodynamic response is therefore essential. This thesis sheds new light on this question, building on an original method for intra-subject analysis of fMRI data: Joint Detection-Estimation (JDE). The JDE approach models the hemodynamic response in a non-parametric, multivariate way, while jointly detecting the brain areas activated in response to the stimulations of an experimental paradigm. The first contribution of this thesis is a thorough analysis of hemodynamic variability, both inter-individual and inter-regional, in a group of healthy young adults. This work validated the JDE method at the population level and highlighted the substantial hemodynamic variability that appears in certain brain regions: the parietal, temporal, and occipital lobes, and the motor cortex. This variability is all the greater when the region is involved in more complex cognitive processes. A second line of research focused on the hemodynamic organization of a brain area of particular importance in humans, the language system. Because this function is tied to the ability to learn to read, two groups of healthy children, aged 6 and 9 years respectively and in the process of learning or consolidating reading, were chosen for this study. Two important methodological contributions were proposed. First, a multi-session extension of the JDE approach (until then limited to single-session fMRI data) was developed to improve the robustness and reproducibility of the results. This extension revealed, within the population of children, the evolution of the hemodynamic response with age in the region of the superior temporal sulcus. Second, a new framework was developed to work around a limitation of the standard JDE approach, namely its reliance on an a priori parcellation of the data into functionally homogeneous regions. This parcellation is decisive for the subsequent analysis and affects the hemodynamic results. To avoid this a priori choice, the proposed alternative combines the results of several random parcellations of the data using consensus clustering techniques. Finally, a second extension of the JDE approach was developed to estimate the shape of the hemodynamic response at the group level. This model has so far been validated on simulations, and we plan to apply it to the children's data to improve the study of the temporal characteristics of the BOLD response in the language networks. This thesis thus offers, on the one hand, new methodological contributions for characterizing the hemodynamic response in fMRI and, on the other, a validation and a neuroscientific application of the proposed approaches.
17

A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference

Grauman, Kristen 22 May 2003 (has links)
We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes are then estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. The proposed method is applied to a data set of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown. We further augment the shape model to incorporate structural features of interest; unknown structural parameters for a novel set of contours are then inferred via the Bayesian reconstruction process. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a data set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
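One building block of the prior described above is probabilistic PCA; the closed-form maximum-likelihood solution (Tipping and Bishop) is short enough to sketch on synthetic "contour" vectors. The mixture over several such components, and everything specific to silhouettes and visual hulls, is omitted; dimensions and noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data from a linear-Gaussian latent model: d-dim observations
# generated from a q-dim latent space plus isotropic noise.
n, d, q = 500, 20, 3
W_true = rng.normal(size=(d, q))
Z = rng.normal(size=(n, q))
data = Z @ W_true.T + 0.1 * rng.normal(size=(n, d))

# Closed-form ML estimates for PPCA.
mu = data.mean(axis=0)
Xc = data - mu
_, S, Vt = np.linalg.svd(Xc / np.sqrt(n), full_matrices=False)
eig = S ** 2                              # sample covariance eigenvalues

# ML noise variance: mean of the discarded eigenvalues.
sigma2 = eig[q:].mean()
# ML loading matrix (up to rotation): leading eigenvectors, shrunk.
W = Vt[:q].T * np.sqrt(np.maximum(eig[:q] - sigma2, 0.0))
```

Unlike plain PCA, this yields a proper density over observations, which is what allows it to serve as a Bayesian prior for reconstructing shapes from noisy or partial silhouettes.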
18

Essays on Aggregation and Cointegration of Econometric Models

Silvestrini, Andrea 02 June 2009 (has links)
This dissertation can be broadly divided into two independent parts. The first three chapters analyse issues related to temporal and contemporaneous aggregation of econometric models. The fourth chapter contains an application of Bayesian techniques to investigate whether the post-transition fiscal policy of Poland is sustainable in the long run and consistent with an intertemporal budget constraint. Chapter 1 surveys the econometric methodology of temporal aggregation for a wide range of univariate and multivariate time series models. A unified overview of temporal aggregation techniques for this broad class of processes is presented in the first part of the chapter and the main results are summarized. In each case, assuming the underlying process at the disaggregate frequency is known, the aim is to find the appropriate model for the aggregated data. Additional topics concerning temporal aggregation of ARIMA-GARCH models (see Drost and Nijman, 1993) are discussed and several examples presented. Systematic sampling schemes are also reviewed. Multivariate models, which show interesting features under temporal aggregation (Breitung and Swanson, 2002, Marcellino, 1999, Hafner, 2008), are examined in the second part of the chapter. In particular, the focus is on temporal aggregation of VARMA models and on the related concept of spurious instantaneous causality, which is not a time series property invariant to temporal aggregation. On the other hand, as pointed out by Marcellino (1999), other important time series features such as cointegration and the presence of unit roots are invariant to temporal aggregation and are not induced by it. Some empirical applications based on macroeconomic and financial data illustrate all the techniques surveyed and the main results.
Chapter 2 is an attempt to monitor fiscal variables in the Euro area, building an early warning signal indicator for assessing the development of public finances in the short-run and exploiting the existence of monthly budgetary statistics from France, taken as "example country". The application is conducted focusing on the cash State deficit, looking at components from the revenue and expenditure sides. For each component, monthly ARIMA models are estimated and then temporally aggregated to the annual frequency, as the policy makers are interested in yearly predictions. The short-run forecasting exercises carried out for years 2002, 2003 and 2004 highlight the fact that the one-step-ahead predictions based on the temporally aggregated models generally outperform those delivered by standard monthly ARIMA modeling, as well as the official forecasts made available by the French government, for each of the eleven components and thus for the whole State deficit. More importantly, by the middle of the year, very accurate predictions for the current year are made available. The proposed method could be extremely useful, providing policy makers with a valuable indicator when assessing the development of public finances in the short-run (one year horizon or even less). Chapter 3 deals with the issue of forecasting contemporaneous time series aggregates. The performance of "aggregate" and "disaggregate" predictors in forecasting contemporaneously aggregated vector ARMA (VARMA) processes is compared. An aggregate predictor is built by forecasting directly the aggregate process, as it results from contemporaneous aggregation of the data generating vector process. A disaggregate predictor is a predictor obtained from aggregation of univariate forecasts for the individual components of the data generating vector process. The econometric framework is broadly based on Lütkepohl (1987). 
The necessary and sufficient condition for the equality of mean squared errors associated with the two competing methods in the bivariate VMA(1) case is provided. It is argued that the condition of equality of predictors as stated in Lütkepohl (1987), although necessary and sufficient for the equality of the predictors, is sufficient (but not necessary) for the equality of mean squared errors. Furthermore, it is shown that the same forecasting accuracy for the two predictors can be achieved using specific assumptions on the parameters of the VMA(1) structure. Finally, an empirical application involving the problem of forecasting the Italian monetary aggregate M1 on the basis of annual time series ranging from 1948 until 1998, prior to the creation of the European Economic and Monetary Union (EMU), is presented to show the relevance of the topic. In the empirical application, the framework is further generalized to deal with heteroskedastic and cross-correlated innovations. Chapter 4 deals with a cointegration analysis applied to the empirical investigation of fiscal sustainability. The focus is on a particular country: Poland. The choice of Poland is not random. First, the motivation stems from the fact that fiscal sustainability is a central topic for most of the economies of Eastern Europe. Second, this is one of the first countries to start the transition process to a market economy (since 1989), providing a relatively favorable institutional setting within which to study fiscal sustainability (see Green, Holmes and Kowalski, 2001). The emphasis is on the feasibility of a permanent deficit in the long run, meaning whether a government can continue to operate under its current fiscal policy indefinitely. The empirical analysis to examine debt stabilization consists of two steps.
First, a Bayesian methodology is applied to conduct inference about the cointegrating relationship between budget revenues and (interest-inclusive) expenditures and to select the cointegrating rank. This task is complicated by the conceptual difficulty linked to the choice of the prior distributions for the parameters relevant to the economic problem under study (Villani, 2005). Second, Bayesian inference is applied to the estimation of the normalized cointegrating vector between budget revenues and expenditures. With a single cointegrating equation, some known results concerning the posterior density of the cointegrating vector may be used (see Bauwens, Lubrano and Richard, 1999). The priors used in the paper lead to straightforward posterior calculations which can be easily performed. Moreover, the posterior analysis leads to a careful assessment of the magnitude of the cointegrating vector. Finally, it is shown to what extent the likelihood of the data is important in revising the available prior information, relying on numerical integration techniques based on deterministic methods.
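The temporal-aggregation operation surveyed in Chapter 1 is simple to state in code: a flow variable observed monthly (here a synthetic AR(1) series, not real fiscal data) aggregates to the annual frequency by summing within each year. This sketch shows only the aggregation step, not the ARIMA modelling built on top of it:

```python
import numpy as np

rng = np.random.default_rng(5)

# Disaggregate (monthly) process: a synthetic AR(1) flow variable,
# e.g. a stand-in for a monthly deficit component. phi is illustrative.
phi = 0.7
eps = rng.normal(size=72)                 # 6 years of monthly shocks
monthly = np.zeros(72)
for t in range(1, 72):
    monthly[t] = phi * monthly[t - 1] + eps[t]

# Temporal aggregation of a flow variable: sum within each calendar
# year (6 years x 12 months). Stock variables would instead be
# systematically sampled (e.g. take every 12th observation).
annual = monthly.reshape(6, 12).sum(axis=1)
```

The aggregated series follows a different (but derivable) ARMA structure than the monthly one, which is exactly what makes forecasting with the aggregated model a non-trivial comparison.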
19

An Evaluation of Clustering and Classification Algorithms in Life-Logging Devices

Amlinger, Anton January 2015 (has links)
Using life-logging devices and wearables is a growing trend in today’s society. These yield vast amounts of information, data that is not directly overseeable or graspable at a glance due to its size. Gathering a qualitative, comprehensible overview of this quantitative information is essential for life-logging services to serve their purpose. This thesis provides a comparison of CLARANS, DBSCAN and SLINK, representing different branches of clustering algorithm types, as tools for activity detection in geo-spatial data sets. These activities are then classified using a simple model with model parameters learned via Bayesian inference, as a demonstration of a different branch of clustering. Results are evaluated using Silhouettes for the geo-spatial clustering and a user study for the end classification. The results are promising as an outline for a framework of classification and activity detection, and shed light on various pitfalls that might be encountered during the implementation of such a service.
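A toy version of the pipeline evaluated here: density-based clustering of 2-D "geo-spatial" points followed by a silhouette evaluation. This is a minimal hand-rolled DBSCAN and silhouette on synthetic blobs, not the library implementations or data benchmarked in the thesis:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two well-separated "activity" locations with 40 points each.
pts = np.vstack([rng.normal(0.0, 0.2, (40, 2)),
                 rng.normal(3.0, 0.2, (40, 2))])

def dbscan(X, eps=0.5, min_pts=5):
    """Simplified DBSCAN: expand clusters from core points; -1 = noise."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neigh = np.flatnonzero(dist[i] < eps)
        if len(neigh) < min_pts:
            continue                      # not a core point
        labels[i] = cluster
        frontier = list(neigh)
        while frontier:
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cluster
                nj = np.flatnonzero(dist[j] < eps)
                if len(nj) >= min_pts:    # core points keep expanding
                    frontier.extend(nj)
        cluster += 1
    return labels

def silhouette(X, labels):
    """Mean silhouette coefficient over all points."""
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    scores = []
    for i in range(len(X)):
        same = (labels == labels[i]) & (np.arange(len(X)) != i)
        a = dist[i, same].mean()          # mean intra-cluster distance
        b = min(dist[i, labels == c].mean()
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

labels = dbscan(pts)
mask = labels != -1                       # score non-noise points only
score = silhouette(pts[mask], labels[mask])
```

Scores near 1 indicate compact, well-separated clusters; this is the kind of internal measure the thesis uses where no ground-truth activity labels exist.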
20

Forward and inverse modeling of fire physics towards fire scene reconstructions

Overholt, Kristopher James 06 November 2013 (has links)
Fire models are routinely used to evaluate life safety aspects of building design projects and are being used more often in fire and arson investigations as well as reconstructions of firefighter line-of-duty deaths and injuries. A fire within a compartment effectively leaves behind a record of fire activity and history (i.e., fire signatures). Fire and arson investigators can utilize these fire signatures in the determination of cause and origin during fire reconstruction exercises. Researchers conducting fire experiments can utilize this record of fire activity to better understand the underlying physics. In all of these applications, the fire heat release rate (HRR), location of a fire, and smoke production are important parameters that govern the evolution of thermal conditions within a fire compartment. These input parameters can be a large source of uncertainty in fire models, especially in scenarios in which experimental data or detailed information on fire behavior are not available. To better understand fire behavior indicators related to soot, the deposition of soot onto surfaces was considered. Improvements to a soot deposition submodel were implemented in a computational fluid dynamics (CFD) fire model. To better understand fire behavior indicators related to fire size, an inverse HRR methodology was developed that calculates a transient HRR in a compartment based on measured temperatures resulting from a fire source. To address issues related to the uncertainty of input parameters, an inversion framework was developed that has applications towards fire scene reconstructions. Rather than using point estimates of input parameters, a statistical inversion framework based on the Bayesian inference approach was used to determine probability distributions of input parameters. 
These probability distributions contain uncertainty information about the input parameters and can be propagated through fire models to obtain uncertainty information about predicted quantities of interest. The Bayesian inference approach was applied to various fire problems and coupled with zone and CFD fire models to extend the physical capability and accuracy of the inversion framework. Example applications include the estimation of both steady-state and transient fire sizes in a compartment, material properties related to pyrolysis, and the location of a fire in a compartment.
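The statistical-inversion idea can be sketched in a few lines: recover a steady fire heat release rate Q from noisy compartment temperature-rise data with a Metropolis-Hastings sampler. The forward model here is a simple two-thirds-power correlation with an invented constant, standing in for the zone/CFD models the dissertation actually couples to; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(Q, c=8.0):
    """Toy forward model: temperature rise (C) for HRR Q (kW)."""
    return c * Q ** (2.0 / 3.0)

# Synthetic noisy measurements from a "true" 150 kW fire.
Q_true, noise_sd = 150.0, 5.0
data = forward(Q_true) + rng.normal(0.0, noise_sd, size=30)

def log_post(Q):
    """Gaussian likelihood with a uniform prior on a physical range."""
    if not (1.0 < Q < 1000.0):
        return -np.inf
    r = data - forward(Q)
    return -0.5 * np.sum(r ** 2) / noise_sd ** 2

# Random-walk Metropolis-Hastings over the single parameter Q.
samples = []
Q, lp = 50.0, log_post(50.0)
for _ in range(5000):
    Q_prop = Q + rng.normal(0.0, 5.0)
    lp_prop = log_post(Q_prop)
    if np.log(rng.random()) < lp_prop - lp:
        Q, lp = Q_prop, lp_prop
    samples.append(Q)

posterior = np.array(samples[1000:])      # discard burn-in
Q_est = posterior.mean()
```

The posterior spread, not just the point estimate, is the payoff: it is what gets propagated through the fire model to attach uncertainty to predicted quantities of interest.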
