About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

OBSCURATION IN ACTIVE GALACTIC NUCLEI

Nikutta, Robert 01 January 2012 (has links)
All classes of Active Galactic Nuclei (AGN) are fundamentally powered by accretion of gas onto a supermassive black hole. The process converts the potential energy of the infalling matter to X-ray and ultraviolet (UV) radiation, releasing up to several 10^12 solar luminosities. Observations show that the accreting "central engines" in AGN are surrounded by dusty matter. The dust occupies a "torus" around the AGN which is comprised of discrete clumps. If the AGN radiation propagates through the torus on its way to an observer, it is heavily re-processed by the dust, i.e. converted from UV to infrared (IR) wavelengths. Much of the information about the input radiation is lost in this conversion, while an imprint of the dusty torus is left in the released IR photons. Our group was the first to formulate a consistent treatment of radiative transfer in a clumpy medium, an important improvement over the simpler models with smooth dust distributions previously used by researchers. Our code CLUMPY computes spectral energy distributions (SEDs) for any set of model parameter values. Fitting these models to observed AGN SEDs allows us to determine important quantities such as the torus size, the spatial distribution of clumps, the torus covering factor, and the intrinsic AGN luminosity. Detailed modeling also permits us to study the complex behavior of certain spectral features. IR radiative transfer introduces degeneracies into the solution space: different parameter values can yield similar SEDs. The geometry of the torus further exacerbates the problem. Knowing the amount of parameter degeneracy present in our models is important for quantifying the confidence in data fits. When matching the models to observed SEDs we must employ modern statistical methods. In my research I use Bayesian statistics to determine the likely ranges of parameter values.
I have developed all tools required for fitting observed SEDs with our large model database: the latest implementation of CLUMPY, the fit algorithms, the Markov Chain Monte Carlo sampler, and the Bayesian estimator. In collaboration with observing groups we have applied our methods to a multitude of real-life AGN.
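The workflow this abstract describes, sampling a posterior with a Markov Chain Monte Carlo method and then reading off likely parameter ranges, can be sketched generically. The snippet below is not the CLUMPY pipeline; it is a minimal, hypothetical illustration in which a Metropolis sampler recovers the posterior of a single slope parameter from noisy synthetic data, under an implicit flat prior.

```python
import math
import random

random.seed(0)

# Synthetic "observations": data generated from the model y = a * x plus noise.
true_a, sigma = 2.0, 0.5
xs = [0.5 * i for i in range(1, 11)]
ys = [true_a * x + random.gauss(0.0, sigma) for x in xs]

def log_likelihood(a):
    # Gaussian log-likelihood of the data given the parameter a.
    return sum(-0.5 * ((y - a * x) / sigma) ** 2 for x, y in zip(xs, ys))

def metropolis(n_steps=20000, step=0.1):
    # Random-walk Metropolis: propose a perturbed parameter, accept with
    # probability min(1, likelihood ratio); flat prior cancels out.
    a = 0.0  # deliberately poor starting guess
    ll = log_likelihood(a)
    samples = []
    for _ in range(n_steps):
        prop = a + random.gauss(0.0, step)
        ll_prop = log_likelihood(prop)
        if math.log(random.random()) < ll_prop - ll:
            a, ll = prop, ll_prop
        samples.append(a)
    return samples[n_steps // 2:]  # discard the first half as burn-in

samples = metropolis()
post_mean = sum(samples) / len(samples)  # Bayesian point estimate of a
```

The retained samples approximate the posterior; their spread directly quantifies the parameter-range uncertainty the abstract refers to.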
82

Computational Systems Biology of Saccharomyces cerevisiae Cell Growth and Division

Mayhew, Michael Benjamin January 2014 (has links)
Cell division and growth are complex processes fundamental to all living organisms. In the budding yeast, Saccharomyces cerevisiae, these two processes are known to be coordinated with one another, as a cell's mass must roughly double before division. Moreover, cell-cycle progression is dependent on cell size, with smaller cells at birth generally taking more time in the cell cycle. This dependence is a signature of size control. Systems biology is an emerging field that emphasizes connections or dependencies between biological entities and processes over the characteristics of individual entities. Statistical models provide a quantitative framework for describing and analyzing these dependencies. In this dissertation, I take a statistical systems biology approach to study cell division and growth and the dependencies within and between these two processes, drawing on observations from richly informative microscope images and time-lapse movies. I review the current state of knowledge on these processes, highlighting key results and open questions from the biological literature. I then discuss my development of machine learning and statistical approaches to extract cell-cycle information from microscope images and to better characterize the cell-cycle progression of populations of cells. In addition, I analyze single cells to uncover correlation in cell-cycle progression, evaluate potential models of dependence between growth and division, and revisit classical assertions about budding yeast size control. This dissertation presents a unique perspective and approach towards comprehensive characterization of the coordination between growth and division. / Dissertation
83

Graphical User Interface development for Bayesian Statistics applied to Mixed Treatment Comparison

Marcelo Goulart Correia 12 September 2013 (has links)
Based on the progress made by the pharmaceutical industry, several medications have emerged to combat diseases. These drugs have similar topical effects but subtle differences in their biochemical structure, so competition between pharmaceutical companies has become increasingly fierce. To compare the effectiveness of these drugs, different methodologies have emerged with the objective of finding the best medicine for a given situation. One of the methodologies studied is Mixed Treatment Comparison (MTC), whose objective is to estimate the effectiveness of certain drugs from studies and/or clinical trials that address, even if only indirectly, the drugs of interest. Using this methodology is complex because it requires knowledge of programming languages in statistical environments, in addition to mastery of the methods behind the technique. The main objective of this study is to create a graphical user interface (GUI) that facilitates the use of MTC for users with no programming knowledge, and that is open source and cross-platform. The expectation is that this interface will make more comprehensive and advanced techniques easier to use, and make the topic easier to teach to people who do not yet know the method.
84

Efficient deterministic approximate Bayesian inference for Gaussian process models

Bui, Thang Duc January 2018 (has links)
Gaussian processes are powerful nonparametric distributions over continuous functions that have become a standard tool in modern probabilistic machine learning. However, the applicability of Gaussian processes in the large-data regime and in hierarchical probabilistic models is severely limited by analytic and computational intractabilities. It is, therefore, important to develop practical approximate inference and learning algorithms that can address these challenges. To this end, this dissertation provides a comprehensive and unifying perspective on pseudo-point based deterministic approximate Bayesian learning for a wide variety of Gaussian process models, which connects previously disparate strands of the literature, greatly extends them, and allows new state-of-the-art approximations to emerge. We start by building a posterior approximation framework based on Power Expectation Propagation for Gaussian process regression and classification. This framework relies on a structured approximate Gaussian process posterior based on a small number of pseudo-points, judiciously chosen to summarise the actual data and enable tractable and efficient inference and hyperparameter learning. Many existing sparse approximations are recovered as special cases of this framework, and can now be understood as performing approximate posterior inference using a common approximate posterior. Critically, extensive empirical evidence suggests that the new approximation methods arising from this unifying perspective outperform existing approaches in many real-world regression and classification tasks. We explore extensions of this framework to Gaussian process state-space models, Gaussian process latent variable models and deep Gaussian processes, which also unify many recently developed approximation schemes for these models. Several mean-field and structured approximate posterior families for the hidden variables in these models are studied. We also discuss several methods for approximate uncertainty propagation in recurrent and deep architectures based on Gaussian projection, linearisation, and simple Monte Carlo. The benefits of the unified inference and learning frameworks for these models are illustrated in a variety of real-world state-space modelling and regression tasks.
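The central idea of pseudo-point approximations, summarising many observations with a few inducing inputs, can be shown in miniature. The sketch below is not the dissertation's Power-EP framework; it implements the much simpler Subset-of-Regressors predictive mean, one of the classical sparse approximations that such frameworks recover as a special case, on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ell=0.2):
    # Squared-exponential kernel matrix between 1-D point sets A and B.
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# 40 noisy observations of a smooth function.
X = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(40)
noise = 0.1

# Only 7 pseudo-points (inducing inputs) summarise the data set.
Z = np.linspace(0.0, 1.0, 7)
Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for numerical stability
Kuf = rbf(Z, X)

# Subset-of-Regressors predictive mean at test inputs x*:
#   m(x*) = k(x*, Z) (noise^2 Kuu + Kuf Kuf^T)^{-1} Kuf y
Xs = np.linspace(0.0, 1.0, 9)
A = noise**2 * Kuu + Kuf @ Kuf.T
mean = rbf(Xs, Z) @ np.linalg.solve(A, Kuf @ y)
```

All expensive operations involve only the 7x7 matrix A, which is the point of the approximation: cost scales with the number of pseudo-points, not the number of data.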
86

Sequencing Effects and Loss Aversion in a Delay Discounting Task

January 2018 (has links)
The attractiveness of a reward depends in part on the delay to its receipt, with more distant rewards generally being valued less than more proximate ones. The rate at which people discount the value of delayed rewards has been associated with a variety of clinically and socially relevant human behaviors. Thus, the accurate measurement of delay discounting rates is crucial to the study of mechanisms underlying behaviors such as risky sex, addiction, and gambling. In delay discounting tasks, participants make choices between two alternatives: a small amount of money delivered immediately versus a large amount of money delivered after a delay. After many choices, the experimental task converges on an indifference point: the value of the delayed reward that approximates the value of the immediate one. It has been shown that these indifference points are systematically biased by the direction in which one of the alternatives adjusts; this bias is termed a sequencing effect. The present research proposed a reference-dependent model of choice drawn from Prospect Theory to account for the presence of sequencing effects in a delay discounting task. Sensitivity to reference frames and sequencing effects were measured in two computer tasks. Bayesian and frequentist analyses indicated that the reference-dependent model of choice cannot account for sequencing effects. Thus, an alternative, perceptual account of sequencing effects that draws on a Bayesian framework of magnitude estimation is proposed and furnished with some preliminary evidence. Implications for future research in the measurement of delay discounting and sensitivity to reference frames are discussed. / Dissertation/Thesis / Masters Thesis Psychology 2018
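The task structure described above, repeated choices converging on an indifference point, can be sketched with the standard hyperbolic discounting model V = A / (1 + kD). The code below is an illustrative toy, not the thesis's experimental procedure: a deterministic chooser with a known discount rate k, titrated by bisection on the immediate amount until the indifference point is recovered.

```python
def discounted_value(amount, delay, k):
    """Hyperbolic discounting: subjective value of a reward of size
    `amount` delivered after `delay` time units, with discount rate k."""
    return amount / (1.0 + k * delay)

def choose(immediate, amount, delay, k):
    # A noiseless chooser: picks whichever option has the higher value.
    return "immediate" if immediate > discounted_value(amount, delay, k) else "delayed"

def titrate(amount, delay, k, steps=20):
    # Adjusting-immediate-amount procedure: bisect on the immediate
    # offer until the chooser is indifferent between the two options.
    low, high = 0.0, amount
    for _ in range(steps):
        immediate = (low + high) / 2.0
        if choose(immediate, amount, delay, k) == "immediate":
            high = immediate  # immediate offer too attractive: lower it
        else:
            low = immediate   # delayed reward preferred: raise the offer
    return (low + high) / 2.0

# With k = 0.05 per day, $100 in 30 days is subjectively worth $40 now,
# and the titration converges on that indifference point.
v = discounted_value(100, 30, 0.05)
ip = titrate(100, 30, 0.05)
```

A real participant is noisy and, as the abstract notes, biased by the direction of adjustment; this ideal chooser is the baseline against which such sequencing effects are defined.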
87

Bayesian factor analysis

Vávra, Jan January 2018 (has links)
Factor analysis is a method which enables a high-dimensional random vector of measurements to be approximated by linear combinations of a much smaller number of hidden factors. The classical estimation procedure for this model relies on the choice of the number of factors, the decomposition of the variance matrix while keeping the identification conditions satisfied, and the appropriate choice of rotation for better interpretation of the model. This model is transferred into the Bayesian framework, which, unlike the classical approach, offers the use of prior information. The number of hidden factors can be treated as a random parameter, and the dependency of each measurement on at most one factor can be enforced by a suitable specification of the prior distribution. Estimates of the model parameters are based on the posterior distribution, which is approximated by Markov chain Monte Carlo methods. The Bayesian approach solves the problems of selecting the number of factors, estimating the model, and ensuring identifiability and interpretability at the same time. The ability to estimate the true number of hidden factors is tested in a simulation study.
88

Identification and photometric redshifts for type-I quasars with medium- and narrow-band filter surveys

Carolina Queiroz de Abreu Silva 16 November 2015 (has links)
Quasars are valuable sources for several cosmological applications. In particular, they can be used to trace some of the most massive halos, and their high intrinsic luminosities allow them to be detected at high redshifts. This implies that quasars (or active galactic nuclei, more generally) have a huge potential to map the large-scale structure. However, this potential has not yet been fully realized, because instruments which rely on broad-band imaging to pre-select spectroscopic targets usually miss most quasars and, consequently, are not able to properly separate broad-line emitting quasars from other point-like sources (such as stars and low-resolution galaxies). This work is an initial attempt to investigate the realistic gains in the identification and separation of quasars and stars when medium- and narrow-band filters in the optical are employed. The main novelty of our approach is the use of Bayesian priors both for the angular distribution of stars of different types on the sky and for the distribution of quasars as a function of redshift. Since the evidence from these priors convolves the angular dependence of stars with the redshift dependence of quasars, it allows us to control for the near degeneracy between these objects. However, our results are inconclusive in quantifying the efficiency of star-quasar separation with this approach, and some critical refinements and improvements are still necessary.
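The role of the priors in this kind of classification can be sketched with Bayes' rule. The snippet below is a deliberately simplified toy, with made-up numbers rather than any survey's actual templates or star counts: a likelihood from photometric template fitting is combined with class priors that, as in the abstract, would come from the stellar density at the source's sky position and the quasar redshift distribution.

```python
def posterior_quasar(like_q, like_s, prior_q, prior_s):
    """Posterior probability that a point source is a quasar.

    like_*  : likelihood of the observed medium/narrow-band fluxes
              under each class (assumed given by template fitting)
    prior_* : prior probability of each class, e.g. from a Galactic
              star-count model at this sky position and the quasar
              surface density integrated over redshift
    """
    num = like_q * prior_q
    return num / (num + like_s * prior_s)

# Toy numbers: the photometry mildly favours the quasar template, but
# the field is star-dominated, so the posterior stays low.
p = posterior_quasar(like_q=0.7, like_s=0.3, prior_q=0.05, prior_s=0.95)
```

This is the sense in which the priors "control" the star-quasar degeneracy: where the likelihoods are nearly degenerate, the sky-position and redshift priors dominate the posterior.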
89

Probabilistic annotation of metabolite profiles obtained by liquid chromatography coupled to mass spectrometry

Ricardo Roberto da Silva 16 April 2014 (has links)
Metabolomics is an emerging field in the post-genomic era which aims at the comprehensive analysis of small organic molecules in biological systems. Liquid chromatography coupled to mass spectrometry (LC-MS) is the most widespread sampling approach in metabolomics studies. Metabolite detection by LC-MS produces complex data sets that require a series of preprocessing steps so that information can be extracted efficiently and accurately. For untargeted metabolic profiling to be effectively related to alterations of interest in metabolism, it is essential that the sampled metabolites are annotated reliably and that their relationships are interpreted under the assumption of a connected sample of the metabolism. Facing this challenge, this thesis developed a software framework whose central component is a probabilistic method for metabolite annotation that allows the incorporation of independent sources of spectral information and prior knowledge about metabolism. After the probabilistic classification, a new method to represent the posterior probability distribution in the form of a graph is proposed. A library of methods for the R environment, called ProbMetab (Probabilistic Metabolomics), was created and made available as open-source software. Using ProbMetab to analyze a benchmark data set with compound identities known beforehand, we demonstrate that up to 90% of the correct metabolite identities are present among the top three highest probabilities, emphasizing the value of reporting the full posterior probability distribution instead of the simplistic classification, usually adopted in metabolomics, that keeps only the most probable candidate. In an application to real data, changes in a metabolic pathway known to be related to abiotic stresses in plants (flavone and flavonol biosynthesis) were automatically detected in sugarcane data, demonstrating the importance of a view centered on the posterior distribution of the metabolite annotation network.
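The top-k evaluation reported above (true identity among the k highest-probability candidates) can be sketched as follows. This is an illustrative toy, not ProbMetab itself; the candidate compound IDs and posterior values are invented for the example.

```python
def top_k_accuracy(posteriors, truths, k=3):
    """Fraction of metabolites whose true identity is among the k
    candidate annotations with the highest posterior probability."""
    hits = 0
    for probs, truth in zip(posteriors, truths):
        ranked = sorted(probs, key=probs.get, reverse=True)[:k]
        hits += truth in ranked
    return hits / len(truths)

# Toy posterior distributions over candidate compound IDs (made up).
posteriors = [
    {"C00031": 0.60, "C00267": 0.30, "C00221": 0.10},
    {"C00095": 0.40, "C00031": 0.35, "C05345": 0.25},
    {"C00185": 0.50, "C00089": 0.30, "C00721": 0.20},
]
truths = ["C00267", "C00095", "C00721"]
acc = top_k_accuracy(posteriors, truths, k=2)
```

The first two metabolites have their true identity in the top two candidates, the third does not, so top-2 accuracy here is 2/3. Reporting the full distribution rather than only the top candidate is exactly what makes this kind of evaluation possible.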
90

Quantification of modelling uncertainties in turbulent flow simulations

Edeling, Wouter Nico 14 April 2015 (has links)
The goal of this thesis is to make predictive simulations with Reynolds-Averaged Navier-Stokes (RANS) turbulence models, i.e. simulations with a systematic treatment of model and data uncertainties and their propagation through a computational model, producing predictions of quantities of interest with quantified uncertainty. To do so, we make use of the robust Bayesian statistical framework. The first step toward this goal was obtaining estimates for the error in RANS simulations based on the Launder-Sharma k-epsilon turbulence closure model, for a limited class of flows. In particular, we searched for estimates grounded in uncertainties in the space of model closure coefficients, for wall-bounded flows at a variety of favourable and adverse pressure gradients. In order to estimate the spread of closure coefficients which reproduces these flows accurately, we performed 13 separate Bayesian calibrations. Each calibration was at a different pressure gradient, using measured boundary-layer velocity profiles and a statistical model containing a multiplicative model-inadequacy term in the solution space. The results are 13 joint posterior distributions over coefficients and hyper-parameters. To summarise this information we compute Highest Posterior-Density (HPD) intervals, and subsequently represent the total solution uncertainty with a probability box (p-box). This p-box represents both parameter variability across flows and epistemic uncertainty within each calibration. A prediction of a new boundary-layer flow is made with uncertainty bars generated from this information, and the resulting error estimate is shown to be consistent with measurement data. However, although consistent with the data, the obtained error estimates were very large, because a p-box constitutes an unweighted prediction. To improve upon this, we developed another approach still based on variability in model closure coefficients across multiple flow scenarios, but also across multiple closure models. The variability is again estimated using Bayesian calibration against experimental data for each scenario, but now Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors in an unmeasured (prediction) scenario. Unlike the p-boxes, this is a weighted approach involving turbulence-model probabilities which are determined from the calibration data. The methodology was applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios the standard deviation of the stochastic estimate is consistent with the measurement ground truth. The BMSA approach results in reasonable error bars, which can also be decomposed into separate contributions. However, to apply it to more complex topologies outside the class of boundary-layer flows, surrogate modelling techniques must be applied. The Simplex-Stochastic Collocation (SSC) method is a robust surrogate modelling technique used to propagate uncertain input distributions through a computer code, but its use of the Delaunay triangulation can become prohibitively expensive for problems with more than 5 dimensions. We therefore investigated means to improve upon this poor scalability. First, we proposed an alternative interpolation stencil technique based upon the Set-Covering problem, which resulted in a significant speed-up when sampling the full-dimensional stochastic space. Secondly, we integrated the SSC method into the High-Dimensional Model-Reduction (HDMR) framework in order to avoid sampling high-dimensional spaces altogether. Finally, with the use of our efficient surrogate modelling technique, we applied the BMSA framework to the transonic flow over an airfoil. With this we are able to make predictive simulations of computationally expensive flow problems with quantified uncertainty due to various imperfections in the turbulence models.
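The weighting idea that distinguishes BMSA from the unweighted p-box can be sketched in a few lines. The snippet below is a generic Bayesian model-averaging toy, not the thesis's BMSA implementation: per-model predictions of some quantity are combined with weights derived from (log-)model evidences, assuming equal model priors; all numbers are invented.

```python
import math

def model_average(predictions, log_evidences):
    """Combine per-model predictions using posterior model probabilities
    computed from log model evidences (equal model priors assumed)."""
    m = max(log_evidences)
    w = [math.exp(le - m) for le in log_evidences]  # numerically stable softmax
    total = sum(w)
    w = [wi / total for wi in w]
    mean = sum(wi * p for wi, p in zip(w, predictions))
    # Between-model variance: spread of the models around the weighted mean.
    var = sum(wi * (p - mean) ** 2 for wi, p in zip(w, predictions))
    return mean, var, w

# Toy skin-friction-like predictions from three closure models, with the
# second model best supported by the calibration data.
preds = [0.0030, 0.0034, 0.0027]
logev = [-12.0, -10.0, -13.5]
mean, var, weights = model_average(preds, logev)
```

Models poorly supported by the calibration data receive small weights, so the averaged prediction and its variance are pulled toward the better models rather than spanning the full unweighted envelope, which is why the resulting error bars are tighter than a p-box.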
