71

High fidelity micromechanics-based statistical analysis of composite material properties

Mustafa, Ghulam 08 April 2016 (has links)
Composite materials are widely used in lightweight structural applications because of their high specific stiffness and strength. However, predicting their mechanical behaviour accurately is difficult because of the heterogeneous nature of these materials, which is not easily captured by most existing macromechanics-based models. Designers compensate for the model unknowns in failure predictions by generating overly conservative designs with relatively simple ply stacking sequences, thereby forfeiting many of the benefits promised by composites. The research presented in this dissertation was undertaken with the primary goal of providing efficient methodologies for the design of composite structures that account for inherent material variability and model shortcomings. A micromechanics-based methodology is proposed to simulate the stiffness, strength, and fatigue behaviour of composites. The computational micromechanics framework is built on the properties of the constituents of composite materials: the fiber, the matrix, and the fiber/matrix interface. This model helps the designer understand in depth the failure modes of these materials and design efficient structures with arbitrary layups while reducing the need for supporting experimental testing. The main limiting factor in using a micromechanics model is the challenge of obtaining the constituent properties. The central novelty of this dissertation is the calibration of these constituent properties by integrating the micromechanics approach with a Bayesian statistical model. The early research explored the probabilistic aspects of the constituent properties to calculate the stiffness characteristics of a unidirectional lamina. These stochastic stiffness properties were then used as input to analyze the wing box of a wind turbine blade, providing a gateway for mapping constituent uncertainties to the top-level structure. Next, a stochastic first-ply-failure load method was developed based on micromechanics and Bayesian inference. Finally, probabilistic S-N curves of composite materials were calculated after calibrating the fatigue model parameters using Bayesian inference. Throughout this research, extensive experimental data sets from the literature were used to calibrate and evaluate the proposed models. The micromechanics-based probabilistic framework formulated here is quite general; it is applied to the specific case of a wind turbine blade, but the procedure may be easily generalized to other structural applications such as storage tanks, pressure vessels, civil structural cladding, unmanned air vehicles, and automotive bodies, which can be explored in future work.
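
To make the calibration idea concrete, here is a minimal sketch of how a single constituent property (the fiber modulus) could be calibrated against measured lamina stiffness by combining a rule-of-mixtures micromechanics model with random-walk Metropolis sampling. All numerical values are hypothetical, and the thesis's actual micromechanics model is far richer than the rule of mixtures used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured longitudinal moduli E1 of a UD lamina (GPa).
measured_E1 = np.array([130.2, 128.7, 131.5, 129.9, 130.8])
Vf, Em = 0.6, 3.5        # assumed fiber volume fraction and matrix modulus (GPa)
sigma = 1.0              # assumed measurement noise (GPa)

def micromechanics_E1(Ef):
    """Rule-of-mixtures stiffness: the 'forward' micromechanics map."""
    return Vf * Ef + (1.0 - Vf) * Em

def log_posterior(Ef):
    """Gaussian likelihood times a broad Gaussian prior on the fiber modulus."""
    if Ef <= 0:
        return -np.inf
    log_lik = -0.5 * np.sum((measured_E1 - micromechanics_E1(Ef))**2) / sigma**2
    log_prior = -0.5 * ((Ef - 220.0) / 50.0)**2   # prior: Ef ~ N(220, 50^2) GPa
    return log_lik + log_prior

# Random-walk Metropolis over the single constituent parameter Ef.
samples, Ef = [], 220.0
for _ in range(20000):
    prop = Ef + rng.normal(0, 2.0)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(Ef):
        Ef = prop
    samples.append(Ef)

post = np.array(samples[5000:])          # discard burn-in
print(f"posterior Ef: {post.mean():.1f} +/- {post.std():.1f} GPa")
```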
72

Bayesian approaches for modeling protein biophysics

Hines, Keegan 18 September 2014 (has links)
Proteins are the fundamental unit of computation and signal processing in biological systems. A quantitative understanding of protein biophysics is of paramount importance, since even slight malfunction of proteins can lead to diverse and severe disease states. However, developing accurate and useful mechanistic models of protein function can be strikingly elusive. I demonstrate that the adoption of Bayesian statistical methods can greatly aid in modeling protein systems. I first discuss the pitfall of parameter non-identifiability and how a Bayesian approach to modeling can yield reliable and meaningful models of molecular systems. I then delve into a particular case of non-identifiability within the context of an emerging experimental technique called single molecule photobleaching. I show that the interpretation of this data is non-trivial and provide a rigorous inference model for the analysis of this pervasive experimental tool. Finally, I introduce the use of nonparametric Bayesian inference for the analysis of single molecule time series. These methods aim to circumvent problems of model selection and parameter identifiability and are demonstrated with diverse applications in single molecule biophysics. The adoption of sophisticated inference methods will lead to a more detailed understanding of biophysical systems.
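
As an illustration of the photobleaching counting problem mentioned above, the sketch below infers the number of subunits n and the fluorophore detection probability from synthetic step counts using a binomial likelihood evaluated on a grid. The data, priors, and parameter values are invented for illustration and do not reproduce the thesis's model.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

# Hypothetical photobleaching-step counts from 200 spots of a tetrameric
# protein with imperfect fluorophore detection (true n=4, theta=0.75).
counts = rng.binomial(4, 0.75, size=200)

# Posterior over subunit number n and detection probability theta on a grid,
# with a uniform prior: p(n, theta | data) ~ prod_i Binomial(k_i | n, theta).
ns = np.arange(1, 9)
thetas = np.linspace(0.01, 0.99, 99)
logpost = np.array([[binom.logpmf(counts, n, t).sum() for t in thetas]
                    for n in ns])
logpost -= logpost.max()
post = np.exp(logpost)
post /= post.sum()

p_n = post.sum(axis=1)                   # marginal posterior over n
for n, p in zip(ns, p_n):
    print(f"P(n={n} | data) = {p:.3f}")
```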
73

Machine Learning Methods for Microarray Data Analysis

Gabbur, Prasad January 2010 (has links)
Microarrays emerged in the 1990s as a consequence of efforts to speed up the process of drug discovery. They revolutionized molecular biological research by enabling the monitoring of thousands of genes together. Typical microarray experiments measure the expression levels of a large number of genes on very few tissue samples. The resulting sparsity of data presents major challenges to the statistical methods used to perform any kind of analysis on it. This research posits that phenotypic classification and prediction serve as good objective functions for both optimization and evaluation of microarray data analysis methods, because classification measures what is needed for diagnostics and provides quantitative performance measures such as leave-one-out (LOO) or held-out prediction accuracy and confidence. Under the classification framework, various microarray data normalization procedures are evaluated using a class-label hypothesis testing framework, employing Support Vector Machine (SVM) and linear-discriminant-based classifiers. A novel normalization technique based on minimizing the squared correlation coefficients between expression levels of gene pairs is proposed and evaluated alongside the other methods. Our results suggest that most normalization methods helped classification on the datasets considered, except the rank method, most likely due to its quantization effects. Another contribution of this research is the development of machine learning methods for incorporating an independent source of information, in the form of gene annotations, into microarray data analysis. Recently, genes of many organisms have been annotated with terms from a limited vocabulary called the Gene Ontology (GO), describing the genes' roles in various biological processes, their molecular functions, and their locations within the cell. Novel probabilistic generative models are proposed for clustering genes using both their expression levels and GO tags. These models are similar in essence to those used for multimodal data, such as images and words, with learning and inference done in a Bayesian framework. The multimodal generative models are used for phenotypic class prediction; more specifically, the problems of phenotype prediction for static gene expression data and state prediction for time-course data are emphasized. Using GO tags for organisms whose genes have been studied more comprehensively leads to an improvement in prediction. Our methods also have the potential to provide a way to assess the quality of available GO tags for the genes of various model organisms.
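
A minimal sketch of the evaluation protocol described above: leave-one-out accuracy of a linear SVM on a synthetic expression matrix (samples x genes). The data are random with an artificial class signal planted in the first 20 genes; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)

# Toy "microarray": 40 samples x 500 genes, two phenotype classes whose
# first 20 genes carry a small mean shift (all values are synthetic).
X = rng.normal(size=(40, 500))
y = np.repeat([0, 1], 20)
X[y == 1, :20] += 1.0

# Leave-one-out accuracy of a linear SVM, the evaluation used in the text.
acc = cross_val_score(SVC(kernel="linear", C=1.0), X, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2f}")
```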
74

Bayesian multisensory perception

Hospedales, Timothy January 2008 (has links)
A key goal for humans and artificial intelligence systems is to develop an accurate and unified picture of the outside world based on the data from any sense(s) that may be available. The availability of multiple senses presents the perceptual system with new opportunities to fulfil this goal, but exploiting these opportunities first requires the solution of two related tasks. The first is how to make the best use of any redundant information from the sensors to produce the most accurate percept of the state of the world. The second is how to interpret the relationship between observations in each modality; for example, the correspondence problem of whether or not they originate from the same source. This thesis investigates these questions using ideal Bayesian observers as the underlying theoretical approach. In particular, the latter correspondence task is treated as a problem of Bayesian model selection or structure inference in Bayesian networks. This approach provides a unified and principled way of representing and understanding the perceptual problems faced by humans and machines and their commonality. In the domain of machine intelligence, we exploit the developed theory for practical benefit, developing a model to represent audio-visual correlations. Unsupervised learning in this model provides automatic calibration and user appearance learning, without human intervention. Inference in the model involves explicit reasoning about the association between latent sources and observations. This provides audio-visual tracking through occlusion with improved accuracy compared to standard techniques. It also provides detection, verification and speech segmentation, ultimately allowing the machine to understand "who said what, where?" in multi-party conversations. In the domain of human neuroscience, we show how a variety of recent results in multimodal perception can be understood as the consequence of probabilistic reasoning about the causal structure of multimodal observations. We show this for a localisation task in audio-visual psychophysics, which is very similar to the task solved by our machine learning system. We also use the same theory to understand results from experiments in the completely different paradigm of oddity detection using visual and haptic modalities. These results begin to suggest that the human perceptual system performs, or at least approximates, sophisticated probabilistic reasoning about the causal structure of observations under the hood.
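
The correspondence problem treated here as Bayesian structure inference can be illustrated with the standard two-model ideal observer: compute the evidence for a common cause versus independent causes for a pair of noisy cues, then form the posterior probability of a common source. The Gaussian noise and prior parameters below are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Ideal-observer correspondence inference for one audio-visual trial:
# did the auditory cue xa and visual cue xv come from one source or two?
sa, sv, sp = 4.0, 1.0, 10.0      # auditory/visual noise and prior SDs (deg)
p_common = 0.5                   # prior probability of a common cause

def posterior_common(xa, xv):
    # Evidence under a single source: cues are jointly Gaussian, correlated
    # through the shared source location s ~ N(0, sp^2).
    cov1 = np.array([[sp**2 + sa**2, sp**2],
                     [sp**2, sp**2 + sv**2]])
    ev1 = multivariate_normal.pdf([xa, xv], mean=[0, 0], cov=cov1)
    # Evidence under two independent sources: cues are independent.
    ev2 = norm.pdf(xa, 0, np.hypot(sp, sa)) * norm.pdf(xv, 0, np.hypot(sp, sv))
    return ev1 * p_common / (ev1 * p_common + ev2 * (1 - p_common))

print(posterior_common(2.0, 1.0))    # nearby cues -> high P(common cause)
print(posterior_common(15.0, -5.0))  # discrepant cues -> low P(common cause)
```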
75

Estimation et Classification de Signaux Altimétriques / Estimation and Classification of Altimetric Signals

Severini, Jérôme 07 October 2010 (has links)
Measuring the height of the oceans, surface winds (strongly linked to ocean temperatures), and wave height yields a set of parameters necessary for studying the oceans and tracking their evolution: satellite altimetry is one of the disciplines that makes this possible. An altimetric waveform is the result of emitting a high-frequency radar wave toward a given surface (classically oceanic) and measuring its reflection. There currently exists a non-optimal estimation method for altimetric waveforms, as well as classification tools for identifying the different types of observed surfaces. In this study we propose applying Bayesian estimation to altimetric waveforms, together with new classification approaches. Finally, we propose a dedicated algorithm for studying topography in coastal environments, an area that is currently little developed in altimetry. / After having scanned ocean levels for thirteen years, the French/American satellite Topex-Poséidon ceased operating in 2005. Topex-Poséidon was succeeded by Jason-1, launched in December 2001, and a new satellite, Jason-2, is expected in 2008. Several estimation methods have been developed for signals resulting from these satellites. In particular, estimators of sea height and wave height have shown very good performance when applied to waveforms backscattered from ocean surfaces. However, it is a more challenging problem to extract relevant information from signals backscattered from non-oceanic surfaces such as inland waters, deserts, or ice. This PhD thesis is divided into two parts. The first develops classification methods for altimetric signals in order to recognize the type of surface that produced the radar waveform; particular attention is devoted to support vector machines (SVMs) and functional data analysis. The second develops estimation algorithms appropriate to altimetric signals obtained after reflection on non-oceanic surfaces; Bayesian algorithms are investigated for this estimation problem. This PhD is co-supervised by the French company CLS (Collecte Localisation Satellites; see http://www.cls.fr/ for more details), which provides the real altimetric data necessary for this study.
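
As a toy version of Bayesian estimation for altimetric waveforms, the sketch below fits the epoch and leading-edge rise of a simplified Brown-like ocean return (an error-function leading edge) by evaluating the posterior on a grid. The waveform model, units, and noise level are simplified assumptions, not the retracking model used in the thesis.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(3)

# Simplified Brown ocean return: the leading edge is an error function whose
# position tau (epoch) and rise sigma (linked to wave height) are unknown.
t = np.linspace(0, 100, 128)                   # gate index (arbitrary units)

def waveform(tau, sig, amp=1.0):
    return 0.5 * amp * (1.0 + erf((t - tau) / (np.sqrt(2) * sig)))

obs = waveform(45.0, 6.0) + rng.normal(0, 0.05, t.size)  # synthetic waveform

# Posterior over (tau, sigma) on a grid, flat priors, Gaussian noise.
taus = np.linspace(30, 60, 121)
sigs = np.linspace(2, 12, 101)
ll = np.array([[-0.5 * np.sum((obs - waveform(ta, s))**2) / 0.05**2
                for s in sigs] for ta in taus])
post = np.exp(ll - ll.max())
post /= post.sum()
i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP epoch = {taus[i]:.1f}, MAP rise = {sigs[j]:.1f}")
```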
76

Novel Sensing and Inference Techniques in Air and Water Environments

Zhou, Xiaochi January 2015 (has links)
Environmental sensing is experiencing tremendous development, due largely to advances in sensor technology and in the wireless networking that connects sensors and enables data exchange. Environmental monitoring sensor systems range from satellites that continuously observe the earth's surface to miniature wearable devices that track the local environment and people's activities. However, transforming these data into knowledge of the underlying physical and/or chemical processes remains a major challenge, given the spatial and temporal scales and the heterogeneity of the relevant natural phenomena. This research focuses on the development and application of novel sensing and inference techniques in air and water environments. The overall goal is to infer the state and dynamics of key environmental variables by building models, either sensor systems or numerical simulations, that capture the physical processes.

This dissertation is divided into five chapters. Chapter 1 introduces the background and motivation of this research. Chapter 2 focuses on the evaluation of different models (physically based versus empirical) and remote sensing data (multispectral versus hyperspectral) for suspended sediment concentration (SSC) retrieval in shallow water environments. The study site is the Venice lagoon (Italy), where we compare the SSC estimated from various models and datasets against in situ probe measurements. The results showed that the physically based model provides a more robust estimate of SSC than empirical models when evaluated using leave-one-out cross-validation. Despite its finer spectral resolution and the choice of optimal band combinations, hyperspectral data is less reliable for SSC retrieval than multispectral data, owing to its limited historical dataset, information redundancy, and cross-band correlation.

Chapter 3 introduces a multipollutant sensor/sampler system developed for mobile platforms, including aerostats and unmanned aerial vehicles (UAVs). The system is particularly applicable to open-area sources such as forest fires, due to its light weight (3.5 kg), compact size (6.75 L), and internal power supply. The sensor system, termed "Kolibri", consists of low-cost sensors measuring CO2 and CO, and samplers for particulate matter and volatile organic compounds (VOCs). The Kolibri is controlled by a microcontroller, which can record and transfer data in real time using a radio module. Sensors were selected based on laboratory testing for accuracy, response delay and recovery, cross-sensitivity, and precision. The Kolibri was compared against rack-mounted continuous emission monitors (CEMs) and another mobile sampling instrument (the "Flyer") that had been used in over ten open-area pollutant sampling events. Our results showed that the time series of CO, CO2, and PM2.5 concentrations measured by the Kolibri agreed well with those from the CEMs and the Flyer. The VOC emission factors obtained using the Kolibri are comparable to existing literature values. The Kolibri system can be applied to challenging open-area sampling situations such as fires, lagoons, flares, and landfills.

Chapter 4 evaluates the trade-off between sensor quality and quantity for fenceline monitoring of fugitive emissions. This work is motivated by a new air quality standard that requires continuous monitoring of hazardous air pollutants (HAPs) along the fenceline of oil and gas refineries. Recently, the emergence of low-cost sensors has enabled spatially dense sensor networks that can potentially compensate for the low quality of individual sensors. To quantify sensor inaccuracy and the uncertainty in describing gas concentrations governed by turbulent air flow, a Bayesian approach is applied to probabilistically infer the leak source and strength. Our results show that a dense sensor network can partly compensate for the low sensitivity or high noise of individual sensors. However, the fenceline monitoring approach fails to detect a leak accurately when sensor or wind bias exists, even with a dense sensor network.

Chapter 5 explores the feasibility of a mobile sensing approach for estimating fugitive methane emissions in suburban and rural environments. We first compare the mobile approach against a stationary method (OTM 33A) proposed by the US EPA using a series of controlled-release tests. The analysis shows that the mobile sensing approach can reduce estimation bias and uncertainty relative to OTM 33A. We then apply the mobile approach to quantify fugitive emissions from several ammonia fertilizer plants in rural areas; significant methane emission was identified from one plant, while the other two showed relatively low emissions. A sensitivity analysis of several model parameters shows that the error term in the Bayesian inference is vital for determining model uncertainty, while the others are less influential. Overall, this mobile sensing approach shows promising results for future applications quantifying fugitive methane emissions in suburban and rural environments.
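
A minimal sketch of the Chapter 4 idea, under strong simplifying assumptions (a one-dimensional fenceline and a crosswind-Gaussian toy plume rather than a full turbulent transport model): infer leak position and strength from noisy fenceline sensor readings via a grid posterior. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy fenceline setup: a ground-level leak at unknown crosswind position x0
# with unknown strength q; sensors along the fence read a simplified
# Gaussian-plume concentration plus noise.
sensor_x = np.linspace(-50, 50, 11)       # sensor positions along fence (m)

def plume(x0, q, spread=15.0):
    return q * np.exp(-0.5 * ((sensor_x - x0) / spread)**2)

obs = plume(10.0, 5.0) + rng.normal(0, 0.5, sensor_x.size)  # noisy readings

# Grid posterior over source position and strength (flat priors).
x0s = np.linspace(-40, 40, 81)
qs = np.linspace(0.5, 10, 96)
ll = np.array([[-0.5 * np.sum((obs - plume(x0, q))**2) / 0.5**2
                for q in qs] for x0 in x0s])
post = np.exp(ll - ll.max())
post /= post.sum()
i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP source position = {x0s[i]:.1f} m, strength = {qs[j]:.2f}")
```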
77

Bayesovske modely očných pohybov / Bayesian models of eye movements

Lux, Erik January 2014 (has links)
Attention allows us to monitor objects or regions of visual space and extract information from them for report or storage. Classical theories of attention assumed a single focus of selection, but many everyday activities, such as playing video games, suggest otherwise. Nonetheless, the underlying mechanism that could explain the ability to divide attention has not been well established. Numerous attempts have been made to clarify divided attention, including analytical strategies, methods working with visual phenomena, and even more sophisticated predictors incorporating information about past selection decisions. Virtually all of these attempts approach the problem by constructing a simplified model of attention. In this study, we develop a version of an existing Bayesian framework to propose such models, and we evaluate their ability to generate eye movement trajectories. For model comparison, we use eye movement trajectories generated by several analytical strategies. We measure the similarity between...
79

Statistical Analysis and Bayesian Methods for Fatigue Life Prediction and Inverse Problems in Linear Time Dependent PDEs with Uncertainties

Sawlan, Zaid A 10 November 2018 (has links)
This work employs statistical and Bayesian techniques to analyze mathematical forward models with several sources of uncertainty. The forward models usually arise from phenomenological and physical phenomena and are expressed through regression-based models or partial differential equations (PDEs) associated with uncertain parameters and input data. One of the critical challenges in real-world applications is to quantify the uncertainties of the unknown parameters using observations. For this purpose, methods based on the likelihood function and Bayesian techniques constitute the two main statistical inferential approaches considered here. Two problems are studied in this thesis. The first is the prediction of the fatigue life of metallic specimens; the second concerns inverse problems in linear PDEs. Both require the inference of unknown parameters from measurements. We first estimate the parameters by means of the maximum likelihood approach. Next, we seek a more comprehensive Bayesian inference using analytical asymptotic approximations or computational techniques. In fatigue life prediction, there are several plausible probabilistic stress-lifetime (S-N) models. These models are calibrated against uniaxial fatigue experiments. To generate accurate fatigue life predictions, competing S-N models are ranked according to several classical information-based measures. A different set of predictive information criteria is then used to compare the candidate Bayesian models. Moreover, we propose a spatial stochastic model to generalize S-N models to fatigue crack initiation in general geometries. The model is based on a spatial Poisson process with an intensity function that combines the S-N curves with an averaged effective stress computed from the solution of the linear elasticity equations.
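
As a small worked example of the first calibration step, the sketch below fits a Basquin-type S-N model, log N = a - b log S with Gaussian scatter, to synthetic fatigue data by maximum likelihood, then reads off a lower-quantile design curve. The data and coefficients are invented for illustration and do not come from the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Synthetic uniaxial fatigue data from a Basquin-type S-N model:
# log N = a - b log S + eps, eps ~ N(0, s^2).
S = np.array([200, 250, 300, 350, 400, 450, 500], dtype=float)  # stress (MPa)
logN = 30.0 - 4.0 * np.log(S) + rng.normal(0, 0.3, S.size)

def neg_log_lik(params):
    """Gaussian negative log-likelihood of (a, b, s), up to a constant."""
    a, b, s = params
    if s <= 0:
        return np.inf
    resid = logN - (a - b * np.log(S))
    return 0.5 * np.sum(resid**2) / s**2 + S.size * np.log(s)

mle = minimize(neg_log_lik, x0=[25.0, 3.0, 0.5], method="Nelder-Mead")
a, b, s = mle.x
print(f"a={a:.2f}, b={b:.2f}, scatter={s:.2f}")
# A 5% quantile S-N curve (a probabilistic design curve):
print("N_5% at S=300 MPa:", np.exp(a - b * np.log(300.0) - 1.645 * s))
```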
80

Essays on bivariate option pricing via copula and heteroscedasticity models: a classical and bayesian approach / Ensaios sobre precificação de opções bivariadas via cópulas e modelos heterocedásticos: abordagem clássica e bayesiana

Lopes, Lucas Pereira 15 February 2019 (has links)
This dissertation comprises two main independent but complementary essays. In the first, we discuss option pricing from a Bayesian perspective. This essay aims to price the (bivariate) call-on-max option and analyze the behaviour of its fair price, considering marginal heteroscedastic models with the dependence structure modeled via copulas. For inference, we adopt a Bayesian perspective and computationally intensive methods based on Markov Chain Monte Carlo (MCMC) simulations. A simulation study examines the bias and the root mean squared errors of the posterior means of the parameters. Real stock prices of Brazilian banks illustrate the approach. For the proposed method, we examine the effects of the strike and the dependence structure on the fair price of the option. The results show that the prices obtained by our heteroscedastic-model-and-copula approach differ substantially from those obtained by the model derived from Black and Scholes. Empirical results are presented to argue the advantages of our strategy. In the second essay, we consider GARCH-in-mean models with asymmetric variance specifications to model the volatility of the underlying assets under risk-neutral dynamics. Copula functions model the joint distribution, with the objective of capturing linear, non-linear, and tail associations between the assets. We aim to provide a methodology for more realistic option pricing. To illustrate the methodology, we use stocks from two Brazilian companies, for which our modeling offered a proper fit. Comparing the results with the classic model, an extension of the Black and Scholes model, we note that assuming constant volatility over time underprices the options, especially in-the-money options.
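
A minimal sketch of the pricing mechanics described above, with constant-volatility lognormal margins standing in for the GARCH margins of the essays and a Gaussian copula standing in for the copula families studied: Monte Carlo valuation of a call-on-max. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Monte Carlo price of a call-on-max under risk-neutral lognormal margins
# joined by a Gaussian copula.
S1, S2, K, r, T = 100.0, 100.0, 105.0, 0.05, 1.0
vol1, vol2, rho, n = 0.25, 0.35, 0.5, 200_000

# Gaussian copula: correlated uniforms from correlated standard normals.
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = norm.cdf(z)

def terminal_price(S0, vol, ui):
    """Lognormal marginal quantile function with risk-neutral drift.
    For Gaussian margins norm.ppf(norm.cdf(z)) just recovers z; the ppf
    step is kept to show where non-Gaussian margins would plug in."""
    return S0 * np.exp((r - 0.5 * vol**2) * T + vol * np.sqrt(T) * norm.ppf(ui))

ST1 = terminal_price(S1, vol1, u[:, 0])
ST2 = terminal_price(S2, vol2, u[:, 1])
payoff = np.maximum(np.maximum(ST1, ST2) - K, 0.0)
price = np.exp(-r * T) * payoff.mean()
stderr = np.exp(-r * T) * payoff.std() / np.sqrt(n)
print(f"call-on-max price ~ {price:.3f} +/- {stderr:.3f}")
```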
