101.
Approximate inference: new visions. Li, Yingzhen, January 2018
Nowadays machine learning (especially deep learning) techniques are being incorporated into many intelligent systems affecting the quality of human life. The ultimate purpose of these systems is to perform automated decision making, and in order to achieve this, predictive systems need to return estimates of their confidence. Powered by the rules of probability, Bayesian inference is the gold-standard method for coherent reasoning under uncertainty. It is generally believed that intelligent systems following the Bayesian approach can better incorporate uncertainty information for reliable decision making, and be less vulnerable to attacks such as data poisoning. Critically, the success of Bayesian methods in practice, including the recent resurgence of Bayesian deep learning, relies on fast and accurate approximate Bayesian inference applied to probabilistic models. These approximate inference methods perform (approximate) Bayesian reasoning at a relatively low cost in time and memory, thus allowing the principles of Bayesian modelling to be applied in many practical settings. However, more work needs to be done to scale approximate Bayesian inference methods to big systems such as deep neural networks and to large-scale datasets such as ImageNet. In this thesis we develop new algorithms towards addressing the open challenges in approximate inference. In the first part of the thesis we develop two new approximate inference algorithms, drawing inspiration from the well-known expectation propagation and message passing algorithms. Both approaches provide a unifying view of existing variational methods from different algorithmic perspectives. We also demonstrate that they lead to better-calibrated inference results for complex models such as neural network classifiers and deep generative models, and scale to large datasets containing hundreds of thousands of data points.
In the second theme of the thesis we propose a new research direction for approximate inference: developing algorithms for fitting posterior approximations of arbitrary form, by rethinking the fundamental principles of Bayesian computation and the necessity of algorithmic constraints in traditional inference schemes. We specify four algorithmic options for the development of such new generation approximate inference methods, with one of them further investigated and applied to Bayesian deep learning tasks.
102.
Graphical user interface development for Bayesian statistics applied to Mixed Treatment Comparison. Marcelo Goulart Correia, 12 September 2013
Advances in the pharmaceutical industry have produced many drugs for treating disease. These drugs often have similar topical effects but subtle differences in their biochemical structure, making competition between pharmaceutical companies increasingly fierce. Several methodologies have emerged to compare the effectiveness of these drugs, with the objective of finding the best medicine for a given situation. One of them is Mixed Treatment Comparison (MTC), which estimates the effectiveness of drugs from studies and/or clinical trials that address the treatments of interest, even if only indirectly. Using this methodology is rather complex, since it requires knowledge of programming languages for statistical environments as well as mastery of the methods underlying the technique. The main objective of this study is to create a graphical user interface (GUI) that makes MTC accessible to users with no programming background, and that is open source and cross-platform. The expectation is that this interface will ease the use of more comprehensive and advanced techniques, and make the method easier to teach to those who do not yet know it.
103.
Sequencing Effects and Loss Aversion in a Delay Discounting Task. January 2018
The attractiveness of a reward depends in part on the delay to its receipt, with more distant rewards generally being valued less than more proximate ones. The rate at which people discount the value of delayed rewards has been associated with a variety of clinically and socially relevant human behaviors. Thus, the accurate measurement of delay discounting rates is crucial to the study of mechanisms underlying behaviors such as risky sex, addiction, and gambling. In delay discounting tasks, participants make choices between two alternatives: a small amount of money delivered immediately versus a larger amount of money delivered after a delay. After many choices, the experimental task converges on an indifference point: the value of the delayed reward that approximates the value of the immediate one. It has been shown that these indifference points are systematically biased by the direction in which one of the alternatives adjusts. This bias is termed a sequencing effect.
The present research proposed a reference-dependent model of choice drawn from Prospect Theory to account for the presence of sequencing effects in a delay discounting task. Sensitivity to reference frames and sequencing effects were measured in two computer tasks. Bayesian and frequentist analyses indicated that the reference-dependent model of choice cannot account for sequencing effects. Thus, an alternative, perceptual account of sequencing effects that draws on a Bayesian framework of magnitude estimation is proposed and furnished with some preliminary evidence. Implications for future research in the measurement of delay discounting and sensitivity to reference frames are discussed. (Master's thesis, Psychology, 2018)
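The adjusting-amount procedure described above can be sketched in a few lines. This is a minimal illustration, not any task from the thesis: the simulated participant is assumed to follow Mazur's hyperbolic discounting V = A/(1 + kD), and all function names and parameter values are invented for the example.

```python
def hyperbolic_value(amount, delay, k):
    # Mazur's hyperbolic discounting: subjective value of a delayed reward
    return amount / (1.0 + k * delay)

def titrate_indifference(delayed_amount, delay, k, steps=20):
    # Adjusting-amount staircase: the immediate offer moves down after an
    # "immediate" choice and up after a "delayed" choice, with a halving
    # step size, so it converges on the indifference point.
    immediate = delayed_amount / 2.0
    step = delayed_amount / 4.0
    for _ in range(steps):
        prefers_immediate = immediate > hyperbolic_value(delayed_amount, delay, k)
        immediate += -step if prefers_immediate else step
        step /= 2.0
    return immediate

# simulated participant with k = 0.02 per day, offered $100 in 30 days
point = titrate_indifference(100.0, 30, 0.02)
print(round(point, 2))  # converges near 100 / (1 + 0.02 * 30) = 62.5
```

A sequencing effect would appear here as a systematic offset in the converged value depending on whether the staircase starts above or below the true indifference point.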
104.
Bayesovská faktorová analýza / Bayesian factor analysis. Vávra, Jan, January 2018
Factor analysis is a method which enables a high-dimensional random vector of measurements to be approximated by linear combinations of a much smaller number of hidden factors. The classical estimation procedure for this model relies on the choice of the number of factors, on the decomposition of the variance matrix while keeping the identification conditions satisfied, and on an appropriate choice of rotation for better interpretation of the model. This model is transferred into the Bayesian framework, which, unlike the classical approach, offers the use of prior information. The number of hidden factors can be treated as a random parameter, and the dependency of each measurement on at most one factor can be enforced by suitable specification of the prior distribution. Estimates of the model parameters are based on the posterior distribution, which is approximated by Markov chain Monte Carlo (MCMC) methods. The Bayesian approach simultaneously solves the problems of selecting the number of factors, estimating the model, and ensuring identifiability and interpretability. The ability to estimate the true number of hidden factors is tested in a simulation study.
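The covariance structure that factor analysis decomposes can be written down directly. The sketch below, with invented loadings, computes the implied covariance ΛΛᵀ + diag(ψ) of the model x = Λf + ε with f ~ N(0, I); the full Bayesian treatment with MCMC described in the abstract is far beyond a few lines.

```python
def implied_cov(Lam, psi):
    # Covariance implied by x = Lambda f + eps with f ~ N(0, I) and
    # eps ~ N(0, diag(psi)): Lambda Lambda^T + diag(psi)
    p, q = len(Lam), len(Lam[0])
    return [[sum(Lam[i][k] * Lam[j][k] for k in range(q))
             + (psi[i] if i == j else 0.0)
             for j in range(p)] for i in range(p)]

# four measurements loading on a single hidden factor (invented values)
Lam = [[0.9], [0.8], [0.7], [0.6]]
psi = [0.19, 0.36, 0.51, 0.64]   # chosen so each marginal variance is 1.0
S = implied_cov(Lam, psi)
print(S[0][1])  # off-diagonal covariance is 0.9 * 0.8 = 0.72
```

Estimation runs this map in reverse: given a sample covariance, infer Λ and ψ, which is where the identification conditions and rotation (or, in the Bayesian version, the prior) come in.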
105.
Identification and photometric redshifts for type-I quasars with medium- and narrow-band filter surveys. Carolina Queiroz de Abreu Silva, 16 November 2015
Quasars are valuable sources for several cosmological applications. In particular, they can be used to trace some of the heaviest halos, and their high intrinsic luminosities allow them to be detected at high redshifts. This implies that quasars (or active galactic nuclei, in a more general sense) have a huge potential to map the large-scale structure. However, this potential has not yet been fully realized, because instruments which rely on broad-band imaging to pre-select spectroscopic targets usually miss most quasars and, consequently, are not able to properly separate broad-line emitting quasars from other point-like sources (such as stars and low-resolution galaxies). This work is an initial attempt to investigate the realistic gains in the identification and separation of quasars and stars when medium- and narrow-band filters in the optical are employed. The main novelty of our approach is the use of Bayesian priors both for the angular distribution of stars of different types on the sky and for the distribution of quasars as a function of redshift. Since the evidence from these priors convolves the angular dependence of stars with the redshift dependence of quasars, this allows us to control for the near degeneracy between these objects. However, our results are inconclusive in quantifying the efficiency of star-quasar separation using this approach and, hence, some critical refinements and improvements are still necessary.
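At its core, the prior-based separation argument is Bayes' rule with class priors. A toy sketch, with made-up numbers standing in for the stellar angular density and the quasar redshift distribution:

```python
def posterior_prob_quasar(like_q, like_s, prior_q, prior_s):
    # Bayes' rule for a point source: P(quasar | data) from the two class
    # likelihoods and the class priors (which in the thesis encode the
    # stellar angular distribution and the quasar N(z))
    num = like_q * prior_q
    return num / (num + like_s * prior_s)

# invented numbers: the photometry slightly favours a quasar template,
# but stars are far more numerous along this line of sight
p = posterior_prob_quasar(like_q=0.8, like_s=0.4, prior_q=0.1, prior_s=0.9)
print(round(p, 3))  # 0.182 -- the prior keeps the star hypothesis alive
```

This is why a marginal colour match is not enough to flag a quasar in a star-dominated field: the prior ratio has to be overcome by the likelihood ratio.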
106.
Probabilistic annotation of metabolite profiles obtained by liquid chromatography coupled to mass spectrometry. Ricardo Roberto da Silva, 16 April 2014
Metabolomics is an emerging field in the post-genomic era that aims at the comprehensive analysis of small organic molecules in biological systems. Liquid chromatography coupled to mass spectrometry (LC-MS) is among the most widespread approaches in metabolomics studies. Metabolite detection by LC-MS produces complex data sets that require a series of preprocessing steps so that information can be extracted efficiently and accurately. For untargeted metabolic profiling to be effectively related to alterations of interest in metabolism, the sampled metabolites must be annotated reliably and their relationships interpreted under the assumption of a connected sample of the metabolism. Facing this challenge, this thesis developed a software framework whose central component is a probabilistic method for metabolite annotation that allows the incorporation of independent sources of spectral information and prior knowledge about metabolism. After the probabilistic classification, a new method was proposed to represent the posterior probability distribution in the form of a graph. A library of methods for the R environment, called ProbMetab (Probabilistic Metabolomics), was created and made available as open-source software.
Using ProbMetab to analyze a benchmark data set with compound identities known beforehand, we demonstrated that up to 90% of the correct metabolite identities were present among the top three candidates by probability, emphasizing the value of reporting the posterior probability distribution rather than the simplistic classification usually adopted in metabolomics, which keeps only the most probable candidate. In an application to real data, changes in a metabolic pathway known to be related to abiotic stress in plants (flavone and flavonol biosynthesis) were automatically detected in sugarcane data, demonstrating the importance of a view centered on the posterior distribution of the metabolite annotation network.
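The probabilistic-annotation idea can be illustrated with a toy posterior over candidate compounds for a single peak: a Gaussian likelihood on the mass error multiplied by a prior weight that here stands in for network-based prior knowledge. The candidate names, masses, and priors below are invented; this is not the ProbMetab implementation.

```python
import math

def annotate_peak(observed_mz, candidates, sigma=0.005):
    # Posterior over candidate compounds for one peak: Gaussian likelihood
    # on the mass error times a prior weight (a stand-in for the kind of
    # metabolic-network knowledge a real annotator would use)
    w = {name: prior * math.exp(-0.5 * ((observed_mz - mass) / sigma) ** 2)
         for name, (mass, prior) in candidates.items()}
    z = sum(w.values())
    return {name: wi / z for name, wi in w.items()}

# hypothetical candidates: name -> (exact mass, prior weight)
cands = {"A": (180.0634, 0.3), "B": (180.0734, 0.7)}
post = annotate_peak(180.0634, cands)
print(max(post, key=post.get))  # "A": the mass match beats B's larger prior
```

Returning the whole `post` dictionary, rather than only its argmax, is the point the abstract makes: the correct identity is often in the top few candidates even when it is not first.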
107.
Quantification of modelling uncertainties in turbulent flow simulations. Edeling, Wouter Nico, 14 April 2015
The goal of this thesis is to make predictive simulations with Reynolds-Averaged Navier-Stokes (RANS) turbulence models, i.e. simulations with a systematic treatment of model and data uncertainties and their propagation through a computational model, to produce predictions of quantities of interest with quantified uncertainty. To do so, we make use of the robust Bayesian statistical framework.
The first step toward this goal was to obtain estimates of the error in RANS simulations based on the Launder-Sharma k-e turbulence closure model, for a limited class of flows. In particular, we searched for estimates grounded in uncertainties in the space of model closure coefficients, for wall-bounded flows at a variety of favourable and adverse pressure gradients. In order to estimate the spread of closure coefficients which reproduces these flows accurately, we performed 13 separate Bayesian calibrations. Each calibration was at a different pressure gradient, using measured boundary-layer velocity profiles and a statistical model containing a multiplicative model-inadequacy term in the solution space. The results are 13 joint posterior distributions over coefficients and hyper-parameters. To summarize this information we compute Highest Posterior-Density (HPD) intervals and subsequently represent the total solution uncertainty with a probability box (p-box). This p-box represents both parameter variability across flows and epistemic uncertainty within each calibration. A prediction of a new boundary-layer flow is made with uncertainty bars generated from this uncertainty information, and the resulting error estimate is shown to be consistent with measurement data.
However, although consistent with the data, the obtained error estimates were very large, because a p-box constitutes an unweighted prediction. To improve upon this, we developed another approach, still based on variability in model closure coefficients across multiple flow scenarios, but also across multiple closure models. The variability is again estimated using Bayesian calibration against experimental data for each scenario, but now Bayesian Model-Scenario Averaging (BMSA) is used to collate the resulting posteriors in an unmeasured (prediction) scenario. Unlike the p-boxes, this is a weighted approach involving turbulence-model probabilities which are determined from the calibration data. The methodology was applied to the class of turbulent boundary layers subject to various pressure gradients. For all considered prediction scenarios, the standard deviation of the stochastic estimate is consistent with the measurement ground truth.
The BMSA approach results in reasonable error bars, which can also be decomposed into separate contributions. However, to apply it to more complex topologies outside the class of boundary-layer flows, surrogate modelling techniques must be applied. The Simplex-Stochastic Collocation (SSC) method is a robust surrogate modelling technique used to propagate uncertain input distributions through a computer code, but its use of the Delaunay triangulation can become prohibitively expensive for problems with more than five dimensions. We therefore investigated means to improve this poor scalability. First, we proposed an alternative interpolation-stencil technique based on the set-covering problem, which yielded a significant speed-up when sampling the full-dimensional stochastic space. Secondly, we integrated the SSC method into the High-Dimensional Model-Reduction (HDMR) framework in order to avoid sampling high-dimensional spaces altogether.
Finally, using our efficient surrogate modelling technique, we applied the BMSA framework to the transonic flow over an airfoil. With this we are able to make predictive simulations of computationally expensive flow problems, with uncertainty quantified with respect to various imperfections in the turbulence models.
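The BMSA collation step amounts to computing the moments of a mixture of per-model predictive distributions, weighted by posterior model probabilities. A minimal sketch, under the simplifying assumption that each model's prediction is summarized by a mean and standard deviation:

```python
def bmsa_moments(preds, probs):
    # Moments of a mixture of per-model predictive distributions:
    # preds = [(mean_m, sd_m)], probs = posterior model probabilities.
    # The mixture variance is the within-model variance plus the
    # between-model spread (law of total variance).
    mean = sum(p * m for p, (m, s) in zip(probs, preds))
    var = sum(p * (s * s + (m - mean) ** 2) for p, (m, s) in zip(probs, preds))
    return mean, var ** 0.5

# two hypothetical turbulence models that disagree on a quantity of interest
mean, sd = bmsa_moments([(1.0, 0.1), (3.0, 0.1)], [0.5, 0.5])
print(mean, round(sd, 3))  # 2.0 1.005 -- model disagreement dominates the spread
```

The between-model term is what makes the weighted approach informative: models the calibration data disfavour contribute little, instead of widening the bounds as in an unweighted p-box.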
108.
Bayesian mixture models for frequent itemset mining. He, Ruofei, January 2012
In binary-transaction data-mining, traditional frequent itemset mining often produces results which are not straightforward to interpret. To overcome this problem, probability models are often used to produce more compact and conclusive results, albeit with some loss of accuracy. Bayesian statistics have been widely used in the development of probability models in machine learning in recent years and these methods have many advantages, including their abilities to avoid overfitting. In this thesis, we develop two Bayesian mixture models with the Dirichlet distribution prior and the Dirichlet process (DP) prior to improve the previous non-Bayesian mixture model developed for transaction dataset mining. First, we develop a finite Bayesian mixture model by introducing conjugate priors to the model. Then, we extend this model to an infinite Bayesian mixture using a Dirichlet process prior. The Dirichlet process mixture model is a nonparametric Bayesian model which allows for the automatic determination of an appropriate number of mixture components. We implement the inference of both mixture models using two methods: a collapsed Gibbs sampling scheme and a variational approximation algorithm. Experiments in several benchmark problems have shown that both mixture models achieve better performance than a non-Bayesian mixture model. The variational algorithm is the faster of the two approaches while the Gibbs sampling method achieves a more accurate result. The Dirichlet process mixture model can automatically grow to a proper complexity for a better approximation. However, these approaches also show that mixture models underestimate the probabilities of frequent itemsets. Consequently, these models have a higher sensitivity but a lower specificity.
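The Dirichlet process prior's ability to grow to "a proper complexity" can be illustrated with its Chinese restaurant process representation, in which each data point joins an existing component with probability proportional to its size, or starts a new one with probability proportional to a concentration parameter alpha. A small sketch of the prior over partitions only (not the thesis's collapsed Gibbs sampler or variational algorithm):

```python
import random

def crp_partition(n, alpha, rng):
    # Chinese restaurant process: customer i joins table t with probability
    # counts[t] / (i + alpha), or opens a new table with alpha / (i + alpha)
    counts = []
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc = 0.0
        for t, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[t] += 1
                break
        else:
            counts.append(1)
    return counts

counts = crp_partition(1000, alpha=2.0, rng=random.Random(7))
print(sum(counts), len(counts))  # 1000 customers spread over a handful of tables
```

In a DP mixture each "table" carries its own component parameters, so the number of mixture components is inferred from the data rather than fixed in advance.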
109.
Bayesian analysis of two problems in Relativistic Astrophysics: neutrinos from gravitational collapse and the mass distribution of neutron stars. Rodolfo Valentim da Costa Lima, 19 April 2012
The extraordinary event SN1987A has been investigated for more than twenty-five years. The fascination surrounding this astronomical event comes from the real-time observation of the explosion in the light of neutrino physics. Detectors around the world observed a burst of neutrinos that was confirmed days later as coming from SN1987A. Kamiokande, IMB and Baksan reported the detected events, which allowed the study of models for the explosion and the cooling of the hypothetical remnant neutron star. To this day there is no consensus on the origin of the progenitor or the nature of the remaining compact object. This work is divided into two parts. The first is a Bayesian statistical analysis of the SN1987A neutrinos using a proposed two-temperature model that accounts for two neutrino bursts; the motivation is the hypothesis that the second burst results from the formation of strange matter in the compact object. The methodology follows the interesting work of Loredo & Lamb (2002), which allows models to be fitted and hypotheses about them to be tested via the Bayesian Information Criterion (BIC). In the second part, the same statistical methodology is used to study the mass distribution of neutron stars using the available database (http://stellarcollapse.org), analyzed using only each object's mass value and its standard deviation. Constructing a likelihood function and using prior distributions, the hypothesis of a bimodal mass distribution is tested against a unimodal distribution over all object masses. The BIC test strongly favours bimodality, with values centred at 1.37 M☉ for low-mass objects and 1.73 M☉ for high-mass objects, and confirms the weak evidence for a third peak expected at 1.25 M☉.
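The BIC comparison used in the second part can be sketched on toy data. The numbers below are invented (clusters placed near the 1.37 and 1.73 solar-mass values quoted in the abstract), and for brevity the two-component model is fit by a hard split at the midpoint rather than by full EM, so this only illustrates how BIC trades likelihood against parameter count.

```python
import math

def gauss_loglik(xs, mu, var):
    # log-likelihood of xs under N(mu, var)
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)
               for x in xs)

def fit_gauss(xs):
    # maximum-likelihood mean and variance
    mu = sum(xs) / len(xs)
    return mu, sum((x - mu) ** 2 for x in xs) / len(xs)

def bic(loglik, n_params, n):
    return n_params * math.log(n) - 2.0 * loglik

# toy neutron-star masses clustered near 1.37 and 1.73 (solar masses)
masses = [1.35, 1.36, 1.37, 1.38, 1.39, 1.36, 1.38,
          1.71, 1.72, 1.73, 1.74, 1.75, 1.72, 1.74]
n = len(masses)

# unimodal model: one Gaussian (2 parameters)
bic1 = bic(gauss_loglik(masses, *fit_gauss(masses)), 2, n)

# bimodal model: hard split at the midpoint instead of full EM (5 parameters)
split = (min(masses) + max(masses)) / 2
g1 = [x for x in masses if x < split]
g2 = [x for x in masses if x >= split]
w = len(g1) / n
ll2 = (gauss_loglik(g1, *fit_gauss(g1)) + gauss_loglik(g2, *fit_gauss(g2))
       + len(g1) * math.log(w) + len(g2) * math.log(1 - w))
bic2 = bic(ll2, 5, n)
print(bic2 < bic1)  # lower BIC: the two-component model is preferred
```

The three extra parameters cost 3 ln n, but the likelihood gain from two tight components dwarfs that penalty, which is the same trade-off the BIC test resolves on the real mass catalogue.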
110.
Driver distraction: implications for individuals with traumatic brain injuries. Neyens, David Michael, 01 December 2010
Traumatic brain injuries (TBIs) are injuries to the brain associated with the transfer of energy from some external source. There are an estimated 1.4 million TBIs each year, and about half are due to transportation crashes (NINDS, 2007). Driver distraction is defined as a process or condition that draws a driver's attention away from driving activities toward a competing activity (Sheridan, 2004) and has been identified as an under-examined issue for TBI populations (Cyr, et al., 2008). The interaction between the cognitive impairments related to TBIs and the competing demands from driver distraction may be especially problematic. The goal of this dissertation is to investigate the effect of driver distraction on individuals with TBI.
This dissertation uses several approaches and data sources: crash data, a TBI registry, a survey of TBI drivers, and an on-road driving study of TBI and non-TBI drivers. Results demonstrate that a subset of TBI drivers are more willing to engage in distracting tasks and they are more likely to have received speeding tickets. TBI drivers involved in crashes were less likely to wear seatbelts and were more likely to be involved in multiple crashes compared to all other drivers in crashes. Additionally, a subset of TBI drivers exhibits more risk-taking while driving that may result from the TBI or a predisposition to take risks.
A Bayesian approach was used to analyze the effect of distracting tasks on the driving performance of TBI drivers in an on-road study. A simulator study of non-TBI drivers was used to develop prior distributions for the parameter estimates. The distracting tasks included a CD-selection task, a coin-sorting task, and a radio-tuning task. All of the tasks contained visual-manual components, and the coin-sorting task contained an additional cognitive component associated with counting the currency. TBI drivers exhibited worse driving performance during the coin-sorting task than the non-TBI drivers, in terms of the standard deviation of speed and the maximum lateral acceleration of the vehicle, suggesting that the cognitive component of the coin-sorting task may cause the decreased performance for TBI drivers. Across all tasks, TBI drivers spent a larger percentage of the task duration looking at the task, with a larger number of glances toward the distraction task, than the non-TBI drivers.
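Using a simulator study to build priors, as described above, has a simple conjugate illustration: a normal prior on a performance parameter (say, the standard deviation of speed, standardized) updated with on-road observations of known noise variance. The function and numbers are hypothetical, not the dissertation's actual model:

```python
def posterior_normal_mean(prior_mu, prior_var, data, noise_var):
    # Conjugate normal update: precision-weighted combination of the
    # simulator-derived prior and the on-road observations
    n = len(data)
    xbar = sum(data) / n
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mu = post_var * (prior_mu / prior_var + n * xbar / noise_var)
    return post_mu, post_var

# prior centred at 0 (standardized units) from the simulator study,
# updated with four hypothetical on-road measurements
mu, var = posterior_normal_mean(0.0, 1.0, [1.0, 1.0, 1.0, 1.0], 1.0)
print(mu, var)  # 0.8 0.2 -- pulled toward the data, uncertainty shrinks
```

The appeal in a small on-road sample like this one is exactly this shrinkage: a handful of TBI-driver observations can still yield stable estimates when anchored by the simulator prior.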
Driver distractions with cognitive components may be especially problematic for TBI drivers. Future work should investigate if this effect is consistent across more complex cognitive driver distraction tasks (e.g., cell phone usage) for this population. Additionally, future work should validate the high proportion of TBI drivers involved in multiple crashes.