TIME-OF-FLIGHT NEUTRON CT FOR ISOTOPE DENSITY RECONSTRUCTION AND CONE-BEAM CT SEPARABLE MODELS

Thilo Balke (15348532), 26 April 2023
There is a great need for accurate image reconstruction in the context of non-destructive evaluation. Major challenges include the ever-increasing need for high-resolution reconstruction with limited scan and reconstruction time, and thus fewer and noisier measurements. In this thesis, we leverage advanced Bayesian modeling of the physical measurement process and probabilistic prior information on the image distribution in order to yield higher image quality despite limited measurement time. We demonstrate efficient computational performance in several ways: through more efficient memory access, optimized parametrization of the system model, and multi-pixel parallelization. We demonstrate that by building high-fidelity forward models we can generate quantitatively reliable reconstructions despite very limited measurement data.

In the first chapter, we introduce an algorithm for estimating isotopic densities from neutron time-of-flight imaging data. Energy-resolved neutron imaging (ERNI) is an advanced neutron radiography technique capable of non-destructively extracting spatial isotopic information within a given material. Energy-dependent radiography image sequences can be created using neutron time-of-flight techniques. In combination with uniquely characteristic isotopic neutron cross-section spectra, isotopic areal densities can be determined on a per-pixel basis, resulting in a set of areal density images for each isotope present in the sample. By performing ERNI measurements over several rotational views, an isotope-decomposed 3D computed tomography is possible. We demonstrate a method involving a robust and automated background estimation based on a linear programming formulation. The extremely high noise due to low-count measurements is overcome using a sparse coding approach. It allows for a significant computation time improvement, from weeks to a few hours compared to existing neutron evaluation tools, enabling at the present stage a semi-quantitative, user-friendly routine application.

In the second chapter, we introduce the TRINIDI algorithm, a more refined algorithm for the same problem. Accurate reconstruction of 2D and 3D isotope densities is a desired capability with great potential impact in applications such as evaluation and development of next-generation nuclear fuels. Neutron time-of-flight (TOF) resonance imaging offers a potential approach by exploiting the characteristic neutron absorption spectra of each isotope. However, it is a major challenge to compute quantitatively accurate images due to a variety of confounding effects such as severe Poisson noise, background scatter, beam non-uniformity, absorption non-linearity, and extended source pulse duration. We present the TRINIDI algorithm, which is based on a two-step process: we first estimate the neutron flux and background counts, and then reconstruct the areal densities of each isotope and pixel. Both components are based on the inversion of a forward model that accounts for the highly non-linear absorption, energy-dependent emission profile, and Poisson noise, while also modeling the substantial spatio-temporal variation of the background and flux. To do this, we formulate the non-linear inverse problem as two optimization problems that are solved in sequence. We demonstrate on both synthetic and measured data that TRINIDI can reconstruct quantitatively accurate 2D views of isotopic areal density, which can then be reconstructed into quantitatively accurate 3D volumes of isotopic volumetric density.

In the third chapter, we introduce a separable forward model for cone-beam computed tomography (CT) that enables efficient computation of a Bayesian model-based reconstruction. Cone-beam CT is an attractive tool for many kinds of non-destructive evaluation (NDE). Model-based iterative reconstruction (MBIR) has been shown to improve reconstruction quality and reduce scan time. However, the computational burden and storage of the system matrix are challenging. We present a separable representation of the system matrix that can be completely stored in memory and accessed cache-efficiently. This is done by quantizing the voxel position for one of the separable subproblems. A parallelized algorithm, which we refer to as the zipline update, speeds up the computation of the solution by about 50 to 100 times on 20 cores by updating groups of voxels together. The quality of the reconstruction and algorithmic scalability are demonstrated on real cone-beam CT data from an NDE application. We show that the reconstruction can be done from a sparse set of projection views while reducing artifacts visible in the conventional filtered back projection (FBP) reconstruction. We present qualitative results using a Markov Random Field (MRF) prior and a Plug-and-Play denoiser.
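The TOF transmission model underlying the first two chapters can be sketched numerically. The following is a minimal illustration, not the thesis code: measured counts follow Poisson statistics around an attenuated flux, exp(-D·σ(E)), plus background. The cross-sections, flux, and background values are invented, and a naive log-linear least-squares inversion stands in for the Bayesian reconstruction.

```python
import numpy as np

# Hypothetical cross-section spectra over an energy grid; real values come
# from nuclear data libraries. Two isotopes, 200 energy bins.
rng = np.random.default_rng(0)
n_energy = 200
sigma = np.abs(rng.normal(1.0, 0.5, size=(2, n_energy)))

d_true = np.array([0.3, 0.1])       # areal densities for one pixel (invented)
flux = 1e4 * np.ones(n_energy)      # open-beam counts per energy bin (invented)
background = 50.0                   # flat background, a simplifying assumption

# Forward model: expected counts follow Beer-Lambert attenuation plus
# background, and the measurement is Poisson around that expectation.
expected = flux * np.exp(-d_true @ sigma) + background
counts = rng.poisson(expected)

# Naive inversion: linearize by taking logs, then solve least squares.
# (The thesis inverts a full Poisson model; this is only a sketch.)
y = -np.log(np.clip((counts - background) / flux, 1e-6, None))
d_hat, *_ = np.linalg.lstsq(sigma.T, y, rcond=None)
print(d_hat)  # close to d_true when the flux is high
```

At low count rates the log-linearization breaks down, which is one motivation for the full Poisson modeling in the thesis.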

Bayesian Approaches for Synthesising Evidence in Health Technology Assessment

McCarron, Catherine Elizabeth, 04 1900
ABSTRACT

Background and Objectives: Informed health care decision making depends on the available evidence base. Where the available evidence comes from different sources, methods are required that can synthesise all of it. The synthesis of different types of evidence poses various methodological challenges. The objective of this thesis is to investigate the use of Bayesian methods for combining evidence on effects from randomised and non-randomised studies, and for combining additional evidence from the literature with patient-level trial data.

Methods: Using a Bayesian three-level hierarchical model, an approach was proposed to combine evidence from randomised and non-randomised studies while adjusting for potential imbalances in patient covariates. The proposed approach was compared to four other Bayesian methods using a case study of endovascular versus open surgical repair for the treatment of abdominal aortic aneurysms. To assess the performance of the proposed approach beyond this single applied example, a simulation study was conducted, examining a series of Bayesian approaches under a variety of scenarios. The subsequent research focussed on the use of informative prior distributions to integrate additional evidence with patient-level data in a Bayesian cost-effectiveness analysis comparing endovascular and open surgical repair in terms of incremental costs and life years gained.

Results and Conclusions: The shift in the estimated odds ratios towards those of the more balanced randomised studies, observed in the case study, suggested that the proposed Bayesian approach was capable of adjusting for imbalances. These results were reinforced in the simulation study. The impact of the informative priors, in terms of increasing the estimated mean life years in the control group, demonstrated the potential importance of incorporating all available evidence in an economic evaluation. In addressing these issues, this research contributes to comprehensive evidence-based decision making in health care.

Doctor of Philosophy (PhD)
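The evidence-synthesis idea can be illustrated with a much simpler two-level normal model than the three-level hierarchical model of the thesis. The study estimates and standard errors below are invented, and a grid approximation under flat priors stands in for MCMC.

```python
import numpy as np

# Invented log-odds-ratio estimates and standard errors: two randomised
# and two non-randomised studies, treated exchangeably here for simplicity.
theta_hat = np.array([-0.40, -0.25, -0.55, -0.10])
se = np.array([0.20, 0.25, 0.30, 0.15])

# Two-level model: theta_i ~ N(mu, tau^2), theta_hat_i ~ N(theta_i, se_i^2),
# so marginally theta_hat_i | mu, tau ~ N(mu, se_i^2 + tau^2).
mus = np.linspace(-1.5, 1.0, 251)
taus = np.linspace(0.01, 1.0, 100)
M, T = np.meshgrid(mus, taus, indexing="ij")

var = se[None, None, :] ** 2 + T[..., None] ** 2
loglik = -0.5 * np.sum(np.log(var) + (theta_hat - M[..., None]) ** 2 / var,
                       axis=-1)
post = np.exp(loglik - loglik.max())   # flat priors on (mu, tau): grid posterior
post /= post.sum()

mu_post_mean = np.sum(post * M)        # posterior mean of the pooled effect
print(round(mu_post_mean, 3))          # near the precision-weighted mean
```

The thesis additionally adjusts for covariate imbalance and distinguishes study designs at a third level, which this sketch deliberately omits.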

Perceived ambiguity, ambiguity attitude and strategic ambiguity in games

Hartmann, L., January 2019
This thesis contributes to the theoretical work on decision and game theory when decision makers or players perceive ambiguity. The first article introduces a new axiomatic framework for ambiguity aversion and provides axiomatic characterizations for important preference classes that had thus far lacked them. The second article introduces a new axiom, called Weak Monotonicity, which is shown to play a crucial role in the multiple prior model. It is shown that for many important preference classes, monotonicity of preferences is a consequence of the other axioms and does not have to be assumed separately. The third article introduces an intuitive definition of perceived ambiguity in the multiple prior model. It is shown that the approach allows an application to games where players perceive strategic ambiguity, and a very general equilibrium existence result is given. The modelling capabilities of the approach are highlighted through the analysis of examples. The fourth article applies the model from the previous article to a specific class of games with a lattice structure. We perform comparative statics on perceived ambiguity and ambiguity attitude, and show that more optimism does not necessarily lead to higher equilibria when players have Alpha-Maxmin preferences. We present necessary and sufficient conditions on the structure of the prior sets for this comparative statics result to hold. The introductory chapter provides the basis for the four articles in this thesis: it gives an overview of axiomatic decision theory, decision making under ambiguity, and ambiguous games, and introduces and discusses the most relevant results from the literature.
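The Alpha-Maxmin preferences mentioned in the fourth article can be illustrated directly: an act is evaluated by a weighted combination of its worst-case and best-case expected utility over the set of priors. The prior set and utilities below are invented for illustration and are not taken from the thesis.

```python
# Alpha-maxmin expected utility over a finite set of priors:
# V(act) = alpha * min_p E_p[u] + (1 - alpha) * max_p E_p[u].
utilities = [10.0, 0.0, 5.0]          # utility of one act in three states
priors = [
    [0.8, 0.1, 0.1],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
]

def alpha_maxmin(u, prior_set, alpha):
    # Expected utility under each prior in the set.
    evs = [sum(p_s * u_s for p_s, u_s in zip(p, u)) for p in prior_set]
    return alpha * min(evs) + (1 - alpha) * max(evs)

# alpha = 1 is maxmin (full ambiguity aversion); alpha = 0 is maxmax.
print(alpha_maxmin(utilities, priors, 1.0))   # 3.0
print(alpha_maxmin(utilities, priors, 0.0))   # 8.5
```

The comparative-statics question of the fourth article is, roughly, whether lowering alpha (more optimism) raises equilibrium play; the example shows only how the evaluation itself depends on alpha and on the prior set.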

Segmentation and interpretation of natural images for tree leaf identification on smartphones

Cerutti, Guillaume, 21 November 2013
Plant species, and especially tree species, are a well-suited target for an automatic recognition process based on image analysis. The criteria that make their identification possible are most often visual morphological elements, well described and referenced in botany, which suggests that recognition through shape is worth considering. Leaves stand out in this context as the most accessible discriminative plant organs, and are consequently the most commonly used for this problem, which has recently received particular attention. Automatic identification nevertheless raises a number of complex problems, whether in the processing of the images or in the difficulty of the species classification itself, which make it an advanced application of pattern recognition.

This thesis considers the problem of tree species identification from leaf images within the framework of a smartphone application intended for a non-specialist audience. The images we work on are therefore potentially complex scenes, and their acquisition only loosely supervised. We consequently propose dedicated image analysis methods for the segmentation and interpretation of tree leaves, based on an original modelling of their shapes and on deformable template approaches. The introduction of prior knowledge on the shape of the objects significantly enhances the quality and robustness of the information extracted from the image. Since all processing is carried out on the device, we developed these algorithms taking into account the hardware constraints of their use.

We also introduce a specific description of leaf shapes, inspired by the determining characteristics listed in botanical references. These descriptors provide independent sources of high-level information that are fused at the end of the process to identify species, while also allowing a meaningful semantic interpretation in the context of interaction with a novice user. The classification performance obtained over nearly 100 tree species is competitive with the state of the art in the domain, and shows particular robustness on images taken in natural environments. Finally, we integrated the implementation of our recognition system into the Folia application for iPhone, which constitutes a validation of our approaches and methods in a real-world setting.

Perceptual inference and learning in autism: a behavioral and neurophysiological approach

Sapey-Triomphe, Laurie-Anne, 04 July 2017
How we perceive our environment relies both on sensory information and on our priors, or expectations. Within the Bayesian framework, these priors capture the underlying statistical regularities of our environment and allow us to infer the causes of our sensations. Recently, Bayesian brain theories have suggested that autistic symptoms could arise from an atypical weighting of sensory information and priors. Autism spectrum disorder (ASD) is characterized by difficulties in social interactions, by restricted and repetitive patterns of behavior, and by atypical sensory perception.

This thesis aims at characterizing perceptual inference and learning in ASD, studying sensory sensitivity and the construction of priors. We used behavioral tasks, computational models, questionnaires, functional magnetic resonance imaging, and magnetic resonance spectroscopy in adults with or without ASD. We first refined the sensory profiles of people with high autism-spectrum quotients, using a questionnaire whose French translation we validated. Exploring perceptual learning strategies, we then showed that participants with ASD were less inclined to spontaneously use a learning style that enables generalization. The study of the implicit construction of priors showed that participants with ASD were able to build up a prior, but had difficulty adjusting it when the context changed. Finally, the investigation of the neurophysiological correlates of perceptual inference revealed that perceptual decisions biased by priors relied on a distinct neural network in ASD and were not modulated by the glutamate/GABA ratio in the same way.

Together, these results shed light on atypical perception in ASD, marked by abnormal learning and weighting of priors. A Bayesian approach to ASD could improve its characterization, diagnosis, and care.
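The notion of prior weighting at the heart of this work can be illustrated with the standard Gaussian cue-combination formula, in which the posterior mean is a precision-weighted average of the prior and the sensory estimate. The numbers below are illustrative and not from the thesis.

```python
# Bayesian combination of a Gaussian prior and a Gaussian sensory likelihood.
# A down-weighted (broader) prior, one hypothesis discussed for ASD, shifts
# the percept toward the sensory input.
def posterior(prior_mu, prior_var, like_mu, like_var):
    # Precision-weighted average; precision = 1 / variance.
    w = (1 / prior_var) / (1 / prior_var + 1 / like_var)
    mu = w * prior_mu + (1 - w) * like_mu
    var = 1 / (1 / prior_var + 1 / like_var)
    return mu, var

# A tight prior at 0 pulls the percept away from the sensory input at 2...
print(posterior(0.0, 0.5, 2.0, 1.0))   # mean 2/3
# ...a broad (down-weighted) prior leaves the percept near the input.
print(posterior(0.0, 5.0, 2.0, 1.0))   # mean 5/3
```

In this framing, "difficulty adjusting a prior in a changing context" corresponds to updating prior_mu and prior_var too slowly as the statistics of the environment shift.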

Approximation of improper priors and applications

Bioche, Christèle, 27 November 2015
The purpose of this thesis is to study the approximation of improper priors by sequences of proper priors. We define a mode of convergence on strictly positive Radon measures for which a sequence of probability measures can admit an improper measure as its limit. This mode of convergence, called q-vague convergence, is independent of the statistical model. It explains the origin of the Jeffreys-Lindley paradox. We then focus on the estimation of the size of a population, considering the removal sampling model. We establish necessary and sufficient conditions on the hyperparameters of a certain class of priors in order to obtain proper posterior distributions and well-defined abundance estimates. Finally, using q-vague convergence, we show that the use of vague priors is not appropriate in removal sampling, since the resulting estimates depend crucially on the hyperparameters.
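The idea of approximating an improper prior by a sequence of proper priors can be illustrated in the simplest conjugate setting: priors N(0, s²) with growing variance approximate the flat (Lebesgue) prior on a normal mean, and the posterior mean tends to the flat-prior answer, the sample mean. The data below are invented.

```python
import numpy as np

# Data from N(mu, 1) with sigma known; the flat-prior (improper) posterior
# mean is the sample mean. Proper priors N(0, s^2) approximate it as s grows.
x = np.array([1.8, 2.3, 2.1, 1.9, 2.4])
n, xbar = len(x), x.mean()

def posterior_mean(prior_var):
    # Conjugate normal-normal update with prior N(0, prior_var), sigma^2 = 1.
    return n * xbar / (n + 1.0 / prior_var)

for s2 in [1.0, 100.0, 1e6]:
    print(posterior_mean(s2))  # approaches xbar = 2.1 as s2 grows
```

The thesis's caution about vague priors shows up even here: for moderate s², the estimate still depends visibly on the hyperparameter s², and in less well-behaved models (such as removal sampling) that dependence does not wash out.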

Addressing Challenges in Graphical Models: MAP estimation, Evidence, Non-Normality, and Subject-Specific Inference

Sagar K N Ksheera (15295831), 17 April 2023
Graphs are a natural choice for understanding the associations between variables, and assuming a probabilistic embedding for the graph structure leads to a variety of graphical models that enable us to understand these associations even further. In the realm of high-dimensional data, where the number of associations between interacting variables is far greater than the available number of data points, the goal is to infer a sparse graph. In this thesis, we make contributions in the domain of Bayesian graphical models, where our prior belief on the graph structure, encoded via uncertainty on the model parameters, enables the estimation of sparse graphs.

We begin with the Gaussian graphical model (GGM) in Chapter 2, one of the simplest and most widely used graphical models, where the joint distribution of the interacting variables is assumed to be Gaussian. In GGMs, the conditional independence among variables is encoded in the inverse of the covariance matrix, also known as the precision matrix. Under a Bayesian framework, we propose a novel prior-penalty dual, the 'graphical horseshoe-like' prior and penalty, to estimate the precision matrix. We also establish the posterior convergence of the precision matrix estimate and the frequentist consistency of the maximum a posteriori (MAP) estimator.

In Chapter 3, we develop a general framework based on local linear approximation for MAP estimation of the precision matrix in GGMs. This framework holds for any graphical prior whose element-wise priors can be written as a Laplace scale mixture. As an application of the framework, we perform MAP estimation of the precision matrix under the graphical horseshoe penalty.

In Chapter 4, we focus on graphical models where the joint distribution of the interacting variables cannot be assumed Gaussian. Motivated by quantile graphical models, in which the Gaussian likelihood assumption is relaxed, we draw inspiration from the domain of precision medicine, where personalized inference is crucial to tailor individual-specific treatment plans. With the aim of inferring directed acyclic graphs (DAGs), we propose a novel quantile DAG learning framework in which the DAGs depend on individual-specific covariates, making personalized inference possible. We demonstrate the potential of this framework in precision medicine by applying it to infer protein-protein interaction networks in lung adenocarcinoma and lung squamous cell carcinoma.

Finally, we conclude this thesis in Chapter 5 by developing a novel framework to compute the marginal likelihood in a GGM, addressing a longstanding open problem. Under this framework, we can compute the marginal likelihood for a broad class of priors on the precision matrix, where the element-wise priors on the diagonal entries can be written as gamma or scale mixtures of gamma random variables, and those on the off-diagonal terms can be represented as normal or scale mixtures of normal. This result paves new roads for model selection using Bayes factors and for the tuning of prior hyperparameters.
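The GGM fact used throughout Chapters 2 and 3, that zeros of the precision matrix encode conditional independence, can be checked numerically. The sketch below uses an invented 3×3 precision matrix and a plain inverse of the sample covariance rather than any of the Bayesian estimators proposed in the thesis.

```python
import numpy as np

# Invented sparse precision matrix: the zero in entry (0, 2) means variables
# 0 and 2 are conditionally independent given variable 1.
rng = np.random.default_rng(1)
Omega = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.5],
                  [0.0, 0.5, 2.0]])

Sigma = np.linalg.inv(Omega)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)

# With abundant data, the inverse sample covariance recovers the zero
# pattern; sparsity priors such as the graphical horseshoe matter precisely
# when data are scarce relative to the number of variables.
Omega_hat = np.linalg.inv(np.cov(X.T))
print(np.round(Omega_hat, 1))   # entry (0, 2) near 0
```

In the high-dimensional regime of the thesis, where the number of variables exceeds the sample size, the sample covariance is singular and this naive inversion fails, which is what motivates the Bayesian shrinkage machinery.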
