About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Issues in Bayesian Gaussian Markov random field models with application to intersensor calibration

Liang, Dong 01 December 2009
A long term record of the earth's vegetation is important in studies of global climate change. Over the last three decades, multiple data sets on vegetation have been collected using different satellite-based sensors. There is a need for methods that combine these data into a long term earth system data record. The Advanced Very High Resolution Radiometer (AVHRR) has provided reflectance measures of the entire earth since 1978. Physical and statistical models have been used to improve the consistency and reliability of this record. The Moderate Resolution Imaging Spectroradiometer (MODIS) has provided measurements with superior radiometric properties and geolocation accuracy. However, this record is available only since 2000. In this thesis, we perform statistical calibration of AVHRR to MODIS. We aim to: (1) fill in gaps in the ongoing MODIS record; (2) extend MODIS values back to 1982. We propose Bayesian mixed models to predict MODIS values using snow cover and AVHRR values as covariates. Random effects are used to account for spatiotemporal correlation in the data. We estimate the parameters from the data after 2000 using Markov chain Monte Carlo methods. We then back-predict MODIS data between 1978 and 1999 using posterior samples of the parameters. We develop new Conditional Autoregressive (CAR) models for seasonal data, along with new sampling methods for CAR models. Our approach enables filling in gaps in the MODIS record and back-predicting these values to construct a consistent historical record. The Bayesian framework incorporates multiple sources of variation in estimating the accuracy of the obtained data. The approach is illustrated using vegetation data over a region in Minnesota.
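As background for the spatial prior at work in this abstract, a minimal sketch of a proper CAR construction and one draw from it follows; the chain adjacency and the values of tau and rho are hypothetical placeholders, and the thesis develops seasonal CAR variants and new samplers beyond this.

```python
import numpy as np

# Minimal proper-CAR sketch: precision Q = tau * (D - rho * W) on a chain of
# n sites, followed by one draw from N(0, Q^{-1}) via a Cholesky factor.
n = 50
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0   # binary adjacency: neighbors on a line
D = np.diag(W.sum(axis=1))                # diagonal matrix of neighbor counts

tau, rho = 2.0, 0.9                       # precision scale and spatial dependence
Q = tau * (D - rho * W)                   # positive definite for |rho| < 1

L = np.linalg.cholesky(Q)                 # Q = L @ L.T
z = np.random.default_rng(0).standard_normal(n)
x = np.linalg.solve(L.T, z)               # x ~ N(0, Q^{-1})
```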
2

Fast parallel solution of heterogeneous 3D time-harmonic wave equations

Poulson, Jack Lesly 30 January 2013
Several advancements related to the solution of 3D time-harmonic wave equations are presented, especially in the context of a parallel moving-PML sweeping preconditioner for problems without large-scale resonances. The main contribution of this dissertation is the introduction of an efficient parallel sweeping preconditioner and its subsequent application to several challenging velocity models. For instance, 3D seismic problems approaching a billion degrees of freedom have been solved in just a few minutes using several thousand processors. The setup and application costs of the sequential algorithm were also respectively refined to O(γ^2 N^(4/3)) and O(γ N log N), where N denotes the total number of degrees of freedom in the 3D volume and γ(ω) denotes the modestly frequency-dependent number of grid points per Perfectly Matched Layer discretization. Furthermore, high-performance parallel algorithms are proposed for performing multifrontal triangular solves with many right-hand sides, and a custom compression scheme is introduced which builds upon the translation invariance of free-space Green’s functions in order to justify the replacement of each dense matrix within a certain modified multifrontal method with the sum of a small number of Kronecker products. For the sake of reproducibility, every algorithm exercised within this dissertation is made available as part of the open source packages Clique and Parallel Sweeping Preconditioner (PSP).
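The sum-of-Kronecker-products compression mentioned at the end rests on a classical building block: the Van Loan-Pitsianis rearrangement, which turns the nearest-Kronecker-product problem into a truncated SVD. A sketch of that building block follows; the function name and dimensions are ours for illustration, not the Clique/PSP API.

```python
import numpy as np

def nearest_kron_sum(A, m1, n1, m2, n2, rank):
    """Approximate A (m1*m2 x n1*n2) by sum_k kron(B_k, C_k): rearrange the
    (m2 x n2) blocks of A into rows, then take a truncated SVD."""
    R = np.empty((m1 * n1, m2 * n2))
    for j in range(n1):
        for i in range(m1):
            block = A[i * m2:(i + 1) * m2, j * n2:(j + 1) * n2]
            R[i + j * m1] = block.ravel(order="F")   # column-major vec of each block
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    terms = []
    for k in range(rank):                            # best rank-k fit of R
        B = (np.sqrt(s[k]) * U[:, k]).reshape(m1, n1, order="F")
        C = (np.sqrt(s[k]) * Vt[k]).reshape(m2, n2, order="F")
        terms.append((B, C))
    return terms

# Reconstruction for a sanity check: A_hat = sum(np.kron(B, C) for B, C in terms)
```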
3

Likelihood-Based Approach for Analysis of Longitudinal Nominal Data Using Marginalized Random Effects Models

Lee, Keunbaik, Kang, Sanggil, Liu, Xuefeng, Seo, Daekwan 01 August 2011
Likelihood-based marginalized models using random effects have become popular for analyzing longitudinal categorical data. These models permit direct interpretation of marginal mean parameters and characterize the serial dependence of longitudinal outcomes using random effects [12,22]. In this paper, we propose a model that extends previous models to accommodate longitudinal nominal data. Random effects with a new Kronecker-product covariance matrix are used to explain serial and categorical dependence. A quasi-Newton algorithm is developed for estimation. The proposed methods are illustrated with a real data set and compared with other standard methods.
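A Kronecker-product covariance of the kind described here separates a serial factor from a categorical factor; a toy construction (dimensions and parameter values are hypothetical, not the paper's estimates) is:

```python
import numpy as np

# Toy Kronecker-structured random-effects covariance: an AR(1)-type serial
# factor crossed with a small categorical factor (values are illustrative).
T, rho = 4, 0.6                                   # T repeated measurements
lags = np.abs(np.subtract.outer(np.arange(T), np.arange(T)))
Sigma_time = rho ** lags                          # serial dependence
Sigma_cat = np.array([[1.0, 0.3],
                      [0.3, 1.0]])                # dependence across categories
Sigma = np.kron(Sigma_time, Sigma_cat)            # full random-effects covariance
```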
4

Stability and Controllability of Diffusion Models

Mörtberg, Anton January 2023
Alan Turing explored how patterns emerge in embryos. The models described by Turing show how two morphogens interact with each other and develop certain patterns under different conditions. Using the differential equations expressed by Turing gives a lot of freedom in the choice of variables, and the behaviour that results from different choices will be explored and simulated in this thesis. Later, a systems theoretic formulation will be used to explore the stability and controllability of the homogeneous diffusion problem and Turing's reaction-diffusion problem. It will be found that, under the assumptions made in this thesis, both models are stable and controllable.
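To make the setting concrete, here is a minimal 1-D two-morphogen reaction-diffusion simulation; the Gray-Scott-style kinetics and every parameter value below are chosen purely for illustration and are not the thesis's model.

```python
import numpy as np

# Explicit-Euler reaction-diffusion sketch in 1-D with periodic boundaries.
n, dx, dt = 200, 1.0, 0.2
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065
u = np.ones(n)
v = np.zeros(n)
v[n // 2 - 5:n // 2 + 5] = 0.5                    # local perturbation seeds a pattern

def lap(w):
    # second-difference Laplacian with periodic wrap-around
    return (np.roll(w, 1) - 2 * w + np.roll(w, -1)) / dx**2

for _ in range(5000):
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1 - u))   # morphogen u: diffuse + react
    v += dt * (Dv * lap(v) + uvv - (F + k) * v)   # morphogen v: diffuse + react
```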
5

Multiple Radar Target Tracking in Environments with High Noise and Clutter

January 2015
Tracking a time-varying number of targets is a challenging dynamic state estimation problem whose complexity is intensified under low signal-to-noise ratio (SNR) or high clutter conditions. This is important, for example, when tracking multiple, closely spaced targets moving in the same direction such as a convoy of low observable vehicles moving through a forest or multiple targets moving in a crisscross pattern. The SNR in these applications is usually low as the reflected signals from the targets are weak or the noise level is very high. An effective approach for detecting and tracking a single target under low SNR conditions is the track-before-detect filter (TBDF) that uses unthresholded measurements. However, the TBDF has only been used to track a small fixed number of targets at low SNR. This work proposes a new multiple target TBDF approach to track a dynamically varying number of targets under the recursive Bayesian framework. For a given maximum number of targets, the state estimates are obtained by estimating the joint multiple target posterior probability density function under all possible target existence combinations. The estimation of the corresponding target existence combination probabilities and the target existence probabilities are also derived. A feasible sequential Monte Carlo (SMC) based implementation algorithm is proposed. The approximation accuracy of the SMC method with a reduced number of particles is improved by an efficient proposal density function that partitions the multiple target space into a single target space. The proposed multiple target TBDF method is extended to track targets in sea clutter using highly time-varying radar measurements. A generalized likelihood function for closely spaced multiple targets in compound Gaussian sea clutter is derived together with the maximum likelihood estimate of the model parameters using an iterative fixed point algorithm. The TBDF performance is improved by proposing a computationally feasible method to estimate the space-time covariance matrix of rapidly-varying sea clutter. The method applies the Kronecker product approximation to the covariance matrix and uses particle filtering to solve the resulting dynamic state space model formulation.
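For orientation, a bare-bones bootstrap particle filter for a single 1-D target shows the SMC machinery the multiple-target TBDF builds on; the motion and measurement models and every constant below are made-up stand-ins.

```python
import numpy as np

# Toy bootstrap particle filter (single target, 1-D); illustrates propagation,
# weighting, and ESS-triggered resampling.
rng = np.random.default_rng(1)
Np, steps = 1000, 50
x_true = 0.0
particles = rng.normal(0.0, 1.0, Np)
weights = np.full(Np, 1.0 / Np)

for t in range(steps):
    x_true += 1.0 + rng.normal(0.0, 0.1)            # truth: near-constant velocity
    z = x_true + rng.normal(0.0, 0.5)               # noisy scalar measurement
    particles += 1.0 + rng.normal(0.0, 0.1, Np)     # propagate through motion model
    weights *= np.exp(-0.5 * ((z - particles) / 0.5) ** 2)  # Gaussian likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < Np / 2:         # resample when ESS collapses
        keep = rng.choice(Np, Np, p=weights)
        particles, weights = particles[keep], np.full(Np, 1.0 / Np)
    estimate = weights @ particles                   # posterior-mean state estimate
```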
6

Computation of High-Dimensional Multivariate Normal and Student-t Probabilities Based on Matrix Compression Schemes

Cao, Jian 22 April 2020
The first half of the thesis focuses on the computation of high-dimensional multivariate normal (MVN) and multivariate Student-t (MVT) probabilities. Chapter 2 generalizes the bivariate conditioning method to a d-dimensional conditioning method and combines it with a hierarchical representation of the n × n covariance matrix. The resulting two-level hierarchical-block conditioning method requires Monte Carlo simulations to be performed only in d dimensions, with d ≪ n, and allows the dominant complexity term of the algorithm to be O(n log n). Chapter 3 improves the block reordering scheme from Chapter 2 and integrates it into the Quasi-Monte Carlo simulation under the tile-low-rank representation of the covariance matrix. Simulations up to dimension 65,536 suggest that this method can improve the run time by one order of magnitude compared with the hierarchical Monte Carlo method. The second half of the thesis discusses a novel matrix compression scheme with Kronecker products, an R package that implements the methods described in Chapter 3, and an application study with the probit Gaussian random field. Chapter 4 studies the potential of using the sum of Kronecker products (SKP) as a compressed covariance matrix representation. Experiments show that this new SKP representation can reduce the memory footprint by one order of magnitude compared with the hierarchical representation for covariance matrices from large grids, and that the Cholesky factorization in one million dimensions can be achieved within 600 seconds. In Chapter 5, an R package is introduced that implements the methods in Chapter 3, and it is shown how the package improves the accuracy of the computed excursion sets. Chapter 6 derives the posterior properties of the probit Gaussian random field, based on which model selection and posterior prediction are performed. With the tlrmvnmvt package, the computation becomes feasible in tens of thousands of dimensions, where the prediction errors are significantly reduced.
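One reason an SKP representation pays off is that products with the covariance never require forming the full matrix; a sketch of that matrix-free matrix-vector product follows (the function name and calling convention are ours, not the thesis's or the tlrmvnmvt API).

```python
import numpy as np

def skp_matvec(terms, x, n1, n2):
    """y = (sum_k kron(B_k, C_k)) @ x without forming the n1*n2 x n1*n2 matrix,
    using kron(B, C) @ vec(X) = vec(C @ X @ B.T) (column-major vec)."""
    X = x.reshape(n1, n2).T                  # un-vec x into an n2 x n1 matrix
    Y = sum(C @ X @ B.T for B, C in terms)   # one small triple product per term
    return Y.T.ravel()                       # re-vec the result

# Equivalent dense computation, for a sanity check on small sizes:
#   sum(np.kron(B, C) for B, C in terms) @ x
```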
7

Likelihood ratio tests of separable or double separable covariance structure, and the empirical null distribution

Gottfridsson, Anneli January 2011
The focus in this thesis is on the calculation of an empirical null distribution for likelihood ratio tests testing either separable or double separable covariance matrix structures versus an unstructured covariance matrix. These calculations have been performed for various dimensions and sample sizes, and are compared with the asymptotic χ2-distribution that is commonly used as an approximate distribution. Tests of separable structures are of particular interest in cases when data is collected such that more than one relation between the components of the observation is suspected. For instance, if there are both a spatial and a temporal aspect, a hypothesis of two covariance matrices, one for each aspect, is reasonable.
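The empirical-null idea can be sketched with a toy likelihood ratio test (diagonal versus unstructured covariance, standing in for the thesis's separability tests): simulate the statistic under the null and compare its quantiles with the chi-square approximation.

```python
import numpy as np
from scipy import stats

# Monte Carlo empirical null for a -2 log LR statistic (toy test: diagonal vs.
# unstructured covariance), set against the asymptotic chi-square critical value.
rng = np.random.default_rng(0)
n, p, reps = 25, 4, 2000
df = p * (p + 1) // 2 - p                   # parameters saved under the null

lrt = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((n, p))         # data simulated under the null
    S = np.cov(X, rowvar=False)
    lrt[r] = -n * np.log(np.linalg.det(S) / np.prod(np.diag(S)))

emp_crit = np.quantile(lrt, 0.95)           # empirical 5% critical value
chi2_crit = stats.chi2.ppf(0.95, df)        # asymptotic counterpart
```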
8

Computer Experiments with Both Quantitative and Qualitative Inputs

Zhang, Yulei January 2014
No description available.
9

Harmonic random models for electroencephalographic signals

Villaron, Emilie 25 June 2012
This thesis addresses the problem of multichannel biomedical signal analysis using stochastic methods. EEG signals exhibit specific features that are both time and frequency localized, which motivates the use of time-frequency signal representations. In this document the (time-frequency labelled) coefficients are modelled as multivariate random variables. In the first part of this work, multichannel signals are expanded using a local cosine basis (called the MDCT basis). The approach we propose models the distribution of the time-frequency coefficients (here MDCT coefficients) in terms of latent variables through a hidden Markov model. In the framework of application to EEG signals, the latent variables describe hidden mental states of the subject, which control the covariance matrices of Gaussian vectors of fixed-time, multi-channel, multi-frequency MDCT coefficients. After presenting classical algorithms to estimate the parameters, we define a new model in which the (space-frequency) covariance matrices are expanded as tensor products (also named Kronecker products) of frequency and channel matrices. This factorization reduces the numerical complexity of the associated methods while stabilizing the algorithms. Inference for the proposed model is developed and yields estimates for the model parameters, together with maximum likelihood estimates for the sequences of latent variables. The model is applied to electroencephalogram data, and it is shown that covariance matrices labelled by sensor and frequency indices can yield relevant information on the analyzed signals. This is illustrated with a case study on the desynchronization of alpha waves in rest EEG for multiple sclerosis patients and control subjects.
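The computational benefit of the Kronecker factorization can be sketched as follows: with Sigma = kron(A, B), the Gaussian log-density never needs the full matrix, only eigendecompositions of the two small factors. The function below is our illustration of that identity, not the thesis code.

```python
import numpy as np

def kron_gauss_logpdf(x, A, B):
    """log N(x; 0, kron(A, B)) for symmetric positive definite A (p x p) and
    B (q x q), computed from the small factors only."""
    wa, Ua = np.linalg.eigh(A)
    wb, Ub = np.linalg.eigh(B)
    p, q = A.shape[0], B.shape[0]
    X = x.reshape(p, q).T                     # un-vec (column-major convention)
    Y = Ub.T @ X @ Ua                         # rotate into the joint eigenbasis
    lam = np.outer(wb, wa)                    # eigenvalues aligned with Y entries
    quad = np.sum(Y ** 2 / lam)               # x' Sigma^{-1} x
    logdet = q * np.sum(np.log(wa)) + p * np.sum(np.log(wb))
    return -0.5 * (quad + logdet + p * q * np.log(2 * np.pi))
```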
10

Motion planning of multi-robot system for airplane stripping

Kalawoun, Rawan 26 April 2019
This PhD is part of a French project named AEROSTRIP (a partnership between Institut Pascal, Sigma, SAPPI, and Air-France Industries), funded by the French government through the FUI program (20th call). The AEROSTRIP project aims at developing the first automated system that ecologically cleans airplane surfaces and spare parts using a soft projection of an ecological abrasive medium (corn) onto the surface. My PhD aims at optimizing the trajectories of the whole robotic system so that the airplane is optimally stripped. Since a large surface cannot be totally covered from a single robot base placement, repositioning the robots is necessary to ensure complete stripping of the surface. The goal of this work is to find the optimal number of robots, with their optimal positions, required to totally strip the airplane. Once found, we search for the trajectories of the robots of the multi-robot system between those poses.
Hence, we define a general framework to solve this problem with four main steps: the pre-processing step, the optimization algorithm step, the generation of the end-effector trajectories step, and the robot scheduling, assignment and control step. In my thesis, I present two contributions in two different steps of the general framework: the pre-processing step and the optimization algorithm step. The computation of the robot workspace is required in the pre-processing step: we propose Interval Analysis to find this workspace since it guarantees finding solutions in a reasonable computation time. Our first contribution is a new inclusion function that reduces pessimism, the overestimation of the solution set, which is the main disadvantage of Interval Analysis. The proposed inclusion function is assessed on Constraint Satisfaction Problems and Constraint Optimization Problems. Furthermore, we propose a hybrid optimization algorithm to find the optimal number of robots with their optimal poses: this is our second contribution, in the optimization algorithm step. To assess the hybrid optimization algorithm, we test it on regular surfaces, such as a cylinder and a hemisphere, and on a complex surface: a car.
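The pessimism that the first contribution targets is easy to exhibit: when a variable occurs more than once in an expression, the natural interval extension overestimates the true range. A toy interval class makes this concrete (illustrative only, not the thesis's inclusion function).

```python
from dataclasses import dataclass

# Minimal interval arithmetic showing the dependency problem that causes
# pessimism in naive inclusion functions.
@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

x = Interval(0.0, 1.0)
naive = x * x - x   # natural extension of f(x) = x^2 - x gives [-1, 1],
                    # while the true range of f on [0, 1] is only [-0.25, 0]
```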
