1

Iterative receivers for digital communications via variational inference and estimation

Nissilä, M. (Mauri), 08 January 2008
Abstract: In this thesis, iterative detection and estimation algorithms for digital communications systems in the presence of parametric uncertainty are explored and further developed. In particular, variational methods, which have been extensively applied in other research fields such as artificial intelligence and machine learning, are introduced and systematically used to derive approximations to the optimal receivers in various channel conditions. The key idea behind the variational methods is to transform the problem of interest into an optimization problem via the introduction of extra degrees of freedom known as variational parameters. This is done so that, for fixed values of the free parameters, the transformed problem has a simple solution, which approximately solves the original problem.

The thesis contributes to the state of the art of advanced receiver design in a number of ways, including new theoretical and conceptual viewpoints on iterative turbo-processing receivers as well as a new set of practical joint estimation and detection algorithms. Central to the theoretical studies is showing that many of the known low-complexity turbo receivers, such as linear minimum mean square error (MMSE) soft-input soft-output (SISO) equalizers and demodulators based on the Bayesian expectation-maximization (BEM) algorithm, can be formulated as solutions to the variational optimization problem. This approach not only provides new insights into the current designs and structural properties of the relevant receivers, but also suggests improvements to them.

In addition, SISO detection in multipath fading channels is considered with the aim of obtaining a new class of low-complexity adaptive SISO detectors. A novel, unified method is proposed and applied to derive recursive versions of the classical Baum-Welch algorithm and its Bayesian counterpart, the BEM algorithm. These formulations are shown to yield computationally attractive soft decision-directed (SDD) channel estimators for both deterministic and Rayleigh fading intersymbol interference (ISI) channels.

Next, by modeling the multipath fading channel as a complex bandpass autoregressive (AR) process, it is shown that the statistical parameters of radio channels, such as frequency offset, Doppler spread, and power-delay profile, can be conveniently extracted from the estimated AR parameters, which in turn may be derived via an EM algorithm. Such a joint estimator for all relevant radio channel parameters has a number of virtues, particularly its capability to perform equally well in a variety of channel conditions.

Lastly, adaptive iterative detection in the presence of phase uncertainty is investigated. Novel iterative joint Bayesian estimation and symbol a posteriori probability (APP) computation algorithms, based on the variational Bayesian method, are proposed for both constant-phase and dynamic-phase channel models, and their performance is evaluated via computer simulations.
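To make the variational idea concrete, the following minimal sketch (not taken from the thesis) fits a single Gaussian approximation q(x) = N(m, s^2) to a toy unnormalised target by maximising a Monte Carlo estimate of the evidence lower bound (ELBO); m and log s play the role of the variational parameters, and the mixture target is only a stand-in for an intractable posterior over, say, a channel parameter.

```python
# Toy variational inference: turn approximate inference into optimisation
# over variational parameters (m, log s) of a Gaussian q(x) = N(m, s^2).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)          # fixed base samples (reparameterisation)

def log_p_unnorm(x):
    # Unnormalised target: a two-component Gaussian mixture (illustrative only).
    return np.log(0.6 * norm.pdf(x, -1.0, 0.5) + 0.4 * norm.pdf(x, 1.5, 0.7))

def neg_elbo(params):
    m, log_s = params
    s = np.exp(log_s)
    x = m + s * eps                      # samples from q(x) = N(m, s^2)
    log_q = norm.logpdf(x, m, s)
    return -np.mean(log_p_unnorm(x) - log_q)   # negative Monte Carlo ELBO

res = minimize(neg_elbo, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
m_hat, s_hat = res.x[0], np.exp(res.x[1])
print(f"variational fit: mean = {m_hat:.2f}, std = {s_hat:.2f}")
```

For fixed variational parameters the objective is a simple expectation under q, which is exactly the "simple solution to the transformed problem" idea described in the abstract.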
2

Computational methods for the estimation of cardiac electrophysiological conduction parameters in a patient specific setting

Wallman, Kaj Mikael Joakim, January 2013
Cardiovascular disease is the primary cause of death globally. Although this group encompasses a heterogeneous range of conditions, many of these diseases are associated with abnormalities in cardiac electrical propagation. In these conditions, structural abnormalities in the form of scars and fibrotic tissue are known to play an important role, leading to high individual variability in the exact disease mechanisms. Because of this, clinical interventions such as ablation therapy and cardiac resynchronisation therapy (CRT), which work by modifying the electrical propagation, should ideally be optimised on a patient-specific basis.

As a tool for optimising these interventions, computational modelling and simulation of the heart have become increasingly important. However, a crucial step in constructing these models is the estimation of tissue conduction properties, which have a profound impact on the cardiac activation sequence predicted by simulations. Information about the conduction properties of cardiac tissue can be gained from electrophysiological data obtained using electroanatomical mapping systems. However, as in other clinical modalities, electrophysiological data are often sparse and noisy, resulting in high levels of uncertainty in the estimated quantities.

In this dissertation, we develop a methodology based on Bayesian inference, together with a computationally efficient model of electrical propagation, to achieve two main aims: 1) to quantify values and associated uncertainty for different tissue conduction properties inferred from electroanatomical data, and 2) to design strategies to optimise the location and number of measurements required to maximise information and reduce uncertainty.

The methodology is validated in several studies performed using simulated data obtained from image-based ventricular models, including realistic fibre orientation and conduction heterogeneities. Subsequently, by using the developed methodology to investigate how the uncertainty decreases in response to added measurements, we derive an a priori index for placing electrophysiological measurements so as to optimise the information content of the collected data. Results show that the derived index has a clear benefit in minimising the uncertainty of inferred conduction properties compared to a random distribution of measurements, suggesting that the methodology presented in this dissertation provides an important step towards improving the quality of the spatiotemporal information obtained using electroanatomical mapping.
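As a much-simplified illustration of the inference problem described above (an assumed 1-D toy setup, far cruder than the image-based ventricular models used in the dissertation), the sketch below infers a single conduction velocity from sparse, noisy activation times with a grid-based Bayesian posterior and shows how the posterior uncertainty shrinks as measurement sites are added.

```python
# Toy Bayesian estimation of conduction velocity v from activation times
# t_i = x_i / v + noise measured at a few electrode positions x_i on a 1-D strand.
import numpy as np

rng = np.random.default_rng(1)
v_true, sigma = 0.6, 2.0                       # velocity (mm/ms), noise std (ms)
v_grid = np.linspace(0.2, 1.5, 400)            # candidate velocities
prior = np.ones_like(v_grid) / v_grid.size     # flat prior over the grid

def posterior(x, t):
    # Gaussian likelihood of the observed activation times for each candidate v.
    log_like = np.array([-0.5 * np.sum((t - x / v) ** 2) / sigma**2 for v in v_grid])
    post = prior * np.exp(log_like - log_like.max())
    return post / post.sum()

for n_sites in (3, 10, 30):
    x = np.sort(rng.uniform(5.0, 60.0, n_sites))            # electrode positions (mm)
    t = x / v_true + sigma * rng.standard_normal(n_sites)   # noisy activation times (ms)
    post = posterior(x, t)
    mean = np.sum(v_grid * post)
    std = np.sqrt(np.sum((v_grid - mean) ** 2 * post))
    print(f"{n_sites:2d} sites: v = {mean:.3f} +/- {std:.3f} mm/ms")
```

The shrinking posterior standard deviation with added sites mirrors, in miniature, the dissertation's second aim of choosing measurement locations to reduce uncertainty.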
3

Advanced signal processing techniques for multi-target tracking

Daniyan, Abdullahi, January 2018
The multi-target tracking problem essentially involves the recursive joint estimation of the states of an unknown and time-varying number of targets present in a tracking scene, given a series of observations. The problem is made more challenging because the sequence of observations is noisy and can be corrupted by missed detections and false alarms (clutter), and because target-originated observations are indistinguishable from clutter. Furthermore, whether the targets of interest are point targets or extended targets (in terms of spatial extent) poses additional technical challenges.

The random finite set approach provides an elegant and rigorous framework for handling the multi-target tracking problem. In a random finite set formulation, both the multi-target state and the multi-target observation are modelled as finite-set-valued random variables, that is, random variables that are random in both the number of elements and the values of the elements themselves. Compared to other approaches, the random finite set approach has the desirable characteristic of being free of explicit data association prior to tracking, and a framework for dealing with random finite sets, known as finite set statistics, is available.

In this thesis, advanced signal processing techniques are employed to enhance existing random finite set based multi-target tracking algorithms and to develop new ones, for both point and extended targets, with the aim of improving tracking performance in cluttered environments.

Firstly, a new and efficient Kalman-gain aided sequential Monte Carlo probability hypothesis density (KG-SMC-PHD) filter and a cardinalised variant, the KG-SMC-CPHD filter, are proposed. These filters employ the Kalman-gain approach during the weight update to correct predicted particle states by minimising the mean square error between the estimated measurement and the actual measurement received at a given time, in order to arrive at a more accurate posterior. The technique identifies and selects the particles belonging to a particular target from a given PHD for state correction during weight computation. The proposed KG-SMC-CPHD filter provides a better estimate of the number of targets, and besides the improved tracking accuracy, fewer particles are required. Simulation results confirm the improved tracking performance under different evaluation measures.

Secondly, the KG-SMC-(C)PHD filters are particle filter (PF) based and, as with PFs, require a process known as resampling to avoid degeneracy. This thesis proposes a new resampling scheme to address a shortcoming of the systematic resampling method, namely its high tendency to resample very low-weight particles, especially when a large number of resampled particles is required, which in turn affects state estimation.

Thirdly, the KG-SMC-(C)PHD filters proposed in this thesis perform filtering rather than tracking; that is, they provide only point estimates of target states and do not connect estimates of target trajectories from one time step to the next. A new post-processing step using game theory, named the GTDA method, is proposed as a solution to this filtering-to-tracking problem. The method was employed in the KG-SMC-(C)PHD filters as a post-processing technique and was evaluated using both simulated data and real data obtained with the NI-USRP software-defined radio platform in a passive bistatic radar system.

Lastly, a new technique for the joint tracking and labelling of multiple extended targets is proposed. To achieve multiple extended target tracking with this technique, models for the target measurement rate, the kinematic component, and the target extension are defined and jointly propagated in time under the generalised labelled multi-Bernoulli (GLMB) filter framework, a random finite set based filter. In particular, a Poisson mixture variational Bayesian (PMVB) model is developed to simultaneously estimate the measurement rates of multiple extended targets, while the target extension is modelled using B-splines. The proposed method is evaluated with various performance metrics to demonstrate its effectiveness in tracking multiple extended targets.
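For reference, the sketch below implements standard systematic resampling, the baseline scheme whose low-weight-particle behaviour the abstract refers to; it is not the thesis's proposed resampling method, and the particle counts are illustrative only.

```python
# Standard systematic resampling: N ordered uniforms with a single random
# offset select particle indices in proportion to their weights.
import numpy as np

def systematic_resample(weights, n_out, rng):
    """Return n_out particle indices drawn in proportion to `weights`."""
    positions = (rng.uniform() + np.arange(n_out)) / n_out   # stratified positions in [0, 1)
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                                     # guard against round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(2)
w = rng.random(50)
w /= w.sum()                                                 # normalised particle weights
idx = systematic_resample(w, n_out=200, rng=rng)
counts = np.bincount(idx, minlength=w.size)
print("most frequently selected particles:", counts.argsort()[-5:])
```

With a large n_out relative to the number of particles, even very low-weight particles receive copies, which is the effect on state estimation that the proposed scheme is designed to mitigate.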
