
Wireless Channel Equalization in Digital Communication Systems

Jalali, Sammuel 01 January 2012
Our modern society has transformed into an information-demanding system, seeking voice, video, and data in quantities that could not be imagined even a decade ago. The mobility of communicators has added further challenges, among them the need for highly reliable, fast communication systems that are unaffected by the problems caused by multipath fading wireless channels. Our quest is to remove one of the obstacles in the way of ultimately fast and reliable wireless digital communication, namely Inter-Symbol Interference (ISI), whose intensity makes the channel noise inconsequential. The theoretical background for wireless channel modeling and adaptive signal processing is covered in the first two chapters of the dissertation. The approach of this thesis is not based on a single methodology; several algorithms and configurations are proposed and examined to combat the ISI problem. There are two main categories of channel equalization techniques: supervised (training-based) and unsupervised (blind) modes. We have studied the application of a new, specially modified neural network requiring a very short training period for proper channel equalization in supervised mode; its promising performance is presented in chapter 4. For the blind mode, two distinct methodologies are presented and studied. Chapter 3 covers the concept of multiple "cooperative" algorithms for the cases of two and three cooperating algorithms; the "select absolutely larger equalized signal" and "majority vote" methods are used in the 2- and 3-algorithm systems, respectively. Many of the demonstrated results are encouraging for further research. Chapter 5 applies the general concept of simulated annealing to blind-mode equalization, where a limited strategy of constant annealing noise is tested on the simple algorithms used in the multiple-algorithm systems. Convergence to local stationary points of the cost function in parameter space is clearly demonstrated, which justifies the use of additional noise; the capability of added random noise to release the algorithm from local traps is established in several cases.
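The blind-mode idea sketched in this abstract, an adaptive equalizer whose tap update can be perturbed by constant annealing noise, can be illustrated with a minimal constant-modulus (CMA) equalizer. This is not the dissertation's algorithm; the channel, step size, and tap count below are illustrative assumptions.

```python
import numpy as np

def cma_equalizer(received, num_taps=11, mu=1e-3, radius=1.0,
                  anneal_sigma=0.0, rng=None):
    """Blind constant-modulus equalizer; the optional anneal_sigma adds
    constant random noise to the tap update, a rough stand-in for the
    constant-annealing-noise strategy mentioned in chapter 5."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w = np.zeros(num_taps)
    w[num_taps // 2] = 1.0                    # center-spike initialization
    out = np.zeros_like(received)
    for n in range(num_taps, len(received)):
        x = received[n - num_taps:n][::-1]    # most recent sample first
        y = w @ x
        out[n] = y
        w -= mu * y * (y * y - radius) * x    # stochastic gradient of CMA cost
        if anneal_sigma > 0.0:                # noise to escape local traps
            w += anneal_sigma * rng.normal(size=num_taps)
    return out, w

# BPSK symbols through a mild multipath (ISI) channel
rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=20000)
channel = np.array([1.0, 0.3, -0.1])
received = np.convolve(symbols, channel)[:len(symbols)]
equalized, taps = cma_equalizer(received)
```

A hard decision on the equalizer output then recovers the symbols up to the delay and sign ambiguities inherent to blind equalization.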

Low-rank matrix recovery: blind deconvolution and efficient sampling of correlated signals

Ahmed, Ali 13 January 2014
Low-dimensional signal structures naturally arise in a large set of applications in various fields such as medical imaging, machine learning, and signal and array processing. A ubiquitous low-dimensional structure in signals and images is sparsity, and a new sampling theory, namely compressive sensing, proves that sparse signals and images can be reconstructed from incomplete measurements. The signal recovery is achieved using efficient algorithms such as \ell_1-minimization. Recently, the research focus has spun off to encompass other interesting low-dimensional signal structures, such as group sparsity and low-rank structure. This thesis considers low-rank matrix recovery (LRMR) from various structured random measurement ensembles. These results are then employed for an in-depth investigation of the classical blind-deconvolution problem from a new perspective, and for the development of a framework for the efficient sampling of correlated signals (signals lying in a subspace). In the first part, we study blind deconvolution: the separation of two unknown signals by observing their convolution. We recast the deconvolution of discrete signals w and x as the recovery of the rank-1 matrix wx* from a structured random measurement ensemble. The convex relaxation of the problem leads to a tractable semidefinite program, and we show, using some of the mathematical tools developed recently for LRMR, that if the signals convolved with one another live in known subspaces, this semidefinite relaxation is provably effective. In the second part, we design various efficient sampling architectures for signals acquired using large arrays. The architectures exploit the correlation in the signals to acquire them at a sub-Nyquist rate, and the sampling devices are designed using analog components with clear implementation potential. For each sampling scheme, we show that the signal reconstruction can be framed as an LRMR problem from a structured random measurement ensemble, and the signals can be reconstructed using the familiar nuclear-norm minimization. The sampling theorems derived for each architecture show that the LRMR framework attains Shannon-Nyquist performance for the sub-Nyquist acquisition of correlated signals. In the final part, we study low-rank matrix factorizations using randomized linear algebra. This method allows us to use a least-squares program to reconstruct the unknown low-rank matrix from samples of its row and column space. Based on the principles of this method, we then design sampling architectures that not only acquire correlated signals efficiently but also require only a simple least-squares program for the signal reconstruction. A theoretical analysis of all of the LRMR problems above is presented, providing the number of measurements sufficient for successful reconstruction of the unknown low-rank matrix and upper bounds on the recovery error in both the noiseless and noisy cases. For each LRMR problem, we also discuss computationally feasible algorithms, including a least-squares-based algorithm and some of the fastest algorithms for solving nuclear-norm minimization.
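The "lifting" step at the heart of this formulation, rewriting a convolution as linear measurements of the rank-1 matrix wx*, is easy to verify numerically. A minimal real-valued sketch of circular convolution (the subspace constraints and the semidefinite solver of the thesis are omitted):

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
w, x = rng.normal(size=n), rng.normal(size=n)

# Direct circular convolution of w and x
y = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)))

# Lifted view: y[l] = <A_l, X> with X = w x^T and A_l the 0/1 mask
# selecting the wrapped anti-diagonal i + j = l (mod n)
X = np.outer(w, x)
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
y_lifted = np.array([X[(i + j) % n == l].sum() for l in range(n)])
```

Since each measurement is linear in the lifted unknown X, recovering X from y is exactly a low-rank (here rank-1) matrix recovery problem.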

Compressed Sensing in the Presence of Side Information

Rostami, Mohammad January 2012
Reconstruction of continuous signals from a number of their discrete samples is central to digital signal processing. Digital devices can only process discrete data, so processing continuous signals requires discretization, after which the possibility of uniquely reconstructing the source signals from their samples is crucial. Classical sampling theory provides a bound on the sampling rate for unique source reconstruction, known as the Nyquist rate. Recently a new sampling scheme, Compressive Sensing (CS), has been formulated for sparse signals. CS is an active area of research in signal processing. It has revolutionized the classical sampling theorems and has provided a new scheme to sample and uniquely reconstruct sparse signals below the Nyquist rate. A signal is called (approximately) sparse when a relatively large number of its elements are (approximately) equal to zero. For the class of sparse signals, sparsity can be viewed as prior information about the source signal. CS has found numerous applications and has improved some image acquisition devices. Interesting instances of CS arise when, apart from sparsity, side information is available about the source signals, such as the source structure or distribution. Such cases can be viewed as extensions of classical CS, in which we are interested in incorporating the side information either to improve the quality of the source reconstruction or to decrease the number of samples required for accurate reconstruction. A general CS problem can be transformed into an equivalent optimization problem. In this thesis, a special case of CS with side information about the feasible region of the equivalent optimization problem is studied. It is shown that in such cases the uniqueness and stability of the equivalent optimization problem still hold, and an efficient reconstruction method is proposed. To demonstrate the practical value of the proposed scheme, the algorithm is applied to two real-world applications: image deblurring in optical imaging and surface reconstruction from gradient fields. Experimental results are provided to further investigate and confirm the effectiveness and usefulness of the proposed scheme.
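As a concrete instance of sparse recovery from incomplete measurements, here is a short Orthogonal Matching Pursuit (OMP) sketch. Note that OMP is a greedy stand-in for the optimization-based methods discussed above, not the thesis's side-information scheme, and the problem sizes are arbitrary illustrative choices.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit on the chosen support."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 60, 100, 4                       # 60 measurements of a length-100 signal
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 20, 50, 77]] = [1.0, -2.0, 1.5, 0.7]
x_hat = omp(A, A @ x_true, k)
```

With noiseless measurements and this measurement-to-sparsity ratio, the greedy recovery typically matches the true sparse vector.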

A Feynman Path Centroid Effective Potential Approach for the Study of Low Temperature Parahydrogen Clusters and Droplets

Yang, Jing January 2012
The quantum simulation of large molecular systems is a formidable task. We explore the use of effective potentials based on the Feynman path centroid variable in order to simulate large quantum clusters at a reduced computational cost. This centroid can be viewed as the "most classical" variable of a quantum system. Earlier work has shown that one can use a pairwise centroid pseudo-potential to simulate the quantum dynamics of hydrogen in the bulk phase at 25 K and 14 K [Chem. Phys. Lett. 249, 231 (1996)]. Bulk hydrogen, however, freezes below 14 K, so we focus on hydrogen clusters and nanodroplets in the very low temperature regime in order to study their structural behaviour. The calculation of the effective centroid potential is addressed along with its use in the context of molecular dynamics simulations. The effective pseudo-potential of a cluster is temperature dependent and behaves similarly to its bulk-phase counterpart. Centroid structural properties in three-dimensional space are presented and compared to the results of reference path-integral Monte Carlo simulations. The centroid pseudo-potential approach yields a great reduction in computational cost, and for large cluster sizes the approximate pseudo-potential results are in agreement with the exact reference calculations. An approach to deconvolute centroid structural properties in order to obtain real-space results for hydrogen clusters over a wide range of sizes is also presented. The extension of the approach to the treatment of confined hydrogen is discussed, and concluding remarks are presented.
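For reference, the path centroid variable and the effective (centroid) potential it defines can be written in their standard textbook form; this notation is supplied for illustration and is not quoted from the thesis:

```latex
x_c = \frac{1}{\beta\hbar}\int_0^{\beta\hbar} x(\tau)\,d\tau ,
\qquad
e^{-\beta V_c(x_c)} \propto \oint \mathcal{D}x(\tau)\,
\delta\!\left(x_c - \frac{1}{\beta\hbar}\int_0^{\beta\hbar} x(\tau)\,d\tau\right)
e^{-S[x(\tau)]/\hbar}
```

Molecular dynamics is then run on the smoother, more classical surface V_c rather than the bare potential, which is qualitatively where the computational savings described above come from.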

Variable Splitting as a Key to Efficient Image Reconstruction

Dolui, Sudipto January 2012
The reconstruction of digital images from degraded measurements has always been of central importance in numerous applications of imaging sciences. In real life, acquired imaging data are typically contaminated by various degradation phenomena, usually related to the imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover a close approximation of it, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration that are both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting as a tool for simplifying complex reconstruction problems by replacing them with a sequence of simpler, and therefore easily solvable, ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the problem of reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data, as well as the problem of blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to the problem of image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of neighbourhood operation as well as to derive a unifying approach to the denoising of imaging data under a variety of different noise scenarios.
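The flavour of variable splitting described here can be illustrated on the simplest possible case: denoising with an l1 penalty, split as min f(x) + g(z) subject to x = z and solved by alternating easy subproblems. The sketch below is generic ADMM-style splitting, which is closely related to Bregman-type splitting but is assumed here for illustration rather than taken from the thesis.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1_denoise(b, lam=0.5, rho=1.0, iters=200):
    """Split min_x 0.5||x - b||^2 + lam*||x||_1 via the constraint x = z:
    each iteration solves one smooth and one shrinkage subproblem."""
    z = np.zeros_like(b)
    u = np.zeros_like(b)                      # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # smooth (quadratic) subproblem
        z = soft(x + u, lam / rho)             # separable shrinkage subproblem
        u += x - z                             # dual update
    return z

b = np.array([3.0, -0.2, 0.1, -2.0])
x = admm_l1_denoise(b, lam=0.5)
```

For this toy problem the split iteration converges to the closed-form soft-threshold solution, which makes it a convenient self-check of the splitting machinery.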

• LSE and MSE Optimum Deconvolution

Aktas, Metin 01 July 2004
In this thesis, we consider the deconvolution problem when the channel is known a priori. LSE and MSE optimum solutions are investigated with deterministic and statistical approaches. We derive closed-form LSE expressions and investigate the factors that affect FIR inverse filters. It turns out that minimum LSE can be obtained when the system zeros are distributed homogeneously on the z-plane. We propose partition-based FIR-IIR inverse filters, in which the FIR and IIR parts are selected by partitioning the channel zeros into two regions and using the assigned zeros to design a best-delay FIR and an all-pole IIR inverse filter. Three partitioning methods are presented, namely unit-circle-based, ring-based, and optimum partitioning. The ring-based and optimum-partitioning FIR-IIR inverse filters perform better than the best-delay FIR inverse filter of the same complexity by about 4-5 dB. For noisy observations, it is shown that noise should also be considered in the delay selection and partitioning. We extend our results to the design of MSE optimum statistical inverse filters. It is shown that best-delay FIR-IIR inverse filters are less sensitive to estimation errors than IIR Wiener filters and perform better than FIR Wiener filters; furthermore, they are always causal and stable, making them suitable for real-time implementation. When the statistical and deterministic filters are compared, the statistical filters perform better by about 1-2 dB at low SNR, while the deterministic filters perform better by about 0.5-1 dB at high SNR.
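The best-delay FIR inverse filter mentioned above can be sketched directly: build the channel's convolution matrix, target a delayed unit impulse, solve in the least-squares sense, and scan the delay for the smallest residual. The channel taps and filter length below are illustrative choices, not values from the thesis.

```python
import numpy as np

def ls_inverse_filter(h, num_taps, delay):
    """Least-squares FIR inverse of channel h for a given target delay."""
    n = len(h) + num_taps - 1
    H = np.zeros((n, num_taps))            # convolution (Toeplitz) matrix
    for k in range(num_taps):
        H[k:k + len(h), k] = h
    target = np.zeros(n)
    target[delay] = 1.0                    # delayed unit impulse
    g, *_ = np.linalg.lstsq(H, target, rcond=None)
    lse = float(np.sum((H @ g - target) ** 2))
    return g, lse

h = np.array([0.5, 1.0])                   # maximum-phase channel (zero at z = -2)
num_taps = 30
errors = [ls_inverse_filter(h, num_taps, d)[1] for d in range(num_taps + 1)]
best_delay = int(np.argmin(errors))
```

For this maximum-phase channel the smallest residual occurs at a large delay, which is the point the abstract makes: delay selection matters, especially for zeros outside the unit circle.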

• Inverse problems in medical ultrasound images - applications to image deconvolution, segmentation and super-resolution

Zhao, Ningning 20 October 2016
In the field of medical image analysis, ultrasound is a core imaging modality thanks to its real-time, easy-to-use, non-ionizing, and low-cost nature. Ultrasound imaging is used in numerous clinical applications, such as fetal monitoring, diagnosis of cardiac diseases, and flow estimation. Classical applications involve tissue characterization, tissue motion estimation, and image quality enhancement (contrast, resolution, signal-to-noise ratio). However, one of the major problems with ultrasound images is the presence of noise in the form of a granular pattern called speckle. Speckle noise leads to relatively poor image quality compared with other medical imaging modalities, which limits the applications of medical ultrasound imaging. In order to better understand and analyze ultrasound images, several device-based techniques have been developed during the last 20 years. The object of this PhD thesis is to propose new image processing methods that improve ultrasound image quality using post-processing techniques. First, we propose a Bayesian method for joint deconvolution and segmentation of ultrasound images based on the tight relationship between the two problems. The problem is formulated as an inverse problem solved within a Bayesian framework. Because the posterior distribution associated with the proposed Bayesian model is intractable, we investigate a Markov chain Monte Carlo (MCMC) technique which generates samples distributed according to the posterior and uses these samples to build estimators of the ultrasound image. In a second step, we propose a fast single-image super-resolution framework using a new analytical solution to $\ell_2$-$\ell_2$ problems (i.e., $\ell_2$-norm regularized quadratic problems), which is applicable to both medical ultrasound images and piecewise-constant or natural images. In a third step, blind deconvolution of ultrasound images is studied by considering two strategies: i) a Gaussian prior for the PSF within a Bayesian framework, and ii) an alternating optimization method.
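The flavour of the analytical l2-l2 solution can be illustrated in the simpler deblurring setting (no downsampling operator): under circular boundary conditions the regularized normal equations diagonalize in the Fourier domain and admit a one-line closed form. A hedged numpy sketch with an assumed well-conditioned PSF and an illustrative regularization weight:

```python
import numpy as np

def l2_deblur(y, psf, lam=1e-3):
    """Closed-form Tikhonov (l2-l2) deblurring under circular boundary
    conditions: the normal equations diagonalize in the Fourier domain."""
    H = np.fft.fft2(psf, s=y.shape)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
psf = np.zeros((32, 32))
psf[0, 0], psf[0, 1], psf[1, 0], psf[1, 1] = 0.7, 0.1, 0.1, 0.1  # mild blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = l2_deblur(blurred, psf, lam=1e-6)
```

The super-resolution solution in the thesis additionally treats the decimation operator analytically; this sketch only shows the Fourier diagonalization idea that makes the solution fast.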

• Evaluation of the deconvolution method on shallow seismic data

Allan Segovia Spadini 23 April 2012
In this research, the deconvolution method was studied and adapted to the shallow scale of investigation for estimating the seismic wavelet and the Earth's impulse response. Deterministic and statistical (blind) procedures were evaluated on synthetic data and on real data acquired with impact sources and a pseudo-random source.
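A classical deterministic scheme of the kind evaluated in such studies is spectral division with a known (or estimated) wavelet, stabilized by a water level. A minimal sketch with a synthetic band-limited impact pulse and sparse reflectivity; all values are illustrative, not from the thesis:

```python
import numpy as np

def water_level_deconv(trace, wavelet, level=0.01):
    """Deterministic deconvolution by spectral division; the water level
    guards against division by near-zero wavelet spectrum values."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    D = np.fft.rfft(trace, n)
    mag = np.abs(W)
    floor = level * mag.max()
    W_safe = np.where(mag < floor, floor * np.exp(1j * np.angle(W)), W)
    return np.fft.irfft(D / W_safe, n)

# Sparse reflectivity convolved with a short band-limited impact pulse
t = np.arange(-4, 5) / 8.0
wavelet = np.exp(-t ** 2 / (2 * 0.1 ** 2))
refl = np.zeros(256)
refl[[40, 90, 91, 180]] = [1.0, -0.5, 0.3, 0.45]
trace = np.convolve(refl, wavelet)[:256]
est = water_level_deconv(trace, wavelet)
```

For wavelets with deep spectral notches (a zero-mean source pulse, for instance) the water level is what keeps the division from amplifying noise at those frequencies.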

• Hydroxyapatite dosimetric studies by electron paramagnetic resonance and thermoluminescence

Luiz Carlos de Oliveira 26 February 2010
Dosimetric studies on hydroxyapatite (HAp) can be used to determine the absorbed dose in hard tissues in several situations, such as radiological accidents, control of sterilization processes, and archaeological dating. This PhD thesis presents studies of the radiation dose response of HAp assessed by electron paramagnetic resonance (EPR) as well as by thermoluminescence (TL). The fossil mammalian fauna from the Coastal Plain of Rio Grande do Sul State has been known since the late nineteenth century; however, its biostratigraphic and chronostratigraphic context is still poorly understood. The present work describes the results of electron spin resonance (ESR) dating of eleven teeth of extinct mammals collected along the Chuí Creek and the coastline in the south of the Rio Grande do Sul coast. The ages obtained for these samples contribute to a better understanding of the origin of these fossil deposits. In a second stage of this work, we propose a new EPR spectrum deconvolution (decomposition) procedure aimed at dosimetry and dating. The method uses functions from the free EasySpin software package combined with function-minimization methods. After validation, the method was applied to the spectrum decomposition of two Stegomastodon waringi enamel samples from the northeast of Brazil. The decomposition quantifies the effect of components superposed on the dosimetric signal on the accumulated dose calculation, and proved helpful in improving the accuracy of the dose determination. In the last stage of this work, synthetic A-type carbonated hydroxyapatite and natural hydroxyapatite extracted from fossil teeth were characterized by TL. The results show that both types of sample respond to ionizing radiation dose. However, the short lifetime of the thermoluminescent glow peak makes this type of sample inadequate for dating by TL dosimetry.
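The linear-amplitude step of such a spectrum decomposition can be sketched in a few lines: given candidate component lineshapes (here first-derivative Gaussians, a common CW-EPR form), the amplitudes that best explain the measured spectrum follow from least squares. This numpy sketch stands in for the EasySpin-plus-minimization procedure, which also fits the nonlinear lineshape parameters; all shapes and amplitudes below are synthetic.

```python
import numpy as np

def gaussian_deriv(x, center, width):
    """First-derivative Gaussian lineshape, as seen in CW-EPR spectra."""
    g = np.exp(-0.5 * ((x - center) / width) ** 2)
    return -(x - center) / width ** 2 * g

x = np.linspace(-10, 10, 400)
# Dosimetric signal overlapped by a broader interfering component
basis = np.column_stack([gaussian_deriv(x, 0.0, 1.0),
                         gaussian_deriv(x, 1.5, 2.5)])
true_amps = np.array([2.0, 0.8])
spectrum = basis @ true_amps

# Linear least-squares recovers the component amplitudes
amps, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
```

Separating the amplitudes this way is what lets one subtract the interfering components before reading the accumulated dose off the dosimetric signal.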

Efficient methodologies for single-image blind deconvolution and deblurring

Khan, Aftab January 2014
The Blind Image Deconvolution/Deblurring (BID) problem was recognised in the early 1960s, but it remains a challenge for the image processing research community to find an efficient, reliable and, most importantly, diversely applicable deblurring scheme. The main difficulty arises from little or no prior information about the image or the blurring process, as well as the lack of optimal restoration filters to reduce or completely eliminate the blurring effect. Moreover, restoration can be marred by the two common side effects of deblurring, namely noise amplification and ringing artefacts, which arise in the deblurred image due to an unrealizable or imperfect restoration filter. Also, a scheme that can process different types of blur, especially in real images, is yet to be realized to a satisfactory level. This research is focused on the development of blind restoration schemes for real-life blurred images. The primary objective is to design a BID scheme that is robust in terms of Point Spread Function (PSF) estimation, efficient in terms of restoration speed, and effective in terms of restoration quality. Such a scheme requires a deblurring measure to act as feedback on the quality of the deblurred image and to guide the estimation of the blurring PSF; the blurred image and the estimated PSF can then be passed to any classical restoration filter for deblurring. The deblurring measures presented in this research include blind non-Gaussianity measures as well as blind Image Quality Measures (IQMs). These measures are blind in the sense that they gauge the quality of an image directly, without reference to a high-quality image. The non-Gaussianity measures include spatial and spectral kurtosis measures, while the image quality analysers include the Blind/Reference-less Image Spatial QUality Evaluator (BRISQUE), the Natural Image Quality Evaluator (NIQE) index, and a Reblurring-based Peak Signal-to-Noise Ratio (RPSNR) measure. BRISQUE, NIQE, and spectral kurtosis are introduced for the first time as deblurring measures for BID; RPSNR is a novel full-reference yet blind IQM designed and used in this research. Experiments were conducted on different image datasets and real-life blurred images. Optimization of the BID schemes has been achieved using a gradient descent based scheme and a Genetic Algorithm (GA). Quantitative results based on full-reference and no-reference IQMs show BRISQUE to be a robust and computationally efficient blind feedback quality measure. Both parametric and arbitrarily shaped (non-parametric, or generic) PSFs were treated for the blind deconvolution of images. The parametric forms of PSF include uniform Gaussian, motion, and out-of-focus blur; the arbitrarily shaped PSFs comprise blurs with much more complex shapes that cannot easily be modelled in parametric form. A novel scheme for arbitrarily shaped PSF estimation and blind deblurring has been designed, implemented, and tested on artificial and real-life blurred images. The scheme provides a unified basis for the estimation of both parametric and arbitrarily shaped PSFs using the BRISQUE quality measure in conjunction with a GA. Full-reference and no-reference IQMs have been utilised to gauge the quality of deblurred images; in the real BID case, only no-reference IQMs can be employed because the high-quality reference image is unavailable. Quantitative results on these images demonstrate the restoration ability of the BID scheme. The significance of this work lies in the BID scheme's ability to handle parametric and arbitrarily shaped PSFs with a single algorithm, for single-shot blurred images, with enhanced optimization through the gradient descent scheme and the GA in conjunction with multiple feedback IQMs.
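Of the blind measures listed, the kurtosis idea is the simplest to sketch: sharp images have sparse, heavy-tailed gradient distributions, so blurring lowers the gradient kurtosis. A minimal spatial-kurtosis sharpness proxy follows; the test image and blur are illustrative, and this is not the thesis's exact measure.

```python
import numpy as np

def gradient_kurtosis(img):
    """Blind sharpness proxy: kurtosis of horizontal gradient samples.
    Sharp images have sparse, heavy-tailed gradients (high kurtosis);
    blurring spreads the edges and lowers the kurtosis."""
    g = np.diff(img, axis=1).ravel()
    g = g - g.mean()
    m2 = np.mean(g ** 2)
    return float(np.mean(g ** 4) / (m2 ** 2 + 1e-12))

rng = np.random.default_rng(0)
# Piecewise-constant test image: sparse, strong edges
img = np.repeat(np.repeat(rng.random((16, 16)), 8, axis=0), 8, axis=1)
kernel = np.ones(7) / 7.0                  # 7-tap horizontal box blur
blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                              1, img)
```

A no-reference score of this kind can serve as the feedback signal in a PSF-search loop: candidate PSFs whose deconvolution raises the measure are kept, without ever consulting a ground-truth image.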
