81

Compressed Sensing in the Presence of Side Information

Rostami, Mohammad January 2012
Reconstruction of continuous signals from a number of their discrete samples is central to digital signal processing. Digital devices can only process discrete data, so processing continuous signals requires discretization. After discretization, the possibility of unique reconstruction of the source signal from its samples is crucial. Classical sampling theory provides a bound on the sampling rate for unique source reconstruction, known as the Nyquist sampling rate. Recently, a new sampling scheme, Compressive Sensing (CS), has been formulated for sparse signals. CS is an active area of research in signal processing. It has revolutionized the classical sampling theorems and has provided a new scheme to sample and reconstruct sparse signals uniquely, below the Nyquist sampling rate. A signal is called (approximately) sparse when a relatively large number of its elements are (approximately) equal to zero. For the class of sparse signals, sparsity can be viewed as prior information about the source signal. CS has found numerous applications and has improved some image acquisition devices. Interesting instances of CS arise when, apart from sparsity, side information is available about the source signals. The side information can concern the source structure, distribution, etc. Such cases can be viewed as extensions of classical CS, in which we are interested in incorporating the side information either to improve the quality of the source reconstruction or to decrease the number of samples required for accurate reconstruction. A general CS problem can be transformed into an equivalent optimization problem. In this thesis, a special case of CS with side information about the feasible region of the equivalent optimization problem is studied. It is shown that in such cases uniqueness and stability of the equivalent optimization problem still hold. An efficient reconstruction method is then proposed. To demonstrate the practical value of the proposed scheme, the algorithm is applied to two real-world applications: image deblurring in optical imaging and surface reconstruction in the gradient field. Experimental results are provided to further investigate and confirm the effectiveness and usefulness of the proposed scheme.
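
As a rough illustration of the kind of problem studied here (not the thesis's specific algorithm), the sketch below recovers a sparse signal from random measurements by iterative soft-thresholding, with side information modelled as a known feasible set (here, non-negativity) enforced by projection after each step. All names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 8                       # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 2.0, k)  # non-negative sparse source
y = A @ x_true

lam = 0.01                                 # l1 weight
step = 1.0 / np.linalg.norm(A, 2) ** 2     # step size <= 1/||A||^2 for convergence
x = np.zeros(n)
for _ in range(500):
    x = x - step * A.T @ (A @ x - y)                            # gradient step on data fidelity
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)    # soft-thresholding (sparsity prior)
    x = np.maximum(x, 0.0)                                      # projection onto side-information set (x >= 0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```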
82

A Feynman Path Centroid Effective Potential Approach for the Study of Low Temperature Parahydrogen Clusters and Droplets

Yang, Jing January 2012
The quantum simulation of large molecular systems is a formidable task. We explore the use of effective potentials based on the Feynman path centroid variable in order to simulate large quantum clusters at a reduced computational cost. This centroid can be viewed as the “most” classical variable of a quantum system. Earlier work has shown that one can use a pairwise centroid pseudo-potential to simulate the quantum dynamics of hydrogen in the bulk phase at 25 K and 14 K [Chem. Phys. Lett. 249, 231 (1996)]. Bulk hydrogen, however, freezes below 14 K, so we focus on hydrogen clusters and nanodroplets in the very low temperature regime in order to study their structural behaviours. The calculation of the effective centroid potential is addressed along with its use in the context of molecular dynamics simulations. The effective pseudo-potential of a cluster is temperature dependent and exhibits behaviour similar to that in the bulk phase. Centroid structural properties in three-dimensional space are presented and compared to the results of reference path-integral Monte Carlo simulations. The centroid pseudo-potential approach yields a great reduction in computational cost. For large cluster sizes, the approximate pseudo-potential results are in agreement with the exact reference calculations. An approach to deconvolute centroid structural properties in order to obtain real-space results for hydrogen clusters of a wide range of sizes is also presented. The extension of the approach to the treatment of confined hydrogen is discussed, and concluding remarks are presented.
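
In a simulation, the centroid pseudo-potential enters simply as a pairwise, temperature-dependent interaction. The toy sketch below sums a tabulated pair pseudo-potential over all centroid pairs of a cluster; the tabulated values are placeholders, not the potential computed in the thesis.

```python
import numpy as np

# Hypothetical tabulated centroid pair pseudo-potential V(r) on a radial grid (placeholder values).
r_grid = np.linspace(2.0, 12.0, 200)                           # pair separation (illustrative units)
v_grid = 4.0 * ((3.0 / r_grid) ** 12 - (3.0 / r_grid) ** 6)    # stand-in LJ-like shape

def cluster_energy(positions):
    """Total pairwise pseudo-potential energy of a cluster of centroid positions (N x 3 array)."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            energy += np.interp(r, r_grid, v_grid)             # linear interpolation of the table
    return energy

rng = np.random.default_rng(1)
cluster = 3.5 * rng.standard_normal((13, 3))                   # a random 13-particle cluster (illustrative)
print("pseudo-potential energy:", cluster_energy(cluster))
```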
83

Variable Splitting as a Key to Efficient Image Reconstruction

Dolui, Sudipto January 2012
The problem of reconstruction of digital images from their degraded measurements has always been a problem of central importance in numerous applications of imaging sciences. In real life, acquired imaging data is typically contaminated by various types of degradation phenomena which are usually related to the imperfections of image acquisition devices and/or environmental effects. Accordingly, given the degraded measurements of an image of interest, the fundamental goal of image reconstruction is to recover its close approximation, thereby "reversing" the effect of image degradation. Moreover, the massive production and proliferation of digital data across different fields of applied sciences creates the need for methods of image restoration which would be both accurate and computationally efficient. Developing such methods, however, has never been a trivial task, as improving the accuracy of image reconstruction is generally achieved at the expense of an elevated computational burden. Accordingly, the main goal of this thesis has been to develop an analytical framework which allows one to tackle a wide scope of image reconstruction problems in a computationally efficient manner. To this end, we generalize the concept of variable splitting, as a tool for simplifying complex reconstruction problems through their replacement by a sequence of simpler and therefore easily solvable ones. Moreover, we consider two different types of variable splitting and demonstrate their connection to a number of existing approaches which are currently used to solve various inverse problems. In particular, we refer to the first type of variable splitting as Bregman Type Splitting (BTS) and demonstrate its applicability to the solution of complex reconstruction problems with composite, cross-domain constraints. As specific applications of practical importance, we consider the problem of reconstruction of diffusion MRI signals from sub-critically sampled, incomplete data as well as the problem of blind deconvolution of medical ultrasound images. Further, we refer to the second type of variable splitting as Fuzzy Clustering Splitting (FCS) and show its application to the problem of image denoising. Specifically, we demonstrate how this splitting technique allows us to generalize the concept of neighbourhood operation as well as to derive a unifying approach to denoising of imaging data under a variety of different noise scenarios.
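
As a generic illustration of variable splitting (not the BTS/FCS formulations developed in the thesis), the sketch below solves a 1D denoising problem min_x 0.5||x - y||^2 + lambda*||Dx||_1 by introducing the split z = Dx and alternating between a linear solve in x and soft-thresholding in z, in the spirit of split-Bregman/ADMM.

```python
import numpy as np

def tv_denoise_1d(y, lam=1.0, rho=2.0, n_iter=200):
    """ADMM/split-Bregman-style 1D total-variation denoising via the splitting z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)             # finite-difference operator, shape (n-1, n)
    x = y.copy()
    z = D @ x
    u = np.zeros(n - 1)                         # scaled dual variable
    lhs = np.eye(n) + rho * D.T @ D             # normal equations for the x-update
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, y + rho * D.T @ (z - u))        # quadratic subproblem in x
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-thresholding subproblem in z
        u = u + D @ x - z                                        # dual update
    return x

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, -0.5, 2.0], 50)   # piecewise-constant signal
noisy = clean + 0.2 * rng.standard_normal(clean.size)
print("residual norm:", np.linalg.norm(tv_denoise_1d(noisy, lam=0.5) - clean))
```

The point of the splitting is visible in the loop: neither subproblem is hard on its own, even though the original composite objective is.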
84

Application of Marine Magnetometer for Underwater Object Exploration: Assessment of Depth and Structural Index

Chang, En-Hsin 31 July 2012
Magnetic surveying is a common geophysical exploration technique. By measuring the magnetic field strength over a specific area, the characteristics and physical meaning of a target can be obtained through analysis of the Earth's magnetic field anomalies within a stratigraphic zone or archaeological site. In recent years, marine magnetometers have been employed in underwater archaeological expeditions in the waters surrounding Taiwan to search for ancient shipwrecks. The purpose of this study is to understand the relationship between magnetic anomalies and the magnetic object through various signal processing methods, including the calculation of horizontal and vertical derivatives using the fast Fourier transform (FFT) to eliminate the regional magnetic influence and isolate the anomaly characteristics of the target itself, as well as highlighting the location and boundaries of the magnetic source through the analytic signal. In addition, Euler deconvolution is used as a tool for magnetic source inversion. The theory of Euler deconvolution was first proposed by Thompson (1982); the method detects the magnetic source and estimates its location by choosing a suitable structural index. Hsu (2002) proposed an enhanced Euler deconvolution, a combined inversion for structural index and source location that uses the vertical derivative of the measured data. In this study, we first generate various anomalies as test models corresponding to different geometric shapes of magnetic source, and invert the position and structural index of each model by enhanced Euler deconvolution in both 2D and 3D. Moreover, a field experiment was carried out offshore of Dalinpu in Kaohsiung, taking CPC's pipelines buried under the seabed as the investigation objects and comparing the results with sub-bottom profiler data to assess the feasibility of the method for underwater exploration. Most of the estimated results in 2D agree with theory, but the 3D results are not significant owing to the lack of observations over the whole surface. In general, the method is concise and fast, and it is well suited to interpreting magnetic data for underwater object exploration.
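
For orientation, conventional 2D Euler deconvolution solves, within a sliding window, the homogeneity equation (x - x0)*dT/dx + (z - z0)*dT/dz = N*(B - T) for the source position (x0, z0) and background field B, given an assumed structural index N. A minimal least-squares sketch over one window of profile data is shown below; it is the textbook formulation, not the enhanced scheme of Hsu (2002) used in the thesis, and the synthetic anomaly is illustrative.

```python
import numpy as np

def euler_2d_window(x, z, T, dTdx, dTdz, N):
    """Least-squares Euler deconvolution for one window of 2D profile data.

    Solves (x - x0)*dT/dx + (z - z0)*dT/dz = N*(B - T) for x0, z0 and background B.
    """
    # Rearranged as: x0*dTdx + z0*dTdz + N*B = x*dTdx + z*dTdz + N*T
    A = np.column_stack([dTdx, dTdz, N * np.ones_like(T)])
    b = x * dTdx + z * dTdz + N * T
    (x0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, z0, B

# Synthetic check: a simple anomaly falling off as 1/r^2, for which the structural index is N = 2.
x = np.linspace(-50.0, 50.0, 101)
z = np.zeros_like(x)                        # observations on the surface z = 0
xs, zs = 5.0, 10.0                          # true source position (depth 10)
r2 = (x - xs) ** 2 + (z - zs) ** 2
T = 1.0e4 / r2
dTdx = np.gradient(T, x)                    # numerical horizontal derivative
dTdz = 2.0e4 * (zs - z) / r2 ** 2           # analytic vertical derivative of 1/r^2
print(euler_2d_window(x, z, T, dTdx, dTdz, N=2))
```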
85

Automation of the Laguerre Expansion Technique for Analysis of Time-resolved Fluorescence Spectroscopy Data

Dabir, Aditi Sandeep December 2009
Time-resolved fluorescence spectroscopy (TRFS) is a powerful analytical tool for quantifying the biochemical composition of organic and inorganic materials. The potential of TRFS as a nondestructive clinical tool for tissue diagnosis has recently been demonstrated. To facilitate the translation of TRFS technology to the clinical arena, algorithms for online TRFS data analysis are greatly needed. A fast, model-free TRFS deconvolution algorithm based on the Laguerre expansion method has previously been introduced, demonstrating faster performance than standard multiexponential methods and the ability to estimate complex fluorescence decays without any a priori assumption about their functional form. One limitation of this method, however, was the need to select, a priori, the Laguerre parameter a and the expansion order, which are crucial for accurate estimation of the fluorescence decay. In this thesis, a new implementation of the Laguerre deconvolution method is introduced, in which a nonlinear least-squares optimization of the Laguerre parameter is performed and the optimal expansion order is selected based on a Minimum Description Length (MDL) criterion. In addition, the zero-time delay between the recorded instrument response and the fluorescence decay is estimated based on a normalized mean square error criterion. The method was fully validated on fluorescence lifetime, endogenous tissue fluorophores, and human tissue. The automated Laguerre deconvolution method is expected to facilitate online applications of TRFS, such as clinical real-time tissue diagnosis.
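
The core of the Laguerre expansion technique is to express the fluorescence impulse response as a finite sum of discrete Laguerre functions and fit the expansion coefficients by linear least squares against the measured decay (the instrument response convolved with the impulse response). The sketch below builds the commonly used discrete Laguerre basis via the standard recursion (in the style of Marmarelis) and performs that fit for a fixed Laguerre parameter and order; the automated selection of the parameter, the order, and the zero-time delay described in the thesis is not reproduced, and all signals here are synthetic.

```python
import numpy as np

def laguerre_basis(alpha, order, length):
    """Discrete Laguerre functions b_j[n], j = 0..order-1, built by the standard recursion."""
    b = np.zeros((order, length))
    n = np.arange(length)
    b[0] = np.sqrt(1.0 - alpha) * alpha ** (n / 2.0)
    sa = np.sqrt(alpha)
    for j in range(1, order):
        for m in range(length):
            b[j, m] = (sa * (b[j, m - 1] if m > 0 else 0.0)
                       + sa * b[j - 1, m]
                       - (b[j - 1, m - 1] if m > 0 else 0.0))
    return b

rng = np.random.default_rng(3)
length, alpha, order = 256, 0.85, 6
t = np.arange(length)
irf = np.exp(-0.5 * ((t - 20) / 4.0) ** 2)            # synthetic instrument response
decay = np.exp(-t / 40.0) + 0.5 * np.exp(-t / 8.0)    # synthetic fluorescence impulse response
measured = np.convolve(irf, decay)[:length] + 0.01 * rng.standard_normal(length)

B = laguerre_basis(alpha, order, length)
# Convolve each Laguerre function with the IRF, then fit the coefficients by least squares.
V = np.stack([np.convolve(irf, B[j])[:length] for j in range(order)], axis=1)
coeffs, *_ = np.linalg.lstsq(V, measured, rcond=None)
decay_est = B.T @ coeffs
print("decay reconstruction error:", np.linalg.norm(decay_est - decay) / np.linalg.norm(decay))
```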
86

Deconvolution in Random Effects Models via Normal Mixtures

Litton, Nathaniel A. August 2009
This dissertation describes a minimum distance method for density estimation when the variable of interest is not directly observed. It is assumed that the underlying target density can be well approximated by a mixture of normals. The method compares a density estimate of observable data with a density of the observable data induced from assuming the target density can be written as a mixture of normals. The goal is to choose the parameters in the normal mixture that minimize the distance between the density estimate of the observable data and the induced density from the model. The method is applied to the deconvolution problem to estimate the density of $X_{i}$ when the variable $Y_{i}=X_{i}+Z_{i}$, $i=1,\ldots,n$, is observed, and the density of $Z_{i}$ is known. Additionally, it is applied to a location random effects model to estimate the density of $Z_{ij}$ when the observable quantities are $p$ data sets of size $n$ given by $X_{ij}=\alpha_{i}+\gamma Z_{ij}$, $i=1,\ldots,p$, $j=1,\ldots,n$, where the densities of $\alpha_{i}$ and $Z_{ij}$ are both unknown. The performance of the minimum distance approach in the measurement error model is compared with the deconvoluting kernel density estimator of Stefanski and Carroll (1990). In the location random effects model, the minimum distance estimator is compared with the explicit characteristic function inversion method from Hall and Yao (2003). In both models, the methods are compared using simulated and real data sets. In the simulations, performance is evaluated using an integrated squared error criterion. Results indicate that the minimum distance methodology is comparable to the deconvoluting kernel density estimator and outperforms the explicit characteristic function inversion method.
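
In the measurement-error setting the idea is concrete: if $X$ is modelled as a mixture of normals and $Z \sim N(0,\sigma^2)$ with $\sigma$ known, then $Y = X + Z$ is also a mixture of normals with component variances inflated by $\sigma^2$, so the mixture parameters can be chosen to make this induced density match a density estimate of the observed $Y$. A rough sketch of that fit (integrated squared distance minimized numerically) is given below; the exact distance, parameterization, and optimizer used in the dissertation may differ.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(4)
sigma_z = 0.5                                             # known measurement-error SD
x = np.concatenate([rng.normal(-2.0, 0.6, 600), rng.normal(1.5, 0.8, 400)])  # latent X
y = x + rng.normal(0.0, sigma_z, x.size)                  # observed Y = X + Z

kde = stats.gaussian_kde(y)                               # density estimate of the observable data
grid = np.linspace(y.min() - 1, y.max() + 1, 400)
f_hat = kde(grid)

def induced_density(params, pts):
    """Density of Y implied by a 2-component normal mixture for X plus N(0, sigma_z^2) noise."""
    w, m1, m2, s1, s2 = params
    w = 1.0 / (1.0 + np.exp(-w))                          # keep the weight in (0, 1)
    s1, s2 = abs(s1), abs(s2)
    return (w * stats.norm.pdf(pts, m1, np.sqrt(s1**2 + sigma_z**2))
            + (1 - w) * stats.norm.pdf(pts, m2, np.sqrt(s2**2 + sigma_z**2)))

def ise(params):
    """Integrated squared error between the KDE of Y and the induced mixture density."""
    diff = f_hat - induced_density(params, grid)
    return np.trapz(diff**2, grid)

res = optimize.minimize(ise, x0=[0.0, -1.0, 1.0, 1.0, 1.0], method="Nelder-Mead")
w, m1, m2, s1, s2 = res.x
print("fitted latent-mixture means:", m1, m2)             # deconvolved component means for X
```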
87

Blind Deconvolution Techniques in Identifying fMRI-Based Brain Activation

Akyol, Halime Iclal 01 November 2011
In this thesis, we conduct functional Magnetic Resonance Imaging (fMRI) data analysis with the aim of grouping brain voxels according to their responsiveness to a neural task. We mathematically treat the fMRI signals as the convolution of the neural stimulus with the hemodynamic response function (HRF). We first estimate a time series of HRFs for each of the observed fMRI signals from a given set and then cluster them in order to identify groups of brain voxels. The HRF estimation problem is studied within a Bayesian framework through a blind deconvolution algorithm using a MAP approach under completely unsupervised and model-free settings, i.e., the stimulus is assumed to be unknown and no particular shape is assumed for the HRF. Using only a given fMRI signal together with a weak Gaussian prior imposed on the HRF favouring 'smoothness', our method successfully estimates all the components of our framework: the HRF, the stimulus and the noise process. We then propose to use a modified version of the Hausdorff distance to detect similarities within the space of HRFs, spectrally transform the data using Laplacian Eigenmaps, and finally cluster them through EM clustering. According to our simulations, our method proves to be robust to lag, sampling jitter, quadratic drift and additive white Gaussian noise (AWGN). In particular, we obtained 100% sensitivity and specificity in detecting active and passive voxels in our real data experiments. To conclude, we propose a new framework for the mathematical treatment of voxel-based fMRI data analysis, and our findings show that even when the HRF is unpredictable due to variability in cognitive processes, one can still obtain very high quality activation detection through the method proposed in this thesis.
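
As a simplified illustration of one ingredient of this pipeline, HRF estimation under a Gaussian smoothness prior, the sketch below solves the MAP/ridge problem min_h ||y - S h||^2 + lambda*||D h||^2 for a known stimulus convolution matrix S and a second-difference operator D. Note the thesis treats the stimulus as unknown (blind deconvolution) and adds Hausdorff-distance similarity, Laplacian Eigenmaps and EM clustering on top; none of that is reproduced here, and all signals are synthetic.

```python
import numpy as np

def estimate_hrf(y, stimulus, hrf_len=20, lam=1.0):
    """MAP-style HRF estimate with a smoothness (second-difference) prior, assuming a known stimulus."""
    n = len(y)
    # Convolution matrix S such that S @ h equals stimulus convolved with h (truncated to n samples).
    S = np.zeros((n, hrf_len))
    for k in range(hrf_len):
        S[k:, k] = stimulus[: n - k]
    D = np.diff(np.eye(hrf_len), n=2, axis=0)             # second-difference roughness penalty
    return np.linalg.solve(S.T @ S + lam * D.T @ D, S.T @ y)

rng = np.random.default_rng(5)
n, hrf_len = 200, 20
stimulus = (rng.random(n) < 0.1).astype(float)            # sparse event train (illustrative)
t = np.arange(hrf_len)
hrf_true = t**3 * np.exp(-t / 1.5)
hrf_true /= hrf_true.max()
y = np.convolve(stimulus, hrf_true)[:n] + 0.05 * rng.standard_normal(n)

hrf_est = estimate_hrf(y, stimulus, hrf_len, lam=0.5)
print("HRF correlation:", np.corrcoef(hrf_est, hrf_true)[0, 1])
```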
88

A Practical Solution for Eliminating Artificial Image Contrast in Aberration-Corrected TEM

Tanaka, Nobuo, Kondo, Yushi, Kawai, Tomoyuki, Yamasaki, Jun 02 1900
No description available.
89

Pressure transient testing and productivity analysis for horizontal wells

Cheng, Yueming 15 November 2004
This work studied productivity evaluation and well test analysis of horizontal wells. Its major components are a 3D coupled reservoir/wellbore model, a productivity evaluation, a deconvolution technique, and a nonlinear regression technique for improving horizontal well test interpretation. The 3D coupled reservoir/wellbore model was developed using the boundary element method for realistic description of the performance behavior of horizontal wells. The model can flexibly handle multiple types of inner and outer boundary conditions, and can accurately simulate transient tests and long-term production of horizontal wells; it can therefore serve as a powerful tool in productivity evaluation and analysis of well tests for horizontal wells. Uncertainty in productivity prediction was preliminarily explored, and it was demonstrated that productivity estimates can be distributed over a broad range because of the uncertainties in reservoir/well parameters. A new deconvolution method based on a fast-Fourier-transform algorithm is presented. This technique can denoise "noisy" pressure and rate data, and can deconvolve pressure drawdown and buildup test data distorted by wellbore storage. For cases with no rate measurements, a "blind" deconvolution method was developed to restore the pressure response free of wellbore storage distortion and to detect the afterflow/unloading rate function using Fourier analysis of the observed pressure data. This deconvolution method can unveil the early-time behavior of a reservoir system masked by variable-wellbore-storage distortion, and thus provides a powerful tool to improve pressure transient test interpretation. The applicability of the method is demonstrated with a variety of synthetic and actual field cases for both oil and gas wells. A practical nonlinear regression technique for analysis of horizontal well tests is also presented. This technique can provide accurate and reliable estimation of well-reservoir parameters if downhole flow rate data are available. Without flow rate measurements, reasonably reliable parameter estimation can still be achieved by using the rate detected from blind deconvolution. The technique has the advantages of eliminating the need to estimate the wellbore storage coefficient and of providing reasonable estimates of effective wellbore length. It offers a practical tool for enhancing horizontal well test interpretation, and its practical significance is illustrated by synthetic and actual field cases.
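
A bare-bones frequency-domain version of the rate-normalized deconvolution idea (not the specific algorithm developed in this work) is sketched below: by Duhamel's principle the measured pressure drop is the convolution of the sandface rate derivative with the unit-rate pressure response, so dividing their spectra, with a small regularization term to control noise amplification, recovers an estimate of the undistorted unit-rate response. All signals are synthetic and illustrative.

```python
import numpy as np

def fft_deconvolve(delta_p, rate_deriv, eps=1e-3):
    """Regularized FFT deconvolution: estimate the unit-rate response g from delta_p = rate_deriv * g."""
    n = len(delta_p)
    P = np.fft.rfft(delta_p, 2 * n)                  # zero-pad to reduce wrap-around (circular) effects
    Q = np.fft.rfft(rate_deriv, 2 * n)
    G = P * np.conj(Q) / (np.abs(Q) ** 2 + eps)      # Wiener-style regularized division
    return np.fft.irfft(G, 2 * n)[:n]

# Synthetic example: a smooth unit-rate response distorted by a wellbore-storage-like rate ramp.
rng = np.random.default_rng(6)
n = 512
t = np.arange(1, n + 1, dtype=float)
g_true = np.log(t) / 50.0                            # illustrative unit-rate pressure response
rate = 1.0 - np.exp(-t / 30.0)                       # afterflow/unloading-like sandface rate
rate_deriv = np.diff(np.concatenate([[0.0], rate]))  # Duhamel convolution uses the rate derivative
delta_p = np.convolve(rate_deriv, g_true)[:n] + 1e-3 * rng.standard_normal(n)

g_est = fft_deconvolve(delta_p, rate_deriv, eps=1e-4)
print("response error:", np.linalg.norm(g_est - g_true) / np.linalg.norm(g_true))
```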
90

Implementing Efficient Iterative 3D Deconvolution for Microscopy

Mehadi, Ahmed Shah January 2009
Both Gauss-Seidel iterative 3D deconvolution and Richardson-Lucy-like algorithms are used for high-noise microscopic medical image processing because of their stability and high-quality results. An approach to determine the differences between these two algorithms is presented in this thesis. It is shown that the convergence rate and the quality of the two algorithms are influenced by the size of the point spread function (PSF): larger PSF sizes cause faster convergence, but this effect falls off for the largest sizes. It is furthermore shown that the relaxation factor and the number of iterations influence the convergence rate of the two algorithms. Increasing the relaxation factor and the number of iterations improves convergence and can reduce the error of the deblurred image. Overrelaxation was also found to converge faster than underrelaxation for small numbers of iterations; however, a smaller final error can be achieved with underrelaxation. The choice of underrelaxation or overrelaxation factor is highly problem-specific and differs from one type of image to another. In addition, the influence of boundary conditions on the two algorithms in 3D iterative deconvolution is discussed. Implementation aspects are discussed, and it is concluded that cache memory is vital for achieving a fast implementation of iterative 3D deconvolution. A mix of the two algorithms has been developed and compared with the aforementioned Gauss-Seidel and Richardson-Lucy-like algorithms. The experiments indicate that, if the value of the relaxation parameter is optimized, the Richardson-Lucy-like algorithm has the best performance for 3D iterative deconvolution. / The resolution of images taken with a microscope is today limited by diffraction. To get around this, the image is enhanced digitally using a mathematical model of the physical process. This thesis compares two algorithms for solving the equations: Richardson-Lucy and Gauss-Seidel. Furthermore, the effect of parameters such as the extent of the point spread function and the regularization of the equation solver is studied.
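
For reference, a compact Richardson-Lucy iteration (the classic multiplicative update x <- x * H^T(y / Hx), here with FFT-based circular convolution and a flipped PSF as the adjoint) is sketched below in 2D; extending it to 3D only changes the array dimensions. The relaxation factors, boundary handling, and Gauss-Seidel variants studied in the thesis are not included, and the test image is illustrative.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution via FFT (same shape as img); the PSF is zero-padded and centred at (0, 0)."""
    pad = np.zeros_like(img)
    pad[: psf.shape[0], : psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def richardson_lucy(blurred, psf, n_iter=50):
    """Classic Richardson-Lucy deconvolution with a multiplicative update."""
    psf_flip = psf[::-1, ::-1]                       # adjoint of convolution = correlation
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        relative = blurred / np.maximum(fft_convolve(estimate, psf), 1e-12)
        estimate = estimate * fft_convolve(relative, psf_flip)
    return estimate

rng = np.random.default_rng(7)
img = np.zeros((64, 64))
img[20:30, 20:30] = 1.0                              # simple bright square as the true image
g = np.exp(-0.5 * ((np.arange(9) - 4) / 1.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()                                     # normalized Gaussian-like PSF
blurred = fft_convolve(img, psf) + 0.01 * rng.random((64, 64))

restored = richardson_lucy(blurred, psf, n_iter=100)
print("error before:", np.linalg.norm(blurred - img), " after:", np.linalg.norm(restored - img))
```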
