1

Multi-color Fluorescence In-situ Hybridization (M-FISH) Image Analysis Based on Sparse Representation Models

Li, Jingyao January 2015 (has links)
There are a variety of chromosomal abnormalities, such as translocation, duplication, deletion, insertion and inversion, which may cause severe diseases, e.g., cancers and birth defects. Multi-color fluorescence in-situ hybridization (M-FISH) is an imaging technique widely used for simultaneously detecting and visualizing these complex abnormalities in a single hybridization. Despite advances in fluorescence microscopy for chromosomal abnormality detection, the quality of the fluorescence images is still limited by spectral overlap, uneven intensity levels across multiple channels, background variations, and inhomogeneous intensity within channels. It is therefore critical but challenging to distinguish the different types of chromosomes accurately in order to detect chromosomal abnormalities from M-FISH images. The main contribution of this dissertation is an M-FISH image analysis pipeline that takes full advantage of the spatial and spectral information in M-FISH imaging. In addition, novel image analysis approaches such as sparse representation are applied in this work. The pipeline starts with preprocessing, which extracts the background to improve the quality of the raw images via a low-rank plus group lasso decomposition. Image segmentation is then performed by incorporating both spatial and spectral information through total variation (TV) and row-wise constraints. Finally, image classification is conducted by considering the structural information of neighboring pixels with a row-wise sparse representation model. At each step, new methods and algorithms were developed and compared with several widely used methods. The results show that (1) the preprocessing model improves the quality of the raw images; (2) the segmentation model outperforms both the fuzzy c-means (FCM) and improved adaptive fuzzy c-means (IAFCM) models in terms of correct ratio and false rate; and (3) the classification model corrects misclassifications to improve the accuracy of chromosomal abnormality detection, especially for complex inter-chromosomal rearrangements.
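As a point of reference for the classification step described above, the core of sparse-representation classification is to code a pixel's multi-channel spectral vector over a concatenation of class dictionaries and assign the class with the smallest reconstruction residual. The following is a minimal sketch of that generic idea using an ISTA solver for the l1 problem; the dictionaries, the parameter `lam`, and the plain (non-row-wise) sparsity model are illustrative assumptions, not the dissertation's pipeline.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_lasso(D, y, lam, n_iter=500):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by ISTA."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * D.T @ (D @ x - y), step * lam)
    return x

def classify_pixel(y, dictionaries, lam=0.05):
    """Assign y (a multi-channel spectral vector) to the class whose
    dictionary atoms give the smallest sparse reconstruction residual."""
    D = np.hstack(dictionaries)              # concatenated per-class dictionaries
    x = ista_lasso(D, y, lam)
    residuals, start = [], 0
    for Dc in dictionaries:
        xc = x[start:start + Dc.shape[1]]    # coefficients belonging to this class
        residuals.append(np.linalg.norm(y - Dc @ xc))
        start += Dc.shape[1]
    return int(np.argmin(residuals))
```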
2

Total Variation Based Restoration of Bilevel Waveforms

McCarter, Rebecca 27 April 2012 (has links)
A series of Total Variation (TV) based algorithms is presented for the restoration of bilevel waveforms from observed signals. The proposed model is discussed analytically and numerically via gradient descent minimization of the TV energy. The method is applied to the restoration of bilevel waveforms encoded within barcode images. A super-resolution technique is proposed as a dimensionality reduction of the image data; the result is a high-resolution image from which the encoded bilevel waveform is restored. Results are demonstrated on synthetic and real images.
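For readers unfamiliar with gradient-descent TV minimization, a minimal 1-D sketch follows: it descends a smoothed TV energy to restore a noisy two-level signal. The smoothing parameter `eps`, the step size, the fidelity weight `lam`, and the iteration count are illustrative assumptions, not values from the thesis, and the barcode super-resolution stage is not modeled.

```python
import numpy as np

def tv_restore_1d(f, lam=0.5, eps=1e-2, step=0.02, n_iter=5000):
    """Gradient descent on E(u) = 0.5*||u - f||^2 + lam * sum_i sqrt((Du)_i^2 + eps),
    a smoothed 1-D total variation energy; eps avoids the non-differentiability
    of |.| at zero."""
    u = f.copy()
    for _ in range(n_iter):
        du = np.diff(u)                      # forward differences (D u)
        w = du / np.sqrt(du ** 2 + eps)      # derivative of the smoothed |du|
        # Negative adjoint -D^T w, with Neumann-type boundary handling:
        div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
        u -= step * ((u - f) - lam * div)    # descend the full energy gradient
    return u

# Hypothetical usage: a bilevel waveform corrupted by noise, then restored.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.0, 1.0], 50)
noisy = truth + 0.2 * rng.standard_normal(truth.size)
restored = tv_restore_1d(noisy)
```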
3

Total variation and adjoint state methods for seismic wavefield imaging

Anagaw, Amsalu Y. 11 1900 (has links)
Many geophysical inverse problems are ill-posed and have to be regularized. The most commonly used solution methods for ill-posed problems rely on quadratic regularization, which yields smooth solutions. Solutions of this type are not suitable when the model parameter is piecewise continuous and blocky and edges are desired in the regularized solution. To avoid smoothing edges, which are very important attributes of an image, an edge-preserving (non-quadratic) regularization term has to be employed. Total Variation (TV) regularization is one of the most effective regularization techniques for allowing sharp edges and discontinuities in the solutions. This thesis studies edge-preserving TV regularization for a small-scale geophysical inverse problem: estimating the acoustic velocity perturbation from a multi-source-receiver geophysical experiment. The acoustic velocity perturbation is assumed to be piecewise continuous and blocky. The problem is based on linearized acoustic modeling within the framework of the single-scattering Born approximation from a known constant background medium. To solve this non-linear and ill-posed problem, an iterative scheme based on the conjugate gradient method is employed. The TV regularization method makes it possible to recover more useful information about velocity profiles from the measured seismic data. Although implementing the TV term requires more effort in controlling the smoothing and regularization parameters, the algorithm is strongly capable of marking discontinuities and preserving them from over-smoothing. / Geophysics
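The abstract does not spell out the conjugate-gradient scheme, but a standard way to combine a TV penalty with conjugate-gradient inner solves is iteratively reweighted least squares (lagged diffusivity), sketched below for a generic 1-D linear forward operator G. The IRLS formulation itself and all parameter values are assumptions for illustration, not the author's algorithm.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def tv_irls(G, d, lam=0.1, eps=1e-4, n_outer=10):
    """TV-regularized least squares, min_m ||G m - d||^2 + lam*TV(m),
    via iteratively reweighted least squares: each outer iteration solves
    a quadratic problem whose weights come from the previous model estimate."""
    n = G.shape[1]
    # First-difference operator D (shape (n-1, n)), so (D m)_i = m_{i+1} - m_i.
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    m = np.zeros(n)
    for _ in range(n_outer):
        w = 1.0 / np.sqrt((D @ m) ** 2 + eps)          # IRLS weights ~ 1/|D m|
        A = G.T @ G + lam * (D.T @ diags(w) @ D).toarray()
        m, _ = cg(A, G.T @ d, x0=m)                    # inner conjugate-gradient solve
    return m
```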
4

Total variation and adjoint state methods for seismic wavefield imaging

Anagaw, Amsalu Y. Unknown Date
No description available.
5

Numerical Algorithms for Discrete Models of Image Denoising

Zhao, Hanqing Unknown Date
No description available.
6

Numerical Algorithms for Discrete Models of Image Denoising

Zhao, Hanqing 11 1900 (has links)
In this thesis, we develop new models and efficient algorithms for image denoising. The total variation model of Rudin, Osher, and Fatemi (ROF) for image denoising is considered one of the most successful deterministic denoising models. It exploits the non-smooth total variation (TV) semi-norm to preserve discontinuities and to keep the edges of smooth regions sharp. Despite its simple form, the TV semi-norm results in a strongly nonlinear Euler-Lagrange equation and poses a computational challenge to solving the model efficiently. Moreover, the model produces the so-called staircase effect. In this thesis, we propose several new algorithms and models to address these problems. We study the discretized ROF model and propose a new algorithm that does not involve partial differential equations. Convergence of the algorithm is analyzed, and numerical results show that it is efficient and stable. We then introduce a denoising model that utilizes high-order differences to approximate piecewise smooth functions. This model eliminates undesirable staircases and improves both visual quality and signal-to-noise ratio. Our algorithm is generalized to solve the high-order models. A relaxation technique is proposed for the iteration scheme, aiming to accelerate the solution process. Finally, we propose a method combining total variation and wavelet packets to improve performance on texture-rich images. The ROF model is used to eliminate noise, and a wavelet packet transform is used to enhance textures. The numerical results show that the combined method exploits the advantages of both total variation and wavelet packets. / Mathematics
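The abstract does not detail the thesis's PDE-free algorithm; as a concrete point of comparison, Chambolle's dual projection method is another standard solver for the discretized ROF model that involves no Euler-Lagrange PDE. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero (Neumann) boundary at the far edge."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_chambolle(f, lam=0.1, tau=0.125, n_iter=200):
    """Chambolle's dual projection algorithm for the ROF model
    min_u TV(u) + ||u - f||^2 / (2*lam); tau <= 1/8 guarantees convergence."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)          # gradient of the dual objective
        norm = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * norm)     # projected fixed-point update
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)                      # recover the primal solution
```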
7

Low-Complexity Regularization Algorithms for Image Deblurring

Alanazi, Abdulrahman 11 1900 (has links)
Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem; this method is proposed to find a near-optimal value of the regularization parameter in RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods, and the results demonstrate that the proposed method clearly outperforms the others in terms of output PSNR and structural similarity (SSIM) as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a bootstrap-based technique to estimate the regularization parameter in low- and high-resolution images. Numerical results show that this technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for cases where the point spread function (PSF) is separable, we propose using a Kronecker product structure to reduce the computations. Furthermore, when the image is smooth, it is desirable to replace the regularization term in the RLS problem with a total variation term. We therefore propose a novel method for adaptively selecting the regularization parameter in a square-root regularized total variation (SRTV) model. Experimental results demonstrate that the proposed method outperforms the benchmark methods on smooth images in terms of PSNR, SSIM, and restored image quality. This thesis focuses on the non-blind image deblurring problem, where the blur kernel is assumed to be known; however, the developed algorithms also work in the blind image deblurring setting. Experimental results show that the proposed methods are robust in the blind setting and outperform the benchmark methods in terms of both output PSNR and SSIM.
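For context on the RLS formulation: with a known, spatially invariant blur and periodic boundary assumptions, the Tikhonov-regularized least-squares deblurring problem has a closed-form solution in the Fourier domain. The sketch below is that generic baseline with a hand-picked regularization parameter; the thesis's near-optimal parameter selection and bootstrap estimator are not reproduced here.

```python
import numpy as np

def rls_deblur(y, psf, lam=1e-2):
    """Regularized least squares x* = argmin_x ||h * x - y||^2 + lam*||x||^2,
    solved in closed form via the FFT: X = conj(H) Y / (|H|^2 + lam).
    Assumes periodic boundaries and a psf the same shape as y, centered;
    ifftshift moves its center to the (0, 0) corner as the FFT requires."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Hypothetical usage, with `blurred` and `gaussian_psf` arrays of equal shape:
# x_hat = rls_deblur(blurred, gaussian_psf, lam=1e-2)
```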
8

Variational Regularization Strategy for Atmospheric Tomography

Altuntac, Erdem 04 April 2016 (has links)
No description available.
9

Application of Persistent Homology in Signal and Image Denoising

Zheng, Yi 12 June 2015 (has links)
No description available.
10

Tomographic reconstruction of SPECT images from few data using total variation

Araujo, João Guilherme Vicente de 13 April 2017 (has links)
To perform attenuation correction in single-photon emission computed tomography (SPECT), the map of attenuation coefficients must be measured and reconstructed from a transmission tomography scan, performed either before or simultaneously with the emission scan. This approach raises the cost of producing the image and, in some cases, considerably lengthens the examination, making patient immobility an important factor in the success of the reconstruction. An alternative that dispenses with the transmission scan is to reconstruct both the activity image and the attenuation map from the emission data alone. Within this approach, we propose a method based on Censor's algorithm, whose objective is to solve a mixed convex-concave feasibility problem to reconstruct both images simultaneously. The proposed method is formulated as a minimization problem in which the objective function is the total variation of the images, subject to Censor's mixed feasibility constraints. Tests were performed on simulated images, and the results obtained with noise-free data were satisfactory, even for a small amount of data. In the presence of Poisson-distributed noise the method was unstable, and the choice of tolerances in that case remains an open problem.
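Written out, the constrained formulation the abstract describes has roughly the following shape, where a is the activity image, mu the attenuation map, R_i the attenuated projection operators, b_i the measured data, and tau_i the tolerances; this rendering is an assumed paraphrase, not the thesis's exact statement:

```latex
\min_{a \ge 0,\; \mu \ge 0} \; \mathrm{TV}(a) + \mathrm{TV}(\mu)
\quad \text{subject to} \quad
\bigl| R_i(a, \mu) - b_i \bigr| \le \tau_i, \qquad i = 1, \dots, m.
```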
