11

Multi-Resolution Aitchison Geometry Image Denoising for Low-Light Photography

Miller, Sarah Victoria 01 September 2020 (has links)
No description available.
12

A Wavelet Based Method for ToF Camera Depth Images Denoising

Idoughi, Achour 11 August 2022 (has links)
No description available.
13

Prior Information Guided Image Processing and Compressive Sensing

Qin, Jing 19 August 2013 (has links)
No description available.
14

Analyzing the efficacy of the FFT in Image Denoising and Digital Watermarking

Fagerström, Emil January 2023 (has links)
The FFT has been a staple of mathematics and computer science for almost 60 years, and it still endures as an efficient algorithm in a multitude of fields. However, significant technical advances have been made since its inception, the demands placed on methods keep growing, and the FFT on its own is seldom enough to solve present-day problems. So how does the FFT perform on its own by today's standards? This thesis uses the FFT to create two algorithms, one for image denoising and one for digital watermarking, and analyses their efficacy against today's standards. The results showed that the FFT on its own tackles problems competently; however, as the demands on the algorithms increased, the limitations of the FFT became apparent. This underscores the prevalent trend of integrating the FFT with other specialized methods, ensuring its continued relevance in an era of continuously advancing technological demands.
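As an illustration of the kind of FFT-based denoising this abstract describes, a minimal Python/NumPy sketch could suppress high-frequency content with a circular low-pass mask in the frequency domain. The cutoff parameter below is an illustrative assumption, not a value from the thesis.

    import numpy as np

    def fft_lowpass_denoise(image, cutoff=0.1):
        # Transform to the frequency domain and center the spectrum.
        f = np.fft.fftshift(np.fft.fft2(image))
        rows, cols = image.shape
        cy, cx = rows // 2, cols // 2
        y, x = np.ogrid[:rows, :cols]
        # Keep only frequencies inside a disk of radius cutoff * min(rows, cols);
        # noise tends to dominate the discarded high frequencies.
        mask = (y - cy) ** 2 + (x - cx) ** 2 <= (cutoff * min(rows, cols)) ** 2
        # Zero everything outside the disk and transform back.
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))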
15

Speckle image denoising methods based on total variation and non-local means

Jones, Chartese 01 May 2020 (has links)
Speckle noise occurs in a wide range of images due to sampling and digital degradation. Understanding how noise can be present in images has led to multiple denoising techniques, most of which assume an equal noise distribution. When the noise present in an image is not uniform, the resulting denoised image falls short of the highest standard of quality. This research focuses on speckle noise. Unlike Gaussian noise, which affects single pixels of an image, speckle noise affects multiple pixels, so it cannot be removed with the traditional Gaussian denoising model. We develop a more accurate speckle denoising model and stable numerical methods for it, based on TV minimization, the associated non-linear PDE, and Krissian et al.'s speckle noise equation model. A realistic and efficient speckle noise equation model with an edge-enhancing feature was introduced by adopting a non-convex functional, and an effective numerical scheme was introduced and proved stable. While working with the TV minimization PDE and Krissian et al.'s model, we also used a dual approach, based on Chambolle's approach to image denoising, for faster computation. The NLM algorithm takes advantage of the high degree of redundancy in any natural image and is very accurate, since all pixels contribute to the denoising of any given pixel; however, due to the non-local averaging, its major drawback is computational cost. We therefore discuss new denoising techniques based on NLM and total variation for images contaminated by speckle noise, introducing blockwise and selective denoising methods based on the NLM technique, together with partial differential equation (PDE) methods for total variation, to enhance computational efficiency. Our PDE methods have proven to be very computationally efficient and, as mentioned, the NLM process is very accurate.
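The computational cost mentioned in this abstract is easy to see in a direct implementation of NLM. The sketch below estimates a single pixel by patch-similarity-weighted averaging over a search window; all parameter values are illustrative assumptions, and the thesis's blockwise and selective variants are not reproduced.

    import numpy as np

    def nlm_pixel(image, i, j, patch=7, window=10, h=10.0):
        # Weighted average over a search window, with weights given by
        # the similarity of patches around each candidate pixel.
        pad = patch // 2
        padded = np.pad(image.astype(float), pad, mode='reflect')
        ref = padded[i:i + patch, j:j + patch]           # patch around (i, j)
        num = den = 0.0
        for m in range(max(0, i - window), min(image.shape[0], i + window + 1)):
            for n in range(max(0, j - window), min(image.shape[1], j + window + 1)):
                cand = padded[m:m + patch, n:n + patch]  # candidate patch
                w = np.exp(-np.sum((ref - cand) ** 2) / h ** 2)
                num += w * image[m, n]
                den += w
        return num / den  # denoised estimate for pixel (i, j)

Running this at every pixel costs on the order of window-squared times patch-squared operations per pixel, which is precisely the burden that blockwise and selective methods aim to reduce.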
16

Total Variation Based Methods for Speckle Image Denoising

Bagchi Misra, Arundhati 11 August 2012 (has links)
This dissertation is about partial differential equation (PDE) based image denoising models, with a particular interest in speckle noise images. We provide mathematical analysis of existing speckle denoising models and propose three new models based on total variation minimization methods. The first model is developed using a new speckle noise model, and the solution of the associated numerical scheme is proven to be stable. The second is a speckle version of the Chambolle algorithm, whose numerical solution was proved to converge under certain assumptions. The final model is a nonlocal PDE based speckle denoising model derived by combining the excellent noise removal properties of the nonlocal means algorithm with the PDE models; we enhanced its computational efficiency by adopting the Split Bregman method. Numerical results for all three models show that they compare favorably to conventional models.
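For context, the second model builds on Chambolle's algorithm; a minimal sketch of the classical (Gaussian) dual projection version is given below. The speckle-adapted fidelity terms of the dissertation are not reproduced, and the step size and iteration count are standard illustrative choices.

    import numpy as np

    def grad(u):
        # Forward differences (periodic boundary for brevity).
        return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

    def div(px, py):
        # Backward differences: the negative adjoint of grad.
        return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

    def chambolle_tv(f, lam=0.1, tau=0.125, iters=100):
        # Chambolle's fixed-point iteration on the dual variable (px, py);
        # tau <= 1/8 is the standard step-size bound for convergence.
        f = f.astype(float)
        px, py = np.zeros_like(f), np.zeros_like(f)
        for _ in range(iters):
            gx, gy = grad(div(px, py) - f / lam)
            norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
            # Reproject the dual variable onto the unit ball.
            px, py = (px + tau * gx) / norm, (py + tau * gy) / norm
        return f - lam * div(px, py)  # denoised image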
17

Bayesian inference and wavelet methods in image processing

Silwal, Sharad Deep January 1900 (has links)
Master of Science / Department of Statistics / Diego M. Maldonado / Haiyan Wang / This report addresses some mathematical and statistical techniques of image processing and their computational implementation. Fundamental theories have been presented, applied and illustrated with examples. To make the report as self-contained as possible, key terminologies have been defined, and some classical results and theorems are stated, for the most part, without proof. Some algorithms and techniques of image processing have been described and substantiated with experimentation using MATLAB. Several ways of estimating original images from noisy image data, and their corresponding risks, are discussed. Two image processing concepts selected to illustrate computational implementation are "Bayes classification" and "wavelet denoising". The discussion of the latter involves introducing a specialized area of mathematics, namely wavelets. A self-contained theory for wavelets is built by first reviewing basic concepts of Fourier analysis and then introducing multi-resolution analysis and wavelets. For a better understanding of Fourier analysis techniques in image processing, original solutions to some problems in Fourier analysis have been worked out. Finally, implementation of the above-mentioned concepts is illustrated with examples and MATLAB code.
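A generic sketch of the wavelet denoising the report discusses (soft thresholding with the Donoho-Johnstone universal threshold) might look as follows in Python with PyWavelets. The report itself uses MATLAB; the wavelet family and decomposition level here are assumptions for illustration.

    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet='db4', level=3):
        # Multi-level 2-D wavelet decomposition.
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # Noise estimate from the finest diagonal detail (median / 0.6745),
        # then the universal threshold sigma * sqrt(2 log n).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        t = sigma * np.sqrt(2 * np.log(image.size))
        # Soft-threshold every detail subband; keep the approximation.
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(c, t, mode='soft') for c in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)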
18

Application of Persistent Homology in Signal and Image Denoising

Zheng, Yi 12 June 2015 (has links)
No description available.
19

Denoising of Carpal Bones for Computerised Assessment of Bone Age

O'Keeffe, Darin January 2010 (has links)
Bone age assessment is a method of assigning a level of biological maturity to a child. It is usually performed either by comparing an x-ray of a child's left hand and wrist with an atlas of known bones, or by analysing specific features of bones such as ratios of width to height, or the degree of overlap with other bones. Both methods of assessment are labour intensive and prone to both inter- and intra-observer variability, which motivates the development of a computerised method of bone age assessment. The majority of research and development on computerised bone age assessment has focussed on analysing the bones of the hand. The wrist bones, especially the carpal bones, have received far less attention and have only been analysed in young children, in whom there is clear separation of the bones. An argument is presented that the evidence for excluding the carpal bones from computerised bone age assessment is weak, and that research is required to identify the role of carpal bones in the computerised assessment of bone age for children over eight years of age. Computerised analysis of the carpal bones in older children is a difficult computer vision problem, plagued by radiographic noise, poor image contrast, and especially poor definition of bone contours. Traditional image processing methods such as region growing fail, and even the very successful Canny linear edge detector can only find the simplest of bone edges in these images. The field of partial differential equation-based image processing provides some possible solutions to this problem, such as the use of active contour models to impose constraints upon contour continuity. However, many of these methods require regularisation to achieve unique and stable solutions, and an important part of this regularisation is image denoising. Image denoising was approached through development of a noise model for the Kodak computed radiography system, estimation of noise parameters using a robust estimator of noise per pixel intensity bin, and incorporation of the noise model into a denoising method based on oriented Laplacians. The results for this approach showed only a marginal improvement when using the signal-dependent noise model, although this likely reflects how the noise characteristics were incorporated into the anisotropic diffusion method rather than the principle of the approach. Even without the signal-dependent noise term, the oriented Laplacians denoising of the hand-wrist radiographs was very effective at removing noise and preserving edges.
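The thesis's oriented-Laplacians scheme is not reproduced here, but the shared idea of edge-preserving, noise-aware diffusion can be sketched with the classical Perona-Malik scheme instead. The kappa and step-size values are illustrative assumptions; a signal-dependent noise model such as the one the thesis estimates could modulate kappa per pixel.

    import numpy as np

    def perona_malik(u, iters=20, kappa=15.0, dt=0.15):
        u = u.astype(float).copy()
        for _ in range(iters):
            # Differences toward the four neighbours (periodic for brevity).
            diffs = (np.roll(u, 1, axis=0) - u, np.roll(u, -1, axis=0) - u,
                     np.roll(u, 1, axis=1) - u, np.roll(u, -1, axis=1) - u)
            # Conductance shrinks where gradients are large, so strong edges
            # diffuse far less than flat, noise-dominated regions.
            u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
        return u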
20

A Comparison of Data Transformations in Image Denoising

Michael, Simon January 2018 (has links)
The study of signal processing has wide applications, such as hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise. In this thesis we compare two methods for image denoising, each based on a data transformation: the Fourier transform and singular value decomposition, respectively, applied to grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratio attainable for each method, and their computational time. We find that the methods are fairly equal in visual quality; however, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
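A minimal sketch of the SVD-based approach and of the PSNR measure used in such a comparison might look as follows; the truncation rank is an illustrative assumption. The much higher cost of computing a full SVD, compared with the FFT's near-linear complexity, helps explain the computational-time gap the thesis reports.

    import numpy as np

    def svd_denoise(image, rank=30):
        # Low-rank approximation: the leading singular components carry most
        # of the image structure, while the discarded tail is noise-dominated.
        U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]

    def psnr(reference, estimate, peak=255.0):
        # Peak signal-to-noise ratio in decibels.
        mse = np.mean((reference.astype(float) - estimate) ** 2)
        return 10 * np.log10(peak ** 2 / mse)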
