About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Spatial-Spectral Feature Extraction on Pansharpened Hyperspectral Imagery

Kaufman, Jason R. January 2014
No description available.
12

Image and Video Resolution Enhancement Using Sparsity Constraints and Bilateral Total Variation Filter

Ashouri, Talouki Zahra 10 1900
In this thesis we present new methods for image and video super-resolution and for video deinterlacing. For image super-resolution, a new approach for recovering a High Resolution (HR) image from a single Low Resolution (LR) image is introduced, based on Compressive Sensing (CS) theory. In the CS framework, images are assumed to be sparse in a transform domain such as wavelets or contourlets. Using this fact, we develop an approach in which the contourlet domain is taken as the transform domain and a CS algorithm is used to recover the high-resolution image. We then extend this scheme to video super-resolution in two steps: first, the image super-resolution method is applied to each frame separately; second, a post-processing step is performed on the estimated outputs to increase video quality. The post-processing consists of deblurring followed by Bilateral Total Variation (BTV) filtering to improve temporal consistency. Experimental results show significant improvement over existing image and video super-resolution methods, both objectively and subjectively.

For the video deinterlacing problem, a two-step method is also proposed. First, six interpolators are applied to each missing line and the interpolator giving the minimum error is selected; an initial deinterlaced frame is constructed from the selected interpolators. Next, this initial frame is fed into a post-processing step, a modified version of the 2-D Bilateral Total Variation filter. The proposed deinterlacing technique outperforms many existing deinterlacing algorithms. / Master of Science (MSc)
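The bilateral total variation regularizer used in the post-processing step can be sketched as follows. This is a minimal NumPy version of the widely used BTV penalty (a weighted sum of L1 norms of shifted differences); the window size `p` and decay factor `alpha` are illustrative choices, not the thesis's parameters:

```python
import numpy as np

def btv_penalty(img, p=2, alpha=0.7):
    """Bilateral Total Variation: weighted L1 norms of differences
    between the image and its shifted copies within a (2p+1) window."""
    total = 0.0
    for l in range(-p, p + 1):
        for m in range(0, p + 1):
            if l == 0 and m == 0:
                continue
            # circular shifts stand in for the shift operators S_x^l S_y^m
            shifted = np.roll(np.roll(img, l, axis=0), m, axis=1)
            total += alpha ** (abs(l) + abs(m)) * np.abs(img - shifted).sum()
    return total
```

Minimizing such a penalty against a data-fidelity term smooths a frame while preserving edges better than plain quadratic regularization, which is why it is a common choice for enforcing consistency in video reconstruction.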
13

Inverse Problems and Self-similarity in Imaging

Ebrahimi Kahrizsangi, Mehran 28 July 2008
This thesis examines the concept of image self-similarity and provides solutions to various associated inverse problems such as resolution enhancement and missing fractal codes. In general, many real-world inverse problems are ill-posed, mainly because of the lack of existence of a unique solution. The procedure of providing acceptable unique solutions to such problems is known as regularization. The concept of image prior, which has been of crucial importance in image modelling and processing, has also been important in solving inverse problems since it algebraically translates to the regularization procedure. Indeed, much recent progress in imaging has been due to advances in the formulation and practice of regularization. This, coupled with progress in optimization and numerical analysis, has yielded much improvement in computational methods of solving inverse imaging problems. Historically, the idea of self-similarity was important in the development of fractal image coding. Here we show that the self-similarity properties of natural images may be used to construct image priors for the purpose of addressing certain inverse problems. Indeed, new trends in the area of non-local image processing have provided a rejuvenated appreciation of image self-similarity and opportunities to explore novel self-similarity-based priors. We first revisit the concept of fractal-based methods and address some open theoretical problems in the area. This includes formulating a necessary and sufficient condition for the contractivity of the block fractal transform operator. We shall also provide some more generalized formulations of fractal-based self-similarity constraints of an image. These formulations can be developed algebraically and also in terms of the set-based method of Projection Onto Convex Sets (POCS). We then revisit the traditional inverse problems of single frame image zooming and multi-frame resolution enhancement, also known as super-resolution. 
Some ideas will be borrowed from newly developed non-local denoising algorithms in order to formulate self-similarity priors. Understanding the role of scale and choice of examples/samples is also important in these proposed models. For this purpose, we perform an extensive series of numerical experiments and analyze the results. These ideas naturally lead to the method of self-examples, which relies on the regularity properties of natural images at different scales, as a means of solving the single-frame image zooming problem. Furthermore, we propose and investigate a multi-frame super-resolution counterpart which does not require explicit motion estimation among video sequences.
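The contractivity condition for the block fractal transform mentioned above hinges on the affine grey-level scaling fitted for each range/domain block pairing. A toy least-squares matcher makes the idea concrete; the block shapes and exhaustive search below are illustrative, not the thesis's formulation:

```python
import numpy as np

def best_fractal_match(range_block, domain_blocks):
    """For one range block, find the domain block and affine grey-level
    map (scale s, offset o) minimizing least-squares error.
    Contractivity of the block transform requires |s| < 1."""
    best = None
    r = range_block.ravel().astype(float)
    for idx, d in enumerate(domain_blocks):
        dv = d.ravel().astype(float)
        A = np.column_stack([dv, np.ones_like(dv)])   # model: r ~ s*d + o
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        err = float(np.sum((A @ np.array([s, o]) - r) ** 2))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, scale, offset)
```

In actual fractal coding the domain blocks are decimated copies of the image itself, which is exactly the self-similarity assumption the thesis turns into an image prior.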
15

Design and development of material-based resolution enhancement techniques for optical lithography

Gu, Xinyu 18 November 2013
The relentless commercial drive for smaller, faster, and cheaper semiconductor devices has pushed existing patterning technologies to their limits. Photolithography, one of the crucial processes that determine the feature size in a microchip, is currently facing this challenge. The immaturity of next-generation lithography (NGL) technology, particularly EUV, forces the semiconductor industry to explore new processing technologies that can extend the use of the existing lithographic method (i.e. ArF lithography) to enable production beyond the 32 nm node. Two new resolution enhancement techniques, double exposure lithography (DEL) and pitch division lithography (PDL), were proposed to extend the resolution capability of current lithography tools. This thesis describes the material and process development for these two techniques. The DEL technique requires two exposure passes in a single lithographic cycle. The first exposure is performed with a mask that has a relaxed pitch; the mask is then shifted by half a pitch and re-used for the second exposure. The resolution of the resulting pattern on the wafer is doubled with respect to the features on the mask. This technique can be enabled by a class of materials that function as an optical threshold layer (OTL). The key requirements for OTL materials are a photoinduced isothermal phase transition and reversible permeance modulation. A number of candidate materials were designed and tested, based on long-alkyl-side-chain crystalline polymers bearing azobenzene pendant groups on the main chain. The target copolymers were synthesized and fully characterized, and a proof-of-concept for the OTL design was successfully demonstrated with a series of customized analytical techniques. The PDL technique doubles the line density of a grating mask with only a single exposure and is fully compatible with current lithography tools.
Thus, this technique can extend the resolution limit of current ArF lithography without increasing the cost of ownership. Pitch division with a single exposure is accomplished by a dual-tone photoresist. This thesis presents a novel method to enable dual-tone behavior by adding a photobase generator (PBG) to a conventional resist formulation. The PBG is optimized to act as an exposure-dependent base quencher, which neutralizes the acid generated in high-dose regions but has only a minor influence in low-dose regions. The resulting acid concentration profile is a parabola-like function of exposure dose, and only medium exposure doses produce enough acid to switch the resist solubility. This acid response is exploited to produce pitch-division patterns by creating a set of negative-tone lines in the overexposed regions in addition to the conventional positive-tone lines. A number of PBGs were synthesized and characterized, and their decomposition rate constants were studied using various techniques. Simulations were carried out to assess the feasibility of pitch division lithography; it was concluded that pitch division is advantageous when the process aggressiveness factor k₁ is below 0.27. Finally, lithography evaluations of these dual-tone resists demonstrated a proof-of-concept for pitch division lithography with 45 nm pitch-divided line-and-space patterns at a k₁ of 0.13.
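The parabola-like acid response described above can be illustrated with a toy exposure model. All yields, rate constants, and the solubility threshold below are hypothetical, chosen only to reproduce the qualitative dual-tone behavior (acid from the PAG saturates quickly; base from the PBG keeps accumulating):

```python
import numpy as np

def net_acid(dose, pag_yield=1.0, pbg_yield=1.6, k_pag=0.08, k_pbg=0.03):
    """Toy dual-tone model: net acid = PAG-generated acid minus
    PBG-generated base. Both follow first-order photochemistry, but the
    base overtakes at high dose, so the net acid peaks at mid dose."""
    acid = pag_yield * (1 - np.exp(-k_pag * dose))
    base = pbg_yield * (1 - np.exp(-k_pbg * dose))
    return acid - base

doses = np.linspace(0, 200, 401)
net = net_acid(doses)
switched = net > 0.1   # resist solubility switches only above this threshold
```

Thresholding the net acid switches the solubility only in a mid-dose band, which is what produces the extra negative-tone lines in the overexposed regions between the conventional positive-tone lines.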
16

Reconstruction of enhanced ultrasound images from compressed measurements

Chen, Zhouye 21 October 2016
The interest of compressive sampling in ultrasound imaging has recently been extensively evaluated by several research teams. For various application setups, it has been shown that the RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. According to the model of compressive sampling, the resolution of ultrasound images reconstructed from compressed measurements mainly depends on three aspects: the acquisition setup, i.e. the incoherence of the sampling matrix; the image regularization, i.e. the sparsity prior; and the optimization technique. We focus mainly on the last two aspects in this thesis. Nevertheless, RF image spatial resolution, contrast, and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and by the physics of ultrasound wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images.

In this thesis, we first propose a novel framework for ultrasound imaging, named compressive deconvolution, that combines compressive sampling and deconvolution. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of this framework is joint data volume reduction and image quality improvement. An optimization method based on the Alternating Direction Method of Multipliers is then proposed to invert the linear model, including two regularization terms expressing the sparsity of the RF images in a given basis and a generalized Gaussian statistical assumption on the tissue reflectivity functions. It is subsequently improved by a method based on the Simultaneous Direction Method of Multipliers. Both algorithms are evaluated on simulated and in vivo data. Building on these regularization techniques, a novel approach based on Alternating Minimization is finally developed to jointly estimate the tissue reflectivity function and the point spread function. A preliminary investigation is performed on simulated data.
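The direct acquisition model combining random projections with a spatially invariant PSF can be written as y = Φ(h ∗ x). A minimal sketch of this forward operator follows; the image size, measurement count, and Gaussian measurement matrix are illustrative assumptions, and only the forward model is shown, not the ADMM/SDMM inversion:

```python
import numpy as np

def forward_model(x, psf, phi):
    """Compressive-deconvolution acquisition: circular 2D convolution
    with a spatially invariant PSF, then random projections phi."""
    blurred = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf, s=x.shape)))
    return phi @ blurred.ravel()

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))            # toy tissue reflectivity map
phi = rng.standard_normal((64, 256)) / 16.0  # M=64 projections of N=256 pixels
psf = np.zeros((16, 16)); psf[0, 0] = 1.0    # delta PSF: convolution is identity
y = forward_model(x, psf, phi)               # compressed measurements
```

Recovering `x` from `y` is then the ill-posed inverse problem that the sparsity and generalized-Gaussian priors regularize.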
17

Dense 3D Point Cloud Representation of a Scene Using Uncalibrated Monocular Vision

Diskin, Yakov 23 May 2013
No description available.
18

A Comparison of Deep Learning Models for Image Super-Resolution

Bechara, Rafael, Israelsson, Max January 2023
Image Super-Resolution (ISR) is a technology that aims to increase image resolution while preserving as much content and detail as possible. In this study, we evaluate four Deep Learning models (EDSR, LapSRN, ESPCN, and FSRCNN) to determine their effectiveness in increasing the resolution of low-resolution images. The study builds on previous research in the field as well as comparisons between the different deep learning models. The problem statement for this study is: "Which of the four Deep Learning-based models, EDSR, LapSRN, ESPCN, and FSRCNN, generates an upscaled image with the best quality from a low-resolution image on a dataset of Abyssinian cats, with a scaling factor of four, based on quantitative results?" The study utilizes a dataset consisting of pictures of Abyssinian cats to evaluate the performance and results of these models. Based on the quantitative results obtained from RMSE, PSNR, and Structural Similarity (SSIM) measurements, our study concludes that EDSR is the most effective Deep Learning-based model.
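Two of the figures of merit used in the comparison, RMSE and PSNR, are simple enough to compute in plain NumPy (SSIM requires a windowed computation, e.g. scikit-image's `structural_similarity`, and is omitted here):

```python
import numpy as np

def rmse(ref, est):
    """Root-mean-square error between a reference and an upscaled image."""
    return float(np.sqrt(np.mean((ref.astype(float) - est.astype(float)) ** 2)))

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher is better.
    `peak` is the maximum possible pixel value (255 for 8-bit images)."""
    e = rmse(ref, est)
    return float("inf") if e == 0.0 else float(20 * np.log10(peak / e))
```

Lower RMSE and higher PSNR both indicate an upscaled image closer to the ground-truth high-resolution image, which is how the four models were ranked quantitatively.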
