About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Using multiple digital images to synthesize a high-resolution image

Zeng, Jhao-Yu 31 August 2011 (has links)
In this paper, we propose an image registration algorithm that combines a set of images into a high-resolution image. The algorithm employs a fringe-projection scheme to perform the registration and offers several advantages, such as high precision, low computation cost, a simple system configuration and robust performance. An example in which three images were used to form a high-resolution image is given; the resolution was found to be enhanced 2.72 times.
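As a rough illustration of the general multi-frame idea (not of the thesis's fringe-projection registration itself), the Python/NumPy sketch below fuses several low-resolution frames onto a denser grid once their sub-pixel shifts are already known. All function names and values are illustrative, and hole filling by interpolation is omitted.

```python
# Minimal shift-and-add sketch: fuse low-resolution frames whose sub-pixel
# shifts are already known (here they are simply given as inputs).
import numpy as np

def shift_and_add(frames, shifts, scale):
    """frames: list of HxW arrays; shifts: (dy, dx) per frame in LR pixels;
    scale: integer upsampling factor. Returns a fused HR estimate."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))   # accumulated intensities
    cnt = np.zeros_like(acc)                 # number of samples per HR pixel
    for frame, (dy, dx) in zip(frames, shifts):
        # map each LR pixel centre onto the HR grid, offset by its sub-pixel shift
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, np.ix_(ys, xs), frame)
        np.add.at(cnt, np.ix_(ys, xs), 1.0)
    # average the accumulated samples; empty HR pixels (holes) are left at zero
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

# Example: three 64x64 frames fused onto a 128x128 grid
frames = [np.random.rand(64, 64) for _ in range(3)]
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
hr = shift_and_add(frames, shifts, scale=2)
```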
2

Using Fringe Projection technique to form a high-resolution image from multiple low-resolution images

Yao, Yu-ting 31 July 2012 (has links)
This paper presents a set of techniques, including image registration, image integration, interpolation and image restoration, for synthesizing a high-resolution image from a number of low-resolution images. Compared with existing image fusion techniques, the method presented in this paper has several advantages: (1) high precision; (2) low computation cost; (3) a compact system; (4) applicability to noisy images; (5) robust and automatic performance.
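The registration step is the core of such a pipeline. As a generic stand-in for it (the thesis uses a fringe-projection scheme, which is not reproduced here), the following sketch estimates the translation between two frames by phase correlation; names and the test shift are illustrative only.

```python
# Registration sketch via phase correlation (a generic substitute, not the
# fringe-projection registration described in the abstract).
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer (dy, dx) displacement of img relative to ref.
    Sub-pixel accuracy would require interpolating around the correlation peak."""
    F_ref = np.fft.fft2(ref)
    F_img = np.fft.fft2(img)
    cross_power = F_img * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

# Example: a frame shifted by (3, -5) pixels should be recovered
ref = np.random.rand(64, 64)
img = np.roll(ref, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(ref, img))   # -> (3, -5)
```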
3

Holoscopic 3D image depth estimation and segmentation techniques

Alazawi, Eman January 2015 (has links)
Today’s 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue and headaches after prolonged viewing, as users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space in a different plane (convergence). Holoscopy is a 3D technology, recently developed at Brunel University, that aims to overcome the above limitations of current 3D technology. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. the work favours algorithms with broad generalisation abilities, since no constraints are placed on the setting: algorithms that are invariant to most appearance-based variations of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise and changes in lighting) and that can estimate depth information from both types of holoscopic 3D images, unidirectional and omnidirectional, which give horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with high precision, with particular emphasis on automating the thresholding techniques and cue identification needed for robust algorithms. A depth-through-disparity feature-analysis method has been built on the correlation that exists between pixels one micro-lens pitch apart, which is exploited to extract the viewpoint images (VPIs). The corresponding displacement among the VPIs is exploited to estimate the depth information map by setting and extracting reliable sets of local features. Feature-based point and feature-based edge detection are two novel automatic thresholding techniques used in this approach for detecting and extracting such features. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the depth estimation in terms of generalisation, speed and quality. Due to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, a novel sub-pixel shift-and-integration interpolation technique is used in this approach to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), meaning that the holoscopic 3D image system can be converted into a multi-view 3D image pixel format. Both depth accuracy and a fast execution time are achieved, improving the 3D depth map.
For a 3D object to be recognised, the related foreground regions and the depth information map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed. Both techniques improve on existing methods through their simplicity and full automation, producing the interactive 3D depth map without human interaction. The final contribution is a performance evaluation that provides an equitable measure of the success of the proposed techniques for foreground-object segmentation, interactive 3D depth map creation and 2D super-resolution viewpoint generation. No-reference image quality assessment metrics, and their correlation with human perception of quality, are used in a subjective evaluation with human participants.
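The abstract's starting point, gathering pixels one micro-lens pitch apart into viewpoint images, can be illustrated with a short sketch. The code below is only an assumed, generic form of that sampling step (the pitch value and function name are made up for illustration), not the thesis's implementation.

```python
# Viewpoint-image (VPI) extraction sketch for a holoscopic image: pixels at the
# same offset under every micro-lens (one micro-lens pitch apart) are gathered
# into one low-resolution viewpoint image. The pitch value is a made-up example.
import numpy as np

def extract_viewpoint_images(holoscopic, pitch):
    """Return a pitch x pitch grid of viewpoint images from a 2-D holoscopic image."""
    h, w = holoscopic.shape
    h -= h % pitch                      # crop to a whole number of micro-lenses
    w -= w % pitch
    cropped = holoscopic[:h, :w]
    # VPI (i, j) collects the pixel at offset (i, j) under each micro-lens
    return [[cropped[i::pitch, j::pitch] for j in range(pitch)]
            for i in range(pitch)]

# Example: a synthetic image with a 7-pixel micro-lens pitch
img = np.random.rand(140, 210)
vpis = extract_viewpoint_images(img, pitch=7)
print(len(vpis), len(vpis[0]), vpis[0][0].shape)   # 7 7 (20, 30)
```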
4

An Examination Of Super Resolution Methods

Sert, Yilca Baris 01 April 2006 (has links) (PDF)
The resolution of an image is one of the main measures of image quality. Higher resolution is desired, and often required, in most applications because it means more detail in the image. Using better image sensors and optics to increase pixel density within the image is an expensive and limiting approach. Using image processing methods to obtain a high-resolution image from low-resolution images is a cheap and effective solution. This kind of image enhancement is called super-resolution image reconstruction. This thesis focuses on the definition, implementation and analysis of well-known super-resolution techniques. The comparison and analysis are the main concerns, in order to understand the improvements of the super-resolution methods over single-frame interpolation techniques. The comparison also gives insight into the practical uses of super-resolution methods. As a result of the analysis, a critical examination of the techniques and an evaluation of their performance are achieved.
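Comparisons of super-resolution reconstructions against single-frame interpolation typically rely on a full-reference metric such as PSNR. The helper below is a generic sketch of that metric, not code from the thesis.

```python
# PSNR helper of the kind such comparisons rely on: higher PSNR against a
# ground-truth high-resolution image indicates a better reconstruction.
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage: compare psnr(ground_truth, bicubic_upscaled) with psnr(ground_truth, sr_result)
```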
5

Super-resolution imaging

Van der Walt, Stefan Johann 12 1900 (has links)
Thesis (PhD (Electronic Engineering))--University of Stellenbosch, 2010. / Contains bibliography and index. / ENGLISH ABSTRACT: Super-resolution imaging is the process whereby several low-resolution photographs of an object are combined to form a single high-resolution estimate. We investigate each component of this process: image acquisition, registration and reconstruction. A new feature detector, based on the discrete pulse transform, is developed. We show how to implement and store the transform efficiently, and how to match the features using a statistical comparison that improves upon correlation under mild geometric transformation. To simplify reconstruction, the imaging model is linearised, whereafter a polygon-based interpolation operator is introduced to model the underlying camera sensor. Finally, a large, sparse, over-determined system of linear equations is solved, using regularisation. The software developed to perform these computations is made available under an open source license, and may be used to verify the results. / AFRIKAANSE OPSOMMING: In super-resolusie beeldvorming word verskeie lae-resolusie foto's van 'n onderwerp gekombineer in 'n enkele, hoë-resolusie afskatting. Ons ondersoek elke stap van hierdie proses: beeldvorming, -belyning en hoë-resolusie samestelling. 'n Nuwe metode wat staatmaak op die diskrete pulstransform word ontwikkel om belangrike beeldkenmerke te vind. Ons wys hoe om die transform effektief te bereken en hoe om resultate kompak te stoor. Die kenmerke word vergelyk deur middel van 'n statistiese model wat bestand is teen klein lineêre beeldvervormings. Met die oog op 'n vereenvoudigde samestellingsberekening word die beeldvormingsmodel gelineariseer. In die nuwe model word die kamerasensor gemodelleer met behulp van veelhoek-interpolasie. Uiteindelik word 'n groot, yl, oorbepaalde stelsel lineêre vergelykings opgelos met behulp van regularisering. Die sagteware wat vir hierdie berekeninge ontwikkel is, is beskikbaar onderhewig aan 'n oopbron-lisensie en kan gebruik word om die gegewe resultate te verifieer.
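The final step described in this abstract, solving a large, sparse, over-determined system with regularisation, can be sketched as follows. The simple block-averaging operator stands in for the thesis's polygon-based sensor model, and per-frame registration warps are omitted, so this is only an assumed, simplified illustration using SciPy's damped least-squares solver.

```python
# Hedged sketch: stack one (linearised) camera model per low-resolution frame
# into a sparse over-determined system and solve it with Tikhonov damping.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def block_average_operator(H, W, s):
    """Sparse matrix mapping a flattened HxW high-res image to its
    (H//s)x(W//s) low-res version by averaging s x s blocks."""
    h, w = H // s, W // s
    rows, cols, vals = [], [], []
    for r in range(h):
        for c in range(w):
            lr_index = r * w + c
            for dy in range(s):
                for dx in range(s):
                    rows.append(lr_index)
                    cols.append((r * s + dy) * W + (c * s + dx))
                    vals.append(1.0 / (s * s))
    return sp.csr_matrix((vals, (rows, cols)), shape=(h * w, H * W))

H = W = 32
s = 2
A_single = block_average_operator(H, W, s)
frames = [np.random.rand(H // s, W // s) for _ in range(4)]
# Identical operators here for brevity; in practice each copy would also
# include that frame's registration warp and sensor model.
A = sp.vstack([A_single] * len(frames))
b = np.concatenate([f.ravel() for f in frames])
x = lsqr(A, b, damp=0.05)[0]          # damp is the Tikhonov regularisation weight
hr = x.reshape(H, W)                  # regularised high-resolution estimate
```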
6

Studies On Bayesian Approaches To Image Restoration And Super Resolution Image Reconstruction

Chandra Mohan, S 07 1900 (has links) (PDF)
High-quality images and video have become an integral part of our day-to-day life, in areas ranging from science and engineering to medical diagnosis. All these imaging applications call for high-resolution, properly focused and crisp images. However, in real situations obtaining such a high-quality image is expensive, and in some cases it is not practical. In imaging systems such as digital cameras, blur and noise degrade the image quality. The recorded images look blurred and noisy and fail to resolve the finer details of the scene, which is clearly noticeable when zoomed in. Post-processing techniques based on computational methods extract the hidden information and thereby improve the quality of the captured images. The study in this thesis focuses on the deconvolution, and eventually blind deconvolution, of a single frame captured under low-light imaging conditions, as arises in digital photography and surveillance applications. Our intention is to restore a sharp image from its blurred and noisy observation when the blur is completely known or unknown; such inverse problems are ill-posed or twice ill-posed, respectively. This thesis consists of two major parts. The first part addresses the deconvolution/blind deconvolution problem using a Bayesian approach with a fuzzy-logic-based gradient potential as the prior functional. In comparison with analog cameras, artifacts become visible in digital camera images when they are enlarged, and there is a demand to enhance the resolution. The increase in resolution can be spatial, temporal or both. Super-resolution reconstruction methods reconstruct images/video containing spectral information beyond what is available in the captured low-resolution images. The second part of the thesis addresses resolution enhancement of observed monochromatic/color images using multiple frames of the same scene. This reconstruction problem is formulated in the Bayesian domain with the aim of reducing blur, noise and aliasing while increasing the spatial resolution. The image is modeled as a Markov random field, and a fuzzy-logic-filter-based gradient potential is used to differentiate between edge and noisy pixels. Suitable priors are adaptively applied to obtain artifact-free (or artifact-reduced) images. All our approaches are experimentally validated on standard test images using Matlab-based tools. Their performance is qualitatively compared with the results of recently proposed methods; our results turn out to be visually pleasing and quantitatively competitive.
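As a rough illustration of MAP-style restoration with a known blur (not the thesis's fuzzy-logic prior), the sketch below deblurs an image by gradient descent on a quadratic data term plus a simple smoothness prior; the kernel, step size and prior weight are arbitrary stand-ins.

```python
# Hedged MAP deconvolution sketch: a plain quadratic smoothness prior stands in
# for the fuzzy-logic gradient potential described in the abstract.
import numpy as np
from scipy.ndimage import convolve

def map_deblur(observed, kernel, n_iter=100, step=0.5, prior_weight=0.01):
    """Minimise ||k*x - y||^2 + w * ||grad x||^2 by gradient descent."""
    x = observed.copy()
    kernel_flipped = kernel[::-1, ::-1]            # adjoint of the convolution
    laplacian = np.array([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]])
    for _ in range(n_iter):
        residual = convolve(x, kernel, mode="reflect") - observed
        data_grad = convolve(residual, kernel_flipped, mode="reflect")
        prior_grad = -convolve(x, laplacian, mode="reflect")   # gradient of ||grad x||^2 (up to a factor of 2)
        x -= step * (data_grad + prior_weight * prior_grad)
    return x

# Example: deblur a synthetically blurred image with a 5x5 box kernel
kernel = np.full((5, 5), 1.0 / 25.0)
sharp = np.random.rand(64, 64)
blurred = convolve(sharp, kernel, mode="reflect")
restored = map_deblur(blurred, kernel)
```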
7

Object Detection with Deep Convolutional Neural Networks in Images with Various Lighting Conditions and Limited Resolution / Detektion av objekt med Convolutional Neural Networks (CNN) i bilder med dåliga belysningförhållanden och lågupplösning

Landin, Roman January 2021 (has links)
Computer vision is a key component of any autonomous system. Real-world computer vision applications rely on proper and accurate detection and classification of objects. A detection algorithm that cannot guarantee reasonable detection accuracy is not applicable in real-time scenarios where safety is the main objective. Factors that impact detection accuracy are illumination conditions and image resolution. Both contribute to degradation of objects and lead to low classification and detection accuracy. Recent developments in Convolutional Neural Network (CNN) based algorithms offer possibilities for low-light (LL) image enhancement and super-resolution (SR) image generation, which makes it possible to combine such models in order to improve image quality and increase detection accuracy. This thesis evaluates different CNN models for SR generation and LL enhancement by comparing generated images against ground-truth images. To quantify the impact of the respective model on detection accuracy, a detection procedure was evaluated on the generated images. Experimental results on images selected from the NightOwls and Caltech Pedestrian datasets showed that super-resolution image generation and low-light image enhancement improve detection accuracy by a substantial margin. Additionally, it was shown that a cascade of SR generation and LL enhancement further boosts detection accuracy. However, the main drawback of such cascades is the increased computational time, which limits their use in a range of real-time applications. / Datorseende är en nyckelkomponent i alla autonoma system. Applikationer för datorseende i realtid är beroende av en korrekt detektering och klassificering av objekt. En detekteringsalgoritm som inte kan garantera rimlig noggrannhet är inte tillämpningsbar i realtidsscenarier, där huvudmålet är säkerhet. Faktorer som påverkar detekteringsnoggrannheten är belysningförhållanden och bildupplösning. Dessa bidrar till degradering av objekt och leder till låg klassificerings- och detekteringsnoggrannhet. Senaste utvecklingar av Convolutional Neural Networks (CNNs) -baserade algoritmer erbjuder möjligheter för förbättring av bilder med dålig belysning och bildgenerering med superupplösning vilket gör det möjligt att kombinera sådana modeller för att förbättra bildkvaliteten och öka detekteringsnoggrannheten. I denna uppsats utvärderas olika CNN-modeller för superupplösning och förbättring av bilder med dålig belysning genom att jämföra genererade bilder med det faktiska data. För att kvantifiera inverkan av respektive modell på detektionsnoggrannhet utvärderades en detekteringsprocedur på genererade bilder. Experimentella resultat utvärderades på bilder utvalda från NightOwls och Caltech datauppsättningar för fotgängare och visade att bildgenerering med superupplösning och bildförbättring i svagt ljus förbättrar noggrannheten med en betydande marginal. Dessutom har det bevisats att en kaskad av superupplösning-generering och förbättring av bilder med dålig belysning ytterligare ökar noggrannheten. Den största nackdelen med sådana kaskader är relaterad till en ökad beräkningstid som begränsar möjligheterna för en rad realtidsapplikationer.
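The cascade discussed in this abstract (low-light enhancement, then super-resolution, then detection) and its latency cost can be sketched with placeholder stages; the three stage functions below are hypothetical stand-ins, not the CNN models evaluated in the thesis, and only the chaining order and latency bookkeeping are meaningful.

```python
# Hedged sketch of the LL-enhancement -> SR -> detection cascade and its latency.
import time
import numpy as np

def enhance_low_light(image):          # placeholder LL-enhancement stage
    return np.clip(image * 1.5, 0.0, 1.0)

def super_resolve(image, scale=2):     # placeholder SR stage (pixel replication)
    return np.kron(image, np.ones((scale, scale)))

def detect_objects(image):             # placeholder detector
    return [{"box": (0, 0, 10, 10), "score": 0.9}]

def cascade_detect(image):
    """Run LL enhancement -> SR -> detection and report the cascade latency."""
    t0 = time.perf_counter()
    enhanced = enhance_low_light(image)
    upscaled = super_resolve(enhanced)
    detections = detect_objects(upscaled)
    latency = time.perf_counter() - t0
    return detections, latency

detections, latency = cascade_detect(np.random.rand(240, 320))
print(f"{len(detections)} detections, cascade latency {latency * 1e3:.1f} ms")
```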
8

Μέθοδοι βελτίωσης της χωρικής ανάλυσης ψηφιακής εικόνας / Methods for improving the spatial resolution of digital images

Παναγιωτοπούλου, Αντιγόνη 12 April 2010 (has links)
Η αντιμετώπιση της περιορισμένης χωρικής ανάλυσης των εικόνων, η οποία οφείλεται στους φυσικούς περιορισμούς που εμφανίζουν οι αισθητήρες σύλληψης εικόνας, αποτελεί το αντικείμενο μελέτης της παρούσας διδακτορικής διατριβής. Στη διατριβή αυτή αρχικά γίνεται προσπάθεια μοντελοποίησης της λειτουργίας του ψηφιοποιητή εικόνας κατά τη δημιουργία αντίγραφου ενός εγγράφου μέσω απλών μοντέλων. Στην εξομοίωση της λειτουργίας του ψηφιοποιητή, το προτεινόμενο μοντέλο θα πρέπει να προτιμηθεί έναντι των μοντέλων Gaussian και Cauchy, που συναντώνται στη βιβλιογραφία, καθώς είναι ισοδύναμο στην απόδοση, απλούστερο στην υλοποίηση και δεν παρουσιάζει εξάρτηση από συγκεκριμένα χαρακτηριστικά λειτουργίας του ψηφιοποιητή. Έπειτα, μορφοποιούνται νέες μέθοδοι για τη βελτίωση της χωρικής ανάλυσης σε εικόνες. Προτείνεται μέθοδος μη ομοιόμορφης παρεμβολής για ανακατασκευή εικόνας Super-Resolution (SR). Αποδεικνύεται πειραματικά πως η προτεινόμενη μέθοδος η οποία χρησιμοποιεί την παρεμβολή Kriging υπερτερεί της μεθόδου η οποία δημιουργεί το πλέγμα υψηλής ανάλυσης μέσω της σταθμισμένης παρεμβολής κοντινότερου γείτονα που αποτελεί συμβατική τεχνική. Επίσης, παρουσιάζονται τρεις νέες μέθοδοι για στοχαστική ανακατασκευή εικόνας SR regularized. Ο εκτιμητής Tukey σε συνδυασμό με το Bilateral Total Variation (BTV) regularization, ο εκτιμητής Lorentzian σε συνδυασμό με το BTV regularization και ο εκτιμητής Huber συνδυασμένος με το BTV regularization είναι οι τρεις μέθοδοι που προτείνονται. Μία πρόσθετη καινοτομία αποτελεί η απευθείας σύγκριση των τριών εκτιμητών Tukey, Lorentzian και Huber στην ανακατασκευή εικόνας super-resolution, άρα στην απόρριψη outliers. Η απόδοση των προτεινόμενων μεθόδων συγκρίνεται απευθείας με εκείνη μίας τεχνικής SR regularized που υπάρχει στη βιβλιογραφία, η οποία αποδεικνύεται κατώτερη. Σημειώνεται πως τα πειραματικά αποτελέσματα οδηγούν σε επαλήθευση της θεωρίας εύρωστης στατιστικής συμπεριφοράς. Επίσης, εκπονείται μία πρωτότυπη μελέτη σχετικά με την επίδραση που έχει κάθε ένας από τους όρους έκφρασης πιστότητας στα δεδομένα και regularization στη διαμόρφωση του αποτελέσματος της ανακατασκευής εικόνας SR. Τα συμπεράσματα που προκύπτουν βοηθούν στην επιλογή μίας αποτελεσματικής μεθόδου για ανακατασκευή εικόνας SR ανάμεσα σε διάφορες υποψήφιες μεθόδους για κάποια δεδομένη ακολουθία εικόνων χαμηλής ανάλυσης. Τέλος, προτείνεται μία μέθοδος παρεμβολής σε εικόνα μέσω νευρωνικού δικτύου. Χάρη στην προτεινόμενη τεχνική εκπαίδευσης το νευρωνικό δίκτυο μαθαίνει το point spread function του ψηφιοποιητή εικόνας. Τα πειραματικά αποτελέσματα αποδεικνύουν πως η προτεινόμενη μέθοδος υπερτερεί σε σχέση με τους κλασικούς αλγόριθμους δικυβικής παρεμβολής και παρεμβολής spline. Η τεχνική που προτείνεται εξετάζει για πρώτη φορά το ζήτημα της σειράς της παρουσίασης των δεδομένων εκπαίδευσης στην είσοδο του νευρωνικού δικτύου. / Coping with the limited spatial resolution of images, which is caused by the physical limitations of image sensors, is the objective of this thesis. Initially, an effort to model the scanner function when generating a document copy by means of simple models is made. In a task of scanner function simulation the proposed model should be preferred over the Gaussian and Cauchy models met in bibliography as it is equivalent in performance, simpler in implementation and does not present any dependence on certain scanner characteristics. Afterwards, new methods for improving images spatial resolution are formulated. 
A nonuniform interpolation method for Super-Resolution (SR) image reconstruction is proposed. Experiments show that the proposed method, which employs Kriging interpolation, outperforms the method that creates the high-resolution grid by means of weighted nearest-neighbour interpolation, a conventional technique. Also, three new methods for stochastic regularized SR image reconstruction are presented, combining the Tukey, Lorentzian and Huber error norms, respectively, with Bilateral Total Variation (BTV) regularization. An additional novelty is the direct comparison of the three estimators Tukey, Lorentzian and Huber in the task of super-resolution image reconstruction, and thus in rejecting outliers. The performance of the proposed methods proves superior to that of a regularized SR technique found in the literature. The experimental results verify robust statistics theory. Moreover, a novel study of the effect of the data-fidelity and regularization terms on the SR image reconstruction result is carried out. The conclusions reached help in selecting an effective SR image reconstruction method, among several candidates, for a given low-resolution sequence of frames. Finally, an image interpolation method employing a neural network is proposed. The presented training procedure results in the network learning the scanner point spread function. Experimental results show that the proposed technique outperforms the classical bicubic and spline interpolation algorithms. The proposed method is novel in that it treats, for the first time, the order in which training data are presented to the neural network input.
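A minimal sketch of one ingredient of these regularized methods, a steepest-descent step combining a Huber data term with BTV regularisation, is given below. The forward model is reduced to plain decimation (no blur or warp) and all parameter values are illustrative, so this is an assumed simplification rather than the thesis's algorithm.

```python
# Hedged sketch: one gradient-descent step of Huber data fidelity + BTV prior.
import numpy as np

def huber_grad(r, delta):
    """Derivative of the Huber error norm applied element-wise to residual r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def btv_grad(x, p=2, alpha=0.7):
    """Sub-gradient of Bilateral Total Variation regularisation over shifts up to p."""
    g = np.zeros_like(x)
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(x, shift=(l, m), axis=(0, 1))
            s = np.sign(x - shifted)
            g += alpha ** (abs(l) + abs(m)) * (s - np.roll(s, shift=(-l, -m), axis=(0, 1)))
    return g

def sr_step(x, frames, scale, delta=0.05, lam=0.01, step=0.1):
    """One steepest-descent iteration; the forward model is plain decimation."""
    grad = np.zeros_like(x)
    for y in frames:
        residual = x[::scale, ::scale] - y            # simulated observation error
        up = np.zeros_like(x)
        up[::scale, ::scale] = huber_grad(residual, delta)   # adjoint of decimation
        grad += up
    grad += lam * btv_grad(x)
    return x - step * grad

# Example: refine a simple pixel-replication initial guess
frames = [np.random.rand(32, 32) for _ in range(4)]
x0 = np.kron(frames[0], np.ones((2, 2)))
x1 = sr_step(x0, frames, scale=2)
```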
