51

Video quality assessment based on motion models

Seshadrinathan, Kalpana, 1980- 04 September 2012 (has links)
A large amount of digital visual data is being distributed and communicated globally, and the question of video quality control has become a central concern. Unlike many signal processing applications, the intended receiver of video signals is nearly always the human eye. Video quality assessment algorithms must therefore attempt to assess perceptual degradations in videos. My dissertation focuses on full reference methods of image and video quality assessment, where the availability of a perfect or pristine reference image/video is assumed. A large body of research on image quality assessment has focused on models of the human visual system. The premise behind such metrics is to process visual data by simulating the visual pathway of the eye-brain system. Recent approaches to image quality assessment, the structural similarity index and information theoretic models, avoid explicit modeling of visual mechanisms and instead use statistical properties derived from the images to formulate measurements of image quality. I show that the structure measurement in structural similarity is equivalent to the contrast masking models that form a critical component of many vision-based methods. I also show the equivalence of the structural and the information theoretic metrics under certain assumptions on the statistical distribution of the reference and distorted images. Videos contain many artifacts that are specific to motion and are largely temporal. Motion information plays a key role in visual perception of video signals. I develop a general, spatio-spectrally localized multi-scale framework for evaluating dynamic video fidelity that integrates both spatial and temporal aspects of distortion assessment. Video quality is evaluated in space and time by evaluating motion quality along computed motion trajectories. Using this framework, I develop a full-reference video quality assessment algorithm known as the MOtion-based Video Integrity Evaluation index, or MOVIE index. Lastly, and significantly, I conducted a large-scale subjective study on a database of videos distorted by present-generation video processing and communication technology. The database contains 150 distorted videos obtained from 10 naturalistic reference videos, and each video was evaluated by 38 human subjects in the study. I study the performance of leading, publicly available objective video quality assessment algorithms on this database.
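The MOVIE index itself evaluates quality along computed motion trajectories; as a much simpler point of reference for what a full-reference video score looks like, the sketch below pools frame-wise structural similarity over time. It assumes scikit-image is available and that the two videos are spatially and temporally aligned; it is not the MOVIE algorithm.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def frame_pooled_quality(reference, distorted, data_range=1.0):
    """Score a distorted video against its pristine reference.

    reference, distorted: arrays of shape (frames, height, width), assumed
    to be aligned and scaled to the same intensity range (data_range).
    Returns the temporal average of the per-frame SSIM scores.
    """
    scores = [
        ssim(ref_frame, dis_frame, data_range=data_range)
        for ref_frame, dis_frame in zip(reference, distorted)
    ]
    return float(np.mean(scores))
```

Purely frame-wise pooling of this kind ignores exactly the temporal, motion-specific artifacts the dissertation targets, which is the gap the motion-based MOVIE framework is designed to close.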
52

Image communication system design based on the structural similarity index

Channappayya, Sumohana S., 1977- 28 August 2008 (has links)
The amount of digital image and video content being generated and shared has grown explosively in the recent past. The primary goal of image and video communication systems is to achieve the best possible visual quality under a given rate constraint and channel conditions. In this dissertation, the focus is limited to image communication systems. In order to optimize the components of the communication system to maximize perceptual quality, it is important to use a good measure of quality. Even though this fact has long been recognized, the mean squared error (MSE), which is not the best measure of perceptual quality, has been a popular choice in the design of various components of an image communication system. Recent developments in the field of image quality assessment (IQA) have produced powerful new algorithms, including the structural similarity (SSIM) index, the visual information fidelity (VIF) criterion, and the visual signal to noise ratio (VSNR). The SSIM index is considered in this dissertation. I demonstrate that optimizing image processing algorithms for the SSIM index does indeed result in an improvement in the perceptual quality of the processed images. All the comparisons in this thesis are made against appropriate MSE-optimal equivalents. First, an SSIM-optimal linear estimator is derived and applied to the problem of image denoising. Next, an algorithm for SSIM-optimal linear equalization is developed and applied to the problem of image restoration. Following the development of these linear solutions, I address the problem of SSIM-optimal soft thresholding, which is a nonlinear technique. The estimation, equalization, and soft-thresholding results all show a gain in visual quality compared to their MSE-optimal counterparts. These solutions are typically used at the receiver of an image communication system. On the transmitter side of the system, bounds on the SSIM index as a function of the rate allocated to a uniform quantizer are derived.
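Because several of these results hinge on what the SSIM index actually rewards, a minimal single-window form of the index is sketched below. Practical implementations slide a (typically Gaussian-weighted) window over the image and average the local scores; the constants k1 and k2 are the conventional defaults rather than values taken from this dissertation.

```python
import numpy as np

def ssim_index(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two image patches x and y.

    This is the standard SSIM formula applied to one window of pixels;
    full implementations compute it locally over the image and average.
    """
    c1 = (k1 * data_range) ** 2  # stabilizes the luminance term
    c2 = (k2 * data_range) ** 2  # stabilizes the contrast/structure term

    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()

    luminance = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)
    contrast_structure = (2 * cov_xy + c2) / (var_x + var_y + c2)
    return luminance * contrast_structure
```

Optimizing a denoiser, equalizer or quantizer against this objective rather than against MSE is what distinguishes the designs described above from their MSE-optimal counterparts.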
53

Study of the cloud-cover fraction in the Mount Helmos area for optimizing the image quality of the new Greek telescope Aristarchos

Γαλανάκης, Νικόλαος 04 August 2009 (has links)
This work is divided into two parts. The main goal of the first part is to determine the cloud-cover fraction in the area of the Aristarchos telescope in order to optimize image quality. The first chapter gives a concise description of the new Greek telescope Aristarchos and its instruments. The second chapter describes how astronomical observations are carried out and the difficulties they present, with the main emphasis on atmospheric disturbances caused by turbulence and by the differing densities of the atmospheric layers; the closing paragraphs briefly outline how photometric measurements are made and why they are useful. The third chapter presents in detail the data obtained from the meteorological satellite MeteoSAT 7; its tables list the cloud cover for the Aristarchos and Penteli sites. The data cover the years 2004 through 2007, by month and at six-hour intervals (00:00 UTC, 06:00 UTC, 12:00 UTC and 18:00 UTC). The fourth chapter contains a detailed analysis of these data and the calculation of the cloud-cover fraction for the Aristarchos and Penteli sites, per hour, per month, per year and overall, together with the corresponding plots that make the picture of the study clearer.
The second part of the work is dedicated to one of the main applications of the Aristarchos telescope: the study of active galactic nuclei. The properties of both normal and active galaxies are presented in detail. The theory of supermassive black holes in the nuclei of active galaxies, which most likely constitute the main mechanism producing the enormous amounts of energy these nuclei emit, is then discussed at length, along with the efforts being made to detect them. Finally, galaxy clusters and collisions between galaxies are discussed; such collisions are probably responsible for feeding dormant supermassive black holes with fresh fuel, making them active again.
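As a rough illustration of the kind of aggregation the fourth chapter describes (per hour, per month, per year and overall), the sketch below groups hypothetical satellite-derived cloud-cover samples by site and hour; the column names and values are illustrative, not the thesis data.

```python
import pandas as pd

# Hypothetical table of satellite-derived cloud-cover samples; the actual
# MeteoSAT 7 records in the thesis are organised per site, month and hour.
samples = pd.DataFrame({
    "site":  ["Aristarchos", "Aristarchos", "Penteli", "Penteli"],
    "year":  [2004, 2004, 2004, 2004],
    "month": [1, 1, 1, 1],
    "hour_utc": [0, 6, 0, 6],
    "cloud_fraction": [0.65, 0.70, 0.55, 0.60],  # fraction of sky covered
})

# Mean cloud cover per site and hour of day, plus an overall figure per site,
# mirroring the per-hour / per-month / per-year / overall breakdown above.
per_hour = samples.groupby(["site", "hour_utc"])["cloud_fraction"].mean()
overall = samples.groupby("site")["cloud_fraction"].mean()
print(per_hour, overall, sep="\n\n")
```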
54

Iterative algorithms for fast, signal-to-noise ratio insensitive image restoration

Lie Chin Cheong, Patrick January 1987 (has links)
No description available.
55

WAVELET AND SINE BASED ANALYSIS OF PRINT QUALITY EVALUATIONS

Mahalingam, Vijay Venkatesh 01 January 2004 (has links)
Recent advances in imaging technology have resulted in a proliferation of images across different media. Before they reach the end user, these signals undergo several transformations, which may introduce defects or artifacts that affect the perceived image quality. In order to design and evaluate these imaging systems, perceived image quality must be measured. This work focuses on the analysis of print image defects and the characterization of printer artifacts such as banding and graininess using a human visual system (HVS) based framework. Specifically, the work addresses the prediction of the visibility of print defects (banding and graininess) by representing the print defects in terms of orthogonal wavelet and sinusoidal basis functions and combining the detection probabilities of each basis function to predict the response of the HVS. The detection probabilities for the basis function components and the simulated print defects are obtained from separate subjective tests. The prediction performance of both the wavelet-based and sine-based approaches is compared with the subjective test results. The wavelet-based prediction performs better than the sinusoidal approach and can be a useful technique in developing measures and methods for HVS-based print quality evaluation.
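The abstract does not state how the per-basis-function detection probabilities are combined; one common pooling rule in HVS modelling is probability summation under an independence assumption, sketched below as an illustration rather than as the method used in the thesis.

```python
import numpy as np

def pooled_detection_probability(p_basis):
    """Combine per-basis-function detection probabilities into one
    predicted probability that the observer detects the print defect.

    Probability summation under independence:
        P_detect = 1 - prod_i (1 - p_i)
    """
    p_basis = np.asarray(p_basis, dtype=float)
    return 1.0 - np.prod(1.0 - p_basis)

# Example: three basis components with weak individual visibility can still
# yield a clearly visible defect once their probabilities are pooled.
print(pooled_detection_probability([0.2, 0.3, 0.1]))  # ~0.496
```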
56

Micro-satellite Camera Design

Balli, Gulsum Basak 01 January 2003 (has links) (PDF)
The aim of this thesis is the design of a micro-satellite camera system and the simulation of its focal plane. Typical micro-satellite orbit heights range between 600 and 850 km, and a multi-payload satellite imposes volume and power restrictions on each payload. In this work, an orbit height of 600 km and a volume of 20 × 20 × 30 cm are assumed, since minimizing the payload dimensions increases the probability of launch. The pixel size and the dimensions of an imaging detector such as a charge-coupled device (CCD) are defined by the useful image area with acceptable aberration limits on the focal plane. In order to predict the minimum pixel size to be used at the focal plane, modulation transfer function (MTF), point spread function (PSF), image distortion and aberration simulations have been carried out, and detector parameters for the designed camera are presented.
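As a back-of-the-envelope illustration of how orbit height and focal-plane parameters drive pixel-size choices, the sketch below computes the ground sampling distance and the diffraction-limited spot size; the focal length, pixel pitch, aperture and wavelength are assumed values, not figures from the thesis.

```python
# Back-of-the-envelope focal-plane sizing for a nadir-pointing camera.
ORBIT_HEIGHT_M = 600e3      # 600 km orbit, as assumed in the thesis
FOCAL_LENGTH_M = 0.30       # assumed 30 cm focal length (fits a 20x20x30 cm envelope)
PIXEL_PITCH_M = 10e-6       # assumed 10 micron CCD pixel
APERTURE_M = 0.10           # assumed 10 cm entrance aperture
WAVELENGTH_M = 550e-9       # visible band

# Ground sampling distance: ground footprint of one pixel.
gsd_m = ORBIT_HEIGHT_M * PIXEL_PITCH_M / FOCAL_LENGTH_M

# Diffraction-limited Airy-disk diameter at the focal plane; the pixel
# pitch should be comparable to (or finer than) this spot size.
f_number = FOCAL_LENGTH_M / APERTURE_M
airy_diameter_m = 2.44 * WAVELENGTH_M * f_number

print(f"GSD: {gsd_m:.1f} m/pixel")                    # 20.0 m/pixel
print(f"Airy disk: {airy_diameter_m * 1e6:.1f} um")   # ~4.0 um
```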
57

On optimality and efficiency of parallel magnetic resonance imaging reconstruction: challenges and solutions

Nana, Roger 12 November 2008 (has links)
Imaging speed is an important issue in magnetic resonance imaging (MRI), as subject motion during image acquisition is liable to produce artifacts in the image. However, the speed at which data can be collected in conventional MRI is fundamentally limited by physical and physiological constraints. Parallel MRI is a technique that utilizes multiple receiver coils to increase the imaging speed beyond previous limits by reducing the amount of acquired data without degrading the image quality. In order to remove the image aliasing due to k-space undersampling, parallel MRI reconstructions invert the encoding matrix that describes the net effect of the magnetic field gradient encoding and the coil sensitivity profiles. The accuracy, stability, and efficiency of a matrix inversion strategy largely dictate the quality of the reconstructed image. This thesis addresses five specific issues pertaining to this linear inverse problem with practical solutions to improve clinical and research applications. First, for reconstruction algorithms adopting a k-space interpolation approach to the linear inverse problem, two methods are introduced that automatically select the optimal subset of k-space samples participating in the synthesis of a missing datum, guaranteeing an optimal compromise between accuracy and stability, i.e. the best balance between artifacts and signal-to-noise ratio (SNR). The former is based on a cross-validation re-sampling technique, while the latter utilizes a newly introduced data consistency error (DCE) metric that exploits the shift invariance property of the reconstruction kernel to provide a goodness measure of k-space interpolation in parallel MRI. Additionally, the utility of DCE as a metric for characterizing and comparing reconstruction methods is demonstrated. Second, a DCE-based strategy is introduced to improve reconstruction efficiency in real-time parallel dynamic MRI. Third, an efficient and reliable reconstruction method that operates on gridded k-space for parallel MRI using non-Cartesian trajectories is introduced, with a significant computational gain for applications involving repetitive measurements. Finally, a pulse sequence that combines parallel MRI and a multi-echo strategy is introduced for improving SNR and reducing the geometric distortion in diffusion tensor imaging. In addition, the sequence inherently provides a T2 map, complementary information that can be useful for some applications.
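As a generic illustration of the cross-validation flavour of kernel selection described above (not the thesis's algorithm or its data consistency error metric), the sketch below scores one candidate k-space interpolation kernel support by least-squares calibration on part of the fully sampled data and evaluation on the held-out remainder.

```python
import numpy as np

def kernel_cv_error(sources, targets, n_folds=5, seed=0):
    """Cross-validated error of a linear k-space interpolation kernel.

    sources: (n_samples, n_neighbors) matrix of acquired neighbouring
             k-space values (real-valued here for simplicity; parallel-MRI
             data are complex and span multiple coils).
    targets: (n_samples,) values of the points to be synthesised, taken
             from fully sampled calibration data.
    Returns the mean held-out residual, a proxy for how well a kernel of
    this support generalises beyond the calibration region.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(targets))
    errors = []
    for fold in np.array_split(order, n_folds):
        train = np.setdiff1d(order, fold)
        # Least-squares fit of the interpolation weights on the training part.
        weights, *_ = np.linalg.lstsq(sources[train], targets[train], rcond=None)
        residual = sources[fold] @ weights - targets[fold]
        errors.append(np.mean(np.abs(residual)))
    return float(np.mean(errors))

# Each candidate kernel support would be scored with kernel_cv_error and the
# support with the lowest held-out error selected.
```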
58

Studies on the salient properties of digital imagery that impact on human target acquisition and the implications for image measures.

Ewing, Gary John January 1999 (has links)
Electronically displayed images are becoming increasingly important as an interface between man and information systems. Lengthy periods of intense observation are no longer unusual. There is a growing awareness that specific demands should be made on displayed images in order to achieve an optimum match with the perceptual properties of the human visual system. These demands may vary greatly, depending on the task for which the displayed image is to be used and the ambient conditions. Optimal image specifications are clearly not the same for a home TV, a radar signal monitor or an infrared targeting image display. There is, therefore, a growing need for means of objective measurement of image quality, where "image quality" is used in a very broad sense and is defined in the thesis, but includes any impact of image properties on human performance in relation to specified visual tasks. The aim of this thesis is to consolidate and comment on the image measure literature, and to find through experiment the salient properties of electronically displayed, real-world complex imagery that impact on human performance. These experiments were carried out for well-specified visual tasks of real relevance, and the appropriate application of image measures to this imagery, to predict human performance, was considered. An introduction to certain aspects of image quality measures is given, and clutter metrics are integrated into this concept. A very brief and basic introduction to the human visual system (HVS) is given, with some basic models. The literature on image measures is analysed, with a resulting classification of image measures according to which features they attempt to quantify. A series of experiments was performed to evaluate the effects of image properties on human performance, using appropriate measures of performance. The concept of image similarity was explored by objectively measuring the subjective perception of imagery of the same scene obtained through different sensors and subjected to different luminance transformations. Controlled degradations were introduced using image compression. Both still and video compression were used to investigate both spatial and temporal aspects of HVS processing. The effects of various compression schemes on human target acquisition performance were quantified. A study was carried out to determine the "local" extent to which the clutter around a target affects its detectability. It was found in this case that the accepted wisdom of setting the local domain (the support of the metric) to twice the expected target size was incorrect. The local extent of clutter was found to be much greater, which has implications for the application of clutter metrics. An image quality metric called the gradient energy measure (GEM), for quantifying the effect of filtering on images derived from Nuclear Medicine, was developed and evaluated. This proved to be a reliable measure of image smoothing and noise level, which in preliminary studies agreed with human perception. The final study discussed in this thesis determined the performance of human image analysts, in terms of their receiver operating characteristic, when using Synthetic Aperture Radar (SAR) images in the surveillance context. In particular, the effects of target contrast and background clutter on human analyst target detection performance were quantified.
In the final chapter, suggestions to extend the work of this thesis are made, and in this context a system to predict human visual performance, based on input imagery, is proposed. This system intelligently uses image metrics based on the particular visual task and human expectations and human visual system performance parameters. / Thesis (Ph.D.)--Medical School; School of Computer Science, 1999.
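A plausible minimal form of the gradient energy measure described above is sketched below: the mean squared gradient magnitude of the image, which drops under smoothing and rises with noise. The exact definition and normalisation used in the thesis may differ.

```python
import numpy as np

def gradient_energy(image):
    """Gradient energy of a 2-D image: the mean squared magnitude of the
    intensity gradient. Smoothing lowers it, noise raises it.
    """
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

# Usage idea: compare gradient_energy(filtered) against
# gradient_energy(original) to quantify how strongly a smoothing filter has
# suppressed detail in a nuclear-medicine image.
```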
59

Analysis and Performance Optimization of a GPGPU Implementation of Image Quality Assessment (IQA) Algorithm VSNR

January 2017 (has links)
Image processing has changed the way we store, view and share images. One important component of sharing images over networks is image compression. Lossy image compression techniques compromise the quality of images to reduce their size. To ensure that the distortion of images due to compression is not highly detectable by humans, the perceived quality of an image needs to be maintained above a certain threshold. Determining this threshold is best done using human subjects, but that is impractical in real-world scenarios. As a solution to this issue, image quality assessment (IQA) algorithms are used to automatically compute a fidelity score for an image. However, the runtime performance of IQA algorithms is often poor because of the complex statistical computations involved. General Purpose Graphics Processing Unit (GPGPU) programming is one of the solutions proposed to optimize the performance of these algorithms. This thesis presents a Compute Unified Device Architecture (CUDA) based optimized implementation of the full-reference IQA algorithm Visual Signal to Noise Ratio (VSNR), which uses an M-level 2D Discrete Wavelet Transform (DWT) with 9/7 biorthogonal filters among other statistical computations. The presented implementation is tested on four different image quality databases containing images with multiple distortions and sizes ranging from 512 x 512 to 1600 x 1280. The CUDA implementation of VSNR shows a speedup of over 32x for 1600 x 1280 images, and the speedup is observed to scale with image size. The results showed that the implementation is fast enough to use VSNR on high-definition videos with a frame rate of 60 fps. This work presents the optimizations made possible by the use of the GPU's constant memory and by the reuse of allocated memory on the GPU. It also shows the performance improvement obtained through profiler-driven GPGPU development in CUDA. The presented implementation can be deployed in production alongside existing applications. / Dissertation/Thesis / Masters Thesis Computer Science 2017
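The dominant computational step in VSNR is the multi-level 2-D wavelet decomposition; a CPU-side sketch of that step using PyWavelets is shown below. The 'bior4.4' family is the usual stand-in for the 9/7 biorthogonal filters, and the image size and level count are placeholders rather than the configuration used in this thesis.

```python
import numpy as np
import pywt

# Multi-level 2-D DWT with biorthogonal filters, the core transform in VSNR.
image = np.random.rand(1600, 1280).astype(np.float32)  # placeholder image
levels = 5                                              # placeholder M

coeffs = pywt.wavedec2(image, wavelet="bior4.4", level=levels)
# coeffs[0] is the coarsest approximation band; the remaining entries hold the
# (horizontal, vertical, diagonal) detail bands per scale, which VSNR then
# feeds into its contrast- and distortion-visibility computations.
for i, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
    print(f"scale {i}: detail band shape {ch.shape}")
```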
60

Image Quality-Driven Level of Detail Selection on a Triangle Budget

Arlebrink, Ludvig, Linde, Fredrik January 2018 (has links)
Background. Level of detail is an optimization technique used by several modern games. Level-of-detail systems use simplified triangle meshes to determine the optimal combination of 3D models to use in order to meet a user-defined criterion for fast performance. Prior work has also pre-computed level-of-detail settings so that only the best settings are applied for any given view in a 3D scene. Objectives. The aim of this thesis is to determine the difference in image quality between the custom level-of-detail pre-processing approach proposed here and the level-of-detail system built into the game engine Unity. This is investigated by implementing a framework in Unity for the proposed pre-processing approach and designing representative test scenes to collect all data samples. Once the data is collected, the image quality produced by the proposed pre-processing approach is compared to Unity's existing level-of-detail approach using perceptually based metrics. Methods. The method used is an experiment. Unity's method was chosen because of the popularity of the engine, and the proposed pre-processing approach was also implemented in Unity to give the fairest comparison with Unity's implementation. The two approaches differ only in how the level of detail is selected; the rest of the rendering pipeline is exactly the same. Results. The pre-processing time ranged from 13 to 30 hours. The results showed only a small difference in image quality between the two approaches, with Unity's built-in system providing better overall image quality in two out of three test scenes. Conclusions. Due to the long pre-processing time and the lack of overall improvement, it was concluded that the proposed level-of-detail pre-processing approach is not feasible.
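The thesis compares a pre-computed selection scheme against Unity's built-in system; as a generic sketch of what selecting levels of detail on a triangle budget can look like (not the algorithm evaluated above), the code below greedily upgrades models to finer levels while a global triangle budget allows, preferring the upgrades with the best error reduction per extra triangle.

```python
from dataclasses import dataclass

@dataclass
class LodLevel:
    triangles: int   # triangle count of this simplified mesh
    error: float     # estimated image-quality error when this level is used

def select_lods(models, triangle_budget):
    """Pick one LOD level per model under a global triangle budget.

    models: list of lists of LodLevel, each inner list ordered from the
    coarsest (fewest triangles) to the most detailed level.
    Greedy strategy: start every model at its coarsest level, then repeatedly
    apply the upgrade with the best error reduction per extra triangle that
    still fits in the budget.
    """
    choice = [0] * len(models)
    used = sum(m[0].triangles for m in models)
    while True:
        best, best_gain = None, 0.0
        for i, levels in enumerate(models):
            j = choice[i]
            if j + 1 >= len(levels):
                continue
            extra = levels[j + 1].triangles - levels[j].triangles
            if extra <= 0 or used + extra > triangle_budget:
                continue
            gain = (levels[j].error - levels[j + 1].error) / extra
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return choice  # indices of the chosen level per model
        choice[best] += 1
        used += (models[best][choice[best]].triangles
                 - models[best][choice[best] - 1].triangles)
```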
