171

The effects of image quality on reading performance and perceived image quality from CRT and hard-copy displays

Jorna, Gerard C. 07 February 2013 (has links)
The effects of physical image quality on reading and on perceived image quality from CRT and hard-copy displays were studied in this experiment. The results showed that as the image quality of a display increased, indicated by an increase in the value of the MTFA, reading speed increased and subjective image quality ratings increased. This change in reading speed and perceived image quality occurred in the hard-copy as well as in the soft-copy condition. Image quality, therefore, is concluded to be the major determinant of subjects' performance with respect to displayed information. This implies that if the image quality of the displayed text is the same across the display technologies used, subjects will read from CRT displays as fast as from hard-copy displays. / Master of Science
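The MTFA referred to above is the area between a display's modulation transfer function (MTF) and the observer's contrast threshold function (CTF), accumulated where the MTF exceeds the threshold. The sketch below shows one way such a value could be computed numerically; the curves and numbers are invented for illustration and are not those measured in the study.

```python
import numpy as np

def mtfa(freq, mtf, ctf):
    """Modulation Transfer Function Area: area between the display MTF and the
    observer's contrast threshold function, counted only where the MTF lies above
    the threshold, integrated over spatial frequency (trapezoidal rule)."""
    excess = np.clip(np.asarray(mtf) - np.asarray(ctf), 0.0, None)
    return float(np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(np.asarray(freq))))

# Invented curves for illustration: a sharper display has a slower-falling MTF.
freq = np.linspace(0.0, 20.0, 200)          # spatial frequency, cycles per degree
ctf = 0.02 * np.exp(0.15 * freq)            # threshold contrast rises with frequency
print(mtfa(freq, np.exp(-freq / 12), ctf))  # sharper display -> larger MTFA
print(mtfa(freq, np.exp(-freq / 6), ctf))   # blurrier display -> smaller MTFA
```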
172

The evaluation of chest images compressed with JPEG and wavelet techniques

Wen, Cathlyn Y. 22 August 2008 (has links)
Image compression reduces the amount of space necessary to store digital images and allows quick transmission of images to other hospitals, departments, or clinics. However, the degradation of image quality due to compression may not be acceptable to radiologists or it may affect diagnostic results. A preliminary study was conducted using several chest images showing common lung diseases, compressed with JPEG and wavelet techniques at various ratios. Twelve board-certified radiologists were recruited to perform two types of experiments. In the first part of the experiment, radiologists rated the presence of lung disease, confidence in the presence of lung disease, severity of lung disease, confidence in the severity of lung disease, and difficulty of making a diagnosis. The six images presented were either uncompressed or compressed at 32:1 or 48:1 compression ratios. In the second part of the experiment, radiologists were asked to make subjective ratings by comparing the image quality of the uncompressed version of an image with the compressed version of the same image, and judging the acceptability of the compressed image for diagnosis. The second part examined a finer range of compression ratios (8:1, 16:1, 24:1, 32:1, 44:1, and 48:1). In all cases, radiologists were able to judge the presence of lung disease and experienced little difficulty diagnosing the images. The perceptibility of image degradation increased as the compression ratio increased; however, among the levels of compression ratio tested, the quality of compressed images was judged to be only slightly worse than that of the original image. At higher compression ratios, JPEG images were judged to be less acceptable than wavelet-based images, but radiologists believed that all the images were still acceptable for diagnosis. These results should be interpreted carefully because only six original images were tested, but they indicate that compression ratios of up to 48:1 are acceptable using the two medically optimized compression methods, JPEG and wavelet techniques. / Master of Science
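As a rough sketch of how such compression ratios can be produced and measured, the snippet below compresses a stand-in grayscale image with baseline JPEG at a few quality settings and reports the achieved ratio. It is an illustration only: the random-noise image, the quality values, and the 8-bit grayscale reference size are assumptions, and the medically optimized JPEG and wavelet codecs used in the study are not reproduced here.

```python
import io
import numpy as np
from PIL import Image

def jpeg_compression_ratio(img, quality):
    """Compress with baseline JPEG at the given quality and report the achieved
    ratio, using the uncompressed size of the 8-bit grayscale image as reference."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return (img.width * img.height) / len(buf.getvalue())

# Hypothetical stand-in for a digitised chest radiograph; random noise compresses
# far worse than a real radiograph, so the printed ratios are only illustrative.
img = Image.fromarray((np.random.rand(2048, 2048) * 255).astype(np.uint8))
for q in (90, 50, 10):
    print(f"quality {q}: about {jpeg_compression_ratio(img, q):.1f}:1")
```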
173

Image Quality Assessment of 3D Synthesized Views

Tian, Shishun 22 March 2019 (has links)
Depth-Image-Based Rendering (DIBR) is a fundamental technology in several 3D-related applications, such as free viewpoint video (FVV), virtual reality (VR) and augmented reality (AR). However, new challenges have also arisen in assessing the quality of DIBR-synthesized views, since this process induces new types of distortions that are inherently different from the distortions caused by video coding. This work is dedicated to better evaluating the quality of DIBR-synthesized views in immersive multimedia. In chapter 2, we propose two completely no-reference (NR) metrics. The principle of the first NR metric, NIQSV, is to use a couple of opening and closing morphological operations to detect and measure distortions such as "blurry regions" and "crumbling". In the second NR metric, NIQSV+, we improve NIQSV by adding "black hole" and "stretching" detection. In chapter 3, we propose two full-reference metrics to handle the geometric distortions by using a dis-occlusion mask and a multi-resolution block matching method. In chapter 4, we present a new DIBR-synthesized image database with its associated subjective scores. This work focuses on the distortions induced only by different DIBR synthesis methods, which determine the quality of experience (QoE) of these DIBR-related applications. In addition, we also conduct a benchmark of state-of-the-art objective quality assessment metrics for DIBR-synthesized views on this database. Chapter 5 concludes the contributions of this thesis and gives some directions for future work.
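As a rough illustration of the morphological idea behind NIQSV, the sketch below applies a grayscale opening followed by a closing and measures how much the image changes; thin, crumbling artefacts typical of DIBR synthesis are strongly altered by these operations. The structuring-element size and the use of a plain mean absolute difference are assumptions for illustration, not the published metric.

```python
import numpy as np
from scipy import ndimage

def niqsv_like_score(img, size=5):
    """Sketch in the spirit of NIQSV: grayscale opening followed by closing, then the
    mean absolute difference from the original. Larger values suggest more crumbling
    or blur-like synthesis artefacts (lower quality). Not the published formulation."""
    img = np.asarray(img, dtype=np.float64)
    opened = ndimage.grey_opening(img, size=(size, size))
    smoothed = ndimage.grey_closing(opened, size=(size, size))
    return float(np.mean(np.abs(img - smoothed)))

# Hypothetical usage on a synthesized view loaded as a 2D grayscale array:
# score = niqsv_like_score(synthesized_view)
```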
174

Tuned aperture computed tomography (TACT): an investigation on the factors associated with its image quality for caries detection

De Abreu, Murillo Jose Nunes 03 1900 (has links)
Dissertation (PhD)--University of Stellenbosch, 2001. / ENGLISH ABSTRACT: The purpose of this investigation was to explore the multiple variables involved in TACT® image generation in an attempt to optimize this imaging modality for the diagnostic task of primary dental caries detection. The work is divided into seven phases in which the variables are evaluated individually. Teeth from the study samples were mounted in dental stone and imaged with a solid-state digital radiography sensor. As a requisite of TACT® imaging, multiple images of the teeth were acquired from different projection angles. These resulting basis images were then used to generate TACT® slices. Variables tested in the investigation included the number of iterative restorations to which the slices were submitted, the number of basis images, the angle formed between the basis images, the two- and three-dimensional distribution of the basis projections in space, and the method through which the slices were reconstructed. For all phases, observers were asked to assess the presence or absence of primary caries in the teeth imaged, using the TACT® slices treated with the different variables. Finally, to determine whether the best combination of variables produced a significant improvement in diagnostic performance, a comparison with conventional digital radiography images was carried out. No statistically significant differences were found in caries detection between TACT® slices submitted to different numbers of iterative restorations, reconstructed from basis images bearing different angular disparities or spatial distributions (in both two and three dimensions), or reconstructed through different methods. A statistically significant difference was detected between TACT® slices reconstructed from different numbers of basis projections. The final comparison showed that TACT® was not statistically superior to conventional digital radiography for the task of caries detection. The results of this investigation suggest that, although TACT® has been shown to be useful in many tasks performed in dentistry, its application in caries detection is not essential inasmuch as there are modalities that are simpler, more practical, less expensive, and that expose the patient to lower radiation doses. Keywords: TACT, tomosynthesis, image reconstruction, digital radiography, caries detection, ROC analysis, analysis of variance
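TACT slices are reconstructed from the basis projections by registering and combining them; a minimal shift-and-add tomosynthesis sketch is given below, assuming the per-projection shifts that bring the plane of interest into registration are already known. Real TACT additionally estimates the projection geometry from a fiducial and applies iterative restoration, which is not shown.

```python
import numpy as np
from scipy import ndimage

def shift_and_add(basis_images, shifts):
    """Minimal shift-and-add tomosynthesis: shift each basis projection by the
    displacement that registers the plane of interest, then average the stack.
    Structures in that plane reinforce; off-plane structures blur out."""
    acc = np.zeros_like(basis_images[0], dtype=np.float64)
    for img, (dy, dx) in zip(basis_images, shifts):
        acc += ndimage.shift(np.asarray(img, dtype=np.float64), (dy, dx),
                             order=1, mode="nearest")
    return acc / len(basis_images)

# Hypothetical usage: a different slice depth simply uses a different set of shifts.
# slice_img = shift_and_add(basis_images, shifts_for_chosen_depth)
```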
175

Secure digital documents using Steganography and QR Code

Hassanein, Mohamed Sameh January 2014 (has links)
With the increasing use of the Internet, several problems have arisen regarding the processing of electronic documents. These include content filtering and content retrieval/search. Moreover, document security has taken centre stage, including copyright protection, broadcast monitoring, etc. There is an acute need for an effective tool which can establish the identity, location and time of a document's creation, so that it can be determined whether or not the contents of the document were tampered with after creation. Owing to the sensitivity of the large amounts of data processed on a daily basis, verifying the authenticity and integrity of a document is more important now than it ever was. Unsurprisingly, document authenticity verification has become a centre of attention in the research world. Consequently, this research is concerned with creating a tool which deals with the above problem. This research proposes the use of a Quick Response Code as a message carrier for Text Key-print. Text Key-print is a novel method which employs the basic elements of the language (i.e. characters of the alphabet) in order to achieve authenticity of electronic documents through the transformation of their physical structure into a logically structured relationship. The resultant dimensional matrix is then converted into a binary stream and encapsulated with a serial number or URL inside a Quick Response Code (QR code) to form a digital fingerprint mark. For hiding a QR code, two image steganography techniques were developed, based upon the spatial and the transform domains. In the spatial domain, three methods were proposed and implemented based on the least significant bit insertion technique and the use of a pseudorandom number generator to scatter the message into a set of arbitrary pixels. These methods utilise the three colour channels of the RGB model in order to embed one, two or three bits per eight-bit channel, which results in three different hiding capacities. The second technique is an adaptive approach in the transform domain, where a threshold value is calculated at a predefined embedding location in order to determine the embedding strength. The quality of the generated stego images was evaluated using both objective (PSNR) and subjective (DSCQS) methods to ensure the reliability of the proposed methods. The experimental results revealed that PSNR is not a strong indicator of perceived stego image quality, although it is not a poor indicator of the actual quality of stego images either. Since the visual difference between the cover and the stego image must be absolutely imperceptible to the human visual system, it was logical to ask human observers with different qualifications and experience in the field of image processing to evaluate the perceived quality of the cover and the stego image. The subjective responses were analysed using statistical measurements to describe the distribution of the scores given by the assessors. Thus, the proposed scheme presents an alternative approach to protecting digital documents, rather than the traditional techniques of digital signature and watermarking.
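A minimal sketch of the first spatial-domain idea described above, with the objective quality measure used in the evaluation, is given below: one bit is hidden in the least significant bit of pixel values selected by a seeded pseudorandom generator, and PSNR is computed between cover and stego images. The channel handling, the seed-as-key convention, and the random stand-in cover image are assumptions for illustration, not the exact scheme in the thesis.

```python
import numpy as np

def embed_lsb(cover, bits, seed=42):
    """Scatter a bit stream into the least significant bits of pseudorandomly chosen
    pixel values (one bit per 8-bit channel value). The seed acts as a shared key."""
    stego = cover.copy().ravel()
    rng = np.random.default_rng(seed)
    positions = rng.choice(stego.size, size=len(bits), replace=False)
    stego[positions] = (stego[positions] & 0xFE) | np.asarray(bits, dtype=stego.dtype)
    return stego.reshape(cover.shape), positions

def psnr(cover, stego):
    """Peak signal-to-noise ratio between cover and stego images (8-bit data)."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

cover = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in cover image
payload = np.random.randint(0, 2, 1024)                       # e.g. a serialised QR code
stego, _ = embed_lsb(cover, payload)
print(round(psnr(cover, stego), 2), "dB")                      # high PSNR: change is imperceptible
```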
176

A Multi Sensor System for a Human Activities Space: Aspects of Planning and Quality Measurement

Chen, Jiandan January 2008 (has links)
In our aging society, the design and implementation of a high-performance autonomous distributed vision information system for autonomous physical services becomes ever more important. In line with this development, the proposed Intelligent Vision Agent System, IVAS, is able to automatically detect and identify a target for a specific task by surveying a human activities space. The main subject of this thesis is the optimal configuration of a sensor system meant to capture the target objects and their environment within certain required specifications. The thesis thus discusses how a discrete sensor causes a depth spatial quantisation uncertainty, which significantly contributes to the 3D depth reconstruction accuracy. For a sensor stereo pair, the quantisation uncertainty is represented by the intervals between the iso-disparity surfaces. A mathematical geometry model is then proposed to analyse the iso-disparity surfaces and optimise the sensors' configurations according to the required constraints. The thesis also introduces a dithering algorithm which significantly reduces the depth reconstruction uncertainty. This algorithm ensures high depth reconstruction accuracy from a few images captured by low-resolution sensors. To ensure the visibility needed for surveillance, tracking, and 3D reconstruction, the thesis introduces constraints on the target space, the stereo pair characteristics, and the depth reconstruction accuracy. The target space, the space in which human activity takes place, is modelled as a tetrahedron, and a field of view in spherical coordinates is proposed. The minimum number of stereo pairs necessary to cover the entire target space and the arrangement of the stereo pairs' movement are optimised through integer linear programming. In order to better understand human behaviour and perception, the proposed adaptive measurement method makes use of a fuzzily defined variable, FDV. The FDV approach enables the estimation of a quality index based on qualitative and quantitative factors. The suggested method uses a neural network as a tool with a learning function that allows the integration of the human factor into a quantitative quality index. The thesis consists of two parts, where Part I gives a brief overview of the applied theory and research methods used, and Part II contains the five papers included in the thesis.
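The interval between iso-disparity surfaces mentioned above follows directly from the pinhole stereo relation d = f·B/Z with disparity quantised to whole pixels; the sketch below computes that spacing for a parallel stereo pair. The focal length, baseline, and depths are invented numbers for illustration, not values from the thesis.

```python
def depth_quantisation_interval(focal_px, baseline_m, depth_m):
    """Spacing between neighbouring iso-disparity surfaces for a parallel stereo pair.
    Disparity d = f*B/Z is quantised to whole pixels, so depths falling between two
    consecutive integer disparities cannot be distinguished; the gap grows roughly
    with the square of the depth."""
    f_b = focal_px * baseline_m
    disparity = f_b / depth_m                      # disparity (pixels) at this depth
    return f_b / disparity - f_b / (disparity + 1.0)

# Illustrative configuration: 1000 px focal length, 0.5 m baseline.
for z in (2.0, 4.0, 8.0):
    print(f"{z} m -> interval {depth_quantisation_interval(1000, 0.5, z):.4f} m")
```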
177

Adaptive Image Quality Improvement with Bayesian Classification for In-line Monitoring

Yan, Shuo 01 August 2008 (has links)
Development of an automated method for classifying digital images using a combination of image quality modification and Bayesian classification is the subject of this thesis. The specific example is the classification of images obtained by monitoring molten plastic in an extruder. These images were to be classified into two groups: the "with particle" (WP) group, which showed contaminant particles, and the "without particle" (WO) group, which did not. Previous work effected the classification using only an adaptive Bayesian model. This work combines adaptive image quality modification with the adaptive Bayesian model. The first objective was to develop an off-line automated method for determining how to modify each individual raw image to obtain the quality required for improved classification results. This was done in a novel way by defining image quality in terms of probability using a Bayesian classification model. The Nelder-Mead simplex method was then used to optimize the quality. The result was a "Reference Image Database" which was used as a basis for accomplishing the second objective. The second objective was to develop an in-line method for modifying the quality of new images to improve classification over that which could be obtained previously. Case-based reasoning used the Reference Image Database to locate reference images similar to each new image. The database supplied instructions on how to modify the new image to obtain a better quality image. Experimental verification of the method used a variety of images from the extruder monitor, including images purposefully produced to be of wide diversity. Image quality modification was made adaptive by adding new images to the Reference Image Database. When combined with the adaptive classification previously employed, error rates decreased from about 10% to less than 1% for most images. For one unusually difficult set of images that exhibited very low local contrast of particles against their background, it was necessary to split the Reference Image Database into two parts on the basis of a critical value for local contrast. The end result of this work is a very powerful, flexible and general method for improving the classification of digital images that utilizes both image quality modification and classification modeling.
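To make the coupling between image quality and classification probability concrete, the sketch below optimises a simple brightness/contrast modification with the Nelder-Mead simplex method so that a Bayesian classifier's posterior for the correct class is maximised. The gain/offset parameterisation and the `classify_prob` placeholder are assumptions for illustration; the thesis's adaptive Bayesian model and image modifications are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def negative_quality(params, image, classify_prob):
    """Image 'quality' defined through the classifier itself: apply a candidate
    gain/offset modification and return the negative posterior probability of the
    correct class, so that minimisation improves classification."""
    gain, offset = params
    modified = np.clip(gain * image.astype(np.float64) + offset, 0, 255)
    return -classify_prob(modified)

# Hypothetical usage with a placeholder Bayesian model exposing prob_correct(image):
# result = minimize(negative_quality, x0=[1.0, 0.0],
#                   args=(raw_image, bayes_model.prob_correct), method="Nelder-Mead")
# best_gain, best_offset = result.x
```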
178

Development of the fast steering secondary mirror assembly of GMT

Lee, Sungho, Cho, Myung K., Park, Chan, Han, Jeong-Yeol, Jeong, Ueejeong, Yoon, Yang-noh, Song, Je Heon, Park, Byeong-Gon, Dribusch, Christoph, Park, Won Hyun, Jun, Youra, Yang, Ho-Soon, Moon, Il-Kwon, Oh, Chang Jin, Kim, Ho-Sang, Lee, Kyoung-Don, Bernier, Robert, Alongi, Chris, Rakich, Andrew, Gardner, Paul, Dettmann, Lee, Rosenthal, Wylie 22 July 2016 (has links)
The Giant Magellan Telescope (GMT) will feature two Gregorian secondary mirrors, an adaptive secondary mirror (ASM) and a fast-steering secondary mirror (FSM). The FSM has an effective diameter of 3.2 m and is built as seven 1.1 m diameter circular segments, which are conjugated 1:1 to the seven 8.4 m segments of the primary. Each FSM segment has a tip-tilt capability for fine co-alignment of the telescope subapertures and fast guiding to attenuate telescope wind shake and mount control jitter. This tip-tilt capability thus enhances the performance of the telescope in the seeing-limited observation mode. As the first stage of the FSM development, a Phase 0 study was conducted to develop a program plan detailing the design and manufacturing process for the seven FSM segments. The FSM development plan was matured through an internal review by the GMTO-KASI team in May 2016 and fully assessed by an external review in June 2016. In this paper, we present the technical aspects of the FSM development plan.
179

Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

Lluis-Gomez, Alexis L. January 2015 (has links)
One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no up-to-date imaging technology has been able to reproduce its capabilities accurately. Matching the extraordinary capabilities of the human eye has become a crucial challenge in digital imaging, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate imaging reproduction and analytic capabilities. Over decades, researchers have tried to solve the colour constancy problem, as well as to extend the dynamic range of digital imaging devices, by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the complexity of the mechanisms by which the human visual system achieves effective colour constancy and dynamic range. The aim of the research presented in this thesis is to enhance the overall image quality within an image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this set of image-processing algorithms shows that, when used within an image signal processor, digital camera devices can mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.
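As a point of reference for the colour constancy problem discussed above, the sketch below implements the classical grey-world baseline, which assumes the average scene reflectance is achromatic and rescales each channel accordingly. This is a standard baseline shown for illustration, not one of the algorithms proposed in the thesis.

```python
import numpy as np

def grey_world(img):
    """Grey-world colour constancy: scale each channel so that the three channel
    means become equal, removing a global colour cast under the assumption that
    the average scene reflectance is grey."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Hypothetical usage: balanced = grey_world(raw_frame)  # raw_frame is an H x W x 3 uint8 image
```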
180

Experimental study of optimization in CR and DR digital mammography systems

Perez, Alessandra Maia Marques Martinez 29 January 2015 (has links)
The recent introduction and rapid advance of digital mammography in Brazil as a breast cancer screening tool, and the evidence of new optimization conditions compared to conventional screen-film mammography, require that novel quality parameters be added and studied and that optimization conditions be revisited. The objective of this work was to determine the optimized radiographic technique for two detection systems (CR and DR) in use in three mammography units: Mammomat 3000 Nova (Siemens), Senographe DMR (GE) and Senographe 2000D (GE). Optimization was conducted for various combinations of technique factors and breast phantom configurations, such as kilovoltage settings (26 to 32 kV), target/filter combinations (Mo/Mo, Mo/Rh and Rh/Rh), breast-equivalent material of various thicknesses (2 to 8 cm) and simulated mass and calcification lesions, using a figure of merit (FOM) as the parameter. It was verified that the anode/filter combination generating the most energetic spectrum in each unit yielded the highest FOM values for all voltages and phantom thicknesses, owing to dose reduction. The anode/filter combinations that gave these results were Mo/Rh for the Siemens unit and Rh/Rh for both GE units, corresponding to the most energetic spectra of each unit. An increasing trend was also observed in the kV that maximizes the FOM as phantom thickness increases.
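A common form of the figure of merit used in this kind of optimization is the squared contrast-to-noise ratio of a simulated lesion divided by the mean glandular dose, so that a higher FOM means better detectability per unit dose; a sketch is given below. The exact FOM definition, ROI choices and dose estimation used in the thesis may differ, so treat this as an assumed, illustrative form.

```python
import numpy as np

def figure_of_merit(lesion_roi, background_roi, mean_glandular_dose):
    """Illustrative mammography FOM: squared contrast-to-noise ratio of a simulated
    lesion divided by the mean glandular dose (e.g. in mGy)."""
    contrast = float(np.mean(lesion_roi)) - float(np.mean(background_roi))
    noise = float(np.std(background_roi))
    cnr = contrast / noise
    return cnr ** 2 / mean_glandular_dose

# Hypothetical usage: ROIs extracted from phantom images acquired at each kV and
# target/filter setting, with the mean glandular dose estimated from incident air kerma.
# fom = figure_of_merit(lesion_roi, background_roi, mgd_mGy)
```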
