91

IMAGE AND VIDEO QUALITY ASSESSMENT WITH APPLICATIONS IN FIRST-PERSON VIDEOS

Chen Bai (6760616) 12 August 2019 (has links)
First-person videos (FPVs) captured by wearable cameras provide a huge amount of visual data. FPVs have different characteristics compared to broadcast videos and mobile videos. The video quality of FPVs is influenced by motion blur, tilt, rolling shutter and exposure distortions. In this work, we design image and video quality assessment methods applicable to FPVs.

Our video quality assessment mainly focuses on three quality problems. The first problem is video frame artifacts, including motion blur, tilt and rolling shutter, caused by the heavy and unstructured motion in FPVs. The second problem is exposure distortions. Videos suffer from exposure distortions when the camera sensor is not exposed to the proper amount of light, which is often caused by bad environmental lighting or capture angles. The third problem is the increased blurriness after video stabilization. The stabilized video is perceptually more blurry than its original because the masking effect of motion is no longer present.

To evaluate video frame artifacts, we introduce a new strategy for image quality estimation, called mutual reference (MR), which uses the information provided by overlapping content to estimate image quality. The MR strategy is applied to FPVs by partitioning temporally nearby frames with similar content into sets, and estimating their visual quality using their mutual information. We propose one MR quality estimator, Local Visual Information (LVI), that estimates the relative quality between two overlapping images.

To alleviate exposure distortions, we propose a controllable illumination enhancement method that adjusts the amount of enhancement with a single knob. The knob can be controlled by our proposed over-enhancement measure, Lightness Order Measure (LOM). Since visual quality is an inverted-U-shaped function of the amount of enhancement, our design controls the amount of enhancement so that the image is enhanced to the peak visual quality.

To estimate the increased blurriness after stabilization, we propose a visibility-inspired temporal pooling (VTP) mechanism. VTP models the motion masking effect on perceived video blurriness as the influence of the visibility of a frame on the temporal pooling weight of that frame's quality score. The measure of visibility is estimated as the proportion of spatial details that are visible to human observers.
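The abstract does not spell out the VTP formula, but the idea of weighting each frame's quality score by how much of its spatial detail remains visible can be illustrated with a minimal sketch (Python/NumPy). The per-frame scores, visibility estimates and the normalization below are illustrative assumptions, not the thesis's actual measure.

```python
import numpy as np

def visibility_weighted_pooling(frame_scores, visibility):
    """Pool per-frame quality scores into one video-level score.

    Minimal sketch of visibility-inspired temporal pooling: each frame's score
    is weighted by the fraction of its spatial detail assumed visible to an
    observer (1.0 = fully visible, 0.0 = fully masked by motion). The exact
    weighting used in the thesis may differ.
    """
    frame_scores = np.asarray(frame_scores, dtype=float)
    visibility = np.asarray(visibility, dtype=float)
    weights = visibility / (visibility.sum() + 1e-12)  # normalize the weights
    return float(np.sum(weights * frame_scores))

# Hypothetical example: frames heavily masked by motion contribute less.
scores = [0.9, 0.4, 0.8, 0.5]   # per-frame quality scores (assumed)
vis = [0.9, 0.2, 0.8, 0.3]      # per-frame visibility estimates (assumed)
print(visibility_weighted_pooling(scores, vis))
```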
92

Video de-interlacing with support from a feathering effect detector and an artifact concentration index

Martins, André Luis 27 February 2018 (has links)
This work presents a new solution for converting interlaced video fields into progressive frames, a process known as video de-interlacing. Current state-of-the-art de-interlacing algorithms, in an attempt to avoid generating "feathering effect" video artifacts, tend to introduce blurring and degrade image quality. The objective is to improve the quality of the frames produced by the de-interlacing process by combining two existing processes, one intra-field and the other inter-field. The proposed strategy is based on identifying artifacts generated by an inter-field de-interlacing process, supported by a "feathering effect" artifact detector and by analysis of the data generated by this detector using an artifact agglomeration index, called the "Spot Index" in this work. The regions affected by the feathering effect are identified and replaced by the equivalent regions extracted from a frame produced by an intra-field process known as "Edge-based Line Averaging" (ELA). Tests have demonstrated that the proposed strategy is able to produce de-interlaced frames with higher visual quality than those obtained with a single type of method applied globally, because it is able to explore, extract and combine the strengths of each method. A statistical hypothesis evaluation has shown that the proposed strategy brings considerable advantages over globally applied de-interlacing techniques.
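As a rough illustration of the combination idea (not the thesis's detector or Spot Index), the sketch below weaves two fields into an inter-field frame, builds an ELA intra-field frame, flags pixels with strong feathering-like line differences, and substitutes the ELA result there. The feathering measure and the threshold are assumptions.

```python
import numpy as np

def weave(top_field, bottom_field):
    """Inter-field de-interlacing: interleave two fields (prone to feathering on motion)."""
    h, w = top_field.shape
    frame = np.empty((2 * h, w), dtype=float)
    frame[0::2] = top_field
    frame[1::2] = bottom_field
    return frame

def ela(field):
    """Intra-field de-interlacing (Edge-based Line Averaging): fill missing lines by
    averaging along whichever of three directions has the smallest pixel difference."""
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=float)
    frame[0::2] = field
    up, down = frame[0:-2:2], frame[2::2]          # lines above/below each missing line
    best = 0.5 * (up + down)                       # vertical average as the default
    best_diff = np.abs(up - down)
    for shift in (-1, 1):                          # the two diagonal directions
        cand = 0.5 * (np.roll(up, shift, axis=1) + np.roll(down, -shift, axis=1))
        diff = np.abs(np.roll(up, shift, axis=1) - np.roll(down, -shift, axis=1))
        better = diff < best_diff
        best = np.where(better, cand, best)
        best_diff = np.where(better, diff, best_diff)
    frame[1:-1:2] = best
    frame[-1] = frame[-2]                          # last missing line: duplicate the line above
    return frame

def combine(weaved, ela_frame, thresh=40.0):
    """Replace pixels showing strong feathering (a line deviating strongly from the mean
    of its neighbours in the weaved frame) by the ELA result. Threshold is hypothetical."""
    feather = np.abs(weaved[1:-1] - 0.5 * (weaved[:-2] + weaved[2:]))
    mask = np.zeros(weaved.shape, dtype=bool)
    mask[1:-1] = feather > thresh
    return np.where(mask, ela_frame, weaved)

# Hypothetical usage: two 4x8 fields of a horizontally moving ramp.
top = np.tile(np.linspace(0, 255, 8), (4, 1))
bottom = np.roll(top, 2, axis=1)
progressive = combine(weave(top, bottom), ela(top))
```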
93

Correlation between the physical performance measured from detectors and the diagnostic image quality in digital mammography

Perez-Ponce, Hector 19 May 2009 (has links)
In digital mammography, two approaches exist to estimate image quality. In the first approach, a human observer assesses lesion detection in mammograms. Unfortunately, such quality assessment is subject to inter-observer variability and requires a large amount of time and human resources. In the second approach, objective, human-independent parameters related to image spatial resolution and noise (MTF and NPS) are used to evaluate digital detector performance; even though these parameters are objective, they are not directly related to lesion detection. A method for image quality assessment that is both human-independent and directly related to lesion detection is very important for the optimal use of mammographic units. This PhD thesis presents the steps towards such a method: the computation of realistic virtual images using an "X-ray source/digital detector" model that takes into account the physical parameters of the detector (spatial resolution and noise measurements) measured under clinical conditions. From the results obtained in this work, we have contributed to establishing the link between the physical characteristics of detectors and the clinical quality of the image under usual exposure conditions. Furthermore, we suggest using our model to create virtual images in order to rapidly determine the optimal conditions in mammography, which is usually a long and tedious experimental process. This is an essential aspect for the radioprotection of patients, especially in the context of organized mass screening for breast cancer.
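For readers unfamiliar with the detector metrics involved, a textbook-style estimator of the 2D noise power spectrum (NPS) from flat-field regions of interest looks roughly like the sketch below. This is illustrative only: the thesis measures MTF and NPS on clinical mammography units, and its exact procedure (detrending, ROI size, normalization) may differ.

```python
import numpy as np

def noise_power_spectrum(rois, pixel_pitch_mm=0.1):
    """Estimate a 2D NPS from a stack of flat-field ROIs (shape: n_rois x ny x nx).

    Standard estimator: average the squared FFT magnitude of mean-subtracted ROIs
    and scale by pixel area over ROI size. Pixel pitch here is an assumed value.
    """
    rois = np.asarray(rois, dtype=float)
    n_rois, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove each ROI's mean
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    nps = spectra.mean(axis=0) * (pixel_pitch_mm ** 2) / (nx * ny)
    freqs = np.fft.fftfreq(nx, d=pixel_pitch_mm)               # cycles/mm along one axis
    return np.fft.fftshift(nps), np.fft.fftshift(freqs)

# Hypothetical usage on simulated flat fields:
rng = np.random.default_rng(0)
rois = 1000 + 30 * rng.standard_normal((16, 128, 128))
nps2d, f = noise_power_spectrum(rois)
```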
94

Enhancement in Low-Dose Computed Tomography through Image Denoising Techniques: Wavelets and Deep Learning

Unknown Date (has links)
Reducing the amount of radiation in X-ray computed tomography has been an active area of research in recent years. Reducing the radiation has the downside of degrading the quality of the CT scans by increasing the noise level. Therefore, techniques must be employed to enhance image quality. In this research, we approach the denoising problem using two classes of algorithms, and we reduce the noise in CT scans acquired with 75% less dose to the patient compared to normal-dose scans. Initially, we implemented wavelet denoising to successfully reduce the noise in low-dose X-ray computed tomography (CT) images. The denoising was improved by finding the optimal threshold value instead of a non-optimal, hand-selected value. The mean structural similarity (MSSIM) index was used as the objective function for the optimization. The denoising performance of combinations of wavelet families, wavelet orders, decomposition levels, and thresholding methods was investigated. Results of this study reveal the best combinations of wavelet orders and decomposition levels for low-dose CT denoising. In addition, a new shrinkage function is proposed that provides better denoising results than the traditional ones without requiring a selected parameter. Alternatively, convolutional neural networks with different architectures were employed to solve the same denoising problem. This new approach improved denoising even further compared to the wavelet denoising. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2018. / FAU Electronic Theses and Dissertations Collection
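A minimal sketch of the wavelet part, assuming PyWavelets and scikit-image are available: soft-threshold the detail coefficients and pick the threshold that maximizes SSIM against a reference image. The wavelet, decomposition level and threshold grid are placeholder choices (the thesis searches over families, orders, levels and thresholding methods), and a clean reference is only available in simulation.

```python
import numpy as np
import pywt
from skimage.metrics import structural_similarity as ssim

def wavelet_denoise(image, threshold, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a 2D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, threshold, mode="soft") for d in details)
        for details in coeffs[1:]
    ]
    rec = pywt.waverec2(denoised, wavelet)
    return rec[: image.shape[0], : image.shape[1]]   # crop any reconstruction off-by-one

def best_threshold(noisy, reference, thresholds):
    """Grid-search the threshold that maximizes SSIM against the reference (sketch only)."""
    scores = [
        ssim(reference, wavelet_denoise(noisy, t),
             data_range=reference.max() - reference.min())
        for t in thresholds
    ]
    return thresholds[int(np.argmax(scores))]

# Hypothetical usage with a synthetic noisy slice:
rng = np.random.default_rng(1)
clean = np.pad(np.ones((64, 64)), 32)                # simple square phantom
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
t_opt = best_threshold(noisy, clean, np.linspace(0.05, 0.6, 12))
restored = wavelet_denoise(noisy, t_opt)
```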
95

Perceptual methods for video coding

Unknown Date (has links)
The main goal of video coding algorithms is to achieve high compression efficiency while maintaining the quality of the compressed signal at the highest level. The human visual system is the ultimate receiver of the compressed signal and the final judge of its quality. This dissertation presents work towards an optimal video compression algorithm based on the characteristics of our visual system. By modeling phenomena such as backward temporal masking and motion masking, we developed algorithms that are implemented in state-of-the-art video encoders. The result of using our algorithms is visually lossless compression with improved efficiency, as verified by standard subjective quality and psychophysical tests. Savings in bitrate compared to the High Efficiency Video Coding / H.265 reference implementation are up to 45%. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2014. / FAU Electronic Theses and Dissertations Collection
96

Reporting on radiographic images in after-hours trauma units: Experiences of radiographers and medical practitioners

Van der Venter, Riaan January 2016 (has links)
Globally there is a lack of radiologists, which results in unreported radiographic examinations or delays in reporting on radiographic images, even in emergency situations. To mitigate and alleviate the situation, and to optimise the utilisation of radiographers, a red dot system was introduced in the United Kingdom, which later aided in transforming the role of radiographers in terms of formal reporting of various radiographic examinations. Although there is a shortage of medical practitioners and radiologists in South Africa, this extended role has not yet been realised for radiographers. At present, radiographers and medical practitioners work in collaboration to interpret and report on radiographic examinations informally, to facilitate effective and efficient patient management, but this is done illegally because the regulations defining the scope of the profession of radiography do not allow such practice, putting radiographers and organisations at risk of litigation. In order to gain in-depth knowledge of the phenomenon, and to enable the researcher to provide recommendations to the Professional Board of Radiography and Clinical Technology (PBRCT) of the Health Professions Council of South Africa (HPCSA), a qualitative, exploratory, descriptive, and contextual research study was undertaken. Radiographers and medical practitioners were interviewed in order to elicit rich descriptions of their experiences regarding reporting of trauma-related radiographic images in after-hours trauma units. Data were gathered using in-depth semi-structured interviews, and the data were analysed using Tesch's method of thematic synthesis. Three themes emerged from the data, namely the challenges that radiographers and medical practitioners respectively face in the after-hours trauma units with regard to reporting of trauma-related radiographs, and suggestions proposed to optimise the participation of radiographers with regard to trauma-related radiographs in these units. A thick description and literature control were done using quotes from participants. Measures to ensure trustworthiness and ethical research practices were also implemented. Thereafter, recommendations were put forward for the PBRCT of the HPCSA, using current literature and inferences made from the findings of the study.
97

A comparative study on water vapor extracted from interferometric SAR images and synchronized data. / CUHK electronic theses & dissertations collection

January 2011 (has links)
Cheng, Shilai. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2011. / Includes bibliographical references (leaves 138-150). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstract also in Chinese.
98

Iterative reconstruction in CT: optimization of image quality and dose for personalized care

Greffier, Joël 17 November 2016 (has links)
The increasing number of CT scanners and the cumulative dose delivered lead to a potential risk of stochastic effects. To minimize this risk, the principles of justification and optimization must be rigorously applied. Optimization aims to deliver the lowest possible dose while maintaining image quality suitable for an accurate diagnosis. This is a complex task, which requires continually finding the compromise between the dose delivered and the resulting image quality. To achieve this goal, several CT technological evolutions have been developed. The two predominant developments are tube current modulation and iterative reconstruction (IR). The former relies on the patient's attenuation, the latter on advanced mathematical approaches. Using IR allows one to maintain equivalent image quality indices while reducing the dose. However, it changes the composition and texture of the image and requires appropriate metrics for evaluation. The aim of this thesis was to evaluate the impact of using IR on dose reduction and image quality, in order to propose, in routine for all patients, protocols with the lowest possible dose and an image quality suitable for diagnosis. The first part of the thesis addresses the compromise between delivered dose and image quality in CT; the image quality metrics and dosimetric indicators to use, as well as the principle and contribution of IR, are presented. The second part describes the three steps carried out in this thesis to achieve these objectives. The third part consists of a scientific production of seven papers. The first paper presents the global optimization methodology for establishing routine Low Dose protocols using moderate levels of IR. The second paper assesses the impact and contribution of IR to image quality at very low dose levels. The third and fourth papers show the interest of adapting or proposing protocols optimized according to the patient's morphology. Finally, the last three papers illustrate the development of Very Low Dose protocols for structures with high spontaneous contrast; for these protocols, doses are close to those of radiographic examinations, with high levels of IR. The optimization process implemented has significantly reduced doses. Despite the change in texture and composition of the images, the quality of the images obtained for all protocols was judged satisfactory for diagnosis by the radiologists. The routine use of IR nevertheless requires specific evaluation and a learning period for radiologists.
99

Adapting iris feature extraction and matching to the local and global quality of iris image

Cremer, Sandra 09 October 2012 (has links)
Iris recognition has become one of the most reliable and accurate biometric systems available. However, its robustness to degradations of the input images is limited. Generally, iris-based systems can be divided into four steps: segmentation, normalization, feature extraction and matching. Degradations of the input image quality can have repercussions on all of these steps. For instance, they make the segmentation more difficult, which can result in normalized iris images that contain distortion or undetected artefacts. Moreover, the amount of information available for matching can be reduced. In this thesis we propose methods to improve the robustness of the feature extraction and matching steps to degraded input images. We work with two algorithms for these two steps. They are both based on convolution with 2D Gabor filters but use different techniques for matching. The first part of our work aims at controlling the quality and quantity of information selected in the normalized iris images for matching. To this end, we defined local and global quality metrics that measure the amount of occlusion and the richness of texture in iris images. We use these measures to determine the position and the number of regions to exploit for feature extraction and matching. In the second part, we study the link between image quality and the performance of the two recognition algorithms just described. We show that the second one is more robust to degraded images that contain artefacts, distortion or a poor iris texture. Finally, we propose a complete system for iris recognition that combines the use of our local and global quality metrics to optimize recognition performance.
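A generic Gabor-phase iris code with a quality mask, sketched with scikit-image, gives a flavor of the feature extraction and matching steps; the filter parameters and the masked Hamming distance below are illustrative assumptions, not the thesis's two algorithms.

```python
import numpy as np
from skimage.filters import gabor

def iris_code(norm_iris, frequency=0.2,
              thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Phase-quantize 2D Gabor responses of a normalized iris image into a binary code.

    Each orientation yields two bit planes (sign of the real and imaginary responses).
    Frequency and orientations are assumed values for illustration.
    """
    bits = []
    for theta in thetas:
        real, imag = gabor(norm_iris, frequency=frequency, theta=theta)
        bits.append(real > 0)
        bits.append(imag > 0)
    return np.stack(bits)

def hamming_distance(code_a, code_b, mask):
    """Fractional Hamming distance restricted to bits the quality mask marks as usable."""
    usable = np.broadcast_to(mask, code_a.shape)
    disagree = (code_a != code_b) & usable
    return disagree.sum() / max(usable.sum(), 1)

# Hypothetical usage: mask is True where local quality metrics flag the texture usable.
rng = np.random.default_rng(2)
iris_a = rng.random((64, 256))                      # stand-in for a normalized iris
mask = np.ones((64, 256), dtype=bool)
dist = hamming_distance(iris_code(iris_a), iris_code(iris_a), mask)   # 0.0 for identical codes
```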
100

Detecting scene changes using synthetic aperture radar interferometry / Mark Preiss.

Preiss, Mark January 2004 (has links)
"November 2004" / Includes bibliographical references (leaves 283-293) / xxix, 293 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Thesis (Ph.D.)--University of Adelaide, School of Electrical and Electronic Engineering, 2004
