1

A Similarity-based Test Case Quality Metric using Historical Failure Data

Noor, Tanzeem Bin January 2015 (has links)
A test case is a set of input data and expected output, designed to verify whether the system under test satisfies all requirements and works correctly. An effective test case reveals a fault when the actual output differs from the expected output (i.e., the test case fails). The effectiveness of test cases is estimated using quality metrics, such as code coverage, size, and historical fault detection. Prior studies have shown that previously failing test cases are highly likely to fail again in subsequent releases and are therefore ranked higher. However, in practice, a failing test case may not be identical to a previously failed test case, only quite similar to it. In this thesis, I have defined a metric that estimates test case quality using its similarity to previously failing test cases. Moreover, I have evaluated the effectiveness of the proposed test quality metric through a detailed empirical study. / February 2016
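A minimal sketch of what such a similarity-based score could look like (the token-set representation, the Jaccard measure, and all identifiers here are illustrative assumptions, not the metric actually defined in the thesis):

```python
# Illustrative sketch: score a test case by its maximum textual similarity
# to previously failing test cases (Jaccard over token sets). The similarity
# measure and all names are assumptions for illustration only.

def tokens(test_case: str) -> set[str]:
    """Split a test case's text (inputs + expected output) into a token set."""
    return set(test_case.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def quality_score(candidate: str, failing_history: list[str]) -> float:
    """Quality = highest similarity to any historically failing test case."""
    cand = tokens(candidate)
    return max((jaccard(cand, tokens(f)) for f in failing_history), default=0.0)

# Usage: rank candidates so those most similar to past failures run first.
history = ["login with empty password expects error 401",
           "checkout with expired card expects decline"]
candidates = ["login with blank password expects error 401",
              "search returns sorted results"]
print(sorted(candidates, key=lambda t: quality_score(t, history), reverse=True))
```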
2

An Accelerated General Purpose No-Reference Image Quality Assessment Metric and an Image Fusion Technique

Hettiarachchi, Don Lahiru Nirmal Manikka 09 September 2016 (has links)
No description available.
3

Perceptual-Based Locally Adaptive Noise and Blur Detection

January 2016 (has links)
The quality of real-world visual content is typically impaired by many factors, including image noise and blur. Detecting and analyzing these impairments are important steps for multiple computer vision tasks. This work focuses on perceptual-based, locally adaptive noise and blur detection and their application to image restoration. In the context of noise detection, this work proposes perceptual-based full-reference and no-reference objective image quality metrics by integrating perceptually weighted local noise into a probability summation model. Results are reported on both the LIVE and TID2008 databases. The proposed metrics consistently achieve good performance across noise types and across databases compared to many of the best recent quality metrics, and they predict with high accuracy the relative amount of perceived noise in images of different content. In the context of blur detection, existing approaches are either computationally costly or cannot perform reliably when dealing with the spatially varying nature of defocus blur; in addition, many do not take human perception into account. This work proposes a blur detection algorithm that detects and quantifies the level of spatially varying blur by integrating directional edge spread calculation, probability of blur detection, and local probability summation. The proposed method generates a blur map indicating the relative amount of perceived local blurriness. To detect the flat or near-flat regions that do not contribute to perceivable blur, a perceptual model based on the Just Noticeable Difference (JND) is further integrated into the proposed blur detection algorithm to generate perceptually significant blur maps. We compare the proposed method with six state-of-the-art blur detection methods; experimental results show that it performs best both visually and quantitatively. This work further investigates the application of the proposed blur detection methods to image deblurring. Two selective perceptual-based image deblurring frameworks are proposed to improve deblurring results and reduce restoration artifacts. In addition, an edge-enhanced super-resolution algorithm is proposed and shown to achieve better reconstruction in edge regions. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
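As a rough illustration of the probability summation idea the abstract describes, the sketch below pools local detection probabilities into a single perceived-distortion probability; the Weibull psychometric function and its parameters are common modeling assumptions, not the thesis's fitted values:

```python
# Illustrative sketch of probability-summation pooling: the probability of
# perceiving at least one local distortion is P = 1 - prod_i (1 - p_i),
# with each local probability p_i from a Weibull-style psychometric function
# of the perceptually weighted local distortion strength. The threshold and
# slope values below are placeholder assumptions.
import numpy as np

def local_detection_prob(distortion: np.ndarray, threshold: float = 1.0,
                         beta: float = 3.0) -> np.ndarray:
    """Weibull psychometric function applied per local region."""
    return 1.0 - np.exp(-(distortion / threshold) ** beta)

def probability_summation(distortion_map: np.ndarray) -> float:
    """Pool local detection probabilities into one perceived-distortion score."""
    p = local_detection_prob(distortion_map)
    return 1.0 - np.prod(1.0 - p)

# Usage: a map of perceptually weighted local noise estimates.
rng = np.random.default_rng(0)
distortion_map = rng.uniform(0.0, 0.5, size=(8, 8))
print(f"perceived distortion probability: {probability_summation(distortion_map):.3f}")
```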
4

PERFORMANCE RESULTS USING DATA QUALITY ENCAPSULATION (DQE) AND BEST SOURCE SELECTION (BSS) IN AERONAUTICAL TELEMETRY ENVIRONMENTS

Geoghegan, Mark, Schumacher, Robert 10 1900 (has links)
Flight test telemetry environments can be particularly challenging due to RF shadowing, interference, multipath propagation, antenna pattern variations, and large operating ranges. In cases where the link quality is unacceptable, applying multiple receiving assets to a single test article can significantly improve the overall link reliability. The process of combining multiple received streams into a single consolidated stream is called Best Source Selection (BSS). Recent developments in BSS technology include a description of the maximum likelihood detection approach for combining multiple bit sources, and an efficient protocol for providing the real-time data quality metrics necessary for optimal BSS performance. This approach is being standardized and will be included in Appendix 2G of IRIG-106-17. This paper describes the application of this technology and presents performance results obtained during flight testing.
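A simplified sketch of the bit-combining idea: if each receiver stream carries a data quality metric interpreted here as an estimated bit error probability (an assumption of this sketch; the actual IRIG-106 DQE encapsulation is more involved), maximum-likelihood combining reduces to a likelihood-weighted vote:

```python
# Simplified sketch of likelihood-weighted bit combining for BSS.
# Assumption: each receiver stream provides time-aligned hard bits plus an
# estimated bit error probability p serving as its data quality metric.
import math

def combine_bits(streams: list[list[int]], error_probs: list[float]) -> list[int]:
    """Maximum-likelihood-style combining: each source votes with weight
    log((1 - p) / p); the combined bit is the sign of the weighted sum."""
    weights = [math.log((1.0 - p) / p) for p in error_probs]
    combined = []
    for bits in zip(*streams):
        score = sum(w if b == 1 else -w for b, w in zip(bits, weights))
        combined.append(1 if score > 0 else 0)
    return combined

# Usage: three receiving assets with different estimated link quality.
streams = [[1, 0, 1, 1, 0],   # good link, p = 0.01
           [1, 1, 1, 0, 0],   # degraded link, p = 0.20
           [0, 0, 1, 1, 1]]   # poor link, p = 0.40
print(combine_bits(streams, [0.01, 0.20, 0.40]))
```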
5

An Approach to Utilize a No-Reference Image Quality Metric and Fusion Technique for the Enhancement of Color Images

de Silva, Manawaduge Supun Samudika 09 September 2016 (has links)
No description available.
6

The Multidimensional Quality Metric (MQM) Framework: A New Framework for Translation Quality Assessment

Mariana, Valerie Ruth 01 December 2014 (has links) (PDF)
This document is a supplement to the article entitled “The Multidimensional Quality Metric (MQM) Framework: A New Framework for Translation Quality Assessment”, which has been accepted for publication in the upcoming January volume of JoSTrans, the Journal of Specialized Translation. The article is a co-authored project among Dr. Alan K. Melby, Dr. Troy Cox, and myself. In this document you will find a preface describing the process of writing the article, an annotated bibliography of sources consulted in my research, a summary of what I learned, and a conclusion that considers the future avenues opened up by this research. Our article examines a new method for assessing the quality of a translation known as the Multidimensional Quality Metric (MQM). In our experiment we configured the MQM framework to mirror, as closely as possible, the American Translators Association's (ATA) translator certification exam. To do this we mapped the ATA error categories to corresponding MQM error categories. We acquired a set of 29 student translations and had a group of student raters use the MQM framework to rate them. We measured the practicality of the MQM framework by comparing the time required for ratings to the average time required to rate translations in the industry. In addition, we had two ATA-certified translators rate the anchor translation (a translation scored by every rater in order to have a point of comparison); the certified translators' ratings were used to verify that the scores given by the student raters were valid. We also measured reliability, finding that the student raters were not interchangeable but that the measurement estimate of reliability was adequate. The article's goal was to determine the extent to which the MQM framework for translation evaluation is viable (practical, reliable, and valid) when designed to mirror the ATA certification exam. Overall, the results of the experiment showed that MQM can be a viable way to rate translation quality when operationalized on the basis of the ATA's translator certification exam. This is an important finding for the field of translation quality, because it shows that MQM could be a viable tool for future researchers. Our experiment suggests that researchers ought to take advantage of the MQM framework: not only is it free, but any studies completed using it would share a common base, making them more easily comparable.
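For readers unfamiliar with MQM-style scoring, the sketch below shows one common way such a metric is operationalized, as severity-weighted error counts normalized by length; the categories, weights, and scaling are illustrative assumptions, not the article's exact design:

```python
# Illustrative sketch of a severity-weighted MQM-style score. The error
# categories, severity weights, and per-100-words scaling are common
# conventions used here as assumptions; the article's operationalization
# (mapped to ATA error categories) may differ.

SEVERITY_WEIGHTS = {"minor": 1.0, "major": 5.0, "critical": 10.0}

def mqm_score(errors: list[tuple[str, str]], word_count: int) -> float:
    """Return a quality score: 100 minus the severity-weighted penalty
    normalized per 100 words. `errors` is a list of (category, severity)."""
    penalty = sum(SEVERITY_WEIGHTS[sev] for _category, sev in errors)
    return 100.0 - penalty * (100.0 / word_count)

# Usage: a 250-word translation with three annotated errors.
errors = [("terminology", "minor"), ("accuracy", "major"), ("fluency", "minor")]
print(f"MQM-style score: {mqm_score(errors, 250):.1f}")
```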
7

A multiresolutional approach for large data visualization

Wang, Chaoli 30 November 2006 (has links)
No description available.
8

Image/video compression and quality assessment based on wavelet transform

Gao, Zhigang 14 September 2007 (has links)
No description available.
9

On the analysis of remd protein structure prediction simulations for reducing volume of analytical data

Macedo, Rafael Cauduro Oliveira 30 August 2017 (has links)
Proteins perform a vital role in all living beings, mediating a series of processes necessary to life. Although we have ways to determine the composition of such molecules, we lack sufficient knowledge to determine their 3D structure, which plays an important role in their functions, in a cheap and fast manner. One of the main computational methods applied to the study of proteins and their folding process, which determines their structure, is Molecular Dynamics. An enhancement of this method, known as Replica-Exchange Molecular Dynamics (REMD), is capable of producing much better results, at the expense of a significant increase in computational cost and in the volume of raw data generated. This dissertation presents a novel optimization for this method, titled Analytical Data Filtering, which aims to optimize post-simulation analysis by filtering out unsatisfactory predicted structures using absolute quality metrics. The proposed methodology has the potential to work together with other optimization approaches and, to the best of the author's knowledge, covers an area still largely untouched by them. The SnapFi tool is then presented, designed specifically to filter unsatisfactory structure predictions while remaining able to operate alongside the different optimization approaches of the REMD method. A study was conducted on a test set of REMD protein structure prediction simulations to elucidate a series of hypotheses regarding the impact of the different REMD temperatures on the final quality of the predicted structures, the efficiency of the different absolute quality metrics, and a possible filtering configuration that takes advantage of such metrics. It was observed that the higher temperatures can be safely discarded from the post-simulation analysis of REMD protein structure prediction simulations, that absolute quality metrics show high variance in efficiency (in terms of quality) across different protein structure prediction simulations, and that filtering configurations composed of such metrics inherit this variance.
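A minimal sketch of the analytical-data-filtering idea under stated assumptions (the snapshot layout, metric name, and cutoff values are placeholders; SnapFi's real interface may differ):

```python
# Illustrative sketch: drop snapshots from the hottest replicas, then keep
# only structures whose absolute quality score clears a threshold, shrinking
# the data passed to post-simulation analysis. All values are placeholders.
from dataclasses import dataclass

@dataclass
class Snapshot:
    replica_temperature: float  # Kelvin
    quality_score: float        # absolute quality metric, higher is better

def filter_snapshots(snapshots: list[Snapshot],
                     max_temperature: float = 350.0,
                     min_quality: float = 0.6) -> list[Snapshot]:
    """Keep snapshots from sufficiently cool replicas whose predicted
    structures have acceptable absolute quality."""
    return [s for s in snapshots
            if s.replica_temperature <= max_temperature
            and s.quality_score >= min_quality]

# Usage: the 400 K replica and the low-quality structure are filtered out.
data = [Snapshot(300.0, 0.8), Snapshot(400.0, 0.9), Snapshot(310.0, 0.4)]
print(len(filter_snapshots(data)))  # -> 1
```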
10

Metody a prostředky pro hodnocení kvality obrazu / Methods and Tools for Image and Video Quality Assessment

Slanina, Martin January 2009 (has links)
The doctoral thesis deals with methods and tools for assessing image quality in video sequences, a highly topical subject that has been expanding rapidly with the digital processing of video signals. Although a relatively large number of methods and metrics already exist for objective, i.e. automated, measurement of video sequence quality, these methods are generally based on comparing the processed (impaired, e.g. by compression) video sequence with the original. There are very few methods for assessing video quality without a reference, i.e. based solely on analysis of the processed material. Moreover, such methods mostly analyze the signal values (typically luminance) of individual pixels of the decoded signal, which is hardly applicable to modern compression algorithms such as H.264/AVC, which uses sophisticated techniques to suppress compression artifacts. The thesis first gives a brief overview of the available methods for objective assessment of compressed video sequences, emphasizing the differing principles of methods that use reference material and methods that work without a reference. Based on an analysis of possible approaches to assessing video sequences compressed with modern algorithms, the thesis then describes the design of a new method for assessing image quality in video sequences compressed with the H.264/AVC algorithm. The new method is based on monitoring the values of parameters that are contained in the transport stream of the compressed video and relate directly to the encoding process. First, the influence of some of these parameters on the quality of the resulting video is considered. An algorithm is then proposed that uses an artificial neural network to estimate the peak signal-to-noise ratio (PSNR) of the compressed video sequence, thus replacing a full-reference metric with a no-reference one. Several configurations of artificial neural networks are evaluated, from the simplest up to three-layer feed-forward networks. For training the networks and subsequently analyzing their performance and the fidelity of the PSNR estimate, two sets of uncompressed video sequences were created and then compressed with the H.264/AVC algorithm under varying encoder settings. The final part of the thesis analyzes the behavior of the newly proposed algorithm when the properties of the processed video change (resolution, editing) or the encoder settings change (the structure of the group of pictures coded together). The behavior of the algorithm is analyzed up to full HD resolution of the source signal (1920 x 1080 pixels).
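A rough sketch of the approach under stated assumptions: a small feed-forward network regresses PSNR from parameters parsed out of the H.264/AVC stream. The three input features and the synthetic training data below are placeholders, since the thesis trains on parameters and PSNR values from real encodings:

```python
# Illustrative sketch: a feed-forward network maps transport-stream
# parameters to an estimated PSNR, turning a full-reference metric into a
# no-reference one. Features and training data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Assumed features per sequence: [average QP, bitrate (Mbit/s), intra-MB ratio]
X = rng.uniform([20, 0.5, 0.0], [45, 10.0, 1.0], size=(200, 3))
# Synthetic stand-in target: PSNR falls with QP, rises with bitrate.
y = 55.0 - 0.5 * X[:, 0] + 1.5 * np.log1p(X[:, 1]) + rng.normal(0, 0.5, 200)

model = MLPRegressor(hidden_layer_sizes=(16, 8),  # small feed-forward net
                     max_iter=2000, random_state=0)
model.fit(X, y)

# Estimate PSNR for a new stream: QP 30, 4 Mbit/s, 10% intra macroblocks.
print(f"estimated PSNR: {model.predict([[30.0, 4.0, 0.1]])[0]:.1f} dB")
```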
