1

Circular Coding in Halftone Images and Other Digital Imaging Problems

Yufang Sun (11243730) 01 September 2021
Embedding information into a printed image is useful in many applications, and reliable channel encoding and decoding is crucial because information is lost and errors propagate during transmission. Improving transmission accuracy and keeping the decoding error rate below a predictable level are therefore central concerns in channel design.

This dissertation discusses the design and performance of a two-dimensional coding method for printed materials, Circular Coding. It is a general two-dimensional coding method that allows data recovery from only a cropped portion of the code, without knowledge of the carrier image. Whereas traditional methods add redundancy bits that extend the original message, this method embeds the message into image rows in a repeated and shifted manner and recovers each bit by a majority vote over its redundant copies.

We introduce the encoding and decoding system and investigate the method's performance on noisy and distorted images. For a given required decoding rate, we model the transmission error and compute the minimum number of bit repeats. We also derive a closed-form solution for the corresponding cropped-window size, to be used in designing the encoding and decoding system.

Finally, using probabilistic modeling, we develop a closed-form formula that predicts the decoding success rate in a noisy channel under various transmission noise levels. The theoretical result is validated with simulations; it enables optimal parameter selection in encoder and decoder design and prediction of the decoding rate under different levels of transmission error.

We also briefly discuss two other projects: the development of print quality troubleshooting tools and text line detection in scanned pages.
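The repeat-and-shift scheme lends itself to a compact illustration. Below is a minimal sketch of the idea, assuming a cyclic row shift of one position and a binary symmetric channel as the noise model; the function names and parameters are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

def encode_rows(message_bits, n_rows, shift=1):
    """Tile the message over image rows, cyclically shifting each row.

    Every row carries a full copy of the message rotated by `shift`
    positions relative to the previous row, so each message bit has
    n_rows redundant copies spread over rows and columns.
    """
    m = np.asarray(message_bits, dtype=np.uint8)
    return np.stack([np.roll(m, r * shift) for r in range(n_rows)])

def decode_rows(received, shift=1):
    """Undo the per-row shifts and recover each bit by majority vote."""
    n_rows = received.shape[0]
    aligned = np.stack([np.roll(received[r], -r * shift) for r in range(n_rows)])
    return (aligned.sum(axis=0) > n_rows / 2).astype(np.uint8)

# Illustrative run: 16 message bits, 31 rows, 20% of bits flipped in transit.
rng = np.random.default_rng(0)
msg = rng.integers(0, 2, 16, dtype=np.uint8)
noisy = encode_rows(msg, n_rows=31) ^ (rng.random((31, 16)) < 0.20)
print("recovered exactly:", np.array_equal(decode_rows(noisy), msg))
```

With 31 copies per bit and a 20% flip rate, the per-bit majority-vote error probability is a binomial tail and is vanishingly small, which is exactly the kind of trade-off the dissertation's closed-form analysis quantifies.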
2

Projection separability: A new approach to evaluate embedding algorithms in the geometrical space

Acevedo Toledo, Aldo Marcelino 06 February 2024
Evaluating separability is fundamental to pattern recognition. A plethora of embedding methods, such as dimension reduction and network embedding algorithms, have been developed to reveal the emergence of geometrical patterns in a low-dimensional space, where high-dimensional sample and node similarities are approximated by geometrical distances. However, statistical measures to evaluate the separability attained by the embedded representations are missing. Traditional cluster validity indices (CVIs) might be applied in this context, but they present multiple limitations because they are not specifically tailored for evaluating the separability of embedded results. This work introduces a new rationale called projection separability (PS), which provides a methodology expressly designed to assess the separability of data samples in a reduced (i.e., low-dimensional) geometrical space. In a first case study, using this rationale, a new class of indices named projection separability indices (PSIs) is implemented based on four statistical measures: Mann-Whitney U-test p-value, Area Under the ROC-Curve, Area Under the Precision-Recall Curve, and Matthews Correlation Coefficient. The PSIs are compared to six representative cluster validity indices and one geometrical separability index using seven nonlinear datasets and six different dimension reduction algorithms. In a second case study, the PS rationale is extended to define and measure the geometric separability (linear and nonlinear) of mesoscale patterns in complex data visualization by solving the traveling salesman problem, offering experimental evidence on the evaluation of community separability of network embedding results using eight real network datasets and three network embedding algorithms. The results of both studies provide evidence that the implemented statistical-based measures designed on the basis of the PS rationale are more accurate than the other indices and can be adopted not only for evaluating and comparing the separability of embedded results in the low-dimensional space but also for fine-tuning embedding algorithms’ hyperparameters. Besides these advantages, the PS rationale can be used to design new statistical-based separability measures other than the ones presented in this work, providing the community with a novel and flexible framework for assessing separability.
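To make the PS rationale concrete, here is a hedged sketch of how such indices can be computed for two groups, assuming, as a simplification, that samples are projected onto the line joining the two group centroids and the resulting one-dimensional scores are evaluated with the four statistics named above; this is an illustrative reading, not the authors' published code.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import (average_precision_score, matthews_corrcoef,
                             roc_auc_score)

def projection_separability(X, labels):
    """Illustrative two-group separability scores in an embedded space.

    Samples are projected onto the line through the two group centroids,
    and the 1-D projections are scored with four statistics. A simplified
    reading of the PS rationale, not the published implementation.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(labels)
    a, b = np.unique(y)
    direction = X[y == b].mean(axis=0) - X[y == a].mean(axis=0)
    direction /= np.linalg.norm(direction)
    scores = X @ direction                      # 1-D projection of every sample
    y_bin = (y == b).astype(int)
    # Midpoint between the projected group means serves as the MCC threshold.
    thr = (scores[y_bin == 1].mean() + scores[y_bin == 0].mean()) / 2
    return {
        "mannwhitney_p": mannwhitneyu(scores[y_bin == 1], scores[y_bin == 0]).pvalue,
        "auc_roc": roc_auc_score(y_bin, scores),
        "auc_pr": average_precision_score(y_bin, scores),
        "mcc": matthews_corrcoef(y_bin, (scores > thr).astype(int)),
    }

# Toy 2-D embedding: two Gaussian clouds, one shifted by 3 in each axis.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
print(projection_separability(X, [0] * 50 + [1] * 50))
```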
3

Comparative evaluation of video watermarking techniques in the uncompressed domain

Van Huyssteen, Rudolph Hendrik 12 1900
Thesis (MScEng)--Stellenbosch University, 2012. / Electronic watermarking is a method whereby information can be imperceptibly embedded into electronic media, while ideally remaining robust against common signal manipulations and intentional attacks aimed at removing the embedded watermark. This study evaluates uncompressed video watermarking techniques in terms of visual characteristics, computational complexity, and robustness against attacks and signal manipulations. The foundations of video watermarking are reviewed, followed by a survey of existing video watermarking techniques. Representative techniques from different watermarking categories are identified, implemented, and evaluated. Existing image quality metrics are reviewed and extended to improve their performance when comparing these video watermarking techniques, and a new metric for evaluating inter-frame flicker in video sequences is developed. A technique is then proposed for improving the robustness of the implemented discrete Fourier transform technique against rotation. It is also shown that a modified watermark embedding method can reduce the computational complexity of watermarking techniques without affecting the quality of the original content. Recommendations for future work on further improving the robustness of watermarking techniques against rotation conclude the study.
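As a concrete illustration of one of the surveyed categories, the sketch below embeds bits additively into mid-frequency DFT magnitudes of a single frame. The ring radius, the strength parameter alpha, and the embedding rule are illustrative assumptions, not the technique implemented in the thesis.

```python
import numpy as np

def embed_dft_watermark(frame, bits, alpha=5.0, radius=30):
    """Additively embed bits into mid-frequency DFT magnitudes of a frame.

    Each bit nudges the magnitude of one coefficient on a ring of the given
    radius and of its conjugate-symmetric partner, so the inverse transform
    stays essentially real-valued. A toy DFT-domain scheme.
    """
    F = np.fft.fftshift(np.fft.fft2(frame.astype(float)))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    for bit, t in zip(bits, np.linspace(0.1, np.pi - 0.1, len(bits))):
        dy = int(round(radius * np.sin(t)))
        dx = int(round(radius * np.cos(t)))
        delta = alpha if bit else -alpha
        for y, x in ((cy + dy, cx + dx), (cy - dy, cx - dx)):  # conjugate pair
            F[y, x] += delta * np.exp(1j * np.angle(F[y, x]))  # magnitude only
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Illustrative use on a random 256x256 "frame".
rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (256, 256))
marked = embed_dft_watermark(frame, [1, 0, 1, 1, 0, 1, 0, 0])
print("mean absolute pixel change:", np.abs(marked - frame).mean())
```

Magnitude-only embedding is the usual motivation for DFT-domain schemes: rotation of the frame rotates the magnitude spectrum rather than destroying it, which is the property the thesis's rotation-robustness proposal builds on.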
4

Measuring Group Separability in Geometrical Space for Evaluation of Pattern Recognition and Dimension Reduction Algorithms

Acevedo, Aldo, Duran, Claudio, Kuo, Ming-Ju, Ciucci, Sara, Schroeder, Michael, Cannistraci, Carlo Vittorio 22 January 2024
Evaluating group separability is fundamental to pattern recognition. A plethora of dimension reduction (DR) algorithms has been developed to reveal the emergence of geometrical patterns in a low-dimensional space, where high-dimensional sample similarities are approximated by geometrical distances. However, statistical measures to evaluate the group separability attained by DR representations are missing. Traditional cluster validity indices (CVIs) might be applied in this context, but they present multiple limitations because they are not specifically tailored for DR. Here, we introduce a new rationale called projection separability (PS), which provides a methodology expressly designed to assess the group separability of data samples in a DR geometrical space. Using this rationale, we implemented a new class of indices named projection separability indices (PSIs) based on four statistical measures: Mann-Whitney U-test p-value, Area Under the ROC-Curve, Area Under the Precision-Recall Curve, and Matthews Correlation Coefficient. The PSIs were compared to six representative cluster validity indices and one geometrical separability index using seven nonlinear datasets and six different DR algorithms. The results provide evidence that the implemented statistical-based measures designed on the basis of the PS rationale are more accurate than the other indices and can be adopted not only for evaluating and comparing the group separability of DR results but also for fine-tuning DR algorithms' hyperparameters. Finally, we introduce a second methodological innovation termed trustworthiness, a statistical evaluation that accounts for separability uncertainty and associates with each index's measure a p-value expressing its significance level relative to a null model.
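The trustworthiness evaluation attaches a p-value to each index by comparison with a null model. A minimal sketch of that idea follows, assuming a simple label-permutation null; the paper's exact null model may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def permutation_pvalue(scores, y_bin, index_fn, n_perm=1000, seed=0):
    """Empirical p-value of a separability index against a permutation null.

    The index is recomputed under repeated random shuffles of the group
    labels; the p-value is the fraction of shuffles that score at least
    as well as the observed labelling (with add-one smoothing).
    """
    rng = np.random.default_rng(seed)
    observed = index_fn(y_bin, scores)
    null = [index_fn(rng.permutation(y_bin), scores) for _ in range(n_perm)]
    return (1 + sum(v >= observed for v in null)) / (n_perm + 1)

# Example: AUC-ROC of 1-D projected scores, with its permutation p-value.
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(0, 1, 40), rng.normal(1.5, 1, 40)])
y_bin = np.array([0] * 40 + [1] * 40)
print("AUC:", roc_auc_score(y_bin, scores))
print("p-value:", permutation_pvalue(scores, y_bin, roc_auc_score))
```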
5

Archivage Sécurisé d'Images / Secure Image Archiving

Motsch, Jean 27 November 2008
The proliferation of image capture devices, conventional ones such as photographic cameras and newer ones such as MRI imagers and multispectral analyzers, leads to exponentially growing volumes of data. As a corollary, the need to archive such quantities of data raises new questions. How should very large images be managed? How can the integrity of the recorded data be guaranteed? What mechanism should be used to bind an image to its data? For a long time, the emphasis was on the compression performance of coding schemes, measured with rate-distortion curves and a few subjective evaluations. Properties such as scalability, rate control, and error resilience later emerged to keep pace with developments in communication media. Taking archiving requirements into account, however, leads to new services, such as copy management, the addition of metadata, and image security. In most cases these services are provided outside the compression scheme, with the notable exception of JPSEC, an effort related to JPEG-2000. The approach adopted in this study is to integrate into an efficient still-image coder, the LAR coder, encryption and data-insertion primitives that provide the essential services related to image archiving. The LAR coder is hierarchical, scalable in resolution and quality, and ranges from lossy to lossless coding. Its performance is beyond the state of the art, particularly at very low bit rates and in lossless compression. The key to this performance is a quadtree partitioning adapted to the semantic content of the image. In the context of secure image archiving, this document therefore presents the following triptych: efficient compression, partial encryption, and data insertion. For compression, a reversible Hadamard transform combined with nonlinear filters achieves good performance, both lossy and lossless. For encryption, the hierarchical structure of the LAR coder supports a partial encryption scheme that allows fine-grained management of rights and keys, as well as a choice among the available resolution and quality levels; elements enabling zero-cost protection are also presented. For data insertion, the parallel between LAR-Interleaved S+P and difference expansion makes an efficient scheme possible.
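The parallel with difference expansion can be illustrated directly. The following is a minimal sketch of classic (Tian-style) difference expansion on one pixel pair, with the overflow handling a real coder would need omitted for brevity; it is not the LAR-Interleaved S+P scheme itself.

```python
def de_embed(x, y, bit):
    """Hide one bit in a pixel pair by expanding their difference (Tian).

    The pair's integer average is preserved while the difference is
    doubled with the bit appended as its LSB, so embedding is exactly
    invertible. Overflow checks (values leaving [0, 255]) are omitted.
    """
    avg, diff = (x + y) // 2, x - y
    diff = 2 * diff + bit                  # expand difference, append bit
    return avg + (diff + 1) // 2, avg - diff // 2

def de_extract(x, y):
    """Recover the hidden bit and the original pixel pair."""
    avg, diff = (x + y) // 2, x - y
    bit = diff & 1
    diff //= 2                             # remove the embedded LSB
    return bit, avg + (diff + 1) // 2, avg - diff // 2

x2, y2 = de_embed(100, 97, 1)
print(de_extract(x2, y2))                  # -> (1, 100, 97)
```

Because both embedding and extraction use only integer averages and differences, the original image is recovered bit-exactly, which is what makes this family of schemes attractive for archival (lossless) use.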
6

Visipedia - Multi-dimensional Object Embedding Based on Perceptual Similarity

Matera, Tomáš January 2014
Problems such as fine-grained categorization and human computation have become increasingly popular in the community in recent years, as evidenced by the considerable number of publications on these topics. While most of this work relies on "classical" image features extracted by computers, this thesis focuses primarily on perceptual properties, which cannot easily be captured by computers and require involving humans in the data collection process. The thesis examines how perceptual similarities can be collected from users cheaply and efficiently, also with regard to scalability. It further evaluates several relevant experiments and introduces methods that improve the efficiency of data collection. Methods for learning a multidimensional embedding and for indexing and searching that space are also summarized and compared. The results obtained are then used in a comprehensive experiment evaluated on a dataset of food images: the procedure starts by collecting similarities from users, continues by building a multidimensional space of dishes, and ends with searching that space.
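A small sketch of how triplet judgments of the kind collected here ("A looks more like B than like C") can be turned into coordinates: a plain gradient-descent embedding with a hinge loss on squared distances, a simplified stand-in for t-STE-style methods and not necessarily the method used in the thesis.

```python
import numpy as np

def triplet_embedding(n_items, triplets, dim=2, lr=0.05, epochs=500, seed=0):
    """Fit item coordinates from triplets (i, j, k): i is closer to j than to k.

    Gradient descent on a hinge over squared distances: a triplet is left
    alone once ||x_i - x_j||^2 + 1 <= ||x_i - x_k||^2; otherwise j is
    pulled toward i and k is pushed away from i.
    """
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n_items, dim))
    for _ in range(epochs):
        for i, j, k in triplets:
            d_ij, d_ik = X[i] - X[j], X[i] - X[k]
            if d_ij @ d_ij + 1.0 > d_ik @ d_ik:    # margin violated
                X[i] -= lr * (d_ij - d_ik)
                X[j] += lr * d_ij                  # pull j toward i
                X[k] -= lr * d_ik                  # push k away from i
    return X

# Toy judgments: items 0 and 1 are alike, items 2 and 3 are alike.
triplets = [(0, 1, 2), (0, 1, 3), (1, 0, 3), (2, 3, 0), (3, 2, 1)]
X = triplet_embedding(4, triplets)
print("d(0,1) < d(0,2):",
      np.linalg.norm(X[0] - X[1]) < np.linalg.norm(X[0] - X[2]))
```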
