  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Digital watermarking in medical images

Zain, Jasni Mohamad January 2005 (has links)
This thesis addresses the authenticity and integrity of medical images using watermarking. Hospital Information Systems (HIS), Radiology Information Systems (RIS) and Picture Archiving and Communication Systems (PACS) now form the information infrastructure for today's healthcare, as they provide new ways to store, access and distribute medical data that also involve security risks. Watermarking can be seen as an additional tool for security measures. As medical practice is very strict about the quality of biomedical images, the watermarking method must be reversible or, if not, a Region of Interest (ROI) needs to be defined and left intact. Watermarking should also serve as an integrity control and should be able to authenticate the medical image. Three watermarking techniques were proposed. First, Strict Authentication Watermarking (SAW) embeds the digital signature of the image in the ROI, and the image can be reverted to its original values bit by bit if required. Second, Strict Authentication Watermarking with JPEG Compression (SAW-JPEG) uses the same principle as SAW, but is able to survive some degree of JPEG compression. Third, Authentication Watermarking with Tamper Detection and Recovery (AW-TDR) is able to localise tampering, whilst simultaneously reconstructing the original image.
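The strict-authentication idea above (embed a signature inside the ROI so that the original pixel values can be restored bit by bit) can be illustrated with a minimal least-significant-bit sketch. This is not the SAW algorithm from the thesis: the use of SHA-256, the fixed 64x64 ROI and the plain LSB embedding are assumptions for illustration, and a real scheme would compute the signature with the embedding positions excluded or zeroed.

```python
import hashlib
import numpy as np

def embed_signature_lsb(image, roi, signature_bits):
    """Embed signature bits into the least significant bits of the ROI.
    Returns the watermarked image and the original LSBs needed to revert it."""
    r0, r1, c0, c1 = roi
    marked = image.copy()
    region = marked[r0:r1, c0:c1].reshape(-1)
    if signature_bits.size > region.size:
        raise ValueError("ROI too small for the signature")
    original_lsbs = region[:signature_bits.size] & 1        # kept so embedding is reversible
    region[:signature_bits.size] = (region[:signature_bits.size] & 0xFE) | signature_bits
    marked[r0:r1, c0:c1] = region.reshape(r1 - r0, c1 - c0)
    return marked, original_lsbs

def revert(marked, roi, original_lsbs):
    """Restore the original pixel values bit by bit."""
    r0, r1, c0, c1 = roi
    restored = marked.copy()
    region = restored[r0:r1, c0:c1].reshape(-1)
    region[:original_lsbs.size] = (region[:original_lsbs.size] & 0xFE) | original_lsbs
    restored[r0:r1, c0:c1] = region.reshape(r1 - r0, c1 - c0)
    return restored

# Toy example: hash the image with SHA-256 and hide the 256-bit digest in a 64x64 ROI.
img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
digest = hashlib.sha256(img.tobytes()).digest()
digest_bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
marked, saved_lsbs = embed_signature_lsb(img, (0, 64, 0, 64), digest_bits)
assert np.array_equal(revert(marked, (0, 64, 0, 64), saved_lsbs), img)
```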
42

Compression multimodale du signal et de l’image en utilisant un seul codeur / Multimodal compression of digital signal and image data using a unique encoder

Zeybek, Emre 24 March 2011 (has links)
The objective of this thesis is to study and analyse a new compression strategy whose principle is to compress data from several modalities jointly, using a single encoder. This approach is called "Multimodal Compression". In this context, an image and an audio signal can be compressed together by an image encoder alone (e.g. a standard codec), without the need to integrate an audio codec. The basic idea developed in this thesis is to insert the samples of a signal in place of certain pixels of the "carrier" image while preserving the quality of the information after the encoding and decoding process. This technique should not be confused with watermarking or steganography, since the aim is not to conceal one piece of information inside another. In Multimodal Compression, the main objectives are, on the one hand, to improve compression performance in terms of rate-distortion and, on the other, to optimise the use of the hardware resources of a given embedded system (e.g. speeding up encoding/decoding time). Throughout this work, we study and analyse variants of Multimodal Compression whose core consists of designing mixing functions applied before encoding and separation functions applied after decoding. Validation is carried out on common images and signals as well as on specific data such as biomedical images and signals. The work concludes with an extension of the Multimodal Compression strategy to video.
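A minimal sketch of the mixing and separation functions described above might look as follows: a fixed fraction of carrier pixels is replaced by 8-bit-quantised audio samples before the image is handed to a standard image codec, and the decoder later extracts the samples from the agreed positions. The regular-grid insertion pattern, the 8-bit quantisation and the crude neighbour-based fill-in of the borrowed pixels are assumptions for illustration, not the functions developed in the thesis.

```python
import numpy as np

def mix(carrier, audio, step=16):
    """Replace every `step`-th pixel (in raster order) with an 8-bit audio sample;
    the mixed image can then be compressed by any standard image codec."""
    mixed = carrier.reshape(-1).copy()
    positions = np.arange(0, mixed.size, step)[:audio.size]
    samples = np.clip((audio[:positions.size] + 1.0) * 127.5, 0, 255).astype(np.uint8)
    mixed[positions] = samples                      # audio quantised from [-1, 1] to 8 bits
    return mixed.reshape(carrier.shape)

def separate(mixed, n_samples, step=16):
    """Recover the audio samples from the agreed positions and roughly restore
    the carrier by copying each borrowed pixel's left neighbour."""
    flat = mixed.reshape(-1).astype(np.float32)
    positions = np.arange(0, flat.size, step)[:n_samples]
    audio = flat[positions] / 127.5 - 1.0
    flat[positions] = flat[np.maximum(positions - 1, 0)]
    return flat.reshape(mixed.shape).astype(np.uint8), audio

carrier = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
audio = np.sin(np.linspace(0, 40 * np.pi, 2048)).astype(np.float32)
mixed = mix(carrier, audio)
restored_carrier, recovered_audio = separate(mixed, n_samples=audio.size)
```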
43

Restaurace obrazu konvolučními neuronovými sítěmi / Image Restoration Based on Convolutional Neural Networks

Svoboda, Pavel Unknown Date (has links)
The topic of this thesis is the use of convolutional neural networks for general image restoration. Restoration is typically performed with specialised methods designed for a particular type of degradation. The convolutional network model presented here represents a unified approach, applied to two different types of image degradation: motion-blurred images of licence plates and artefacts produced by heavy compression. The convolutional network models are examined from two angles: how well they compare with current restoration methods for a given type of degradation, and how wide a range of degradation a single model can still handle. Classical methods are characterised by their narrow focus on a specific type of degradation; thanks to this specialisation they achieve very good results and represent the state of the art in the field. In contrast, the thesis presents the idea of a unified approach: mapping the degraded image directly to the restored image. This idea is primarily motivated by the recent development of convolutional neural networks and deep learning in computer vision. By training a convolutional network, a model focused on a particular type of degradation can be obtained easily, and such a model is often able to cover a wide range of degradation levels. The thesis introduces a direct blurred-to-sharp mapping method for restoring motion-blurred images, derived from models used in computer vision for semantic image segmentation. For the removal of compression artefacts, this approach is extended with task-specific training and various modifications of the network architecture itself. Compared with traditional methods, the convolutional network models achieve qualitatively better results, and the models presented here also cope easily with a wide range of a given degradation. This suggests that convolutional network models could represent a unified approach for restoring various types of degradation.
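The "direct mapping" idea, degraded image in, restored image out, can be sketched with a small fully convolutional network. The layer sizes, the single-channel input and the Adam/MSE training step below are illustrative assumptions, not the architectures or training procedures evaluated in the thesis.

```python
import torch
import torch.nn as nn

class DirectMappingCNN(nn.Module):
    """A small fully convolutional network that maps a degraded image
    directly to its restored counterpart."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, degraded):
        return self.body(degraded)

model = DirectMappingCNN()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random (degraded, sharp) pair.
degraded = torch.rand(8, 1, 64, 64)
sharp = torch.rand(8, 1, 64, 64)
optimizer.zero_grad()
loss = loss_fn(model(degraded), sharp)
loss.backward()
optimizer.step()
```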
44

Komprese obrazu v interaktivních aplikacích digitálního televizního vysílání / Image compression in interactive applications in digital video broadcasting

Bodeček, Kamil January 2008 (has links)
Compressed images are used very frequently in interactive applications in digital video broadcasting. New methods for increasing the efficiency of image transmission in digital video broadcasting networks are proposed. Adaptive spatial filtering methods have been proposed to enhance the visual perception of the compressed images. A new optimization method is based on applying the filtering algorithms to more heavily compressed images (the data size is reduced); the visual quality enhancement is then performed in the interactive application. Furthermore, the newer compression methods JPEG2000 and H.264 have been analysed for image compression. A novel compound image compression method for standard and high definition television resolutions is proposed in the thesis.
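As a rough illustration of post-decoding spatial filtering of a compressed image, the sketch below smooths pixel pairs that straddle 8x8 JPEG block boundaries. It is only a skeleton: a genuinely adaptive filter of the kind proposed in the thesis would weight the smoothing by local image activity, and the fixed strength and block grid here are assumptions.

```python
import numpy as np

def deblock(img, strength=0.5):
    """Smooth the pixel pairs straddling the 8x8 block boundaries of a
    greyscale image. Fixed strength, i.e. not yet adaptive to local activity."""
    out = img.astype(np.float32).copy()
    h, w = out.shape
    for x in range(8, w, 8):                            # vertical block edges
        avg = (out[:, x - 1] + out[:, x]) / 2.0
        out[:, x - 1] = (1 - strength) * out[:, x - 1] + strength * avg
        out[:, x] = (1 - strength) * out[:, x] + strength * avg
    for y in range(8, h, 8):                            # horizontal block edges
        avg = (out[y - 1, :] + out[y, :]) / 2.0
        out[y - 1, :] = (1 - strength) * out[y - 1, :] + strength * avg
        out[y, :] = (1 - strength) * out[y, :] + strength * avg
    return np.clip(out, 0, 255).astype(np.uint8)

decoded = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # stand-in for a decoded frame
filtered = deblock(decoded)
```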
45

Odstraňování artefaktů JPEG komprese obrazových dat / Removal of JPEG compression artefacts in image data

Lopata, Jan January 2014 (has links)
This thesis is concerned with the removal of artefacts typical for JPEG image compression. First, we describe the mathematical formulation of the JPEG format and the problem of artefact removal. We then formulate the problem as an optimization problem, where the minimized functional is obtained via Bayes' theorem and complex wavelets. We describe proximal operators and algorithms and apply them to the minimization of the given functional. The final algorithm is implemented in MATLAB and tested on several test problems.
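The optimisation described above can be sketched with a proximal-gradient (ISTA) loop. For illustration, an orthonormal 2-D DCT stands in for the complex wavelet transform and the data term is plain least squares; the regularisation weight, step size and iteration count are arbitrary, so this is not the functional or the MATLAB implementation from the thesis.

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(coeffs, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

def ista_restore(degraded, lam=10.0, step=1.0, iters=50):
    """Proximal-gradient (ISTA) iterations for
        min_x 0.5 * ||x - degraded||^2 + lam * ||T x||_1,
    where T is an orthonormal 2-D DCT standing in for the wavelet transform."""
    x = degraded.astype(np.float64).copy()
    for _ in range(iters):
        grad = x - degraded                              # gradient of the data term
        z = x - step * grad                              # gradient step
        coeffs = dctn(z, norm="ortho")                   # analysis transform
        coeffs = soft_threshold(coeffs, step * lam)      # prox of the l1 penalty
        x = idctn(coeffs, norm="ortho")                  # synthesis transform
    return x

blocky = np.random.rand(64, 64) * 255.0                  # stand-in for a JPEG-decoded image
restored = ista_restore(blocky)
```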
46

A study in how to inject steganographic data into videos in a sturdy and non-intrusive manner / En studie i hur steganografisk data kan injiceras i videor på ett robust och icke-påträngande sätt

Andersson, Julius, Engström, David January 2019 (has links)
It is desirable for companies to be able to hide data inside videos in order to trace the source of any unauthorised sharing of a video. The hidden data (the payload) should damage the original data (the cover) as little as possible while also making it hard to remove the payload without severely damaging the cover. It was determined that the most appropriate place to hide data in a video was in the visual information, so the cover is an image. Two injection methods were developed, along with three methods for attacking the payload. One injection method changes the pixel values of an image directly to hide the payload; the other transforms the image into the cosine waves that represent it and then changes those cosine waves to hide the payload. Attacks were developed to test how hard it was to remove the hidden data: adding or subtracting a random value to each pixel, setting all bits of a certain significance to 1, and compressing the image with JPEG. The result of the study was that the method that changed the image directly was significantly faster than the method that transformed the image, and it had capacity for a larger payload. The injection methods protected the payload against the various attacks to different degrees, so which method was best in that regard depends on the type of attack.
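The first injection method (changing pixel values directly) and one of the attacks (adding a small random value to each pixel) can be sketched as below. The specific LSB layout and the attack amplitude are assumptions for illustration, not the exact methods implemented in the study.

```python
import numpy as np

def embed_lsb(cover, payload_bits):
    """Direct pixel-value injection: write the payload bits into the pixel LSBs."""
    flat = cover.reshape(-1).copy()
    flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the payload back out of the pixel LSBs."""
    return stego.reshape(-1)[:n_bits] & 1

def attack_random_noise(stego, amplitude=1):
    """One of the attacks described above: add or subtract a small random value per pixel."""
    noise = np.random.randint(-amplitude, amplitude + 1, stego.shape)
    return np.clip(stego.astype(np.int16) + noise, 0, 255).astype(np.uint8)

cover = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
payload = np.random.randint(0, 2, 512, dtype=np.uint8)
stego = embed_lsb(cover, payload)
assert np.array_equal(extract_lsb(stego, payload.size), payload)
after_attack = extract_lsb(attack_random_noise(stego), payload.size)
print("fraction of payload bits surviving the attack:", np.mean(after_attack == payload))
```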
47

Automatic source camera identification by lens aberration and JPEG compression statistics

Choi, Kai-san., 蔡啟新. January 2006 (has links)
published_or_final_version / abstract / Electrical and Electronic Engineering / Master / Master of Philosophy
48

The effects of elevation and rotation on descriptors and similarity measures for a single class of image objects

06 June 2008 (has links)
“A picture is worth a thousand words.” If this proverb were taken literally, we all know that every person interprets images or photos differently in terms of their content. This is due to the semantics contained in these images. Content-based image retrieval has become a vast area of research aimed at successfully describing and retrieving images according to their content. In military applications, intelligence images such as those obtained by the defence intelligence group are taken (mostly on film), developed and then manually annotated. These photos are then stored in a filing system according to certain attributes such as location, content, etc. Retrieving these images at a later stage might take days or even weeks. Thus, the need for a digital annotation system has arisen. The military's images contain various vehicles and buildings that need to be detected, described and stored in a database. For our research we want to look at the effects that the rotation and elevation angle of an object in an image have on retrieval performance. We chose model cars in order to control the environment in which the photos were taken, such as the background, lighting, and the distance between the objects and the camera; there is also a wide variety of shapes and colours of these models to obtain and work with. We look at the MPEG-7 descriptor schemes recommended by the MPEG group for video and image retrieval and implement three of them. The military may require that, when the defence intelligence group is in the field, images be transmitted directly via satellite to headquarters. We have therefore included the JPEG2000 standard, which gives a compression performance increase of 20% over the original JPEG standard and supports wireless as well as secure transmission of images. In addition to the MPEG-7 descriptors, we have also implemented the fuzzy histogram and colour correlogram descriptors. For our experimentation we carried out a series of experiments to determine the effects that rotation and elevation have on our model vehicle images. Observations are made when each vehicle is considered separately and when the vehicles are described and combined into a single database. After the experiments we look at the descriptors and determine which adjustments could be made to improve their retrieval performance. / Dr. W.A. Clarke
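As a toy illustration of a global image descriptor and a similarity measure of the kind compared in the thesis, the sketch below builds a quantised joint colour histogram and scores two images with histogram intersection; since a global histogram ignores pixel positions, an in-plane rotation of the image leaves the score unchanged. This is not one of the MPEG-7, fuzzy histogram or colour correlogram descriptors implemented in the study; the bin count and the intersection measure are assumptions.

```python
import numpy as np

def colour_histogram_descriptor(image_rgb, bins=8):
    """Quantise each RGB channel into `bins` levels and build a joint,
    normalised colour histogram as a simple global descriptor."""
    quantised = (image_rgb.astype(np.uint16) * bins // 256).reshape(-1, 3)
    index = quantised[:, 0] * bins * bins + quantised[:, 1] * bins + quantised[:, 2]
    hist = np.bincount(index, minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1 means identical colour distributions."""
    return np.minimum(h1, h2).sum()

query = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
rotated = np.rot90(query)                      # a global histogram is rotation invariant
print(histogram_intersection(colour_histogram_descriptor(query),
                             colour_histogram_descriptor(rotated)))
```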
49

Nonattribution Properties of JPEG Quantization Tables

Tuladhar, Punnya 17 December 2010 (has links)
In digital forensics, source camera identification of digital images has drawn attention in recent years. An image does contain information about its camera and/or editing software somewhere within it, but the interest of this research is to identify the manufacturer (henceforth called the make and model) of a camera using only the header information of the JPEG encoding, such as the quantization table and Huffman table. Having done research on around 110,000 images, we reached the conclusion that "for all practical purposes, using quantization and Huffman tables alone to predict a camera make and model isn't a viable approach". We found no correlation between the quantization and Huffman tables of images and the makes of camera; rather, the quantization or Huffman table is determined by quality factors of an image such as resolution, RGB values and intensity, and by the standard settings of the camera.
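The header information used in this study can be read without decoding the image data. The sketch below is a minimal parser that walks the JPEG marker segments and collects the DQT (quantization table) payloads; it ignores edge cases such as restart markers and multi-scan files, and the file name in the usage line is hypothetical.

```python
def read_quantization_tables(jpeg_path):
    """Parse the DQT segments of a JPEG header and return {table_id: [values]}.
    Only the header is read; the entropy-coded image data is never decoded."""
    with open(jpeg_path, "rb") as f:
        data = f.read()
    tables = {}
    pos = 2                                           # skip the SOI marker (FF D8)
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        if marker == 0xDA:                            # SOS: compressed data follows, stop
            break
        length = int.from_bytes(data[pos + 2:pos + 4], "big")
        if marker == 0xDB:                            # DQT segment
            segment = data[pos + 4:pos + 2 + length]
            i = 0
            while i < len(segment):
                precision, table_id = segment[i] >> 4, segment[i] & 0x0F
                size = 128 if precision else 64       # 16-bit or 8-bit table entries
                raw = segment[i + 1:i + 1 + size]
                if precision:
                    values = [int.from_bytes(raw[j:j + 2], "big") for j in range(0, size, 2)]
                else:
                    values = list(raw)
                tables[table_id] = values
                i += 1 + size
        pos += 2 + length
    return tables

tables = read_quantization_tables("photo.jpg")        # "photo.jpg" is a hypothetical file
```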
50

Error resilience in JPEG2000

Natu, Ambarish Shrikrishna, Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW January 2003 (has links)
The rapid growth of wireless communication and widespread access to information has resulted in a strong demand for robust transmission of compressed images over wireless channels. The challenge of robust transmission is to protect the compressed image data against loss, in such a way as to maximize the received image quality. This thesis addresses this problem and provides an investigation of a forward error correction (FEC) technique that has been evaluated in the context of the emerging JPEG2000 standard. Not much effort has been made in the JPEG2000 project regarding error resilience. The only techniques standardized are based on insertion of marker codes in the code-stream, which may be used to restore high-level synchronization between the decoder and the code-stream. This helps to localize errors and prevent them from propagating through the entire code-stream. Once synchronization is achieved, additional tools aim to exploit as much of the remaining data as possible. Although these techniques help, they cannot recover lost data. FEC adds redundancy into the bit-stream, in exchange for increased robustness to errors. We investigate unequal protection schemes for JPEG2000 by applying different levels of protection to different quality layers in the code-stream. More particularly, the results reported in this thesis provide guidance concerning the selection of JPEG2000 coding parameters and appropriate combinations of Reed-Solomon (RS) codes for typical wireless bit error rates. We find that unequal protection schemes together with the use of resynchronization markers and some additional tools can significantly improve the image quality in deteriorating channel conditions. The proposed channel coding scheme is easily incorporated into the existing JPEG2000 code-stream structure and experimental results clearly demonstrate the viability of our approach.
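The unequal protection idea, stronger Reed-Solomon codes on the earlier, more important JPEG2000 quality layers and weaker codes on the refinement layers, can be sketched with a simple rate calculation. The layer sizes and the (n, k) code pairs below are illustrative assumptions, not the combinations evaluated in the thesis.

```python
# Unequal error protection: strong Reed-Solomon codes for the early (most important)
# JPEG2000 quality layers, weaker codes for the later refinement layers.
layers = [
    {"name": "layer 0 (base quality)", "bytes": 4096,  "rs": (255, 191)},  # strongest code
    {"name": "layer 1",                "bytes": 8192,  "rs": (255, 223)},
    {"name": "layer 2 (refinement)",   "bytes": 16384, "rs": (255, 239)},  # weakest code
]

total_payload = 0
total_coded = 0
for layer in layers:
    n, k = layer["rs"]
    blocks = -(-layer["bytes"] // k)              # ceiling division: RS code blocks needed
    total_payload += layer["bytes"]
    total_coded += blocks * n
    print(f'{layer["name"]}: RS({n},{k}), rate {k / n:.2f}, {blocks * (n - k)} parity bytes')

print(f"overall code rate: {total_payload / total_coded:.2f}")
```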
