11

FORMULATION OF DETECTION STRATEGIES IN IMAGES

Fadhil, Ahmed Freidoon 01 May 2014 (has links)
This dissertation focuses on two distinct but related problems involving detection in multiple images. The first problem concerns the accurate detection of runways by fusing Synthetic Vision System (SVS) and Enhanced Vision System (EVS) images. A novel procedure is developed to accurately detect runways and horizons and also enhance the areas surrounding the runway by fusing EVS and SVS images of the runway while an aircraft is landing. Because the EVS and SVS frames are not aligned, a registration step is introduced to align the EVS and SVS images prior to fusion. The most notable feature of the registration procedure is that it is guided by the information extracted from the weather-invariant SVS images. Four fusion rules based on combining Discrete Wavelet Transform (DWT) sub-bands are implemented and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and also on image pairs containing simulated EVS images with varying levels of turbulence. The subjective and objective evaluations reveal that runways and horizons can be detected accurately even in poor visibility conditions. Furthermore, it is demonstrated that different aspects of the EVS and SVS images can be emphasized by using different DWT fusion rules. Another notable feature is that the entire procedure is autonomous throughout the landing sequence irrespective of the weather conditions. Given the excellent fusion results and the autonomous operation, it can be concluded that the fusion procedure developed is quite promising for incorporation into head-up displays (HUDs) to assist pilots in safely landing aircraft in varying weather conditions. The second problem concerns the blind detection of hidden messages that are embedded in images using various steganography methods. A new steganalysis strategy is introduced to blindly detect hidden messages that have been embedded in JPEG images using various steganography techniques. The key contribution is the formulation of a multi-domain feature extraction, ranking, and selection strategy to improve steganalysis performance. The multi-domain features are statistical measures extracted from the DWT, multi-wavelet (MWT), and slantlet (SLT) transforms. Feature ranking and selection are based on evaluating the performance of each feature independently and combining the best uncorrelated features. The resulting feature set is used in conjunction with discriminant analysis and support vector classifiers to detect the presence or absence of hidden messages in images. Numerous experiments are conducted to demonstrate the improved performance of the new steganalysis strategy over existing steganalysis methods.
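For illustration only, a minimal sketch of the kind of DWT sub-band fusion described above might look as follows. It assumes two pre-registered grayscale images of equal size and the PyWavelets library, uses a single-level Haar decomposition, and applies one simple rule (mean approximation, maximum-magnitude details) rather than the four rules evaluated in the dissertation:

```python
import numpy as np
import pywt

def fuse_dwt(evs: np.ndarray, svs: np.ndarray) -> np.ndarray:
    """Fuse two registered grayscale images of equal shape in the DWT domain."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(evs.astype(float), "haar")
    cA2, (cH2, cV2, cD2) = pywt.dwt2(svs.astype(float), "haar")

    # Fusion rule (illustrative): average the approximation sub-band,
    # keep the larger-magnitude coefficient in each detail sub-band.
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = ((cA1 + cA2) / 2.0,
             (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)))
    return pywt.idwt2(fused, "haar")
```

Different choices of rule (for example, weighting the SVS approximation more heavily) would emphasize different aspects of the two sensors, which is the behaviour the abstract reports.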
12

Blind Image Steganalytic Optimization by using Machine Learning

Giarimpampa, Despoina January 2018 (has links)
Since antiquity, steganography has been used to protect sensitive information against unauthorized disclosure. Nevertheless, the evolution of digital media has shown that steganography can also be used as a tool for activities such as terrorism or child pornography. Given this background, steganalysis arises as an antidote to steganography. Steganalysis can be divided into two main approaches: universal – also called blind – and specific. Specific methods require prior knowledge of the steganographic technique under analysis. Universal methods, on the other hand, can be applied to a wide variety of algorithms and are therefore more adaptable to real-world applications. Thus, it is necessary to establish ever more accurate steganalysis techniques capable of detecting hidden information produced by diverse steganographic methods. Considering this, a universal steganalysis method specialized in images is proposed. The method is based on the typical steganalysis process, in which feature extractors and classifiers are used. The experiments were carried out for different embedding rates and for various steganographic techniques. The proposed method succeeds for the most part, providing respectable results on color images and promising results on gray-scale images.
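As a point of reference for the "feature extractor plus classifier" process the abstract mentions, a generic sketch of such a pipeline is shown below. The feature vectors are assumed to have been computed elsewhere, and the SVM settings are placeholders rather than the thesis' actual configuration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_blind_detector(X: np.ndarray, y: np.ndarray) -> SVC:
    """X holds one feature vector per image; y is 0 for cover, 1 for stego."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
    print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```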
13

Randomização progressiva para esteganálise / Progressive randomization for steganalysis

Rocha, Anderson de Rezende, 1980- 17 February 2006 (has links)
Orientador: Siome Klein Goldenstein / Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação / Resumo: Neste trabalho, nós descrevemos uma nova metodologia para detectar a presença de conteúdo digital escondido nos bits menos significativos (LSBs) de imagens. Nós introduzimos a técnica de Randomização Progressiva (PR), que captura os artefatos estatísticos inseridos durante um processo de mascaramento com aleatoriedade espacial. Nossa metodologia consiste na progressiva aplicação de transformações de mascaramento nos LSBs de uma imagem. Ao receber uma imagem I como entrada, o método cria n imagens, que apenas se diferenciam da imagem original no canal LSB. Cada estágio da Randomização Progressiva representa possíveis processos de mascaramento com mensagens de tamanhos diferentes e crescente entropia no canal LSB. Analisando esses estágios, nosso arcabouço de detecção faz a inferência sobre a presença ou não de uma mensagem escondida na imagem I. Nós validamos nossa metodologia em um banco de dados com 20.000 imagens reais. Nosso método utiliza apenas descritores estatísticos dos LSBs e já apresenta melhor qualidade de classificação que os métodos comparáveis descritos na literatura / Abstract: In this work, we describe a new methodology to detect the presence of hidden digital content in the Least Significant Bits (LSBs) of images. We introduce the Progressive Randomization technique, which captures statistical artifacts inserted during the hiding process. Our technique is a progressive application of LSB-modifying transformations: it receives an image as input and produces n images that differ from the initial image only in the LSB plane. Each step of the progressive randomization approach represents a possible content-hiding scenario with increasing size and increasing LSB entropy. Analyzing these steps, our detection framework infers whether or not the input image I contains a hidden message. We validate our method with 20,000 real, non-synthetic images. Our method only uses statistical descriptors of LSB occurrences and already performs better than comparable techniques in the literature / Mestrado / Visão Computacional / Mestre em Ciência da Computação
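A toy sketch of the progressive-randomization idea, written here for illustration and not taken from the dissertation, follows. It perturbs the LSB plane of an 8-bit grayscale image at increasing rates and records one very simple LSB statistic per stage; the dissertation uses richer statistical descriptors and a trained classifier on top of them:

```python
import numpy as np

def lsb_randomize(img: np.ndarray, rate: float, rng: np.random.Generator) -> np.ndarray:
    """Flip the least significant bit of a random fraction `rate` of the pixels."""
    out = img.copy()
    mask = rng.random(img.shape) < rate
    out[mask] ^= 1
    return out

def pr_descriptors(img: np.ndarray, rates=(0.0, 0.05, 0.1, 0.25, 0.5, 1.0)) -> list:
    """Return one toy LSB statistic (mean of the LSB plane) per randomization stage."""
    rng = np.random.default_rng(0)
    return [float((lsb_randomize(img, r, rng) & 1).mean()) for r in rates]
```

The intuition is that an image already carrying an LSB payload reacts differently to further randomization than a clean image does, and the per-stage statistics capture that difference.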
14

Évaluation du contenu d'une image couleur par mesure basée pixel et classification par la théorie des fonctions de croyance / Evaluation of the content of a color image by pixel-based measure and classification through the theory of belief functions

Guettari, Nadjib 10 July 2017 (has links)
De nos jours, il est devenu de plus en plus simple pour qui que ce soit de prendre des photos avec des appareils photo numériques, de télécharger ces images sur l'ordinateur et d'utiliser différents logiciels de traitement d'image pour appliquer des modification sur ces images (compression, débruitage, transmission, etc.). Cependant, ces traitements entraînent des dégradations qui influent sur la qualité visuelle de l'image. De plus, avec la généralisation de l'internet et la croissance de la messagerie électronique, des logiciels sophistiqués de retouche d'images se sont démocratisés permettant de falsifier des images à des fins légitimes ou malveillantes pour des communications confidentielles ou secrètes. Dans ce contexte, la stéganographie constitue une méthode de choix pour dissimuler et transmettre de l'information. Dans ce manuscrit, nous avons abordé deux problèmes : l'évaluation de la qualité d'image et la détection d'une modification ou la présence d'informations cachées dans une image. L'objectif dans un premier temps est de développer une mesure sans référence permettant d'évaluer de manière automatique la qualité d'une image en corrélation avec l'appréciation visuelle humaine. Ensuite proposer un outil de stéganalyse permettant de détecter, avec la meilleure fiabilité possible, la présence d'informations cachées dans des images naturelles. Dans le cadre de cette thèse, l'enjeu est de prendre en compte l'imperfection des données manipulées provenant de différentes sources d'information avec différents degrés de précision. Dans ce contexte, afin de profiter entièrement de l'ensemble de ces informations, nous proposons d'utiliser la théorie des fonctions de croyance. Cette théorie permet de représenter les connaissances d'une manière relativement naturelle sous la forme d'une structure de croyances. Nous avons proposé une nouvelle mesure sans référence d'évaluation de la qualité d'image capable d'estimer la qualité des images dégradées avec de multiple types de distorsion. Cette approche appelée wms-EVreg2 est basée sur la fusion de différentes caractéristiques statistiques, extraites de l'image, en fonction de la fiabilité de chaque ensemble de caractéristiques estimée à travers la matrice de confusion. À partir des différentes expérimentations, nous avons constaté que wms-EVreg2 présente une bonne corrélation avec les scores de qualité subjectifs et fournit des performances de prédiction de qualité compétitives par rapport aux mesures avec référence. Pour le deuxième problème abordé, nous avons proposé un schéma de stéganalyse basé sur la théorie des fonctions de croyance construit sur des sous-espaces aléatoires des caractéristiques. La performance de la méthode proposée a été évaluée sur différents algorithmes de dissimulation dans le domaine de transformé JPEG ainsi que dans le domaine spatial. Ces tests expérimentaux ont montré l'efficacité de la méthode proposée dans certains cadres d'applications. Cependant, il reste de nombreuses configurations qui résident indétectables. / Nowadays it has become increasingly simple for anyone to take pictures with a digital camera, download the images to a computer and use various image-processing software to apply modifications to them (compression, denoising, transmission, etc.). However, these operations introduce degradations that affect the visual quality of the image.
In addition, with the widespread use of the Internet and the growth of electronic mail, sophisticated image-editing software has become widely available, making it possible to falsify images for legitimate or malicious purposes and for confidential or secret communications. In this context, steganography is a method of choice for embedding and transmitting information. In this manuscript we address two issues: image quality assessment and the detection of modifications or of the presence of hidden information in an image. The first objective is to develop a no-reference measure that automatically evaluates the quality of an image in correlation with human visual appreciation. We then propose a steganalysis scheme to detect, with the best possible reliability, the presence of information embedded in natural images. In this thesis, the challenge is to take into account the imperfection of the manipulated data, which comes from different sources of information with different degrees of precision. In order to take full advantage of all this information, we propose to use the theory of belief functions. This theory makes it possible to represent knowledge in a relatively natural way in the form of a belief structure. We propose a no-reference image quality assessment measure that is able to estimate the quality of images degraded by multiple types of distortion. This approach, called wms-EVreg2, is based on the fusion of different statistical features extracted from the image, weighted by the reliability of each feature set as estimated through the confusion matrix. Across the various experiments, we found that wms-EVreg2 correlates well with subjective quality scores and provides quality prediction performance that is competitive with full-reference image quality measures. For the second problem, we propose a steganalysis scheme based on the theory of belief functions constructed on random subspaces of the features. The performance of the proposed method was evaluated on different steganography algorithms in the JPEG transform domain as well as in the spatial domain. These experiments demonstrated the effectiveness of the proposed method in certain application settings; however, many configurations remain undetectable.
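For readers unfamiliar with belief functions, the sketch below shows the core combination step (Dempster's rule) over the two-hypothesis frame {stego, cover}. The mass values are made-up examples, and the way the thesis actually builds masses from random feature subspaces is not reproduced here:

```python
def combine_dempster(m1: dict, m2: dict) -> dict:
    """Combine two mass functions given as {frozenset(hypotheses): mass} dictionaries."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    # Normalize by the non-conflicting mass (assumes the sources do not fully conflict).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

S, C = frozenset({"stego"}), frozenset({"cover"})
m1 = {S: 0.6, C: 0.1, S | C: 0.3}   # evidence from one source (illustrative values)
m2 = {S: 0.4, C: 0.2, S | C: 0.4}   # evidence from another source
print(combine_dempster(m1, m2))     # combined belief structure over {stego, cover}
```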
15

Text Steganalysis based on Convolutional Neural Networks

Akula, Tejasvi, Pamisetty, Varshitha January 2022 (has links)
The CNN-based steganalysis model is able to capture complex statistical dependencies and to learn feature representations. The proposed model uses a word embedding layer to map the words into dense vectors, thus achieving more accurate representations of the words. The proposed model extracts both syntactic and semantic features. Files with fewer than 200 words are referred to as short texts. Preprocessing for short texts is done through word segmentation and encoding of the words into indexes according to their position in the dictionary. Once this is done, the index sequences are fed to the CNN to learn the feature representations. Files containing over 200 words are considered long texts. Given the wide variation in the length of these long texts, the proposed model tokenizes them into their sentence components, which have a relatively consistent length, prior to preprocessing the data. Finally, the proposed model uses a decision strategy to make the final judgement on whether the text file contains stego text or not.
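A minimal sketch of an embedding-plus-convolution text classifier of the kind described is given below, assuming Keras. The vocabulary size, sequence length and filter settings are placeholder assumptions, not values taken from the thesis:

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN = 20000, 200   # assumed dictionary size and short-text length cap

model = keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),        # word index -> dense vector
    layers.Conv1D(64, 5, activation="relu"),  # local n-gram (syntax-level) features
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability that the text carries a payload
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

For long texts, the same network would be applied per sentence and the per-sentence outputs combined by a decision strategy such as averaging or majority voting, in the spirit of what the abstract describes.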
16

Oriented filters for feature extraction in digital images: application to corners detection, contours evaluation and color steganalysis / Filtres orientés pour l'extraction de primitives dans les images : Application à la détection de coins, l'évaluation de contours, et à la stéganalyse d'images couleur

Abdulrahman, Hasan 17 November 2017 (has links)
L’interprétation du contenu de l’image est un objectif très important dans le traitement de l’image et la vision par ordinateur. Par conséquent, plusieurs chercheurs y sont intéressés. Une image contient des informations multiples qui peuvent être étudiés, telles que la couleur, les formes, les arêtes, les angles, la taille et l’orientation. En outre, les contours contiennent les structures les plus importantes de l’image. Afin d’extraire les caractéristiques du contour d’un objet, nous devons détecter les bords de cet objet. La détection de bords est un point clé dans plusieurs applications, telles que : la restauration, l’amélioration de l’image, la stéganographie, le filigrane, la récupération, la reconnaissance et la compression de l’image, etc. Toutefois, l’évaluation de la performance de la méthode de détection de bords reste un grand défi. Les images numériques sont parfois modifiées par une procédure légale ou illégale afin d’envoyer des données secrètes ou spéciales. Afin d’être moins visibles, la plupart des méthodes stéganographiques modifient les valeurs de pixels dans les bords/textures de parties de l’image. Par conséquent, il est important de détecter la présence de données cachées dans les images numériques. Cette thèse est divisée principalement en deux parties. La première partie discute l’évaluation des méthodes de détection des bords du filtrage, des contours et des angles. En effet, cinq contributions sont présentées dans cette partie : d’abord, nous avons proposé un nouveau plan de surveillance normalisée de mesure de la qualité. En second lieu, nous avons proposé une nouvelle technique pour évaluer les méthodes de détection des bords de filtrage impliquant le score minimal des mesures considérées. En plus, nous avons construit une nouvelle vérité terrain de la carte de bords étiquetée d’une manière semi-automatique pour des images réelles. En troisième lieu, nous avons proposé une nouvelle mesure prenant en compte les distances de faux points positifs pour évaluer un détecteur de bords d’une manière objective. Enfin, nous avons proposé une nouvelle approche de détection de bords qui combine la dérivée directionnelle et l’homogénéité des grains. Notre approche proposée est plus stable et robuste au bruit que dix autres méthodes célèbres de détection. La seconde partie discute la stéganalyse de l’image en couleurs, basée sur l’apprentissage automatique (machine learning). En effet, trois contributions sont présentées dans cette partie : d’abord, nous avons proposé une nouvelle méthode de stéganalyse de l’image en couleurs, basée sur l’extraction de caractéristiques de couleurs à partir de corrélations entre les gradients de canaux rouge, vert et bleu. En fait, ces caractéristiques donnent le cosinus des angles entre les gradients. En second lieu, nous avons proposé une nouvelle méthode de stéganalyse de l’image en couleurs, basée sur des mesures géométriques obtenues par le sinus et le cosinus des angles de gradients entre tous les canaux de couleurs. Enfin, nous avons proposé une nouvelle méthode de stéganalyse de l’image en couleurs, basée sur une banque de filtres gaussiens orientables. Toutes les trois méthodes proposées présentent des résultats intéressants et prometteur en devançant l’état de l’art de la stéganalyse en couleurs. / Interpretation of image content is a very important objective in image processing and computer vision, and it has therefore received much attention from researchers.
An image contains a great deal of information that can be studied, such as color, shapes, edges, corners, size, and orientation. Moreover, contours capture the most important structures in the image. In order to extract the contour features of an object, we must detect the edges of that object. Edge detection remains a key point and a very important step in a wide range of applications such as image restoration, enhancement, steganography, watermarking, image retrieval, recognition, and compression. An efficient boundary detection method should create a contour image containing edges at their correct locations with a minimum of misclassified pixels. However, the performance evaluation of edge detection results is still a challenging problem. Digital images are sometimes modified by legal or illegal means in order to send special or secret data; such changes slightly modify the coefficient values of the image. In order to remain less visible, most steganography methods modify the pixel values in the edge/texture areas of the image. It is therefore important to detect the presence of hidden data in digital images. This thesis is divided into two main parts. The first part deals with filtering edge detection, contour evaluation and corner detection methods. Five contributions are presented in this part. First, we propose a new normalized supervised edge map quality measure; the normalization strategy means that a score close to 0 corresponds to a good edge map, whereas a score close to 1 indicates a poor segmentation. Second, we propose a new technique for evaluating filtering edge detection methods involving the minimum score of the considered measures. Third, we build a new ground-truth edge map labelled in a semi-automatic way on real images. Fourth, we propose a new measure that takes into account the distances of false positive points in order to evaluate an edge detector objectively. Finally, we propose a new approach for corner detection based on the combination of directional derivative and homogeneity kernels; the proposed approach is more stable and more robust to noise than ten well-known corner detection methods. The second part deals with color image steganalysis based on machine-learning classification. Three contributions are presented in this part. First, we propose a new color image steganalysis method based on extracting color features from correlations between the gradients of the red, green and blue channels; these features give the cosine of the angles between the gradients. Second, we propose a new color steganalysis method based on geometric measures obtained from the sine and cosine of the gradient angles between all the color channels. Finally, we propose a new approach to color image steganalysis based on a bank of steerable Gaussian filters. All three proposed methods provide interesting and promising results, outperforming the state of the art in color image steganalysis.
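As a rough illustration of the first color-steganalysis feature mentioned above, the sketch below computes the per-pixel cosine of the angle between the gradients of two color channels. The histogram summarization and the full feature set (sine features, steerable Gaussian filters) are not reproduced, and the bin count is an arbitrary choice:

```python
import numpy as np

def gradient_cosine(ch1: np.ndarray, ch2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Per-pixel cosine of the angle between the gradients of two channels."""
    gy1, gx1 = np.gradient(ch1.astype(float))
    gy2, gx2 = np.gradient(ch2.astype(float))
    dot = gx1 * gx2 + gy1 * gy2
    norm = np.hypot(gx1, gy1) * np.hypot(gx2, gy2) + eps
    return dot / norm

# Illustrative usage on an H x W x 3 RGB array `img`:
# cos_rg = gradient_cosine(img[..., 0], img[..., 1])
# features = np.histogram(cos_rg, bins=32, range=(-1, 1), density=True)[0]
```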
17

Towards robust steganalysis : binary classifiers and large, heterogeneous data

Lubenko, Ivans January 2013 (has links)
The security of a steganography system is defined by our ability to detect it. It is of no surprise then that steganography and steganalysis both depend heavily on the accuracy and robustness of our detectors. This is especially true when real-world data is considered, due to its heterogeneity. The difficulty of such data manifests itself in a penalty that has periodically been reported to affect the performance of detectors built on binary classifiers; this is known as cover source mismatch. It remains unclear how the performance drop that is associated with cover source mismatch is mitigated or even measured. In this thesis we aim to show a robust methodology to empirically measure its effects on the detection accuracy of steganalysis classifiers. Some basic machine-learning based methods, which take their origin in domain adaptation, are proposed to counter it. Specifically, we test two hypotheses through an empirical investigation. First, that linear classifiers are more robust than non-linear classifiers to cover source mismatch in real-world data and, second, that linear classifiers are so robust that given sufficiently large mismatched training data they can equal the performance of any classifier trained on small matched data. With the help of theory we draw several nontrivial conclusions based on our results. The penalty from cover source mismatch may, in fact, be a combination of two types of error; estimation error and adaptation error. We show that relatedness between training and test data, as well as the choice of classifier, both have an impact on adaptation error, which, as we argue, ultimately defines a detector's robustness. This provides a novel framework for reasoning about what is required to improve the robustness of steganalysis detectors. Whilst our empirical results may be viewed as the first step towards this goal, we show that our approach provides clear advantages over earlier methods. To our knowledge this is the first study of this scale and structure.
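A hedged sketch of the kind of comparison the two hypotheses call for is given below: a linear classifier trained on a large but mismatched cover source versus one trained on a small matched set, both evaluated on the same test data. The feature extraction and the data split are assumed to be done elsewhere, and logistic regression stands in for whichever linear classifier the thesis actually uses:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def mismatch_comparison(X_big_mismatched, y_big, X_small_matched, y_small, X_test, y_test):
    """Return (accuracy of large-mismatched model, accuracy of small-matched model)."""
    big = LogisticRegression(max_iter=1000).fit(X_big_mismatched, y_big)
    small = LogisticRegression(max_iter=1000).fit(X_small_matched, y_small)
    return (accuracy_score(y_test, big.predict(X_test)),
            accuracy_score(y_test, small.predict(X_test)))
```

Repeating this comparison across cover sources and classifier families gives the kind of empirical evidence from which the abstract draws its conclusions.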
18

Digitální steganografie / Digital Steganography

KOCIÁNOVÁ, Helena January 2009 (has links)
Digital steganography is a technique for hiding data, mostly in multimedia files (images, audio, video). With the development of information technology this technique has found use in copyright protection and secret data transfer, and it can even be applied where the use of cryptography is restricted (e.g. by law). This thesis gives an insight into digital steganography and includes an application that uses this technique.
19

Sécurisation de la communication parlée par une technique stéganographique / A technique for secure speech communication using steganography

Rekik, Siwar 16 April 2012 (has links)
Une des préoccupations dans le domaine des communications sécurisées est le concept de sécurité de l'information. Aujourd’hui, la réalité a encore prouvé que la communication entre deux parties sur de longues distances a toujours été sujet au risque d'interception. Devant ces contraintes, de nombreux défis et opportunités s’ouvrent pour l'innovation. Afin de pouvoir fournir une communication sécurisée, cela a conduit les chercheurs à développer plusieurs schémas de stéganographie. La stéganographie est l’art de dissimuler un message de manière secrète dans un support anodin. L’objectif de base de la stéganographie est de permettre une communication secrète sans que personne ne puisse soupçonner son existence, le message secret est dissimulé dans un autre appelé medium de couverture qui peut être image, video, texte, audio,…. Cette propriété a motivé les chercheurs à travailler sur ce nouveau champ d’étude dans le but d’élaborer des systèmes de communication secrète résistante à tout type d’attaques. Cependant, de nombreuses techniques ont été développées pour dissimuler un message secret dans le but d’assurer une communication sécurisée. Les contributions majeures de cette thèse sont en premier lieu, de présenter une nouvelle méthode de stéganographie permettant la dissimulation d’un message secret dans un signal de parole. La dissimulation c’est le processus de cacher l’information secrète de façon à la rendre imperceptible pour une partie tierce, sans même pas soupçonner son existence. Cependant, certaines approches ont été étudiées pour aboutir à une méthode de stéganogaraphie robuste. En partant de ce contexte, on s’est intéressé à développer un système de stéganographie capable d’une part de dissimuler la quantité la plus élevée de paramètre tout en gardant la perceptibilité du signal de la parole. D’autre part nous avons opté pour la conception d’un algorithme de stéganographie assez complexe afin d’assurer l’impossibilité d’extraction de l’information secrète dans le cas ou son existence été détecter. En effet, on peut également garantir la robustesse de notre technique de stéganographie à l’aptitude de préservation du message secret face aux tentatives de détection des systèmes de stéganalyse. Notre technique de dissimulation tire son efficacité de l’utilisation de caractéristiques spécifiques aux signaux de parole et àl’imperfection du système auditif humain. Des évaluations comparatives sur des critères objectifs et subjectifs de qualité sont présentées pour plusieurs types de signaux de parole. Les résultats ont révélé l'efficacité du système développé puisque la technique de dissimulation proposée garantit l’imperceptibilité du message secret voire le soupçon de son existence. Dans la suite expérimentale et dans le même cadre de ce travail, la principale application visée par la thèse concerne la transmission de parole sécurisée par un algorithme de stéganographie. Dans ce but il s’est avéré primordial d’utiliser une des techniques de codage afin de tester la robustesse de notre algorithme stéganographique face au processus de codage et de décodage. Les résultats obtenus montrent la possibilité de reconstruction du signal original (contenant des informations secrètes) après codage. Enfin une évaluation de la robustesse de notre technique de stéganographie vis à vis des attaques est faite de façon à optimiser la technique afin d'augmenter le taux de sécurisation. 
Devant cette nécessité nous avons proposé une nouvelle technique de stéganalyse basée sur les réseaux de neurones AR-TDNN. La technique présentée ici ne permet pas d'extraire l'éventuel message caché, mais simplement de mettre en évidence sa présence. / One of the concerns in the field of secure communication is the concept of information security. Today's reality still shows that communication between two parties over long distances has always been subject to interception. Providing secure communication has driven researchers to develop several cryptography schemes. Cryptographic methods achieve security by making the information unintelligible, so as to guarantee exclusive access for authenticated recipients. Cryptography consists of making the signal look garbled to unauthorized people. Thus, cryptography indicates the existence of a cryptographic communication in progress, which makes eavesdroppers suspect the existence of valuable data. They are thus incited to intercept the transmitted message and to attempt to decipher the secret information. This may be seen as a weakness of cryptography schemes. In contrast to cryptography, steganography allows secret communication by camouflaging the secret signal in another signal (named the cover signal) to avoid suspicion. This quality has motivated researchers to work on this active field and to develop schemes ensuring better resistance to hostile attackers. The word steganography is derived from two Greek words: stego (cover) and graphy (writing); combined, they mean covert writing, the art of hiding written communications. Several steganography techniques were used to send messages secretly during wars through the territories of enemies. The major contributions of this thesis are the following. We propose a new method to secure speech communication using the Discrete Wavelet Transform (DWT) and the Fast Fourier Transform (FFT). Our method first exploits the high frequencies using a DWT, then exploits the low-pass spectral properties of the speech magnitude spectrum to hide another speech signal in the low-amplitude high-frequency region of the cover speech signal. The proposed method allows a large amount of secret information to be hidden while making steganalysis more complex. A comparative evaluation based on objective and subjective criteria is presented for the original speech signal, the stego-signal and the secret speech signal reconstructed after the hiding process. Experimental simulations on both female and male speakers revealed that our approach is capable of producing a stego speech signal that is indistinguishable from the cover speech. The receiver is still able to recover an intelligible copy of the secret speech message. We used an LPC10 coder to test the effect of coding techniques on the stego-speech signals. Experimental results prove the efficiency of the coding technique used, since the intelligibility of the stego-speech is maintained after the encoding and decoding processes. We also advocate a new steganalysis technique to ensure the robustness of our steganography method. The proposed classifier is an autoregressive time delay neural network (AR-TDNN). The purpose of this steganalysis system is to identify the presence or absence of embedded information; it does not attempt to extract or decode the hidden data. The low detection rate proves the robustness of our hiding technique.
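A very rough sketch of the hiding idea, written for illustration only, is shown below: a low-amplitude secret signal is added into the detail (high-frequency) coefficients of a one-level DWT of the cover speech, assuming NumPy and PyWavelets. The scaling factor is an arbitrary assumption, and the thesis' actual FFT-domain selection of low-amplitude regions is simplified away:

```python
import numpy as np
import pywt

def hide_in_details(cover: np.ndarray, secret: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Additively embed a scaled secret signal into the high-frequency DWT band of the cover."""
    cA, cD = pywt.dwt(cover.astype(float), "db4")
    n = min(len(cD), len(secret))
    cD[:n] += alpha * secret[:n].astype(float)   # low-amplitude, high-frequency embedding
    return pywt.idwt(cA, cD, "db4")
```

Smaller values of alpha make the embedding less audible but also more fragile, which is the imperceptibility/robustness trade-off the abstract evaluates.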
20

A study in how to inject steganographic data into videos in a sturdy and non-intrusive manner / En studie i hur steganografisk data kan injiceras i videor på ett robust och icke-påträngande sätt

Andersson, Julius, Engström, David January 2019 (has links)
It is desirable for companies to be able to hide data inside videos in order to find the source of any unauthorised sharing of a video. The hidden data (the payload) should damage the original data (the cover) as little as possible, while also making it hard to remove the payload without severely damaging the cover. It was determined that the most appropriate place to hide data in a video was in the visual information, so the cover is an image. Two injection methods were developed, along with three methods for attacking the payload. One injection method changes the pixel values of an image directly to hide the payload; the other transforms the image into the cosine waves that represent it and then changes those cosine waves to hide the payload. Attacks were developed to test how hard it was to remove the hidden data. The methods for attacking the payload were to add or subtract a random value from each pixel, to set all bits of a certain significance to 1, or to compress the image with JPEG. The result of the study was that the method that changed the image directly was significantly faster than the method that transformed the image, and it had capacity for a larger payload. The injection methods protected the payload against the various attacks to different degrees, so which method is best in that regard depends on the type of attack. / Det är önskvärt för företag att kunna gömma data i videor så att de kan hitta källorna till obehörig delning av en video. Den data som göms borde skada den ursprungliga datan så lite som möjligt medans det också är så svårt som möjligt att radera den gömda datan utan att den ursprungliga datan skadas mycket. Studien kom fram till att det bästa stället att gömma data i videor är i den visuella delen så datan göms i bilderna i videon. Två metoder skapades för att injektera gömd data och tre skapades för att förstöra den gömda datan. En injektionsmetod ändrar pixelvärdet av bilden direkt för att gömma datat medans den andra transformerar bilden till cosinusvågor och ändrar sedan de vågorna för att gömma datat. Attacker utformades för att testa hur svårt det var att förstöra den gömda datan. Metoderna för att attackera den gömda datan var att lägga till eller ta bort ett slumpmässigt värde från varje pixel, att sätta varje bit av en särskild nivå till 1 och att komprimera bilden med JPEG. Resultatet av studien var att metoden som ändrade bilden direkt var mycket snabbare än metoden som först transformerade bilden och den hade också plats för mer gömd data. Injektionsmetoderna var olika bra på att skydda den gömda datan mot de olika attackerna så vilken metod som var bäst i den aspekten beror på vilken typ av attack som används.
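For illustration, a simplified sketch of two of the ideas compared above, direct LSB injection into pixel values and a bit-plane attack that destroys such a payload, is given below; the DCT (cosine-wave) injection variant and the JPEG and random-noise attacks are not shown, and the function names are placeholders:

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one payload bit into the LSB of each of the first len(bits) pixels (uint8 cover)."""
    stego = cover.copy()
    flat = stego.reshape(-1)
    flat[: bits.size] = (flat[: bits.size] & np.uint8(0xFE)) | bits.astype(np.uint8)
    return stego

def attack_set_lsb(stego: np.ndarray, value: int = 1) -> np.ndarray:
    """Remove an LSB payload by forcing every least significant bit to `value`."""
    return (stego & np.uint8(0xFE)) | np.uint8(value)
```

Because the payload lives entirely in the lowest bit plane, this injection method is fast and high-capacity, while its robustness depends on which attack is applied, as the study reports.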
