  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Efficient and Tamper-Resilient Architectures for Pairing Based Cryptography

Ozturk, Erdinc 04 January 2009 (has links)
Identity based cryptography was first proposed by Shamir in 1984. Rather than deriving a public key from private information, as in traditional public key encryption schemes, in identity based schemes a user's identity plays the role of the public key. This reduces the amount of computation required for authentication and simplifies key management. Efficient and strong implementations of identity based schemes are built around an easily computable bilinear mapping of two points on an elliptic curve onto a multiplicative subgroup of a field, also called a pairing. Utilizing the identity of the user simplifies the public key infrastructure. However, since pairing computations are expensive in both area and timing, the proposed identity based cryptosystems are hard to implement. To utilize identity based cryptography efficiently, there is a strong need for efficient pairing implementations. Pairing computations can be realized in multiple fields. Since the main building block and the bottleneck of the algorithm is multiplication, we focused our research on building a fast and small arithmetic core that can work in multiple fields. This allows a single piece of hardware to realize a wide spectrum of cryptographic algorithms, including pairings, with a minimal amount of software coding. We present a novel unified core design which is extended to realize Montgomery multiplication in the fields GF(2^n), GF(3^m), and GF(p). Our unified design supports RSA and elliptic curve schemes, as well as identity based encryption, which requires a pairing computation on an elliptic curve. The architecture is pipelined and highly scalable. The unified core utilizes the redundant signed digit representation to reduce the critical path delay.
While the carry-save representation used in classical unified architectures is only good for addition and multiplication operations, the redundant signed digit representation also facilitates efficient computation of comparison and subtraction operations besides addition and multiplication. Thus, there is no need for transformation between the redundant and non-redundant representations of field elements, which would be required in classical unified architectures to realize the subtraction and comparison operations. We also quantify the benefits of unified architectures in terms of area and critical path delay. We provide detailed implementation results. The metric shows that the new unified architecture provides an improvement over a hypothetical non-unified architecture of at least 24.88 % while the improvement over a classical unified architecture is at least 32.07 %. Until recently there has been no work covering the security of pairing based cryptographic hardware in the presence of side-channel attacks, despite their apparent suitability for identity-aware personal security devices, such as smart cards. We present a novel non-linear error coding framework which incorporates strong adversarial fault detection capabilities into identity based encryption schemes built using Tate pairing computations. The presented algorithms provide quantifiable resilience in a well defined strong attacker model. Given the emergence of fault attacks as a serious threat to pairing based cryptography, the proposed technique solves a key problem when incorporated into software and hardware implementations. In this dissertation, we also present an efficient accelerator for computing the Tate Pairing in characteristic 3, based on the Modified Duursma Lee algorithm.
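Since Montgomery multiplication is the core operation realized by the unified architecture, a minimal software model may help fix ideas. The sketch below is a plain radix-2 Montgomery multiplier over GF(p) in Python; it does not model the redundant signed digit datapath, the pipelining, or the GF(2^n)/GF(3^m) modes described above.

```python
def montgomery_multiply(a, b, p, n_bits):
    """Radix-2 Montgomery multiplication: returns a*b*R^(-1) mod p for R = 2^n_bits.
    Requires p odd and p < 2^n_bits."""
    u = 0
    for i in range(n_bits):
        u += ((a >> i) & 1) * b      # accumulate partial product for bit i of a
        if u & 1:                    # add p (odd) to make u even before halving
            u += p
        u >>= 1                      # exact division by 2; net effect is * 2^(-n_bits)
    return u - p if u >= p else u

def to_mont(x, p, n_bits):
    """Map x into the Montgomery domain: x * R mod p."""
    return (x << n_bits) % p

def from_mont(x, p, n_bits):
    """Map back: multiplying by 1 strips one factor of R."""
    return montgomery_multiply(x, 1, p, n_bits)

# Toy check over a small prime field
p, n_bits = 101, 7                   # R = 128 > p
a, b = 45, 67
c = from_mont(montgomery_multiply(to_mont(a, p, n_bits),
                                  to_mont(b, p, n_bits), p, n_bits), p, n_bits)
assert c == (a * b) % p
```

In hardware the conditional additions become the critical path, which is exactly where the redundant signed digit representation discussed above pays off.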
12

Keeping an Indefinitely Growing Audit Log / En kontinuerligt växande audit log

Andersson, Måns January 2022 (has links)
An audit log enables us to discover malfeasance in a system and to understand a security breach after it has happened. An audit log is meant to preserve information about important events in a system in a non-repudiable manner. Naturally, the audit log is often a target for malicious actors trying to cover the traces of an attack. The most common type of attack would be to try to remove or modify entries which contain information about events in the system that a malicious actor does not want anyone to know about. In this thesis, the state-of-the-art research on secure logging is presented together with a design for a new logging system. The new design has superior properties in terms of both security and functionality compared to the current EJBCA implementation. The design is based on a combination of two well-cited logging schemes presented in the literature. Our design is an audit log built on a Merkle tree structure which enables efficient integrity proofs, flexible auditing schemes, efficient queries and exporting capabilities. On top of the Merkle tree structure, an FssAgg (Forward secure sequential Aggregate) MAC (Message Authentication Code) is introduced which strengthens the resistance to truncation attacks and provides more options for auditing schemes. A proof-of-concept implementation was created and performance was measured to show that the combination of the Merkle tree log and the FssAgg MAC does not significantly reduce the performance compared to the individual schemes, while offering better security. The logging system design and the proof-of-concept implementation presented in this project will serve as a starting point for PrimeKey when developing a new audit log for EJBCA.
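The Merkle tree log described above can be sketched compactly. The following Python toy follows the RFC 6962 style of tree as an assumption on my part; it is not EJBCA's or the thesis's exact scheme, and the FssAgg MAC layer is omitted. It shows why inclusion proofs stay logarithmic in the number of entries.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def split_point(n):
    """Largest power of two strictly less than n (RFC 6962 tree shape)."""
    k = 1
    while 2 * k < n:
        k *= 2
    return k

class MerkleLog:
    """Append-only log: the root commits to all entries, and an inclusion
    proof contains only O(log n) sibling hashes."""
    def __init__(self):
        self.leaves = []

    def append(self, entry: bytes):
        self.leaves.append(h(b'\x00' + entry))       # domain-separated leaf hash

    def _root(self, lo, hi):
        if hi - lo == 1:
            return self.leaves[lo]
        k = split_point(hi - lo)
        return h(b'\x01' + self._root(lo, lo + k) + self._root(lo + k, hi))

    def root(self):
        return self._root(0, len(self.leaves))

    def inclusion_proof(self, index):
        """Sibling hashes on the path from leaf `index` up to the root."""
        proof, lo, hi = [], 0, len(self.leaves)
        while hi - lo > 1:
            k = split_point(hi - lo)
            if index < lo + k:                       # target in left subtree
                proof.append(('R', self._root(lo + k, hi)))
                hi = lo + k
            else:                                    # target in right subtree
                proof.append(('L', self._root(lo, lo + k)))
                lo += k
        return list(reversed(proof))

def verify_inclusion(entry: bytes, proof, root: bytes) -> bool:
    node = h(b'\x00' + entry)
    for side, sibling in proof:
        node = h(b'\x01' + node + sibling) if side == 'R' else h(b'\x01' + sibling + node)
    return node == root
```

Verifying an entry then needs only the entry, the proof, and the published root, which is what makes third-party auditing and exporting cheap.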
13

Digital watermarking in medical images

Zain, Jasni Mohamad January 2005 (has links)
This thesis addresses authenticity and integrity of medical images using watermarking. Hospital Information Systems (HIS), Radiology Information Systems (RIS) and Picture Archiving and Communication Systems (PACS) now form the information infrastructure of today's healthcare, providing new ways to store, access and distribute medical data that also introduce security risks. Watermarking can be seen as an additional security measure. As the medical tradition is very strict about the quality of biomedical images, the watermarking method must be reversible, or a Region of Interest (ROI) needs to be defined and left intact. Watermarking should also serve as an integrity control and should be able to authenticate the medical image. Three watermarking techniques are proposed. First, Strict Authentication Watermarking (SAW) embeds the digital signature of the image in the ROI, and the image can be reverted to its original value bit by bit if required. Second, Strict Authentication Watermarking with JPEG Compression (SAW-JPEG) uses the same principle as SAW, but is able to survive some degree of JPEG compression. Third, Authentication Watermarking with Tamper Detection and Recovery (AW-TDR) is able to localise tampering, whilst simultaneously reconstructing the original image.
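As an illustration of the strict-authentication idea (hash the image with the embedding bits cleared, then hide the digest in the LSBs of a designated region), here is a hedged sketch. The thesis's SAW scheme uses a digital signature and bit-exact reversibility; this toy substitutes a plain SHA-256 digest, and the helper names are hypothetical.

```python
import hashlib
import numpy as np

def embed_authentication(img, region):
    """Zero the LSBs of `region`, hash the whole image, then write the
    256 digest bits into those LSBs. Illustrative sketch only."""
    r0, r1, c0, c1 = region
    marked = img.copy()
    marked[r0:r1, c0:c1] &= 0xFE                     # clear LSBs of the embedding area
    digest = hashlib.sha256(marked.tobytes()).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    sub = marked[r0:r1, c0:c1].flatten()
    assert sub.size >= bits.size, "embedding region too small for the digest"
    sub[:bits.size] |= bits                          # digest bits become the new LSBs
    marked[r0:r1, c0:c1] = sub.reshape(r1 - r0, c1 - c0)
    return marked

def verify_authentication(img, region):
    """Recompute the digest over the image with region LSBs cleared and
    compare against the digest read back from the LSBs."""
    r0, r1, c0, c1 = region
    recovered = np.packbits((img[r0:r1, c0:c1].flatten() & 1)[:256]).tobytes()
    clean = img.copy()
    clean[r0:r1, c0:c1] &= 0xFE
    return recovered == hashlib.sha256(clean.tobytes()).digest()
```

Any change to a single bit outside the LSB plane breaks verification, which is the "strict" part of strict authentication.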
14

Uniquely Identifiable Tamper-Evident Device Using Coupling between Subwavelength Gratings

Fievre, Ange Marie P 27 March 2015 (has links)
Reliability and sensitive information protection are critical aspects of integrated circuits. A novel technique for tamper evidence of electronic components is presented, using near-field evanescent wave coupling between two subwavelength gratings (SWGs), with the input laser source delivered through an optical fiber. The first grating of the pair of coupled subwavelength gratings (CSWGs) was milled directly on the output facet of the silica fiber using focused ion beam (FIB) etching. The second grating was patterned using e-beam lithography and etched into a glass substrate using reactive ion etching (RIE). The slightest intrusion attempt would separate the CSWGs and eliminate near-field coupling between the gratings. Tampering, therefore, would become evident. Computer simulations guided the design for optimal operation of the security solution. The physical dimensions of the SWGs, i.e. period and thickness, were optimized for a 650 nm illuminating wavelength. The optimal dimensions resulted in a 560 nm grating period for the first grating etched in the silica optical fiber and 420 nm for the second grating etched in borosilicate glass. The incident light beam had a half-width at half-maximum (HWHM) of at least 7 µm to allow discernible higher transmission orders, and an HWHM of 28 µm for minimum noise. The minimum number of individual grating lines present on the optical fiber facet was identified as 15 lines. Grating rotation due to the cylindrical geometry of the fiber resulted in a rotation of the far-field pattern, corresponding to the rotation angle of moiré fringes. With the goal of later adding authentication to tamper evidence, the concept of a CSWGs signature was also modeled by introducing random and planned variations in the glass grating. The fiber was placed on a stage supported by a nanomanipulator, which permitted three-dimensional displacement while maintaining the fiber tip normal to the surface of the glass substrate.
A 650 nm diode laser was fixed to a translation mount that transmitted the light source through the optical fiber, and the output intensity was measured using a silicon photodiode. The evanescent wave coupling output results for the CSWGs were measured and compared to the simulation results.
15

Designing a Physical Unclonable Function for Cryptographic Hardware

Bergfalck, Ludwig, Engström, Johannes January 2021 (has links)
Hardware Security Modules (HSMs) are embedded systems that provide a physically secure data storage and handling environment. This master thesis evaluates an HSM method incorporating cryptographic key generation, key management, and tamper protection. The HSM concept involves a sensing-mesh-structured Physical Unclonable Function (PUF), where the cryptographic key is derived from the sum of cross-sectional capacitances between conductors on adjacent layers of a flex PCB forming a grid. This sensing mesh PUF, which stores a digital fingerprint in its microstructure, is used to enclose an internal system that extracts and manages the keys. This ensures that accessing the internal structure is infeasible without modifying the enclosure. Since the cryptographic key is derived from the intrinsic properties of the sensing mesh, modifying the mesh changes those properties, which changes the cryptographic key and makes it unusable. The thesis covers the PCB design and development of a prototype of the PUF system and an associated capacitance measurement system, which can handle and extract unique keys from each copy of the PUFs. Hardware assembly, experimentation, and evaluation procedures were performed regarding the robustness of the PUF and its susceptibility to environmental impacts such as temperature changes, invasive attacks, and agitation. Additionally, a performance evaluation is made by estimating a set of quality factors often associated with PUFs, such as uniqueness, reliability, uniformity, and bit-aliasing, on the extracted cryptographic keys. The cryptographic keys provide good reliability in stable conditions for each PUF copy of the population. The keys also yield good estimates for the uniqueness, uniformity, and bit-aliasing quality factors.
Moreover, an invasive attack experiment indicates that the PUF enclosure prototype provides tamper detection possibilities together with distinct structure modifications when an intrusion attempt is performed. As theory predicts, PUFs are sensitive to environmental changes, which is also observable in the results when the PUF enclosure prototype is exposed to various environmental conditions. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
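The quality factors mentioned above have standard definitions in the PUF literature. A small sketch of how they might be estimated from a matrix of extracted key bits (rows = PUF copies, columns = response bits) is given below; the function names are illustrative, not taken from the thesis.

```python
import numpy as np

def hamming_dist_frac(a, b):
    """Fractional Hamming distance between two bit vectors."""
    return float(np.mean(a != b))

def uniformity(responses):
    """Fraction of 1s per device; ideal value 0.5."""
    return responses.mean(axis=1)

def uniqueness(responses):
    """Average pairwise inter-device fractional Hamming distance; ideal 0.5."""
    n = len(responses)
    d = [hamming_dist_frac(responses[i], responses[j])
         for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(d))

def bit_aliasing(responses):
    """Per-bit mean across devices; ideally 0.5 for every bit position."""
    return responses.mean(axis=0)

def reliability(reference, remeasurements):
    """1 minus the average intra-device Hamming distance between a reference
    reading and repeated readings; ideal value 1.0."""
    d = [hamming_dist_frac(reference, r) for r in remeasurements]
    return 1.0 - float(np.mean(d))
```

Uniqueness and bit-aliasing near 0.5 mean no two enclosures share a key and no bit position is biased across the population, while reliability near 1.0 means the same enclosure reproduces its key under stable conditions.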
16

Digital Signature Technologies for Image Information Assurance / Vaizdo skaitmeninis parašas vaizdinės informacijos apsaugai

Kriukovas, Artūras 25 January 2011 (has links)
The dissertation investigates the issues of image authentication and tamper localization after general image processing operations – blurring, sharpening, rotation and JPEG compression. The dissertation consists of an introduction, four chapters, conclusions, and references. The introduction presents the investigated problem, the importance of the thesis, the object of research, the purpose and tasks of the work, the research methodology, scientific novelty, the practical significance of the results, and the defended statements. The introduction ends by listing the author's publications and conference presentations on the subject of the dissertation and outlining its structure. Chapter 1 reviews the literature and analyzes competing methods. The devastating effect of blur/sharpen operations on digital image matrices is shown. General pixel-wise tamper localization methods are completely inefficient after such casual processing. Block-wise methods demonstrate some resistance to blurring/sharpening, but no tamper localization with a resolution of up to one pixel is possible. There is clearly a need for a method able to locate damaged pixels despite general image processing operations such as blurring or sharpening. Chapter 2 establishes the theoretical foundation for the proposed method. It is shown that the phase of the Fourier transform is invariant to blurring or sharpening... [to full text]
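The claimed invariance of Fourier phase under blurring can be checked numerically: a symmetric (zero-phase) blur kernel has a real spectrum, so wherever that spectrum is positive, the phase of the blurred signal matches the phase of the original. A minimal 1-D demonstration, assuming a truncated Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)

# Zero-phase (symmetric, circularly centred at index 0) Gaussian blur kernel
k = np.zeros(256)
for i in range(-5, 6):
    k[i % 256] = np.exp(-i**2 / 4.0)
k /= k.sum()

# Circular convolution via the FFT
blurred = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)))

X, B, K = np.fft.fft(x), np.fft.fft(blurred), np.fft.fft(k)
mask = K.real > 1e-3               # bins where the kernel response is clearly positive
phase_diff = np.angle(X[mask]) - np.angle(B[mask])
phase_diff = (phase_diff + np.pi) % (2 * np.pi) - np.pi   # wrap into (-pi, pi]
assert np.max(np.abs(phase_diff)) < 1e-5
```

The mask matters: at frequencies where the kernel response drops to zero, the magnitude is destroyed and no phase can be recovered, which is why the dissertation's method still needs care near those bins.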
17

Enhancing security in distributed systems with trusted computing hardware

Reid, Jason Frederick January 2007 (has links)
The need to increase the hostile attack resilience of distributed and internetworked computer systems is critical and pressing. This thesis contributes to concrete improvements in distributed systems trustworthiness through an enhanced understanding of a technical approach known as trusted computing hardware. Because of its physical and logical protection features, trusted computing hardware can reliably enforce a security policy in a threat model where the authorised user is untrusted or where the device is placed in a hostile environment. We present a critical analysis of vulnerabilities in current systems, and argue that current industry-driven trusted computing initiatives will fail in efforts to retrofit security into inherently flawed operating system designs, since there is no substitute for a sound protection architecture grounded in hardware-enforced domain isolation. In doing so we identify the limitations of hardware-based approaches. We argue that the current emphasis of these programs does not give sufficient weight to the role that operating system security plays in overall system security. New processor features that provide hardware support for virtualisation will contribute more to practical security improvement because they will allow multiple operating systems to concurrently share the same processor. New operating systems that implement a sound protection architecture will thus be able to be introduced to support applications with stringent security requirements. These can coexist alongside inherently less secure mainstream operating systems, allowing a gradual migration to less vulnerable alternatives. We examine the effectiveness of the ITSEC and Common Criteria evaluation and certification schemes as a basis for establishing assurance in trusted computing hardware.
Based on a survey of smart card certifications, we contend that the practice of artificially limiting the scope of an evaluation in order to gain a higher assurance rating is quite common. Due to a general lack of understanding in the marketplace as to how the schemes work, high evaluation assurance levels are confused with a general notion of 'high security strength'. Vendors invest little effort in correcting the misconception since they benefit from it and this has arguably undermined the value of the whole certification process. We contribute practical techniques for securing personal trusted hardware devices against a type of attack known as a relay attack. Our method is based on a novel application of a phenomenon known as side channel leakage, heretofore considered exclusively as a security vulnerability. We exploit the low latency of side channel information transfer to deliver a communication channel with timing resolution that is fine enough to detect sophisticated relay attacks. We avoid the cost and complexity associated with alternative communication techniques suggested in previous proposals. We also propose the first terrorist attack resistant distance bounding protocol that is efficient enough to be implemented on resource constrained devices. We propose a design for a privacy sensitive electronic cash scheme that leverages the confidentiality and integrity protection features of trusted computing hardware. We specify the command set and message structures and implement these in a prototype that uses Dallas Semiconductor iButtons. We consider the access control requirements for a national scale electronic health records system of the type that Australia is currently developing. We argue that an access control model capable of supporting explicit denial of privileges is required to ensure that consumers maintain their right to grant or withhold consent to disclosure of their sensitive health information in an electronic system. 
Finding this feature absent in standard role-based access control models, we propose a modification to role-based access control that supports policy constructs of this type. Explicit denial is difficult to enforce in a large scale system without an active central authority but centralisation impacts negatively on system scalability. We show how the unique properties of trusted computing hardware can address this problem. We outline a conceptual architecture for an electronic health records access control system that leverages hardware level CPU virtualisation, trusted platform modules, personal cryptographic tokens and secure coprocessors to implement role based cryptographic access control. We argue that the design delivers important scalability benefits because it enables access control decisions to be made and enforced locally on a user's computing platform in a reliable way.
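The access control argument above hinges on supporting explicit denial: in deny-overrides semantics, a denial attached to any of a user's roles defeats any grant. A minimal sketch of that semantics follows, with illustrative names that are not the thesis's model:

```python
class Role:
    """A role carries explicit grants and explicit denials."""
    def __init__(self, name, permits=None, denies=None):
        self.name = name
        self.permits = set(permits or [])
        self.denies = set(denies or [])

def access_allowed(user_roles, permission):
    """Deny-overrides: an explicit denial in any role wins over any grant."""
    if any(permission in r.denies for r in user_roles):
        return False
    return any(permission in r.permits for r in user_roles)

# A clinician may read records, but a consumer's consent withdrawal
# attaches a denial that overrides the clinical grant.
clinician = Role('clinician', permits={'read:summary', 'read:full_record'})
consumer_block = Role('consumer_block', denies={'read:full_record'})

assert access_allowed([clinician], 'read:full_record')
assert not access_allowed([clinician, consumer_block], 'read:full_record')
assert access_allowed([clinician, consumer_block], 'read:summary')
```

The hard part the thesis addresses is not these semantics but enforcing them at scale without a central authority, which is where the trusted hardware comes in.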
18

Applications of perceptual sparse representation (Spikegram) for copyright protection of audio signals / Applications de la représentation parcimonieuse perceptuelle par graphe de décharges (Spikegramme) pour la protection du droit d’auteur des signaux sonores

Erfani, Yousof January 2016 (has links)
Every year, global music piracy causes billions of dollars in economic losses, job losses and lost worker earnings, as well as millions of dollars in lost tax revenues.
Most music piracy is due to the rapid growth and ease of use of current technologies for copying, sharing, manipulating and distributing musical data [Domingo, 2015], [Siwek, 2007]. Audio watermarking has been proposed as one approach for copyright protection and tamper localization of audio signals to prevent music piracy. In this thesis, we use the spikegram (a bio-inspired sparse representation) to propose a novel audio tamper localization method, an audio copyright protection method, and a new perceptual attack against audio watermarking systems. First, we propose a tamper localization method for audio signals based on a Modified Spread Spectrum (MSS) approach. Perceptual Matching Pursuit (PMP) is used to compute the spikegram (a sparse and time-shift invariant representation of audio signals) as well as 2-D masking thresholds. Then, an authentication code (which includes an Identity Number, ID) is inserted inside the sparse coefficients. For high quality watermarking, the watermark data are multiplied with the masking thresholds. The time domain watermarked signal is re-synthesized from the modified coefficients and the signal is sent to the decoder. To localize a tampered segment of the audio signal, at the decoder, the IDs associated with intact segments are detected correctly, while the ID associated with a tampered segment is mis-detected or not detected. To achieve high capacity, we propose a modified version of improved spread spectrum watermarking called MSS (Modified Spread Spectrum). We performed a mean opinion test to measure the quality of the proposed watermarking system. Also, the bit error rates for the presented tamper localization method are computed under several attacks. In comparison to conventional methods, the proposed tamper localization method has the smallest number of mis-detected tampered frames when only one frame is tampered with.
In addition, the mean opinion test experiments confirm that the proposed method preserves the high quality of input audio signals. Moreover, we introduce a new audio watermarking technique based on a kernel-based representation of audio signals. A perceptual sparse representation (spikegram) is combined with a dictionary of gammatone kernels to construct a robust representation of sounds. Compared to traditional phase embedding methods, where the phase of the signal's Fourier coefficients is modified, in this method the watermark bit stream is inserted by modifying the phase of the gammatone kernels. Moreover, the watermark is automatically embedded only into kernels with high amplitudes, where all masked (non-meaningful) gammatones have already been removed. Two embedding methods are proposed, one based on embedding the watermark into the sign of gammatones (one-dictionary method) and another based on embedding the watermark into both the sign and phase of gammatone kernels (two-dictionary method). The robustness of the proposed method is shown against 32 kbps MP3 with an embedding rate of 56.5 bps, while the state-of-the-art payload for 32 kbps MP3 robust watermarking is lower than 50.3 bps. Also, we showed that the proposed method is robust against the unified speech and audio codec (24 kbps USAC, linear predictive and Fourier domain modes) with an average payload of 5-15 bps. Moreover, it is shown that the proposed method is robust against a variety of signal processing transforms while preserving quality.
In the sparse replacement attack, each specific frame of the spikegram representation - when possible - is replaced with a combination of similar frames located in other parts of the spikegram. It is shown that the PMP and inaudible noise attacks have roughly the same efficiency as the 32 kbps MP3 attack, while the replacement attack reduces the normalized correlation of the spread spectrum decoder by a greater factor than attacking with 32 kbps MP3 or 24 kbps unified speech and audio coding (USAC).
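The spread spectrum decoder that these attacks target can be reduced to a correlation detector: embed one bit by adding a scaled pseudo-noise sequence, detect it by the sign of the correlation. The toy below is the textbook scheme, not the MSS variant or the perceptual machinery of the thesis:

```python
import numpy as np

def embed_ss(host, bit, pn, alpha=0.1):
    """Embed one bit by adding +/- alpha * pn to the host signal."""
    return host + (1.0 if bit else -1.0) * alpha * pn

def detect_ss(signal, pn):
    """Correlation detector: the sign of <signal, pn> decides the bit.
    The host term averages out because pn is zero-mean pseudo-noise."""
    return float(np.dot(signal, pn)) > 0.0

rng = np.random.default_rng(7)
host = rng.standard_normal(4096)                 # stand-in for an audio frame
pn = rng.choice([-1.0, 1.0], size=4096)          # shared pseudo-noise key

assert detect_ss(embed_ss(host, 1, pn), pn)
assert not detect_ss(embed_ss(host, 0, pn), pn)
```

An attack succeeds by shrinking that correlation: re-synthesis or frame replacement perturbs the watermarked signal in directions that are perceptually invisible but decorrelated from `pn`.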

Signal Processing Algorithms For Digital Image Forensics

Prasad, S 02 1900 (has links)
Availability of digital cameras in various forms and user-friendly image editing software has enabled people to create and manipulate digital images easily. While image editing can be used for enhancing the quality of images, it can also be used to tamper with images for malicious purposes. In this context, it is important to question the originality of digital images. Digital image forensics deals with the development of algorithms and systems to detect tampering in digital images. This thesis presents some simple algorithms which can be used to detect tampering in digital images. Out of the various kinds of image forgeries possible, the discussion is restricted to photo compositing (photomontaging) and copy-paste forgeries. While creating a photomontage, it is very likely that one of the images needs to be resampled and hence there will be an inconsistency in some of its underlying characteristics. So, detection of resampling in an image will give a clue to decide whether the image has been tampered with or not. Two pixel-domain techniques to detect resampling have been presented. The first of them exploits the property of periodic zeros that occur in the second differences due to interpolation during resampling. It requires a special condition on the resampling factor to be met. The second technique is based on the periodic zero-crossings that occur in the second differences, which does not require any special condition on the resampling factor. It has been noted that this is an important property of resampling and hence the robustness of this technique against mild counter-attacks such as JPEG compression and additive noise has been studied. This property has been repeatedly used throughout this thesis. It is a well known fact that interpolation is essentially low-pass filtering. In the case of a photomontage image which consists of resampled and non-resampled portions, there will be an inconsistency in the high-frequency content of the image.
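The periodic zero-crossing property can be seen with a short numerical sketch. Under the assumption of upsampling by a factor of two with linear interpolation (the simplest case satisfying the resampling condition), every interpolated sample is the average of its two neighbours, so the second difference centred on it vanishes exactly:

```python
import numpy as np

def second_difference(x):
    """d2[n] = x[n] - 2*x[n+1] + x[n+2], the discrete second difference."""
    return x[:-2] - 2.0 * x[1:-1] + x[2:]

# hypothetical test signal, upsampled by 2 with linear interpolation
rng = np.random.default_rng(1)
x = rng.normal(size=64)
y = np.interp(np.arange(0, 63, 0.5), np.arange(64), x)

d2 = second_difference(y)
# d2 is (near) zero at every interpolated sample and generally
# non-zero in between -- the periodic pattern the detector looks for
```

Tampering a resampled region overwrites the interpolated samples and destroys this periodicity, which is what the detection techniques in the thesis exploit.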
This can be demonstrated by a simple high-pass filtering of the image. This fact has also been exploited to detect photomontaging. One approach involves performing block-wise DCT and reconstructing the image using only a few high-frequency coefficients. Another elegant approach is to decompose the image using wavelets and reconstruct the image using only the diagonal detail coefficients. In both cases mere visual inspection will reveal the forgery. The second part of the thesis is related to tamper detection in colour filter array (CFA) interpolated images. Digital cameras employ Bayer filters to efficiently capture the RGB components of an image. The outputs of the Bayer filter are sub-sampled versions of the R, G and B components, which are completed by using demosaicing algorithms. It has been shown that demosaicing of the colour components is equivalent to resampling the image by a factor of two. Hence, CFA interpolated images contain periodic zero-crossings in their second differences. Experimental demonstration of the presence of periodic zero-crossings in images captured using four digital cameras of different brands has been done. When such an image is tampered with, these periodic zero-crossings are destroyed and hence the tampering can be detected. The utility of zero-crossings in detecting various kinds of forgeries on CFA interpolated images has been discussed. The next part of the thesis is a technique to detect copy-paste forgery in images. Generally, when an object or a portion of an image has to be erased from an image, the easiest way to do it is to copy a portion of background from the same image and paste it over the object. In such a case, there are two pixel-wise identical regions in the same image, which when detected can serve as a clue of tampering. The use of the Scale Invariant Feature Transform (SIFT) in detecting this kind of forgery has been studied.
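The CFA argument above can be illustrated in one dimension. Assuming a simplified bilinear demosaicing along a row (odd pixels filled as the average of their even neighbours, a stand-in for a real Bayer pipeline), the second differences are zero at every interpolated pixel, and overwriting a region, as a paste operation would, destroys that pattern:

```python
import numpy as np

def demosaic_row(row):
    """Fill odd pixels by averaging their even neighbours -- a simplified
    stand-in for bilinear CFA (Bayer) interpolation along one row."""
    out = row.copy()
    out[1:-1:2] = 0.5 * (row[0:-2:2] + row[2::2])
    return out

def second_diff(x):
    return x[:-2] - 2.0 * x[1:-1] + x[2:]

rng = np.random.default_rng(2)
row = rng.normal(size=64)
cfa = demosaic_row(row)
d2 = second_diff(cfa)            # zero at every interpolated (odd) pixel

tampered = cfa.copy()
tampered[24:40] = rng.normal(size=16)   # a "paste" destroys the periodicity
d2_t = second_diff(tampered)
```

In the intact row the zero-crossings recur with period two; in the tampered row the zeros inside the pasted region vanish, localising the forgery.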
Also, certain modifications that can be made to the image in order to get SIFT working effectively have been proposed. Throughout the thesis, the importance of human intervention in making the final decision about the authenticity of an image has been highlighted, and it has been concluded that the techniques presented in the thesis can effectively help the decision making process.
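The copy-paste clue described above, two pixel-wise identical regions in one image, can be sketched without SIFT at all. The following toy detector (an exact-match simplification, not the thesis's SIFT-based method) hashes non-overlapping blocks and reports coordinate pairs that collide:

```python
import numpy as np

def find_duplicate_blocks(img, b=8):
    """Return coordinate pairs of identical b-by-b blocks on a
    non-overlapping grid.  Exact matching only; a real detector (e.g. the
    SIFT approach in the thesis) must also tolerate small distortions."""
    seen, pairs = {}, []
    h, w = img.shape
    for y in range(0, h - b + 1, b):
        for x in range(0, w - b + 1, b):
            key = img[y:y + b, x:x + b].tobytes()  # hashable block signature
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

rng = np.random.default_rng(3)
img = rng.normal(size=(32, 32))
img[16:24, 16:24] = img[0:8, 0:8]   # simulate a copy-paste forgery
pairs = find_duplicate_blocks(img)
```

The detected pair of identical regions is exactly the kind of evidence that, per the thesis, a human analyst would then weigh in the final authenticity decision.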
