1

Digital signatures

Swanepoel, Jacques Philip
Thesis (PhD)--Stellenbosch University, 2015 / ENGLISH ABSTRACT: In this dissertation we present a novel strategy for automatic handwritten signature verification. The proposed framework employs a writer-independent approach to signature modelling and is therefore capable of authenticating questioned signatures claimed to belong to any writer, provided that at least one authentic sample of said writer's signature is available for comparison.
We investigate both the traditional off-line scenario (where an existing pen-on-paper signature is extracted from a digitised document) as well as the increasingly popular on-line scenario (where the signature data are automatically recorded during the signing event by means of specialised electronic hardware). The utilised off-line feature extraction technique involves the calculation of several discrete Radon transform (DRT) based projections, whilst on-line signatures are represented in feature space by several spatial and temporal function features. In order to facilitate writer-independent signature analysis, these feature sets are subsequently converted into a dissimilarity-based representation by means of a suitable dichotomy transformation. The classification techniques utilised for signature modelling and verification include quadratic discriminant analysis (QDA) and support vector machines (SVMs). The major contributions of this study include two novel techniques aimed towards the construction of a robust writer-independent signature model. The first, a dynamic time warping (DTW) based dichotomy transformation for off-line signature representation, is able to compensate for reasonable intra-class variability by non-linearly aligning DRT-based projections prior to matching. The second, a writer-specific dissimilarity normalisation strategy, improves inter-class separability in dissimilarity space by considering only strictly relevant dissimilarity statistics when normalising the dissimilarity vectors belonging to a specific individual. This normalisation strategy is generic in the sense that it is equally applicable to both off-line and on-line signature model construction. The systems developed in this study are specifically aimed towards skilled forgery detection. System proficiency estimation is conducted using a rigorous experimental protocol. Several large signature corpora are considered. In both the off-line and on-line scenarios, the proposed SVM-based system outperforms the proposed QDA-based system. We also show that the systems proposed in this study outperform most existing systems that were evaluated on the same data sets. More importantly, when compared to state-of-the-art techniques currently employed in the literature, we show that the incorporation of the novel techniques proposed in this study consistently results in a statistically significant improvement in system proficiency.
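As a rough, self-contained sketch of the dichotomy transformation and writer-independent verification described above (toy feature vectors stand in for the DRT-based and function-feature representations, and scikit-learn's SVC stands in for the thesis's SVM):

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def dichotomy_pairs(features, labels):
    """Dichotomy transformation: map each pair of feature vectors to an
    element-wise dissimilarity vector, labelled 1 for intra-writer pairs
    and 0 for inter-writer pairs."""
    X, y = [], []
    for i, j in combinations(range(len(features)), 2):
        X.append(np.abs(features[i] - features[j]))
        y.append(int(labels[i] == labels[j]))
    return np.array(X), np.array(y)

# Toy data: 5 writers x 6 signatures, 32-dimensional feature vectors.
rng = np.random.default_rng(0)
centres = rng.normal(size=(5, 32))
feats = np.vstack([c + 0.25 * rng.normal(size=(6, 32)) for c in centres])
labels = np.repeat(np.arange(5), 6)

# Train in dissimilarity space on writers 1-4 only; writer 0 is unseen.
train = labels != 0
clf = SVC(kernel="rbf").fit(*dichotomy_pairs(feats[train], labels[train]))

# Verify a questioned signature of the unseen writer against one enrolled
# reference sample, the writer-independent setting described above.
questioned, reference = feats[0], feats[1]     # both genuinely writer 0
print(clf.predict([np.abs(questioned - reference)]))   # expected: [1]
```

Because the classifier operates on dissimilarity vectors rather than on writer-specific features, it can score pairs involving writers it never saw during training, which is the essence of the writer-independent setting.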
2

The problem with fraudulent solicitors : issues of trust, investigation and the self-regulation of the legal profession

Rowson, David, January 2009
No description available.
3

Certification of authenticity and integrity of digital images

Nguyen, Hoai Phuong, 07 February 2019
With the advent of consumer computing and the Internet, numerous videos now circulate all around the world. Falsification of these media has become an inescapable reality, especially in the domain of cybercrime. Such modifications can be relatively harmless (retouching a person's appearance to remove skin blemishes), disturbing (removing the defects of an object), or have serious social repercussions (a montage showing an improbable meeting between political figures). This project belongs to the field of digital forensics. The aim is to certify whether digital images are untouched or falsified. Certification can be viewed as a check of the conformity of the image under test against a possessed reference. This certification must be as reliable as possible, because digital proof of falsification can only be established if the detection method employed produces very few erroneous results. An image is composed of distinct zones corresponding to different portions of the scene (people, objects, landscapes, and so on). Searching for a falsification consists of checking whether a suspect zone is "physically consistent" with other zones of the image. A reliable way to define this consistency is to rely on the "physical fingerprints" produced by the acquisition process. The first novel aspect of this project is the distinction between the notions of conformity and integrity. A medium is said to be conformant if it respects the physical acquisition model; if some of the model's parameters take unauthorised values, the medium is declared non-conformant. Integrity checking goes further: it uses the preceding test to check whether two distinct zones conform to a common model. In other words, unlike conformity checking, which considers the medium as a whole, integrity checking examines the image zone by zone to verify that pairs of zones are mutually consistent, that is, that the difference between the parameters characterising the two zones is consistent with the physical reality of the acquisition process. The other novel aspect of the project is the construction of tools that make it possible to compute analytically the error probabilities of the falsification detector, so as to provide a quantitative decision criterion. No current method or tool satisfies these constraints.
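The conformity/integrity distinction above lends itself to a small numerical illustration. The sketch below assumes a toy acquisition model in which noise variance grows linearly with intensity (variance = a*mean + b); this simplified model and all thresholds are illustrative stand-ins, not the physical model actually developed in the thesis:

```python
import numpy as np

def noise_params(region, block=8):
    """Fit the toy noise model  variance ~= a*mean + b  over
    non-overlapping blocks (each block assumed texture-free)."""
    h, w = region.shape
    means, variances = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = region[i:i + block, j:j + block]
            means.append(blk.mean())
            variances.append(blk.var())
    a, b = np.polyfit(means, variances, 1)   # least-squares line fit
    return a, b

def conformant(a, b):
    # Conformity: parameters must take allowed values (noise variance
    # cannot be negative or decrease with intensity in this toy model).
    return a >= 0 and b >= 0

def mutually_consistent(p, q, tol_a=0.2, tol_b=5.0):
    # Integrity: two zones of the same untampered image should yield
    # parameter estimates that agree up to estimation error.
    return abs(p[0] - q[0]) < tol_a and abs(p[1] - q[1]) < tol_b

rng = np.random.default_rng(1)
# Piecewise-constant scene: one intensity per 8x8 block, so block variance
# measures noise rather than texture.
scene = np.kron(rng.uniform(20.0, 200.0, size=(16, 16)), np.ones((8, 8)))
img = scene + rng.normal(size=scene.shape) * np.sqrt(0.5 * scene + 4.0)  # a=0.5, b=4

left, right = noise_params(img[:, :64]), noise_params(img[:, 64:])
print(conformant(*left), mutually_consistent(left, right))  # typically: True True
```

A spliced-in zone acquired under a different model would fail the mutual-consistency check even if, taken alone, it looked perfectly conformant.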
4

Warping-Based Approach to Offline Handwriting Recognition

Kennard, Douglas J., 03 April 2013
An enormous amount of the historical record is currently trapped in non-indexed handwritten format. Even after the records are scanned into images, only a minute fraction of them can be manually transcribed or indexed at reasonable time and cost. Although progress continues to be made with automatic handwriting recognition (HR), it is not yet good enough to replace manual transcription or indexing. Much of the recent HR work has focused on incremental improvements to methods based on Hidden Markov Models (HMMs) and other similar probabilistic approaches. In this dissertation we present a fundamentally new approach to HR based on 2-D geometric warping of word images. The results of our experimentation indicate that our approach is significantly more accurate than an existing whole-word approach used for word-spotting, and may also be better than HMM-based HR approaches. Since it is a completely new method, we also believe there is potential for improvement and for future work that builds on this approach. In addition, we demonstrate that the approach can be used effectively in the related application domain of signature verification and forgery detection.
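The warping idea can be illustrated with a drastically simplified one-dimensional analogue (the thesis performs full 2-D geometric warping of word images): compare the vertical projection profiles of two word images under dynamic time warping, where a lower alignment cost suggests the same word. Everything below is illustrative:

```python
import numpy as np

def dtw_cost(p, q):
    """Dynamic-time-warping alignment cost between two 1-D profiles,
    normalised by the combined profile length."""
    n, m = len(p), len(q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(p[i - 1] - q[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

def profile(word_img):
    # Vertical projection profile of a binarised word image
    # (fraction of ink pixels per column).
    return (word_img > 0).mean(axis=0)

# Toy check: a profile matched against a stretched copy of itself should
# cost less than against an unrelated profile.
p = np.sin(np.linspace(0, 3, 60)) ** 2
q = np.sin(np.linspace(0, 3, 90)) ** 2          # same "word", different width
r = np.random.default_rng(0).uniform(size=60)   # different "word"
print(dtw_cost(p, q) < dtw_cost(p, r))          # expected: True
```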
5

An Adversarial Approach to Spliced Forgery Detection and Localization in Satellite Imagery

Emily R. Bartusiak, 11 June 2019
The widespread availability of image editing tools and improvements in image processing techniques make image manipulation feasible for the general population. Oftentimes, easy-to-use yet sophisticated image editing tools produce results that contain modifications imperceptible to the human observer. Distribution of forged images can have drastic ramifications, especially when coupled with the speed and vastness of the Internet. Therefore, verifying image integrity poses an immense and important challenge to the digital forensic community. Satellite images specifically can be modified in a number of ways, such as inserting objects into an image to hide existing scenes and structures. In this thesis, we describe the use of a Conditional Generative Adversarial Network (cGAN) to identify the presence of such spliced forgeries within satellite images. Additionally, we identify their locations and shapes. Trained on pristine and falsified images, our method achieves high success on these detection and localization objectives.
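A minimal pix2pix-style sketch of the cGAN idea described above, assuming PyTorch; the layer sizes, losses, and training details below are illustrative guesses rather than the thesis architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Predicts a forgery mask conditioned on the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):            # (B, 3, H, W) -> (B, 1, H, W)
        return self.net(img)

class Discriminator(nn.Module):
    """Scores (image, mask) pairs; conditioning is channel concatenation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-level logits
        )

    def forward(self, img, mask):
        return self.net(torch.cat([img, mask], dim=1))

G, D, bce = Generator(), Discriminator(), nn.BCEWithLogitsLoss()
img = torch.randn(2, 3, 64, 64)                 # stand-in satellite patches
true_mask = torch.rand(2, 1, 64, 64).round()    # stand-in ground-truth masks

fake_mask = G(img)
real_score, fake_score = D(img, true_mask), D(img, fake_mask.detach())
d_loss = bce(real_score, torch.ones_like(real_score)) + \
         bce(fake_score, torch.zeros_like(fake_score))
adv_score = D(img, fake_mask)
g_loss = bce(adv_score, torch.ones_like(adv_score)) + \
         F.binary_cross_entropy(fake_mask, true_mask)
```

Conditioning the discriminator on the input image is what makes this a conditional GAN: the generator is pushed to produce masks that are plausible for that particular image, not merely mask-like.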
6

Signal Processing Algorithms For Digital Image Forensics

Prasad, S
The availability of digital cameras in various forms and of user-friendly image-editing software has enabled people to create and manipulate digital images easily. While image editing can be used to enhance the quality of images, it can also be used to tamper with images for malicious purposes. In this context, it is important to question the originality of digital images. Digital image forensics deals with the development of algorithms and systems to detect tampering in digital images. This thesis presents some simple algorithms which can be used to detect tampering in digital images. Of the various kinds of image forgeries possible, the discussion is restricted to photo compositing (photomontaging) and copy-paste forgeries. While creating a photomontage, it is very likely that one of the images needs to be resampled, and hence there will be an inconsistency in some of its underlying characteristics. Detection of resampling in an image therefore gives a clue as to whether the image has been tampered with. Two pixel-domain techniques for detecting resampling are presented. The first exploits the property of periodic zeros that occur in the second differences due to interpolation during resampling; it requires a special condition on the resampling factor to be met. The second technique is based on the periodic zero-crossings that occur in the second differences, and does not require any special condition on the resampling factor. Since this is an important property of resampling, the robustness of the technique against mild counter-attacks such as JPEG compression and additive noise has been studied; this property is used repeatedly throughout the thesis. It is a well-known fact that interpolation is essentially low-pass filtering. In the case of a photomontage consisting of resampled and non-resampled portions, there will be an inconsistency in the high-frequency content of the image, which can be exposed by simple high-pass filtering. This fact has also been exploited to detect photomontaging. One approach performs block-wise DCT and reconstructs the image using only a few high-frequency coefficients. Another elegant approach decomposes the image using wavelets and reconstructs it using only the diagonal detail coefficients. In both cases mere visual inspection will reveal the forgery. The second part of the thesis is related to tamper detection in colour filter array (CFA) interpolated images. Digital cameras employ Bayer filters to efficiently capture the RGB components of an image. The outputs of the Bayer filter are sub-sampled versions of the R, G and B components, and the missing samples are filled in using demosaicing algorithms. It has been shown that demosaicing of the colour components is equivalent to resampling the image by a factor of two; hence, CFA-interpolated images contain periodic zero-crossings in their second differences. The presence of these periodic zero-crossings has been demonstrated experimentally in images captured with four digital cameras of different brands. When such an image is tampered with, the periodic zero-crossings are destroyed, and hence the tampering can be detected. The utility of zero-crossings in detecting various kinds of forgeries on CFA-interpolated images is discussed; a toy demonstration of the factor-two zero-crossing property appears below.
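A minimal numerical check of the zero-crossing property, assuming linear interpolation and a resampling factor of exactly two (the CFA case); all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200)              # one "row" of original pixel values

# Resample by a factor of two with linear interpolation.
t = np.arange(0, len(x) - 0.5, 0.5)       # new sample positions
y = np.interp(t, np.arange(len(x)), x)

# Second differences of the resampled signal.
d2 = np.diff(y, n=2)

# Every even-indexed second difference is centred on an interpolated sample
# that is exactly the average of its neighbours, so it vanishes: periodic zeros.
print(np.abs(d2[::2]).max())              # ~1e-16 (zero up to rounding)
print(np.median(np.abs(d2[1::2])))        # clearly non-zero
```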
The next part of the thesis presents a technique to detect copy-paste forgery in images. Generally, when an object or a portion of an image has to be erased, the easiest way to do so is to copy a portion of the background from the same image and paste it over the object. In such a case there are two pixel-wise identical regions in the same image, which, when detected, can serve as a clue of tampering. The use of the Scale-Invariant Feature Transform (SIFT) in detecting this kind of forgery has been studied (a rough sketch of the matching step appears below), and certain modifications that need to be made to the image in order for SIFT to work effectively are proposed. Throughout the thesis, the importance of human intervention in making the final decision about the authenticity of an image is highlighted, and it is concluded that the techniques presented can effectively aid the decision-making process.
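As a rough sketch of the SIFT-based copy-move idea described above (not the exact procedure or modifications proposed in the thesis), one can match an image's SIFT descriptors against themselves and flag strong matches between distant keypoints; the OpenCV calls assume opencv-python 4.4 or later:

```python
import cv2
import numpy as np

def copy_move_candidates(gray, ratio=0.6, min_shift=40):
    """Flag pairs of SIFT keypoints whose descriptors match closely but
    that lie far apart: candidate copied-and-pasted regions."""
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # k=3: each descriptor's best match is itself, so inspect the next two.
    matches = matcher.knnMatch(desc, desc, k=3)
    pairs = []
    for m in matches:
        if len(m) < 3:
            continue
        _, first, second = m               # m[0] is the self-match
        if first.distance < ratio * second.distance:
            p1 = np.array(kps[first.queryIdx].pt)
            p2 = np.array(kps[first.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_shift:  # ignore near-duplicates
                pairs.append((tuple(p1), tuple(p2)))
    return pairs

# img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)
# print(copy_move_candidates(img))
```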
