31 |
Visual analysis of articulated motion
Tresadern, Phil January 2002 (has links)
No description available.
|
32 |
Public key cryptosystem based on error control coding and its applications to network coding
Rashwan, Haitham January 2011 (has links)
attack; the recent introduction of list decoding for binary Goppa codes; and the possibility of choosing code lengths that are not a power of 2. The resulting public-key sizes are considerably smaller than previous parameter choices for the same level of security. The smallest key size secure against all known attacks is 460,647 bits, which is too large for practical implementations to be efficient. In this thesis, we attempt to reduce McEliece's public key size by using other codes instead of Goppa codes. This thesis focuses on the Gabidulin-Paramonov-Tretjakov (GPT) cryptosystem, which is based on rank distance codes and relies on the difficulty of the general decoding problem. The GPT cryptosystem is a variant of the McEliece cryptosystem. The use of rank codes in cryptographic applications is advantageous because combinatorial decoding is practically impossible, which enables public keys of a smaller size. Structural attacks against this system were proposed by Gibson and, more recently, by Overbeck; Overbeck's attacks break many variants of the GPT cryptosystem in polynomial time. Gabidulin introduced the Advanced approach to prevent Overbeck's attacks. We evaluate the overall security of the GPT cryptosystem and its variants against both structural attacks and decoding (brute-force) attacks. Furthermore, we apply the Advanced approach to secure other variants of the GPT cryptosystem which are still vulnerable to Overbeck's attacks. Moreover, we introduce two new approaches to securing the GPT cryptosystem against all known attacks: the first is called the Smart approach and the second the constructed Smart approach. We study how to choose the GPT PKC parameters so as to minimize the public key size and implementation complexity, and to maximize the overall security of the GPT cryptosystem against all known attacks, in order to make an efficient system for low-power handsets. We present different trade-offs for using a combined system for error protection and cryptography. Our results suggest that the McEliece key size can be reduced to just 4000 bits with security of 2^80, a public key size of 4800 bits with security of 2^76, and a public key size of 17,200 bits with security of 2^116, corresponding respectively to the Advanced approach for the standard variant of GPT, the Advanced approach for the simple variant of GPT, and the Advanced approach for the simple variant of GPT based on reducible rank codes. Similarly, the Smart approach and the constructed Smart approach for the simple variant of the GPT cryptosystem reduce McEliece's key size to 5000 bits with security of 2^94 and to 7200 bits with security of 2^95, respectively. By using the GPT PKC and its variants, we achieve approximately a 99% reduction in the size of the public key compared with the McEliece cryptosystem, with a reasonable security level against all known attacks.
Network coding substantially increases network throughput, and random network coding is an effective technique for information dissemination in communications networks. The security of network coding is designed against two types of attack: wiretapping and Byzantine attacks. A wiretapping attack taps some of the original packets travelling from the source to the destination with the purpose of recovering the message; a Byzantine attack injects error packets and has the potential to affect all packets gathered by a receiver. We introduce a new scheme to provide information security by using the GPT public key cryptosystem together with Silva-Kötter-Kschischang random network codes. Moreover, we investigate the performance of the system when transmitting the encrypted packets to the destination (sink) through wired communication networks using different random network coding models. Our results show that the introduced scheme is secure against wiretapping and Byzantine attacks under conditions that depend on the rank code parameters.
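For intuition, the sketch below shows the McEliece-style trapdoor that GPT generalises to the rank metric: the public key is a disguised generator matrix, and encryption adds a random error that only the legitimate decoder can remove. It is a toy illustration using a tiny [7,4] Hamming code over GF(2) rather than Goppa or rank-metric Gabidulin codes, and none of the names or parameters come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generator matrix of the [7,4] Hamming code (systematic form), over GF(2).
# Real McEliece uses binary Goppa codes; GPT uses rank-metric Gabidulin codes.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
k, n = G.shape
t = 1  # the Hamming code corrects a single bit error

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy()
    r = 0
    for c in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

# Private key: an invertible scrambler S and a permutation P that disguise G.
while True:
    S = rng.integers(0, 2, (k, k), dtype=np.uint8)
    if gf2_rank(S) == k:
        break
P = np.eye(n, dtype=np.uint8)[rng.permutation(n)]

# Public key: G_pub = S G P, which looks like a random code to an attacker.
SG = (S @ G) % 2
G_pub = (SG @ P) % 2

# Encryption: c = m G_pub + e, where e is a random error of weight t.
m = rng.integers(0, 2, k, dtype=np.uint8)
e = np.zeros(n, dtype=np.uint8)
e[rng.choice(n, size=t, replace=False)] = 1
c = (m @ G_pub + e) % 2

print(f"public key: {k * n} bits, ciphertext: {c}")
```

At realistic parameters this same structure is what drives the key sizes quoted above: roughly 460,647 bits of public key for Goppa-code McEliece at the 2^80 level, against 4000 to 17,200 bits for the GPT variants.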
|
33 |
Articulated human tracking and behavioural analysis in video sequences
Husz, Zsolt Levente January 2008 (has links)
Recently, there has been a dramatic growth of interest in the observation and tracking of human subjects through video sequences. Arguably, the principal impetus has come from the perceived demand for technological surveillance; however, applications in entertainment, intelligent domiciles and medicine are also increasing. This thesis examines human articulated tracking and the classification of human movement, first separately and then as a sequential process. First, this thesis considers the development and training of a 3D model of human body structure and dynamics. To process video sequences, an observation model is also designed with a multi-component likelihood based on edge, silhouette and colour. This is defined on the articulated limbs, visible from a single camera or multiple cameras, each of which may be calibrated from that sequence. Second, for behavioural analysis, we develop a methodology in which actions and activities are described by semantic labels generated from a Movement Cluster Model (MCM). Third, a Hierarchical Partitioned Particle Filter (HPPF) is developed for human tracking that allows a multi-level parameter search consistent with the body structure. This tracker relies on the articulated motion prediction provided by the MCM at pose or limb level. Fourth, tracking and movement analysis are integrated to generate a probabilistic activity description with action labels. The implemented algorithms for tracking and behavioural analysis are tested extensively and independently against ground truth on human tracking and surveillance datasets. Dynamic models are shown to predict and generate synthetic motion, while the MCM recovers both periodic and non-periodic activities, defined either on the whole body or at the limb level. Tracking results are comparable with the state of the art, and the integrated behaviour analysis adds to the value of the approach.
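As background to the tracking machinery, the sketch below shows one predict-weight-resample cycle of a generic bootstrap particle filter on a single pose parameter. This is not the thesis's HPPF, which partitions the search hierarchically across body parts and draws its motion predictions from the MCM; the Gaussian dynamics and likelihood used here are stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         dynamics_std=0.1, obs_std=0.2):
    """One predict-weight-resample cycle of a bootstrap particle filter.

    `particles` holds hypothesised values of a single pose parameter
    (e.g. one joint angle); an articulated tracker runs many such
    parameters, organised to follow the body's kinematic hierarchy.
    """
    # Predict: diffuse each hypothesis with a motion model
    # (the thesis uses MCM-based articulated motion prediction here).
    particles = particles + rng.normal(0.0, dynamics_std, size=particles.shape)

    # Weight: score each hypothesis against the observation likelihood
    # (the thesis combines edge, silhouette and colour likelihoods).
    weights = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()

    # Resample: concentrate particles on high-likelihood regions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a slowly drifting angle from noisy measurements.
particles = rng.normal(0.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for step in range(20):
    z = 0.05 * step + rng.normal(0.0, 0.2)  # noisy observation of the truth
    particles, weights = particle_filter_step(particles, weights, z)
print("estimate:", particles.mean())
```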
|
34 |
Surface analysis using polarisation
Gul-E-Saman January 2012 (has links)
Unpolarised light incident on a surface acquires partial polarisation due to the orientation of the dipoles in the scatterer. This thesis focuses on the use of polarised light in diffuse reflectance for surface analysis. Since the state of polarisation is acquired on interaction with the surface, the polarised light carries information about the surface properties of the scatterer. A great amount of research has been carried out in computer vision on surface analysis using image analysis techniques; recently, the trend has been to combine optical techniques with computer vision to arrive at better analysis methods that examine the intrinsic qualities of the surfaces under study. An overview of recent work in the field, in the context of this thesis, is given in Chapter 2. The contributions of this thesis are:
1. the robust computation of the polarisation image using M-estimators, the smoothing of the phase of polarisation using directional statistics, and the use of the calculated parameters for effective surface recovery;
2. the estimation of the refractive index of a diverse set of surfaces of known and unknown refractive indices, and the use of the estimates for segmentation;
3. the estimation of the complex refractive index, which incorporates the phenomenon of absorption, by two methods existing in the literature, (a) ellipsometry and (b) multiple polarisation measurements, building on the case that surface analysis is related to a surface's optical properties; and
4. a preliminary study modifying the geometric factor of the polarimetric bidirectional reflectance distribution function.
Experimental evidence is presented for these methods on a variety of objects with varying geometrical and surface properties. The approach in this thesis has been to adopt simple and adaptable techniques that can be easily employed without the use of sophisticated equipment.
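As a concrete illustration of the first contribution, the sketch below recovers a polarisation image (mean intensity, degree and phase of polarisation) by fitting the transmitted-radiance sinusoid I(phi) = a + b cos 2phi + c sin 2phi to intensities measured at several polariser angles. The fit here is ordinary least squares; the thesis replaces it with M-estimators for robustness, so this is only the non-robust baseline.

```python
import numpy as np

def polarisation_image(intensities, angles):
    """Per-pixel least-squares fit of I(phi) = a + b cos 2phi + c sin 2phi.

    Returns mean intensity a, degree of polarisation rho = sqrt(b^2+c^2)/a,
    and phase of polarisation phi_bar = 0.5 atan2(c, b).
    """
    angles = np.asarray(angles, dtype=float)
    A = np.stack([np.ones_like(angles),
                  np.cos(2 * angles),
                  np.sin(2 * angles)], axis=1)        # (n_angles, 3)
    flat = intensities.reshape(len(angles), -1)       # (n_angles, n_pixels)
    coeffs, *_ = np.linalg.lstsq(A, flat, rcond=None) # (3, n_pixels)
    a, b, c = coeffs
    rho = np.hypot(b, c) / np.maximum(a, 1e-12)       # degree of polarisation
    phase = 0.5 * np.arctan2(c, b)                    # phase of polarisation
    shape = intensities.shape[1:]
    return a.reshape(shape), rho.reshape(shape), phase.reshape(shape)

# Synthetic check: one pixel with rho = 0.3 and phase = 0.4 rad.
phis = np.deg2rad([0, 45, 90, 135])
I = (1 + 0.3 * np.cos(2 * phis - 2 * 0.4)).reshape(4, 1, 1)
mean, rho, phase = polarisation_image(I, phis)
print(rho.item(), phase.item())  # ~0.3, ~0.4
```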
|
35 |
Gabor-boosting face recognition
Zhou, Mian January 2008 (has links)
In the past decade, automatic face recognition has received much attention from both the commercial and public sectors as an efficient and resilient recognition technique in biometrics. This thesis describes a highly accurate appearance-based algorithm for grey-scale, front-view face recognition, Gabor-boosting face recognition, drawing on computer vision, pattern recognition, image processing and machine learning. The strong performance of the Gabor-boosting face recognition algorithm comes from combining three leading-edge techniques: the Gabor wavelet transform, AdaBoost and the Support Vector Machine (SVM). The Gabor wavelet transform is used to extract features which describe texture variations of human faces. The AdaBoost algorithm is used to select the most significant features, which represent different individuals. The SVM constructs a classifier with high recognition accuracy. Within the AdaBoost algorithm, a novel weak learner, Potsu, is designed. The Potsu weak learner is fast due to its simple perceptron prototype, and accurate due to the large number of training examples available. More importantly, the Potsu weak learner is the only weak learner which satisfies the requirements of AdaBoost. The Potsu weak learners also demonstrate superior performance over other weak learners, such as FLD. The Gabor-boosting face recognition algorithm is extended into the multi-class classification domain, in which a multi-class weak learner called mPotsu is developed. The experiments show that performance is improved by applying loosely controlled face recognition in multi-class classification.
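To make the feature-extraction stage concrete, the sketch below builds the real part of a Gabor wavelet kernel, a Gaussian-windowed sinusoid; a bank of such kernels at several scales and orientations, convolved with a face image, yields the texture features from which AdaBoost selects. The particular parameter values are illustrative assumptions, not those of the thesis.

```python
import numpy as np

def gabor_kernel(size, wavelength, orientation, sigma, gamma=1.0):
    """Real part of a Gabor wavelet: a Gaussian envelope times a cosine
    carrier, rotated to the given orientation (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates into the kernel's own frame.
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A small illustrative bank: 4 orientations at one scale.
bank = [gabor_kernel(31, wavelength=8, orientation=t, sigma=4)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(bank[0].shape)  # (31, 31)
```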
|
36 |
Novel view synthesis
Wang, Zhe January 2007 (has links)
Imagine taking a few pictures of a scene with a camera at several different poses: can we generate a novel image corresponding to a virtual viewpoint where no real camera has captured an image? This is the topic of novel view synthesis. Driven largely by the commercial market, the last decade has seen a thriving interest in this topic in both the computer graphics and computer vision communities. Potential applications range from generating photo-realistic images and creating virtual environments to modelling real three-dimensional scenes.
|
37 |
Multi-level image authentication techniques in printing-and-scanning
Jiang, Weina January 2012 (has links)
Printed media, such as facsimiles, newspapers, documents, magazines and other published works, play an important role in communicating information in today's world. Printed media can be easily manipulated by advanced image editing software; image authentication techniques are therefore indispensable for preventing undesired manipulation and protecting against infringement of copyright. In this thesis, we investigate image authentication for multi-level greyscale and halftone images using digital watermarking, image hashing and digital forensic techniques, including their application to the printing-and-scanning process. Digital watermarking is the process of embedding information into a cover image, which is used to verify its authenticity. The challenge of digital watermarking is the trade-off between embedding capacity and image imperceptibility. In this thesis, we examine the halftone watermarking algorithm proposed by Fu and Au. We observe that perceptual image quality is reduced after watermark embedding due to the problems of sharpening distortion and uneven tonality distribution. To optimize the imperceptibility of watermark embedding, we propose an iterative linear gain halftoning algorithm. Our experiments show that the proposed halftone watermarking algorithm improves image quality significantly, by 6.5% to 12% as measured by Weighted Signal-to-Noise Ratio (WSNR) and by 11% to 23% as measured by Visual Information Fidelity (VIF), compared to Fu and Au's algorithm. While halftone watermarking provides limited robustness against print-and-scan processes, image hashing provides an alternative way to verify the authenticity of the content. Little work has been reported on image hashing for printed media. In this thesis, we develop a novel image hashing algorithm based on the SIFT local descriptor and introduce a normalization procedure to synchronize the printed-and-scanned image. We compare our proposed hashing algorithm with singular value decomposition based image hashing (SVD-hash) and feature-point based image hashing (FP-hash) using the average Normalized Hamming Distance (NHD) and the Receiver Operating Characteristic (ROC). The proposed hash algorithm shows a good trade-off between robustness and discrimination compared to the SVD-hash and FP-hash algorithms, as quantified by the NHD and ROC results. Our proposed algorithm is found to be robust against a wide range of content-preserving attacks, including non-geometric attacks, geometric attacks and printing-and-scanning. For our work in digital forensics, we propose a statistical approach based on Multi-sized block Benford's Law (MBL) and a texture analysis based on Local Binary Patterns (LBP) to identify the origins of printed documents. We compare the MBL-based and LBP-based approaches to the statistical feature-based approach proposed by Gou et al. The proposed MBL-based approach can identify printers from a relatively diverse set, but proves less accurate at distinguishing printers of similar models. The proposed LBP-based approach provides a highly accurate identification rate of approximately 99.4%, with low variance. In particular, our LBP-based approach gives only a 2% mis-identification rate between two identical printers, whereas Gou et al.'s approach gives a 20% mis-identification rate. Our proposed LBP-based approach has also been successfully demonstrated on printed-and-scanned text documents. Moreover, it remains robust against common image processing attacks, including average filtering, median filtering, sharpening, rotation, resizing and JPEG compression, with computational efficiency of the order of O(N).
Key words: Authentication, Watermarking, Image Hashing, Perceptual Quality, Local Binary Pattern, Scale Invariant Feature Transform, Printer Identification, Scanner Identification, Sensor
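As an illustration of the texture features underlying the printer-identification results, the sketch below computes a basic 8-neighbour Local Binary Pattern histogram. The thesis builds its approach on LBP, but the exact variant, block structure and classifier are not reproduced here.

```python
import numpy as np

def lbp_histogram(image):
    """Basic 8-neighbour LBP histogram of a greyscale image.

    Each interior pixel is encoded by thresholding its 3x3 neighbourhood
    against the centre; the normalised 256-bin histogram of the codes is
    the texture signature used to compare prints.
    """
    img = image.astype(np.int32)
    c = img[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left, one bit each.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Compare two scans by L1 histogram distance (lower = more similar).
rng = np.random.default_rng(0)
a, b = rng.integers(0, 256, (2, 64, 64))
print(np.abs(lbp_histogram(a) - lbp_histogram(b)).sum())
```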
|
38 |
On the automatic recognition of facial non-verbal communication meaning in informal, spontaneous conversation
Sheerman-Chase, Tim January 2012 (has links)
Non-Verbal Communication (NVC) comprises all forms of inter-personal communication apart from those based on words. NVC is essential to understanding communicated meaning in common social situations, such as informal conversation. The expression and perception of NVC depend on many factors, including social and cultural context. The development of methods to automatically recognise NVC enables new, intuitive computer interfaces for novel applications, particularly when combined with emotion or natural speech recognition. This thesis addresses two questions: how can facial NVC signals be automatically recognised, given cultural differences in NVC perception? And what do automatic recognition methods tell us about facial behaviour during informal conversations? A new data set was created based on recordings of people engaged in informal conversation. Minimal constraints were applied during the recording of the participants to ensure that the conversations were spontaneous. These conversations were annotated by volunteer observers, as well as by paid workers via the Internet. This resulted in three sets of culturally specific annotations based on the geographical location of the annotator (Great Britain, India, Kenya); the cultures differed in the average label that their annotators assigned to each video clip. Annotations were based on four NVC signals: agreement, thinking, questioning and understanding, all of which commonly occur in conversation. An automatic NVC recognition system was trained on the culturally specific annotation data and was able to make predictions that reflected cultural differences in annotation. Various visual feature extraction methods and classifiers were compared to find an effective recognition approach. The problem was also considered from the perspective of regression of dimensional, continuous-valued annotation labels, using Support Vector Regression (SVR), which enables the prediction of labels with richer information content than discrete classes. The use of Sequential Backward Elimination (SBE) feature selection was shown to greatly increase recognition performance. With a method for extracting the relevant facial features, it becomes possible to investigate human behaviour in informal conversation using computer tools. Firstly, the areas of the face used by the automatic recognition system can be identified and visualised. The involvement of gaze in thinking is confirmed, and a new association between a gesture and NVC is identified, namely brow lowering (AU4) during questioning. These findings provide clues as to the way humans perceive NVC. Secondly, the existence of coupling in human expression is quantified and visualised. Patterns exist both in mutual head pose and in the mouth area, some of which may relate to mutual smiling. This coupling effect is used in an automatic NVC recognition system based on backchannel signals.
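The sketch below illustrates Sequential Backward Elimination wrapped around Support Vector Regression, the combination the abstract reports as greatly increasing recognition performance. It is a generic greedy SBE loop with illustrative scoring settings, not the thesis's exact protocol.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def sequential_backward_elimination(X, y, min_features=2, cv=3):
    """Greedy SBE: repeatedly drop the feature whose removal most improves
    cross-validated SVR score, stopping when no single removal helps or
    when the floor on feature count is reached."""
    selected = list(range(X.shape[1]))
    best = cross_val_score(SVR(), X[:, selected], y, cv=cv).mean()
    while len(selected) > min_features:
        trials = []
        for f in selected:
            remaining = [g for g in selected if g != f]
            score = cross_val_score(SVR(), X[:, remaining], y, cv=cv).mean()
            trials.append((score, f))
        score, f = max(trials)
        if score <= best:
            break  # no single removal improves the score any further
        best = score
        selected.remove(f)
    return selected, best

# Toy data: only the first three of ten features carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + 0.1 * rng.normal(size=120)
feats, score = sequential_backward_elimination(X, y)
print(feats, round(score, 3))
```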
|
39 |
Image retrieval using semantic trees
Torres, Jose Roberto Perez January 2008 (has links)
No description available.
|
40 |
Progressive document evaluation
Macdonald, Alexander J. January 2008 (has links)
No description available.
|