  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A Localized Geometric-Distortion Resilient Digital Watermarking Scheme Using Two Kinds of Complementary Feature Points

Wang, Jiyuan 01 May 2012 (has links)
With the rapid development of digital multimedia and Internet technologies in recent years, more and more digital images are being distributed to an ever-growing number of people for sharing, study, or other purposes. Sharing images digitally is fast and cost-efficient, and thus highly desirable. However, most of these digital products are exposed without any protection; without authorization, such content can easily be transferred, copied, and tampered with using digital multimedia editing software. Watermarking is a popular solution to the strong need for copyright protection of digital multimedia. In the image forensics scenario, a digital watermark can be used as a tool to determine whether the original content has been tampered with. It is embedded in digital images as an invisible message and serves as the owner's proof of ownership. In this thesis, we propose a novel localized geometric-distortion resilient digital watermarking scheme that embeds two invisible messages into images. Our proposed scheme utilizes two complementary watermarking techniques, namely local circular region (LCR)-based techniques and block discrete cosine transform (DCT)-based techniques, to hide two pseudo-random binary sequences in two kinds of regions and to extract these two sequences from their respective embedding regions. To this end, we use the histogram and mean, which are statistically independent of pixel position, to embed one watermark in the LCRs, whose centers are scale invariant feature transform (SIFT) feature points that are robust against various affine transformations and common image processing attacks. This watermarking technique combines the advantages of SIFT feature point extraction, local histogram computation, and blind watermark embedding and extraction in the spatial domain to resist geometric distortions. We also use Watson's DCT-based visual model to embed the other watermark in several richly textured 80×80 regions not covered by any embedding LCR. This watermarking technique combines the advantages of Harris feature point extraction, triangle tessellation and matching, the human visual system (HVS), and spread spectrum-based blind watermark embedding and extraction. The proposed technique then uses these combined features in the DCT domain to resist common image processing attacks and, at the same time, to reduce the watermark synchronization problem. These two techniques complement each other and can therefore robustly resist both geometric and common image processing attacks. Our proposed watermarking approach is a robust technique capable of resisting geometric attacks, i.e., affine transformation (rotation, scaling, and translation) attacks, as well as other common image processing attacks (e.g., JPEG compression and filtering operations). It demonstrates greater robustness and better performance compared with several peer systems in the literature.
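As a rough, illustrative sketch only (not the embedding scheme described above), the snippet below shows how SIFT keypoints might be selected as centers of non-overlapping local circular regions for localized embedding. The radius, region count, and the placeholder file name "cover.png" are assumptions, not values from the thesis.

```python
import cv2

def select_lcr_centers(gray, radius=40, max_regions=8):
    """Pick strong, well-separated SIFT keypoints as centers of circular embedding regions."""
    sift = cv2.SIFT_create()                      # available in opencv-python >= 4.4
    keypoints = sorted(sift.detect(gray, None), key=lambda k: k.response, reverse=True)
    centers = []
    h, w = gray.shape
    for kp in keypoints:
        x, y = kp.pt
        # keep the circular region fully inside the image ...
        if x < radius or y < radius or x > w - radius or y > h - radius:
            continue
        # ... and non-overlapping with regions already selected
        if all((x - cx) ** 2 + (y - cy) ** 2 > (2 * radius) ** 2 for cx, cy in centers):
            centers.append((x, y))
        if len(centers) == max_regions:
            break
    return centers

# hypothetical usage: "cover.png" is a placeholder file name
img = cv2.imread("cover.png", cv2.IMREAD_GRAYSCALE)
if img is not None:
    print(select_lcr_centers(img))
```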
2

Perception and re-synchronization issues for the watermarking of 3D shapes

Rondao Alface, Patrice 26 October 2006 (has links)
Digital watermarking is the art of embedding secret messages in multimedia contents in order to protect their intellectual property. While the watermarking of image, audio and video is reaching maturity, the watermarking of 3D virtual objects is still a technology in its infancy. In this thesis, we focus on two main issues. The first one is the perception of the distortions caused by the watermarking process or by attacks on the surface of a 3D model. The second one concerns the development of techniques able to retrieve a watermark without the availability of the original data and after common manipulations and attacks. Since imperceptibility is a strong requirement, assessing the visual perception of the distortions that a 3D model undergoes in the watermarking pipeline is a key issue. In this thesis, we propose an image-based metric that relies on the comparison of 2D views with a Mutual Information criterion. A psychovisual experiment has validated the results of this metric for the most common watermarking attacks. The other issue this thesis deals with is the blind and robust watermarking of 3D shapes. In this context, three different watermarking schemes are proposed. These schemes differ in the classes of 3D watermarking attacks they are able to resist. The first scheme is based on the extension of spectral decomposition to 3D models. This approach leads to robustness against imperceptible geometric deformations. The weakness of this technique is mainly related to resampling or cropping attacks. The second scheme extends the first to resampling by making use of the automatic multiscale detection of robust umbilical points. The third scheme then addresses the cropping attack by detecting robust prong feature points to locally embed a watermark in the spatial domain.
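The view-based comparison mentioned above can be illustrated with a minimal sketch (this is not the thesis metric itself): mutual information between two grayscale renderings of a 3D model, estimated from their joint histogram. The bin count is an arbitrary assumption.

```python
import numpy as np

def mutual_information(view_a, view_b, bins=64):
    """Estimate MI from the joint histogram of two equally sized grayscale images."""
    joint, _, _ = np.histogram2d(view_a.ravel(), view_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint distribution of gray levels
    px = pxy.sum(axis=1, keepdims=True)            # marginal of view_a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of view_b
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```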
3

Identificação de pontos robustos em marcadores naturais e aplicação de metodologia baseada em aprendizagem situada no desenvolvimento de sistemas de realidade aumentada / Identification of robust points in natural markers and application of a methodology based on situated learning to the development of augmented reality systems

Forte, Cleberson Eugenio 07 August 2015 (has links)
In the past, Augmented Reality (AR) required advanced technology in special devices for interaction and visualization. Nowadays, with the emergence of mobile devices, it has become common to use these tools to develop AR systems for various purposes, including education, using natural markers. As the quality of images captured by mobile devices has increased, the number of detected feature points has also increased, which ultimately hampers, or even prevents, the technique from being used in applications that must run in real time. In addition, there is a clear need for methodologies to guide the development of educational AR applications, in order to improve the user's experience as well as the longevity of these applications by adding elements grounded in educational theories. The technique presented in this work determines feature points that are robust to illumination changes, in order to reduce the time required to match high-resolution images. The research also proposes a conceptual framework methodology that, based on situated learning theory, combines the educational and technological aspects of developing mobile AR applications. Based on the experiments, the technique using robust feature points saves about 70% of the processing time for matching high-resolution images.
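One hypothetical way to realize the general idea of illumination-robust feature points (the thesis technique itself is not reproduced here) is to keep only the points that are re-detected under synthetic brightness and contrast changes, so that fewer but more stable points enter the matching stage. The detector (ORB), the perturbation values, and the pixel tolerance are all assumptions.

```python
import cv2
import numpy as np

def illumination_stable_points(gray, tol=3.0):
    """Return keypoint coordinates that survive simple brightness/contrast perturbations."""
    orb = cv2.ORB_create(nfeatures=2000)
    stable = [kp.pt for kp in orb.detect(gray, None)]
    for alpha, beta in [(0.7, 0), (1.3, 0), (1.0, 30), (1.0, -30)]:   # synthetic lighting changes
        variant = cv2.convertScaleAbs(gray, alpha=alpha, beta=beta)
        pts = np.array([kp.pt for kp in orb.detect(variant, None)]).reshape(-1, 2)
        # keep a point only if some keypoint in the perturbed image lies within tol pixels
        stable = [p for p in stable
                  if len(pts) and np.min(np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1])) < tol]
    return stable
```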
4

Sequential Motion Estimation and Refinement for Applications of Real-time Reconstruction from Stereo Vision

Stefanik, Kevin Vincent 10 August 2011 (has links)
This paper presents a new approach to the feature-matching problem for 3D reconstruction by taking advantage of GPS and IMU data, along with a pre-calibrated stereo camera system. It is expected that pose estimates and calibration can be used to increase feature matching speed and accuracy. Given pose estimates of the cameras and features extracted from images, the algorithm first enumerates feature matches based on stereo projection constraints in 2D and then back-projects them to 3D. A grid search over potential camera poses is then proposed to match the 3D features and find the largest group of 3D feature matches between pairs of stereo frames. This approach provides pose accuracy to within the space that each grid region covers. Further refinement of the relative camera poses is performed with an iteratively re-weighted least squares (IRLS) method in order to reject outliers in the 3D matches. The algorithm is shown to be capable of running correctly in real time, with the majority of processing time taken by feature extraction and description. The method is shown to outperform standard open source software for reconstruction from imagery.
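The outlier-rejection step can be illustrated with a minimal IRLS sketch using Huber weights; here it estimates only a 3D translation between matched point sets, whereas the thesis refines full relative camera poses, and the threshold value is an assumption.

```python
import numpy as np

def irls_translation(src, dst, delta=0.1, iters=20):
    """src, dst: (N, 3) arrays of matched 3D points; returns a robust translation estimate."""
    t = np.zeros(3)
    for _ in range(iters):
        residuals = dst - (src + t)
        norms = np.linalg.norm(residuals, axis=1)
        # Huber weights: full weight for inliers, down-weighted influence for outliers
        weights = np.where(norms <= delta, 1.0, delta / np.maximum(norms, 1e-12))
        t = t + (weights[:, None] * residuals).sum(axis=0) / weights.sum()
    return t
```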
5

A technique for face recognition based on image registration

Gillan, Steven 12 April 2010 (has links)
This thesis presents a technique for face recognition that is based on image registration. The image registration technique is based on finding a set of feature points in the two images and using these feature points for registration. This is done in four steps. In the first, the images are filtered with the Mexican hat wavelet to obtain the feature point locations. In the second, the Zernike moments of neighbourhoods around the feature points are calculated. These moments are compared in the third step to establish correspondence between feature points in the two images, and in the fourth, the transformation parameters between the images are obtained using an iterative weighted least squares technique. The face recognition technique consists of three parts: a training part, an image registration part, and a post-processing part. During training, a set of images is chosen as the training images, and the Zernike moments for the feature points of the training images are obtained and stored. In the registration part, the transformation parameters that register the training images with the images under consideration are obtained. In post-processing, these transformation parameters are used to determine whether a valid match has been found. The performance of the proposed method is evaluated using various face databases and compared with the performance of existing techniques. Results indicate that the proposed technique gives excellent face recognition results under varying pose, illumination, background, and scale, comparable to other well-known face recognition techniques.
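The first step of the pipeline can be sketched roughly as follows (parameter values are assumptions, and a Laplacian-of-Gaussian stands in for the Mexican hat wavelet, to which it is closely related): filter the image and keep strong local maxima of the response as candidate feature points.

```python
import numpy as np
from scipy import ndimage

def mexican_hat_points(gray, sigma=3.0, threshold=0.02, size=9):
    """Return (x, y) candidate feature points from a Mexican-hat-like filter response."""
    gray = gray.astype(np.float64) / 255.0
    response = -ndimage.gaussian_laplace(gray, sigma=sigma)    # Mexican hat ~ negated LoG
    local_max = ndimage.maximum_filter(response, size=size) == response
    ys, xs = np.nonzero(local_max & (response > threshold))
    return list(zip(xs, ys))
```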
6

Détection d’objets en mouvement à l’aide d’une caméra mobile / Moving objects detection with a moving camera

Chapel, Marie-Neige 22 September 2017 (has links)
Moving object detection in video streams is an essential step for many computer vision algorithms. The task becomes more complex when the camera itself is moving: the environment observed by such a camera appears to move, and it is harder to distinguish the objects that are actually moving from those that make up the static part of the scene. In this thesis, we propose contributions to the detection of moving objects in the video stream of a moving camera. The main idea that allows us to separate moving from static elements relies on distances computed in 3D space. The 3D positions of feature points extracted from the images are estimated by triangulation, and their 3D motions are then analyzed to produce a sparse static/moving labeling of these points. To make the detection robust to noise, the analyzed 3D motion of each feature point is compared with that of points previously estimated to be static. A confidence measure, updated over time, is used to decide which label to assign to each point. Our contributions were evaluated on virtual datasets (from the Previz project) and real datasets (well known in the community [Och+14]), with comparisons against the state of the art. The results show that the proposed 3D constraint, coupled with a statistical and temporal analysis of the motions, makes it possible to detect moving elements in the video stream of a moving camera, even in complex cases where the apparent motions of the scene are not uniform.
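The core 3D cue can be sketched as follows (a simplification of the method described above, without the confidence measure or the comparison against static points): triangulate tracked points at two time steps and label a point as moving when its 3D position changes by more than a threshold. The threshold value is an assumption.

```python
import cv2
import numpy as np

def label_moving(P1, P2, pts_cam1, pts_cam2, prev_xyz, thresh=0.05):
    """P1, P2: 3x4 projection matrices; pts_cam*: 2xN pixel coordinates; prev_xyz: 3xN points."""
    pts1 = np.asarray(pts_cam1, dtype=np.float64)
    pts2 = np.asarray(pts_cam2, dtype=np.float64)
    homog = cv2.triangulatePoints(P1, P2, pts1, pts2)      # 4xN homogeneous 3D points
    xyz = homog[:3] / homog[3]                             # back to Euclidean coordinates
    displacement = np.linalg.norm(xyz - prev_xyz, axis=0)
    return xyz, displacement > thresh                      # True = labelled moving
```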
7

Model-Based Eye Detection and Animation

Trejo Guerrero, Sandra January 2006 (has links)
In this thesis we present a system to extract the eye motion from a video stream containing a human face and to apply this eye motion to a virtual character. By eye motion estimation, we mean the information that describes the location of the eyes in each frame of the video stream. By applying this eye motion estimation to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion onto the virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, the detection and extraction of the eyes is applied. The detection and extraction of the eyes is based on edge detection. The iris center is then determined by applying different image preprocessing steps and region segmentation using edge features on the extracted eye image. Once we have extracted the eye motion, it is translated into MPEG-4 Facial Animation Parameters (FAPs). In this way we can improve the quality and quantity of the facial animation expressions that we can synthesize for a virtual character.
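As a loose illustration of iris localization from edge information (not the preprocessing and region-segmentation pipeline used in the thesis), the sketch below finds a plausible iris centre in a cropped eye image with the Hough circle transform; all parameter values are assumptions.

```python
import cv2

def iris_center(eye_bgr):
    """Return an approximate (x, y) iris centre in a cropped eye image, or None."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                     # suppress eyelash noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0],
                               param1=100, param2=15,
                               minRadius=5, maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None
    x, y, _ = circles[0][0]                            # strongest circle: the iris candidate
    return int(x), int(y)
```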
8

Matching Feature Points in 3D World

Avdiu, Blerta January 2012 (has links)
This thesis work deals with one of the most topical subjects in the computer vision field, scene understanding, using the matching of 3D feature point images. The objective is to make use of Saab's latest breakthrough in the extraction of 3D feature points to identify the best alignment of at least two 3D feature point images. The thesis gives a theoretical overview of the latest algorithms used for feature detection, description, and matching. The work continues with a brief description of the simultaneous localization and mapping (SLAM) technique, ending with a case study evaluating a newly developed software solution for SLAM called slam6d. Slam6d is a tool that registers point clouds into a common coordinate system, performing automatic, highly accurate registration of laser scans. In the case study, the use of slam6d is extended to registering 3D feature point images extracted from a stereo camera, and the registration results are analyzed. The case study starts with the registration of a single 3D feature point image captured from a stationary image sensor and continues with the registration of multiple images following a trail. The conclusion from the case study is that slam6d can register feature point images not extracted from laser scans with high accuracy in the single-image case, but it introduces some overlapping results in the case of multiple images following a trail.
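slam6d itself is not shown here; as a minimal sketch of the underlying operation, the Kabsch/SVD solution below finds the rigid transform that best aligns two 3D feature point sets with known correspondences.

```python
import numpy as np

def rigid_align(src, dst):
    """src, dst: (N, 3) corresponding 3D points; returns rotation R (3x3) and translation t (3,)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```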
9

Recomendações de obras de arte baseadas em conteúdo / Content-based recommendation of works of art

Ribani, Ricardo 11 February 2015 (has links)
With the growing amount of multimedia information, recommender systems have become increasingly present in digital systems. Together with the growth of the Internet, more and more people have access to large multimedia collections, and consequently users are often in doubt when making a choice. In order to help users make their own choices, this research presents a study of content-based recommender systems applied to art paintings. It covers image retrieval algorithms as well as computer vision and artificial intelligence concepts, such as techniques for pattern recognition. One of the goals of this research was the creation of software for mobile phones applied to a database of art paintings. The application uses an interface developed for mobile phones in which the user can point the phone's camera at a painting; based on this painting, the system recommends another painting from the same database, considering parameters such as style, genre, or color.
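As a hypothetical, much-simplified sketch of content-based recommendation (the thesis considers style, genre, and color, of which only color is illustrated here), the snippet ranks database paintings by colour-histogram similarity to a query photo; the bin count and similarity measure are assumptions.

```python
import cv2

def colour_histogram(img_bgr, bins=16):
    """Normalized hue/saturation histogram used as a simple colour descriptor."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def recommend(query_bgr, database):
    """database: list of (title, image) pairs; returns titles sorted by colour similarity."""
    q = colour_histogram(query_bgr)
    scored = [(cv2.compareHist(q, colour_histogram(img), cv2.HISTCMP_CORREL), title)
              for title, img in database]
    return [title for _, title in sorted(scored, reverse=True)]
```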
