  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Progressive Lossless Image Compression Using Image Decomposition and Context Quantization

Zha, Hui 23 January 2007
Lossless image compression has many applications, for example in medical imaging, space photography and the film industry. In this thesis, we propose an efficient lossless image compression scheme for both binary images and gray-scale images. The scheme first decomposes images into a set of progressively refined binary sequences and then uses context-based adaptive arithmetic coding to encode these sequences. To deal with the context dilution problem in arithmetic coding, we propose a Lloyd-like iterative algorithm to quantize contexts. Fixing the set of input contexts and the number of quantized contexts, our context quantization algorithm iteratively finds the optimal context mapping in the sense of minimizing the compression rate. Experimental results show that, by combining image decomposition and context quantization, our scheme achieves compression performance competitive with the JBIG algorithm for binary images and the CALIC algorithm for gray-scale images. In contrast to CALIC, our scheme additionally allows progressive transmission of gray-scale images, which is very appealing in applications such as web browsing.
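The Lloyd-like context quantization step lends itself to a short illustration. The sketch below iterates between assigning raw binary contexts to a fixed number of groups and re-estimating each group's coding distribution, scoring assignments by the ideal code length of a context's bit counts under the group distribution. The function name, the random initialization and the Laplace smoothing are illustrative assumptions; this is a minimal sketch of the idea, not the exact algorithm of the thesis.

```python
import numpy as np

def quantize_contexts(counts, num_groups, iters=20, seed=0):
    """Lloyd-style context quantization (illustrative sketch).

    counts : (C, 2) array; counts[c] = (#zeros, #ones) observed in raw context c.
    Returns a mapping of each raw context to one of `num_groups` groups, chosen
    to (locally) minimize the total code length when all contexts in a group
    share a single coding distribution.
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    assign = rng.integers(0, num_groups, size=counts.shape[0])   # initial mapping

    def code_length(n, p):
        # ideal number of bits to encode counts n = (n0, n1) under p = (p0, p1)
        return -(n[..., 0] * np.log2(p[..., 0]) + n[..., 1] * np.log2(p[..., 1]))

    for _ in range(iters):
        # update step: merged, Laplace-smoothed distribution of each group
        group_counts = np.zeros((num_groups, 2))
        np.add.at(group_counts, assign, counts)
        p = (group_counts + 1.0) / (group_counts.sum(axis=1, keepdims=True) + 2.0)

        # assignment step: move each raw context to the group whose distribution
        # encodes that context's own counts with the fewest bits
        cost = code_length(counts[:, None, :], p[None, :, :])    # shape (C, K)
        new_assign = cost.argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break                                                # converged
        assign = new_assign
    return assign
```

Each sweep tends to decrease the total code length, so the iteration settles on a locally optimal context mapping, which is the role such a quantizer plays before adaptive arithmetic coding.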
2

MARDI (Marca d'água Digital Robusta via Decomposição de Imagens): Robust Digital Watermarking via Image Decomposition, a proposal to increase the robustness of digital watermarking techniques

Lopes, Ivan Oliveira January 2018
Advisor: Alexandre César Rodrigues da Silva / Abstract: With the rapid evolution of electronic equipment, digital data are easily produced, copied and distributed, raising serious concerns about their security. Among the various techniques used to protect digital data are techniques for inserting and extracting watermarks in digital images. A watermark can be any piece of information, such as a code, a logo, or a random sequence of letters and numbers, intended to guarantee the authenticity and copyright protection of the data. This work reviews existing digital watermark insertion and extraction techniques, from the basic concepts up to the development of insertion and extraction algorithms for digital images. A method was developed to increase the robustness of digital watermarking techniques by decomposing the image into two parts: a structural part (homogeneous areas) and a detail part (areas with noise, textures and edges). The watermark is inserted in the detail part, since these areas are less affected by digital image-processing operations. The results showed that the proposed method increased the robustness of the watermarking techniques tested. Based on these results, a new digital watermarking technique was developed, combining the discrete wavelet transform, image decomposition and the discrete cosine transform. / Doctorate
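The structural/detail split and the detail-only embedding can be illustrated with a toy sketch. Below, a median filter stands in for the edge-preserving decomposition, and a generic additive pseudo-random embedding with a correlation detector stands in for the DWT/DCT-based technique developed in the thesis; every function name and parameter here is an illustrative assumption, not the MARDI method itself.

```python
import numpy as np
from scipy.ndimage import median_filter

def decompose(image, size=5):
    """Split an image into a structural part (homogeneous areas) and a detail
    part (noise, textures and edges).  The median filter is only a stand-in
    for whatever edge-preserving decomposition is actually used."""
    structural = median_filter(image.astype(float), size=size)
    detail = image.astype(float) - structural
    return structural, detail

def embed_watermark(image, strength=2.0, seed=42):
    """Embed a pseudo-random +/-1 watermark additively, but only in the detail
    areas, which are less affected by common image-processing operations."""
    structural, detail = decompose(image)
    rng = np.random.default_rng(seed)
    w = rng.choice([-1.0, 1.0], size=image.shape)        # watermark pattern
    mask = np.abs(detail) > np.abs(detail).mean()        # textured / edge areas
    marked = structural + detail + strength * w * mask
    return np.clip(marked, 0, 255), w

def detect_watermark(marked, w):
    """Blind correlation detector applied to the detail part of a test image."""
    _, detail = decompose(marked)
    return float(np.mean(detail * w))                    # clearly > 0 if marked
```

Embedding only in the detail part leaves the visible, homogeneous regions untouched, which is the same rationale the abstract gives for the method's robustness.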
3

MARDI (Marca d'água Digital Robusta via Decomposição de Imagens): Robust Digital Watermarking by Image Decomposition, a proposal to increase the robustness of digital watermarking techniques

Lopes, Ivan Oliveira 27 July 2018
With the rapid evolution of electronic equipment, digital data are easily produced, copied and distributed, raising serious concerns about their security. Among the various techniques used to protect digital data are techniques for inserting and extracting watermarks in digital images. A watermark can be any piece of information, such as a code, a logo, or a random sequence of letters and numbers, intended to guarantee the authenticity and copyright protection of the data. This work reviews existing digital watermark insertion and extraction techniques, from the basic concepts up to the development of insertion and extraction algorithms for digital images. A method was developed to increase the robustness of digital watermarking techniques by decomposing the image into two parts: a structural part (homogeneous areas) and a detail part (areas with noise, textures and edges). The watermark is inserted in the detail part, since these areas are less affected by digital image-processing operations. The results showed that the proposed method increased the robustness of the watermarking techniques tested. Based on these results, a new digital watermarking technique was developed, combining the discrete wavelet transform, image decomposition and the discrete cosine transform.
4

Segmentation of 3D Carotid Ultrasound Images Using Weak Geometric Priors

Solovey, Igor January 2010
Vascular diseases are among the leading causes of death in Canada and around the globe. A major underlying cause of most such medical conditions is atherosclerosis, a gradual accumulation of plaque on the walls of blood vessels. Particularly vulnerable to atherosclerosis is the carotid artery, which carries blood to the brain. Dangerous narrowing of the carotid artery can lead to embolism, the dislodgement of plaque fragments that travel to the brain and cause most strokes. If this pathology can be detected early, such a deadly scenario can potentially be prevented through treatment or surgery. This not only improves the patient's prognosis, but also dramatically lowers the overall cost of their treatment. Medical imaging is an indispensable tool for early detection of atherosclerosis, in particular since the exact location and shape of the plaque need to be known for accurate diagnosis. This can be achieved by locating the plaque inside the artery and measuring its volume or texture, a process which is greatly aided by image segmentation. In particular, the use of ultrasound imaging is desirable because it is a cost-effective and safe modality. However, ultrasonic images depict sound-reflecting properties of tissue, and thus suffer from a number of unique artifacts not present in other medical images, such as acoustic shadowing, speckle noise and discontinuous tissue boundaries. A robust ultrasound image segmentation technique must take these properties into account. Prior to segmentation, an important pre-processing step is the extraction of a series of features from the image via application of various transforms and non-linear filters. A number of such features are explored and evaluated, many of them resulting in piecewise smooth images. It is also proposed to decompose the ultrasound image into several statistically distinct components. These components can then be used as features directly, or other features can be obtained from them instead of the original image. The decomposition scheme is derived within a maximum-a-posteriori (MAP) estimation framework and is efficiently computable. Furthermore, this work presents and evaluates an algorithm for segmenting the carotid artery from other tissues in 3D ultrasound images. The algorithm incorporates information from different sources using an energy minimization framework. Using the ultrasound image itself, statistical differences between the region of interest and its background are exploited, and maximal overlap with strong image edges is encouraged. In order to aid convergence to anatomically accurate shapes, as well as to deal with the above-mentioned artifacts, prior knowledge is incorporated into the algorithm by using weak geometric priors. The performance of the algorithm is tested on a number of available 3D images, and encouraging results are obtained and discussed.
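The abstract does not state the segmentation energy explicitly, but a functional combining region statistics, edge alignment and a weak geometric prior, as described, would plausibly take a form like the following (assumed notation: S is the segmenting surface, I the 3D ultrasound volume):

```latex
E(S) \;=\;
  \underbrace{\int_{\mathrm{in}(S)} -\log p_{\mathrm{in}}\big(I(x)\big)\,dx
            + \int_{\mathrm{out}(S)} -\log p_{\mathrm{out}}\big(I(x)\big)\,dx}_{\text{region statistics}}
  \;+\; \lambda_{e} \underbrace{\oint_{S} g\big(|\nabla I|\big)\,ds}_{\text{edge alignment}}
  \;+\; \lambda_{p} \underbrace{d^{2}\big(S,\;\mathcal{S}_{\mathrm{prior}}\big)}_{\text{weak geometric prior}}
```

Here p_in and p_out are intensity (or feature) models for the vessel interior and the background, g is an edge-stopping function that is small on strong gradients so that the surface is drawn towards image edges, and d measures deviation from a prior family of shapes; the weights control how strongly the edge and prior terms constrain the minimizer.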
5

Contributions to image restoration: from numerical optimization strategies to blind deconvolution and shift-variant deblurring

Mourya, Rahul Kumar 01 February 2016
Degradation of images during the acquisition process is inevitable: images suffer from blur and noise. With advances in technology and computational tools, these degradations can be avoided or corrected to a significant extent; however, the quality of acquired images is still not adequate for many applications, which calls for more sophisticated digital image restoration tools. This thesis is a contribution to image restoration. It is divided into five chapters, each with a detailed discussion of a different aspect of image restoration. It starts with a generic overview of imaging systems and points out the possible degradations occurring in images together with their fundamental causes. In some cases the blur can be considered stationary throughout the field of view and can then be simply modeled as a convolution. In many practical cases, however, the blur varies across the field of view, and modeling it involves a trade-off between accuracy and computational effort. The first part of the thesis presents a detailed discussion of shift-variant blur modeling and its fast approximations, and then describes a generic image formation model. The thesis then shows how an image restoration problem can be seen as a Bayesian inference problem and recast as a large-scale numerical optimization problem. The second part therefore considers a generic optimization problem applicable to many domains and proposes a class of new optimization algorithms for solving inverse problems in imaging. The proposed algorithms are as fast as state-of-the-art algorithms (as verified by several numerical experiments) while removing the burden of algorithm-specific parameter tuning, which is a significant benefit for users. The third part presents an in-depth discussion of the shift-invariant blind image deblurring problem, suggests different ways to reduce its ill-posedness, and proposes a blind deblurring method for astronomical images based on a decomposition of the image into point sources and extended sources, alternating between image restoration and blur estimation steps. The restoration results on synthetic astronomical scenes are promising, suggesting that the method is a good starting point for astronomy-dedicated processing after further modifications and improvements. The last part extends the shift-variant blur models of the first part towards practical use. It gives a detailed description of a flexible approximation of shift-variant blur, including its implementation aspects and computational cost, presents a shift-variant deblurring method illustrated on synthetically blurred images, and shows how the characteristics of shift-variant blur caused by optical aberrations can be exploited for PSF estimation. A PSF calibration method is described for a simple experimental camera suffering from optical aberration, and results are shown for shift-variant deblurring of images captured with the same camera. The results are promising and suggest that these two steps, blur calibration and restoration, can be combined to achieve shift-variant blind deblurring, the long-term goal of this work. The thesis ends with conclusions and perspectives for future work.
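A common fast approximation of shift-variant blur of the kind discussed in the first part is a weighted sum of a few stationary convolutions with local PSFs. The sketch below assumes that form; the number of regions, the weight maps and the placement of the weighting before the convolution are illustrative choices, and the approximation developed in the thesis may differ in its details.

```python
import numpy as np
from scipy.signal import fftconvolve

def shift_variant_blur(image, psfs, weights):
    """Approximate a spatially varying blur as a weighted sum of convolutions.

    image   : (H, W) array.
    psfs    : list of K small 2-D kernels (local PSFs).
    weights : (K, H, W) smooth, non-negative maps summing to 1 at each pixel,
              interpolating between the local PSFs across the field of view.
    """
    out = np.zeros(image.shape, dtype=float)
    for psf, w in zip(psfs, weights):
        # weight the image towards this PSF's region of validity, blur, accumulate
        out += fftconvolve(w * image, psf, mode="same")
    return out
```

The cost is only K FFT-based convolutions, and the adjoint operator has the same structure, which is what the large-scale optimization solvers discussed in the second part require.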
6

Polynomial transformations, applications to motion estimation and classification

Moubtahij, Redouane El 11 June 2016
This research concerns the modeling of the dynamic functional information provided by apparent-motion fields using orthogonal polynomial bases. The goal is to model the extracted motion and texture so that they can be exploited for automatic analysis and recognition of images and videos; both human movements and dynamic textures are of interest. Orthogonal polynomial bases were studied. This approach is particularly interesting because it offers both a multi-resolution and a multi-scale decomposition. The first contribution of this thesis is a spatial image decomposition method: the image is projected and partially reconstructed with an appropriate choice of the degree of anisotropy associated with the decomposition equation, based on polynomial transformations. This spatial approach is extended to three dimensions in order to extract dynamic texture from videos. The second contribution is to use the image sequences representing the geometric parts as input images for extracting color optical flow. Two action descriptors, one spatial and one spatio-temporal, based on combined motion/texture information, are then extracted. It is thus possible to define a system that recognizes a complex action (composed of a sequence of displacement fields and polynomial textures) in a video.
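To make the projection and partial-reconstruction idea concrete, the sketch below projects an image onto a separable 2D Legendre basis of limited degree and keeps the residual as the texture part. The choice of Legendre polynomials, the plain least-squares projection and a single global degree are illustrative assumptions; the thesis works with an anisotropic decomposition equation in a multi-resolution, multi-scale setting.

```python
import numpy as np
from numpy.polynomial import legendre

def polynomial_decompose(image, deg=8):
    """Project an image onto a separable 2-D Legendre basis of degree `deg` and
    return the low-degree reconstruction (geometric part) and the residual
    (texture part).  Least-squares projection stands in for whatever inner
    product and anisotropy weighting the actual method uses."""
    h, w = image.shape
    y = np.linspace(-1.0, 1.0, h)
    x = np.linspace(-1.0, 1.0, w)
    Vy = legendre.legvander(y, deg)            # (h, deg+1): basis on the rows
    Vx = legendre.legvander(x, deg)            # (w, deg+1): basis on the columns
    # coefficients C such that Vy @ C @ Vx.T approximates the image
    C = np.linalg.pinv(Vy) @ image @ np.linalg.pinv(Vx).T
    geometric = Vy @ C @ Vx.T                  # partial (low-degree) reconstruction
    texture = image - geometric                # detail / texture residual
    return geometric, texture
```

Raising the degree refines the geometric part progressively, which reflects the multi-resolution behaviour the abstract points to; adding a third polynomial axis in time would extend the same idea to dynamic textures in video.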
7

First-order gradient regularisation methods for image restoration: reconstruction of tomographic images with thin structures and denoising piecewise affine images

Papoutsellis, Evangelos January 2016
The focus of this thesis is variational image restoration techniques that involve novel non-smooth first-order gradient regularisers: Total Variation (TV) regularisation in image and data space for reconstruction of thin structures from PET data, and regularisers given by an infimal convolution of TV and $L^p$ seminorms for denoising images with piecewise affine structures. In the first part of this thesis, we present a novel variational model for PET reconstruction. During a PET scan, we encounter two different spaces: the sinogram space, which consists of all the PET data collected from the detectors, and the image space, where the reconstruction of the unknown density is finally obtained. Unlike most state-of-the-art reconstruction methods, in which an appropriate regulariser is designed in the image space only, we introduce a new variational method incorporating regularisation in image and sinogram space. In particular, the corresponding minimisation problem combines total variation regularisation on both the sinogram and the image with a suitable weighted $L^2$ fidelity term, which serves as an approximation to the Poisson noise model for PET. We establish the well-posedness of this new model for functions of Bounded Variation (BV) and perform an error analysis through the notion of the Bregman distance. We examine analytically how TV regularisation on the sinogram affects the reconstructed image, especially the boundaries of objects in the image. This analysis motivates the use of a combined regularisation principally for reconstructing images with thin structures. In the second part of this thesis we propose a first-order regulariser that is a combination of the total variation and $L^p$ seminorms with $1 < p \le \infty$. A well-posedness analysis is presented, and a detailed study of the one-dimensional model is performed by computing exact solutions for simple functions, such as the step function and a piecewise affine function, for the regulariser with $p = 2$ and $p = 1$. We derive necessary and sufficient conditions for a pair in $BV \times L^p$ to be a solution of our proposed model and determine the structure of solutions depending on the value of $p$. In the case $p = 2$, we show that the regulariser is equivalent to the Huber-type variant of total variation regularisation. Moreover, there is a certain class of one-dimensional data functions for which the regularised solutions are equivalent to those of high-order regularisers such as the state-of-the-art total generalised variation (TGV) model. The key assets of our regulariser are the elimination of the staircasing effect (a well-known disadvantage of total variation regularisation), the capability of obtaining piecewise affine structures for $p = 1$, and qualitatively comparable results to TGV. In addition, our first-order $TVL^p$ regulariser is capable of preserving spike-like structures that TGV is forced to smooth. The numerical solution of the proposed first-order model is in general computationally more efficient compared to high-order approaches.
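The abstract describes TV regularisation in both image and sinogram space with a weighted $L^2$ fidelity, and a first-order regulariser built as an infimal convolution of TV and an $L^p$ seminorm. Plausible forms consistent with that description (assumed notation, not necessarily the exact formulation in the thesis) are:

```latex
% PET model with regularisation in image and sinogram space (assumed form)
\min_{u \ge 0} \;\; \tfrac{1}{2}\int_{\Sigma} w\,\big(Ru - f\big)^{2}\,d\sigma
  \;+\; \alpha\,\mathrm{TV}(u) \;+\; \beta\,\mathrm{TV}(Ru)

% Infimal convolution of TV and an L^p seminorm (assumed form)
\mathrm{TVL}^{p}_{\alpha,\beta}(u) \;=\;
  \inf_{v}\; \alpha\,\|Du - v\|_{\mathcal{M}} \;+\; \beta\,\|v\|_{L^{p}}
```

Here R is the PET forward (Radon-type) operator, f the measured sinogram, w the weight that makes the $L^2$ term approximate the Poisson noise model, and Du the distributional gradient of u, whose norm as a measure gives $\mathrm{TV}(u)$; the parameters balance the fidelity and regularisation terms.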
8

Video event detection and visual data processing for multimedia applications

Szolgay, Daniel 30 September 2011
This dissertation (i) describes an automatic procedure for estimating the stopping condition of non-regularized iterative deconvolution methods, based on an orthogonality criterion between the estimated signal and its gradient at a given iteration; (ii) presents a decomposition method that splits the image into geometric (or cartoon) and texture parts using anisotropic diffusion with orthogonality-based parameter estimation and stopping condition, exploiting the idea that the cartoon and texture components of an image should be independent of each other; and (iii) describes a method for extracting moving foreground objects from sequences taken by a wearable camera with strong motion, where camera-motion-compensated frame differencing is enhanced with a novel kernel-based estimation of the probability density function of the background pixels. The presented methods have been thoroughly tested and compared to similar state-of-the-art algorithms.
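The independence-driven decomposition in (ii) can be sketched as follows: diffuse the image, treat the residual as texture, and keep the iterate at which the cartoon and texture parts are most nearly uncorrelated. The Perona-Malik diffusion, the use of plain correlation as an independence proxy and all parameter values below are illustrative assumptions, not the dissertation's exact estimator.

```python
import numpy as np

def perona_malik_step(u, kappa=15.0, dt=0.15):
    """One explicit anisotropic-diffusion step (Perona-Malik, exponential
    conductivity); a stand-in for the diffusion actually used."""
    dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0     # forward differences,
    ds = np.roll(u,  1, axis=0) - u; ds[0, :] = 0      # replicated borders
    de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
    dw = np.roll(u,  1, axis=1) - u; dw[:, 0] = 0
    g = lambda d: np.exp(-(d / kappa) ** 2)            # edge-stopping function
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

def cartoon_texture(image, max_iters=200):
    """Diffuse until the cartoon part and the texture residual are (most
    nearly) uncorrelated -- a proxy for the independence criterion above."""
    u = image.astype(float)
    best_u, best_corr = u.copy(), np.inf
    for _ in range(max_iters):
        u = perona_malik_step(u)
        texture = image - u
        c = np.corrcoef(u.ravel(), texture.ravel())[0, 1]
        if not np.isfinite(c):
            break
        if abs(c) < best_corr:                          # most independent so far
            best_u, best_corr = u.copy(), abs(c)
    return best_u, image - best_u
```

This "stop when the two quantities stop sharing information" intuition mirrors the orthogonality-based stopping conditions the dissertation uses in (i) and (ii).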
