1 |
Moment Based Painterly Rendering Using Connected Color Components. Obaid, Mohammad Hisham Rashid. January 2006.
Research and development of Non-Photorealistic Rendering algorithms has recently moved towards the use of computer vision algorithms to extract image features. The feature-representation capabilities of image moments can be used effectively to select brush-stroke characteristics in painterly-rendering applications. This technique is based on estimating local geometric features from the intensity distribution in small image windows to obtain the brush size, color and direction. This thesis proposes an improvement of this method: connected components are additionally extracted so that adjacent regions of similar color are grouped, generating large, noticeable brush strokes. An iterative coarse-to-fine rendering algorithm is developed for painting regions of varying color frequencies. Improvements over the existing technique are discussed with several examples.
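As an illustration of the kind of local-moment computation this abstract describes (a generic sketch, not the thesis's actual algorithm), the following function estimates brush-stroke position, direction and size from the intensity distribution of a small window, using the standard centroid and principal-axis formulas derived from moments up to order two:

```python
import numpy as np

def stroke_params(window):
    """Estimate brush-stroke parameters from the intensity distribution
    of a small image window, using geometric image moments."""
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = window.sum()
    if m00 == 0:
        return None
    # Centroid (stroke position): first-order moments over m00.
    cx, cy = (window * xs).sum() / m00, (window * ys).sum() / m00
    # Second-order central moments describe the local intensity spread.
    mu20 = (window * (xs - cx) ** 2).sum() / m00
    mu02 = (window * (ys - cy) ** 2).sum() / m00
    mu11 = (window * (xs - cx) * (ys - cy)).sum() / m00
    # Stroke direction: orientation of the principal axis.
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # Stroke length and width from the covariance eigenvalues.
    common = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2)
    length = np.sqrt(2 * (mu20 + mu02 + common))
    width = np.sqrt(2 * max(mu20 + mu02 - common, 0.0))
    return cx, cy, theta, length, width
```

Applied to an elongated intensity blob, the principal-axis angle gives the brush direction and the two eigenvalue-based extents give the stroke length and width.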
|
2 |
Nouvelles contributions à l'application des moments en asservissement visuel / New contributions to the application of moments in visual servoing. Yeremou Tamtsia, Aurélien. 11 October 2013.
This thesis proposes promising contributions on the choice of visual features in visual servoing using 2D moments extracted from the image. We propose a new way of solving an important problem in visual servoing, namely the control of rotational motion about the axes orthogonal to the optical axis. This work represents a significant improvement over previous work on visual servoing based on 2D image moments for controlling the degrees of freedom of robot manipulators; the most widely used control scheme is known as kinematic control. The approach employs a global image descriptor based on 2D "shifted" moments, whose invariants are computed from low-order moments known to be robust to noise. Moreover, the selected invariants do not depend on the object's shape and are invariant to translation, rotation and scale. This new approach thus solves the problems seen in previous work on choosing combinations of moment invariants based on central moments, which depend on the shape of the object considered; in that previous work, the invariants are computed from moments of order three to five, which are vulnerable to measurement noise. From a visual-servoing point of view, the work is based on the explicit determination of the interaction matrix computed from the 2D shifted moments, where the shift parameters are chosen to preserve invariance to translation, rotation and scale. In this way, the selected visual features prove able to represent both symmetric and non-symmetric object shapes. Simulation results are presented to illustrate the validity of our proposal.
/ This thesis proposes promising contributions on the choice of visual features in image-moment-based visual servoing. We propose a new way of solving an important problem in visual servoing, namely the control of the non-optic-axis rotational degrees of freedom. This work represents a significant improvement with respect to previous work on image-based visual servoing of robot manipulators, with the camera in an eye-in-hand configuration and under the control scheme known as kinematic control. The approach uses global image features based on shifted image moments of a planar target. The improvement consists in a particular selection of a combination of low-order shifted image moments such that they do not depend on the planar target's shape, thus solving the problem seen in related previous work, where the selected combinations of central or regular image moments depended on the planar target's shape. From a visual-servoing point of view, the work is based on the explicit resolution of the interaction matrix related to any shifted image moment, on the appropriate combination of these moments, and on the proper selection of the shift parameters. By doing so, the new features show an improved ability to represent symmetric objects as well as several kinds of objects defined by closed contours or by sets of points. Six visual features are selected to design a decoupled control scheme when the object is parallel to the image plane; this property is then generalized to the case where the desired object position is not parallel to the image plane. Finally, simulation results are presented to illustrate the validity of our proposal.
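The abstract's "shifted" moments generalize central moments by taking the moment about an arbitrary shift point rather than the centroid. The thesis's specific feature combinations and interaction matrix are not reproduced here; the following minimal sketch only shows the generic form, and illustrates that choosing the shift point at the centroid recovers the translation-invariant central moments:

```python
import numpy as np

def shifted_moment(img, p, q, sx, sy):
    """2D 'shifted' image moment of order (p+q): like a central moment,
    but taken about an arbitrary shift point (sx, sy) instead of the
    centroid. (sx, sy) are free parameters of the descriptor."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return ((xs - sx) ** p * (ys - sy) ** q * img).sum()
```

With sx = sy = 0 and p = q = 0 this is simply the image mass m00; placing the shift point at the image centroid makes the result identical for an object and its translated copy.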
|
3 |
Satellite Image Processing with Biologically-inspired Computational Methods and Visual Attention. Sina, Md Ibne. 27 July 2012.
The human vision system is generally recognized as being superior to all known artificial vision systems. Visual attention, among the many processes related to human vision, is responsible for identifying the relevant regions of a scene for further processing. In most cases, analyzing an entire scene is unnecessary and inevitably time consuming; considering visual attention can therefore be advantageous. The subfield of computer vision in which this functionality is computationally emulated has shown high potential for solving real-world vision problems effectively. In this monograph, elements of visual attention are explored and algorithms are proposed that exploit such elements to enhance image-understanding capabilities. Satellite images are given special attention due to their practical relevance, the inherent complexity of their contents, and their resolution. Processing such large images using visual attention can be very helpful, since one can first identify the relevant regions and deploy further detailed analysis in those regions only. Bottom-up features, which are derived directly from the scene contents, are at the core of visual attention and help identify salient image regions. In the literature, the use of intensity, orientation and color as the dominant features for computing bottom-up attention is ubiquitous. The effects of incorporating an entropy feature on top of these are also studied; this investigation demonstrates that such integration makes visual attention more sensitive to fine details and hence has the potential to be exploited in a suitable context. One interesting application of bottom-up attention, also examined in this work, is image segmentation: since regions of low saliency generally correspond to homogeneously textured regions of the input image, a model can be learned from a homogeneous region and used to group similar textures in other image regions.
Experimentation demonstrates that the proposed method produces realistic segmentations of satellite images. Top-down attention, on the other hand, is influenced by the observer's current state, such as knowledge, goals and expectations. It can be exploited to locate target objects based on various features, and it increases search or recognition efficiency by concentrating on the relevant image regions only. This technique is very helpful when processing large images such as satellite images. A novel algorithm for computing top-down attention is proposed that learns and quantifies important bottom-up features from a set of training images and enhances those features in a test image in order to localize objects with similar features. An object recognition technique is then deployed that extracts potential target objects from the computed top-down attention map and attempts to recognize them. An object descriptor is formed based on physical appearance, using both texture and shape information; this combination is shown to be especially useful in the recognition phase. The proposed texture descriptor is based on Legendre moments computed on local binary patterns, while shape is described using Hu moment invariants. Several tools and techniques, such as different types of moments and combinations of different measures, have been applied for experimentation. The developed algorithms are general, efficient and effective, and have the potential to be deployed on real-world problems. A dedicated software testing platform has been designed to facilitate the manipulation of satellite images and to support a modular and flexible implementation of computational methods, including various components of visual attention models.
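The Hu moment invariants used here as the shape descriptor are standard and can be sketched in plain numpy (the Legendre-moments-on-LBP texture descriptor is not reproduced). The seven invariants are polynomial combinations of scale-normalized central moments and are unchanged by translation, scaling and rotation of the shape:

```python
import numpy as np

def hu_moments(img):
    """Hu's seven rotation-invariant moments, computed from the
    scale-normalized central moments of a grayscale image."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    m00 = img.sum()
    cx, cy = (img * xs).sum() / m00, (img * ys).sum() / m00

    def eta(p, q):  # normalized central moment eta_pq
        mu = (img * (xs - cx) ** p * (ys - cy) ** q).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Rotating a shape by 90 degrees maps the pixel grid onto itself exactly, so the invariants of a shape and its rotated copy agree up to floating-point error.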
|
5 |
Sistema de visão computacional aplicado a um robô cilíndrico acionado pneumaticamente / Computer vision system applied to a pneumatically actuated cylindrical robot. Medina, Betânia Vargas Oliveira. January 2015.
The recognition of the position and orientation of objects in an image is important for several branches of engineering, such as robotics, industrial automation and manufacturing processes, allowing production lines that use vision systems to improve quality and reduce production time. The present work consists of the development of a computer vision system for a pneumatically actuated cylindrical robot with five degrees of freedom. The developed method yields the position and orientation of pieces so that they can be correctly picked up by the robot. To obtain the position and orientation of the pieces, image moments were used to extract features from an image, together with the mapping from the pieces' pixel coordinates to the robot's coordinate system. The present work also aimed to integrate this computer vision system with a robot trajectory-planning algorithm which, after receiving the required coordinate values, generates the trajectory to be followed by the robot, so that it can pick up a piece at a given position and move it to another predetermined position. The scope of this work also includes the integration of the vision system, including trajectory planning, with an actuator control algorithm with friction compensation, and the execution of experimental tests involving the manipulation of pieces. To demonstrate the application of the method through experimental tests, a structure was built to support the cameras and the pieces to be manipulated, taking the robot's workspace into account. The results show that the proposed computer vision algorithm determines the position and orientation of the pieces, allowing the robot to pick them up and manipulate them.
/ The recognition of the position and orientation of objects in an image is important for several technological areas of engineering, such as robotics, industrial automation and manufacturing processes, allowing production lines that use vision systems to improve quality and reduce production time. The present work consists of the development of a computer vision system for a pneumatically actuated cylindrical robot with five degrees of freedom. The proposed method furnishes the position and orientation of pieces so that the robot can properly pick them up. The position and orientation of the pieces are determined by a technique based on image moments for feature extraction and on the relationship between the pieces' pixel coordinates and the robot's coordinate system. The scope of the present work also comprises the integration of the computer vision system with a (previously developed) robot trajectory-planning algorithm that uses key-point coordinates (transmitted by the vision system) to generate the trajectory to be followed by the robot, so that, departing from a given position, it moves suitably to another predetermined position. It is also an objective of this work to integrate both the vision system and the trajectory-planning algorithm with a (also previously developed) nonlinear control algorithm with friction compensation. To demonstrate the application of the method experimentally, a special apparatus was mounted to support the cameras and the pieces to be manipulated, taking the robot's workspace into account. To validate the proposed algorithm, a case study was performed; the results show that the proposed computer vision algorithm determines the position and orientation of the pieces, allowing the robot to pick them up and manipulate them.
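A hedged sketch of the moment-based pose estimation described above: centroid and principal-axis orientation of a segmented piece, mapped from pixels to robot coordinates. The linear scale-and-offset calibration is an assumption made for illustration; a real setup would use a full camera calibration, and this is not the thesis's actual code:

```python
import numpy as np

def piece_pose(mask, scale_mm_per_px, origin_mm):
    """Pose (x, y, angle) of a segmented piece: centroid and
    principal-axis orientation from image moments, mapped from pixel to
    robot coordinates by an assumed linear calibration (scale, origin)."""
    ys, xs = np.nonzero(mask)
    # For a binary mask, the centroid is m10/m00, m01/m00,
    # i.e. the mean of the foreground pixel coordinates.
    cx, cy = xs.mean(), ys.mean()
    # Second-order central moments give the orientation.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    # Hypothetical pixel-to-robot mapping: uniform scale plus offset.
    x_mm = origin_mm[0] + scale_mm_per_px * cx
    y_mm = origin_mm[1] + scale_mm_per_px * cy
    return x_mm, y_mm, angle
```

The returned angle lets the gripper align with the piece's long axis before closing.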
|
8 |
Image Based Attitude And Position Estimation Using Moment Functions. Mukundan, R. 07 1900 (PDF).
No description available.
|
10 |
Pokročilé momentové metody pro analýzu obrazu / Advanced Moment-Based Methods for Image Analysis. Höschl, Cyril. January 2018.
The thesis consists of an introduction and four papers that contribute to the research of image moments and moment invariants. The first two papers focus on rectangular decomposition algorithms that greatly speed up moment calculations; the other two present the design of new moment invariants. We present a comparative study of cutting-edge methods for the decomposition of 2D binary images, including original implementations of all the methods. For 3D binary images, finding the optimal decomposition is an NP-complete problem, so a polynomial-time heuristic needs to be developed; we propose a sub-optimal algorithm that outperforms other state-of-the-art approximations. Additionally, we propose a new form of blur invariants derived by means of projection operators in the Fourier domain, which mainly improves the discriminative power of the features. Furthermore, we propose new moment-based features that are tolerant to additive Gaussian image noise, and we show through extensive image-retrieval experiments that the proposed features are robust and outperform other commonly used methods.
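The idea behind rectangular decomposition is that a moment of an axis-aligned rectangle factorizes into two 1D power sums, so once a binary image is expressed as a union of rectangles, each rectangle contributes to any moment in O(1) time via precomputed cumulative power sums. The sketch below uses a deliberately simple decomposition (merging vertically aligned row runs), not the optimized or heuristic algorithms of the thesis:

```python
import numpy as np

def row_run_rectangles(img):
    """Decompose a binary image into axis-aligned rectangles by merging
    vertically aligned row runs (a simple, non-optimal decomposition)."""
    rects, open_runs = [], {}          # open_runs: (x0, x1) -> start row
    for y in range(img.shape[0] + 1):
        runs = set()
        if y < img.shape[0]:
            row, x = img[y], 0
            while x < len(row):        # collect maximal runs of 1s
                if row[x]:
                    x0 = x
                    while x < len(row) and row[x]:
                        x += 1
                    runs.add((x0, x - 1))
                else:
                    x += 1
        for run, y0 in list(open_runs.items()):
            if run not in runs:        # run ended: emit its rectangle
                rects.append((run[0], run[1], y0, y - 1))
                del open_runs[run]
        for run in runs:
            open_runs.setdefault(run, y)
    return rects

def moment_from_rects(rects, p, q, n):
    """Geometric moment m_pq from the rectangle list: O(1) per rectangle
    using cumulative power sums (n covers both image dimensions)."""
    xs = np.arange(n, dtype=float)
    Sp = np.concatenate(([0.0], np.cumsum(xs ** p)))
    Sq = np.concatenate(([0.0], np.cumsum(xs ** q)))
    return sum((Sp[x1 + 1] - Sp[x0]) * (Sq[y1 + 1] - Sq[y0])
               for x0, x1, y0, y1 in rects)
```

For images decomposable into few rectangles, all moments up to a given order are obtained far faster than by summing over every pixel.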
|