  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Segmentation and Line Filling of 2D Shapes

Pérez Rocha, Ana Laura 21 January 2013 (has links)
The evolution of technology in the textile industry has reached the design of embroidery patterns for machine embroidery. To create quality designs, the shapes to be embroidered need to be segmented into regions that define their different parts. One of the objectives of our research is to develop a method to segment the shapes automatically, thereby making the process faster and easier. Shape analysis is necessary to find a suitable method for this purpose; it includes the study of different ways to represent shapes. In this thesis we focus on representing a shape through its skeleton. We make use of a shape's skeleton and its boundary, through the so-called feature transform, to decide how to segment a shape and where to place the segment boundaries. The direction of stitches is another important specification in an embroidery design. We develop a technique to select the stitch orientation by defining direction lines using the skeleton curves and information from the boundary. We compute the intersections of segment boundaries and direction lines with the shape boundary for the final definition of the direction line segments. We demonstrate that our shape segmentation technique and the automatic placement of direction lines produce sufficient constraints for automated embroidery designs. We show examples for lettering, basic shapes, and both simple and complex logos.
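The feature transform the abstract relies on can be illustrated with a minimal brute-force sketch in plain NumPy (this is an illustration, not the thesis's implementation; the function name and toy grid are our own). For every foreground pixel it records the nearest boundary pixel and the distance to it, which is the information the skeleton-based segmentation draws on:

```python
import numpy as np

def feature_transform(mask):
    """Brute-force feature transform of a binary shape: for every
    foreground pixel, find the nearest background pixel (its feature
    point) and the distance to it. Skeleton points are, roughly, the
    foreground pixels whose nearest boundary point is not unique."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    feat = np.zeros(mask.shape + (2,), dtype=int)
    dist = np.zeros(mask.shape)
    for y, x in fg:
        d = np.hypot(bg[:, 0] - y, bg[:, 1] - x)
        i = int(np.argmin(d))
        feat[y, x] = bg[i]   # coordinates of the nearest boundary pixel
        dist[y, x] = d[i]    # distance-transform value
    return feat, dist

# A 5x5 foreground square inside a 7x7 grid: the distance field peaks
# at the square's center, which lies on the skeleton.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
feat, dist = feature_transform(mask)
```

In practice a library routine such as `scipy.ndimage.distance_transform_edt` (with `return_indices=True`) computes the same quantities far more efficiently.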
4

Low bitrate video compression by content characterization

Decombas, Marc 22 November 2013 (has links)
The objective of this thesis is to find new semantic compression methods compatible with a conventional encoder such as H.264/AVC. The main objective is to preserve the semantics of the scene rather than the overall quality. A target bitrate of 300 kb/s was fixed for defense and security applications. To this end, a complete compression chain was developed. A study of, and contributions to, spatio-temporal saliency models were carried out with the aim of extracting the important information in the scene. To reduce the bitrate, a resizing method called seam carving was combined with the H.264/AVC encoder. In addition, a metric combining SIFT keypoints and SSIM was created to measure the quality of objects without being disturbed by the less important areas, which contain most of the artifacts. A database that can be used both for testing saliency models and for video compression is proposed, containing sequences with their manually extracted binary masks. The different approaches were validated by various tests. An extension of this work to video summarization applications is also proposed.
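The abstract describes its object-quality metric only at a high level. As a hedged sketch, the SSIM term can be computed globally with plain NumPy, and a hypothetical weighted blend with a SIFT keypoint match ratio (computed elsewhere on the object region) might look like this; the function names and the weighting scheme are illustrative assumptions, not the thesis's formulation:

```python
import numpy as np

def global_ssim(a, b, L=255.0):
    """Single-window SSIM over the whole image, using the standard
    stabilizing constants C1 and C2 for dynamic range L."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / (
        (mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2)
    )

def object_quality(ref, test, sift_match_ratio, alpha=0.5):
    """Hypothetical combination: a SIFT keypoint match ratio in [0, 1]
    blended with global SSIM on the object region."""
    return alpha * sift_match_ratio + (1 - alpha) * global_ssim(ref, test)

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(32, 32))
```

A per-window SSIM (e.g. `skimage.metrics.structural_similarity`) and real SIFT matching would replace these pieces in a full implementation.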
5

Computational Intelligence Techniques for Visual Self-Localization and Mapping of Mobile Robots

NILTON CESAR ANCHAYHUA ARESTEGUI 18 October 2017 (has links)
This dissertation presents a study of computational intelligence algorithms for the autonomous control of mobile robots. Intelligent control systems are developed and implemented for a mobile robot built in the Robotics Laboratory of PUC-Rio, based on a modification of the ER1 robot. The experiments consist of two stages: first, simulation using the Player-Stage software, in which the robot is simulated in 2D and the navigation algorithms based on computational intelligence techniques are developed; and second, implementation of those algorithms on the real robot. The techniques implemented for navigation of the mobile robot are based on computational intelligence algorithms such as neural networks, fuzzy logic, and support vector machines (SVM); to give the robot visual support, a computer vision technique called the Scale Invariant Feature Transform (SIFT) was implemented. Together, these algorithms form an embedded system that provides the mobile robot with autonomous control. The simulations of these algorithms achieved their objective, but in the implementation clear differences from the simulation appeared, owing to the time the microprocessor takes to process the data.
6

Real-time Hand Gesture Detection and Recognition for Human Computer Interaction

Dardas, Nasser Hasan Abdel-Qader 08 November 2012 (has links)
This thesis focuses on bare hand gesture recognition by proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system detects and tracks a bare hand against a cluttered background using face subtraction, skin detection, and contour comparison. The second stage recognizes hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints of every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of every training image into a unified-dimension histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using our algorithm. The keypoints are then extracted from the small image containing the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. Another hand gesture recognition system was proposed using Principal Component Analysis (PCA). The most significant eigenvectors and the weights of the training images are determined. In the testing stage, the hand posture is detected in every frame using our algorithm. The small image containing the detected hand is then projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image is used to recognize the hand gesture.
Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame uses a stationary bicycle as one of the main inputs for game playing: the user controls left-right movement and shooting actions with a set of hand gesture commands, while in the second game the user controls and directs a helicopter over a city with a set of hand gesture commands.
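The vector-quantization step of the pipeline above (SIFT keypoints mapped onto a K-means vocabulary to form the bag-of-words histogram fed to the SVM) can be sketched in a few lines. This is a generic illustration, not the thesis's code, and the cluster centers are assumed to come from a prior K-means run:

```python
import numpy as np

def bow_histogram(descriptors, centers):
    """Vector-quantize local descriptors (e.g. 128-d SIFT keypoints)
    against K-means cluster centers and accumulate a normalized
    bag-of-words histogram: the fixed-length input for the SVM."""
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)  # nearest visual word per keypoint
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()   # images with different keypoint counts stay comparable

# Toy vocabulary of 4 visual words; visual word 0 occurs twice.
centers = np.eye(4)
descriptors = np.vstack([centers, centers[:1]])
hist = bow_histogram(descriptors, centers)
```

Normalizing the histogram is what makes images with different numbers of detected keypoints comparable before classification.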
9

Color Feature Integration with Directional Ringlet Intensity Feature Transform for Enhanced Object Tracking

Geary, Kevin Thomas January 2016 (has links)
No description available.
10

A Programming Framework To Implement Rule-based Target Detection In Images

Sahin, Yavuz 01 December 2008 (has links) (PDF)
An expert system is useful when conventional programming techniques fall short of capturing human expert knowledge and making decisions with this information. In this study, we describe a framework for capturing expert knowledge in the form of a decision tree; this framework can then be used to make decisions based on the captured knowledge. The framework proposed in this study is generic and can be used to create domain-specific expert systems for different problems. Features are created or processed by the nodes of the decision tree, and a final conclusion is reached for each feature. The framework supplies three types of nodes for constructing a decision tree. The first type is the decision node, which guides the search path with its answers. The second type is the operator node, which creates new features from its inputs. The last type is the end node, which corresponds to a conclusion about a feature. Once the nodes of the tree are developed, the user can interactively create the decision tree and run the supplied inference engine to collect results on a specific problem. The proposed framework is evaluated on two case studies, "Airport Runway Detection in High Resolution Satellite Images" and "Urban Area Detection in High Resolution Satellite Images". In these studies, linear features are used for structural decisions and Scale Invariant Feature Transform (SIFT) features are used to test for the existence of man-made structures.
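The three node types the abstract describes can be sketched as a minimal class hierarchy. The class names, predicate, and toy runway-length rule are our own illustration of the framework's design, not its actual API:

```python
import math

class EndNode:
    """Conclusion about a feature (a leaf of the decision tree)."""
    def __init__(self, conclusion):
        self.conclusion = conclusion
    def evaluate(self, feature):
        return self.conclusion

class DecisionNode:
    """Guides the search path: routes the feature to the child branch
    keyed by the predicate's answer."""
    def __init__(self, predicate, branches):
        self.predicate, self.branches = predicate, branches
    def evaluate(self, feature):
        return self.branches[self.predicate(feature)].evaluate(feature)

class OperatorNode:
    """Creates a new feature from its input before passing it on."""
    def __init__(self, op, child):
        self.op, self.child = op, child
    def evaluate(self, feature):
        return self.child.evaluate(self.op(feature))

# Toy tree: derive a line segment's length (operator node), then decide
# whether it is long enough to be a runway candidate (decision node).
def segment_length(seg):
    (y0, x0), (y1, x1) = seg
    return math.hypot(y1 - y0, x1 - x0)

tree = OperatorNode(
    segment_length,
    DecisionNode(lambda length: length > 50,
                 {True: EndNode("runway candidate"), False: EndNode("clutter")}),
)
```

An inference engine in this style simply calls `evaluate` on the root for each extracted feature and collects the conclusions.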
