Automatic learning of British Sign Language from signed TV broadcasts

Buehler, Patrick January 2010
In this work, we present several contributions towards the automatic recognition of BSL signs from continuous signing video sequences. Specifically, we address three main points: (i) automatic detection and tracking of the hands using a generative model of the image; (ii) automatic learning of signs from TV broadcasts using the supervisory information available from subtitles; and (iii) generalisation from sign examples of one signer to recognition of signs from different signers.

Our source material consists of many hours of video with continuous signing and corresponding subtitles recorded from BBC digital television. This is very challenging material for a number of reasons, including self-occlusions of the signer, self-shadowing, blur due to the speed of motion, and in particular the changing background.

Knowledge of hand position and hand shape is a prerequisite for automatic sign language recognition. We cast the problem of detecting and tracking the hands as inference in a generative model of the image, and propose a complete model which accounts for the positions and self-occlusions of the arms. Reasonable configurations are obtained by efficiently sampling from a pictorial structure proposal distribution. The results using our method exceed the state of the art in the length and stability of continuous limb tracking.

Previous research in sign language recognition has typically required manual training data to be generated for each sign, e.g. a signer performing each sign in controlled conditions - a time-consuming and expensive procedure. We show that, for a given signer, a large number of BSL signs can be learned automatically from TV broadcasts using the supervisory information available from subtitles broadcast simultaneously with the signing. We achieve this by modelling the problem as one of multiple instance learning. In this way we are able to extract the sign of interest from hours of signing footage, despite the very weak and "noisy" supervision from the subtitles.

Lastly, we show that automatic recognition of signs can be extended to multiple signers. Using automatically extracted examples from a single signer, we train discriminative classifiers and show that these can successfully classify and localise signs in new signers. This demonstrates that the descriptor we extract for each frame (i.e. hand position, hand shape, and hand orientation) generalises between different signers.
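To make the weakly supervised formulation concrete, the toy Python sketch below (an illustration under assumed data structures, not the method or code from the thesis) treats subtitle-aligned stretches of signing as positive bags and other footage as negative bags, and searches for a temporal window of frame descriptors that recurs across positive bags while staying far from the negative ones.

```python
import numpy as np

def window_distance(a, b):
    """Mean frame-to-frame distance between two equal-length descriptor windows."""
    return np.linalg.norm(a - b, axis=1).mean()

def best_match(candidate, bag, w):
    """Distance from `candidate` to the closest window of length w in `bag`."""
    return min(window_distance(candidate, bag[t:t + w])
               for t in range(len(bag) - w + 1))

def pick_sign(positive_bags, negative_bags, w=13):
    """Toy multiple-instance search: the sign for a subtitle word should recur
    in most positive bags (subtitle contains the word) and be absent from the
    negative bags (subtitle does not contain it).

    Each bag is an (L, D) NumPy array of per-frame descriptors, e.g. hand
    position, hand shape and hand orientation features; at least two positive
    bags are assumed.
    """
    best_score, best_window = -np.inf, None
    for bag in positive_bags:
        for t in range(len(bag) - w + 1):
            cand = bag[t:t + w]
            pos = np.mean([best_match(cand, other, w)
                           for other in positive_bags if other is not bag])
            neg = np.mean([best_match(cand, other, w)
                           for other in negative_bags])
            if neg - pos > best_score:   # far from negatives, close to positives
                best_score, best_window = neg - pos, cand
    return best_window
```

The actual system uses a richer per-frame descriptor and a principled multiple instance learning objective; the sketch only conveys the recurs-in-positives, absent-in-negatives intuition behind the subtitle supervision.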

Montagem assistida por realidade aumentada (A3R). / Assembly assisted by augmented reality (A3R).

Nishihara, Anderson 20 July 2016
Assembly processes, from simple toys to complex machines, usually require instructions to be executed. Traditionally, these instructions come in the form of paper or digital manuals, which use descriptive text, drawings, diagrams, or photos to guide the assembly sequence from the beginning to the final state. Seeking to change this paradigm, this work proposes an assembly-assistance system that uses augmented reality to guide the user through the process. By processing images captured by a camera, the system recognises each piece and uses graphic cues to indicate which piece should be handled and where it should be placed. The system then checks whether the pieces are correctly positioned and alerts the user when the assembly task reaches its final state. Many works in this area rely on some kind of customised device, such as a head-mounted display (HMD), and on markers to help with camera tracking and piece identification, which limits the adoption of the technology. With this in mind, the proposed system uses neither customised devices nor markers for tracking, and all processing runs on embedded software, with no need to communicate with other computers for image processing. Since the system does not use markers to identify the pieces, the first implementation guides the user through the assembly of a planar puzzle. The proposed system is called A3R (Assembly Assisted by Augmented Reality).
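As a rough illustration of the marker-less pipeline described above, the Python/OpenCV sketch below (an assumption for illustration, not the A3R code) segments candidate pieces by thresholding and contour extraction, identifies each piece by shape matching against reference outlines, and checks whether a piece sits inside its target region.

```python
import cv2

def find_pieces(frame_bgr, min_area=500):
    """Segment candidate puzzle pieces by Otsu thresholding and contour extraction."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]

def identify_piece(contour, templates):
    """Match a detected contour against reference piece outlines by shape similarity."""
    scores = {name: cv2.matchShapes(contour, tpl, cv2.CONTOURS_MATCH_I1, 0.0)
              for name, tpl in templates.items()}
    return min(scores, key=scores.get)

def is_placed(contour, target_rect):
    """Check whether the piece centroid falls inside its target rectangle (x, y, w, h)."""
    m = cv2.moments(contour)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    x, y, w, h = target_rect
    return x <= cx <= x + w and y <= cy <= y + h
```

On an embedded device these three steps would run for each camera frame, with the graphic cues rendered on top of the live image to tell the user which piece to pick up and where to place it.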

Convolutional Polynomial Neural Network for Improved Face Recognition

Cui, Chen 24 August 2017
No description available.

Machine learning for blob detection in high-resolution 3D microscopy images

Ter Haak, Martin January 2018
The aim of blob detection is to find regions in a digital image that differ from their surroundings with respect to properties such as intensity or shape. Bio-image analysis is a common application, where blobs can denote regions of interest that have been stained with a fluorescent dye. In image-based in situ sequencing of ribonucleic acid (RNA), for example, the blobs are local intensity maxima (i.e. bright spots) corresponding to the locations of specific RNA nucleobases in cells. Traditional methods of blob detection rely on simple image processing steps that must be guided by the user. The problem is that the user must seek the optimal parameters for each step, which are often specific to one image and cannot be generalised to other images. Moreover, some of the existing tools are not suited to the scale of the microscopy images, which are often in very high resolution and 3D. Machine learning (ML) is a collection of techniques that give computers the ability to "learn" from data. To eliminate the dependence on user parameters, the idea is to apply ML to learn the definition of a blob from labelled images. The research question is therefore how ML can be used effectively to perform blob detection. A blob detector is proposed that first extracts a set of relevant and non-redundant image features, then classifies pixels as blobs, and finally uses a clustering algorithm to split up connected blobs. The detector works out-of-core, meaning it can process images that do not fit in memory by dividing them into chunks. The results prove the feasibility of this blob detector and show that it can compete with other popular software for blob detection. Unlike other tools, however, the proposed blob detector does not require parameter tuning, making it easier to use and more reliable.
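A minimal sketch of such a learned pipeline is given below, assuming scikit-image, scikit-learn, and SciPy; it is not the thesis implementation, but it shows the three stages the abstract describes: per-voxel feature extraction, pixel classification, and grouping of the resulting mask, processed chunk by chunk.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, laplace
from sklearn.ensemble import RandomForestClassifier

def voxel_features(vol, sigmas=(1, 2, 4)):
    """Multi-scale Gaussian and Laplacian-of-Gaussian responses per voxel."""
    vol = vol.astype(np.float32)
    feats = [vol]
    for s in sigmas:
        g = gaussian(vol, sigma=s, preserve_range=True)
        feats += [g, laplace(g)]
    return np.stack(feats, axis=-1)                  # shape (..., n_features)

def train(vol, labels):
    """Fit a random forest on labelled voxels (1 = blob, 0 = background)."""
    X = voxel_features(vol)
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X.reshape(-1, X.shape[-1]), labels.ravel())
    return clf

def detect(vol, clf, chunk=64):
    """Classify voxels chunk by chunk (out-of-core style), then label connected blobs."""
    mask = np.zeros(vol.shape, dtype=bool)
    for z in range(0, vol.shape[0], chunk):
        X = voxel_features(vol[z:z + chunk])
        pred = clf.predict(X.reshape(-1, X.shape[-1]))
        mask[z:z + chunk] = pred.reshape(X.shape[:-1]).astype(bool)
    blobs, n_blobs = ndi.label(mask)
    return blobs, n_blobs
```

A full out-of-core detector would also add an overlap (halo) between chunks so that the smoothing filters see enough context at chunk borders, plus a clustering or watershed step to split blobs that touch each other.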

Dohledávání objektů v obraze / Image object detection

Pluskal, Richard January 2008
The thesis deals with the design of a program for entering various types of geometric objects in an image for the purpose of their further processing. The program should also contain algorithms that ease object entry (e.g. refining a manually entered object's position). The first part gives a brief description of computer vision and of the basic methods used in this work, and introduces the OpenCV image processing library. The following part describes the three types of geometric primitives that are currently implemented. Because the output of the program is in a universal XML format, a short chapter is devoted to XML. After that, methods for finding parametric descriptions of geometric primitives in an image are summarised. The final chapter describes the proposed system and evaluates the possibility and suitability of its use for various types of images.
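For example, refining a roughly entered circle can be done by restricting a Hough transform search to the neighbourhood of the user's estimate and writing the accepted primitive to XML. The sketch below is a hypothetical illustration only; the function names, Hough parameters, and XML element names are assumptions, not the program's actual interface or output format.

```python
import cv2
import xml.etree.ElementTree as ET

def refine_circle(gray, cx, cy, r, search=20):
    """Refine a manually entered circle (cx, cy, r) on an 8-bit grayscale image
    by running the Hough transform only near the user's estimate."""
    x0, y0 = max(cx - r - search, 0), max(cy - r - search, 0)
    roi = gray[y0:cy + r + search, x0:cx + r + search]
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * r,
                               param1=100, param2=30,
                               minRadius=max(r - search, 1), maxRadius=r + search)
    if circles is None:
        return cx, cy, r                       # keep the manual estimate
    x, y, rr = circles[0][0]                   # strongest candidate, ROI coordinates
    return int(x) + x0, int(y) + y0, int(rr)

def to_xml(circle_list):
    """Serialize detected circles to a simple XML string (element names assumed)."""
    root = ET.Element("objects")
    for cx, cy, r in circle_list:
        ET.SubElement(root, "circle", cx=str(cx), cy=str(cy), r=str(r))
    return ET.tostring(root, encoding="unicode")
```

The same pattern applies to the other primitive types: the user's rough input constrains the parameter search, and the refined parametric description is appended to the XML output.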
