1

Object recognition from large libraries of line patterns

Huet, Benoit January 1999 (has links)
No description available.
2

Visualization by Example - A Constructive Visual Component-Based Interface for Direct Volume Rendering

Liu, Bingchen, Wuensche, Burkhard, Ropinski, Timo January 2010 (has links)
The effectiveness of direct volume rendered images depends on finding transfer functions which emphasize structures in the underlying data. In order to support this process, we present a spreadsheet-like constructive visual component-based interface, which allows even novice users to efficiently find meaningful transfer functions. The interface uses a programming-by-example style approach and exploits the domain knowledge of the user without requiring visualization knowledge. To this end, our application automatically analyzes histograms with the Douglas-Peucker algorithm in order to identify potential structures in the data set. Sample visualizations of the resulting structures are presented to the user, who can refine and combine them into more complex visualizations. Preliminary tests confirm that the interface is easy to use and enables non-expert users to identify structures which they could not reveal with traditional transfer function editors. (Short paper)
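To illustrate the histogram analysis step mentioned above, the following sketch applies the Douglas-Peucker algorithm to a 1D intensity histogram so that only its dominant turning points remain; the synthetic histogram and the tolerance value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def douglas_peucker(points, epsilon):
    """Recursively simplify a polyline (N x 2 array) with the Douglas-Peucker algorithm."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    chord = max(np.hypot(dx, dy), 1e-12)
    # Perpendicular distance of every point to the chord joining start and end.
    dists = np.abs(dx * (points[:, 1] - start[1]) - dy * (points[:, 0] - start[0])) / chord
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = douglas_peucker(points[:idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# Toy intensity histogram with two modes; in the described system this would be the
# histogram of the volume data set's scalar values.
rng = np.random.default_rng(0)
counts, _ = np.histogram(rng.normal([60, 160], [10, 25], (5000, 2)).ravel(),
                         bins=256, range=(0, 255))
polyline = np.column_stack([np.arange(256), counts]).astype(float)

# Vertices of the simplified outline; peaks between them hint at separate structures.
simplified = douglas_peucker(polyline, epsilon=30.0)
print(simplified[:, 0].astype(int))
```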
3

Testing of Rainflow Histograms of Strain for Implementation as a Bridge Weigh-in-Motion Technique

Johnson, Nephi R. 01 May 2015 (has links)
This research was done as part of a long-term project whose goal is to monitor multiple bridges over an extended period of time. Because of the nation’s aging infrastructure and the limited funds available to upgrade and maintain it, structural health monitoring (SHM) is very important: it provides in-depth information about a structure to be used in decision making. SHM of bridges includes monitoring the effects of traffic loads. This paper discusses the development of a bridge weigh-in-motion (B-WIM) technique that uses rainflow counting of strain cycles. Typical B-WIM techniques have proven to be accurate but require complex algorithms and gauges at multiple locations across the span, and strain gauge temperature drift must be accounted for. The rainflow B-WIM (RF-BWIM) reduces the processing required for B-WIM and automatically accounts for drift, making temperature and other analyses of the same bridge possible. RF-BWIM also has the potential to decrease the number of sensors required. Strain data from an existing long-term monitoring system was used to develop the RF-BWIM. The development of the RF-BWIM, as well as a method to determine a virtual gross vehicle weight (C-GVW) used in calculating the RF-BWIM output, is presented.
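For readers unfamiliar with rainflow counting, the sketch below implements a simplified three-point rainflow counter and bins the resulting strain-cycle ranges into a histogram. It is a generic textbook-style version on a made-up strain record, not the RF-BWIM implementation developed in this work.

```python
import numpy as np
from collections import Counter

def turning_points(signal):
    """Keep only the reversals (local peaks and valleys) of the signal."""
    d = np.diff(signal)
    keep = np.concatenate(([True], d[1:] * d[:-1] < 0, [True]))
    return signal[keep]

def rainflow_ranges(signal):
    """Simplified three-point rainflow counting; returns the range of each closed cycle."""
    stack, ranges = [], []
    for point in turning_points(signal):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])  # range formed by the newest reversal
            y = abs(stack[-2] - stack[-3])  # range of the enclosed candidate cycle
            if x < y:
                break
            ranges.append(y)       # close the enclosed cycle ...
            del stack[-3:-1]       # ... and drop its two reversals from the stack
    return ranges

# Hypothetical strain record (microstrain) from a girder gauge as two trucks cross.
t = np.linspace(0, 10, 2000)
strain = (40 * np.exp(-(t - 3) ** 2) + 25 * np.exp(-2 * (t - 7) ** 2)
          + np.random.default_rng(1).normal(0, 1, t.size))

bins = np.arange(0, 60, 5)  # 5-microstrain-wide rainflow histogram bins
histogram = Counter(np.digitize(rainflow_ranges(strain), bins))
print(sorted(histogram.items()))
```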
4

Comparison Of Histograms Of Oriented Optical Flow Based Action Recognition Methods

Ercis, Firat 01 September 2012 (has links) (PDF)
In the task of human action recognition in uncontrolled video, motion features are widely used in order to achieve subject and appearance invariance. We implemented three Histograms of Oriented Optical Flow based methods which share a common motion feature extraction phase. We compute an optical flow field over each frame of the video, and the flow vectors are then histogrammed according to their angle values to represent each frame with a histogram. In order to capture local motions, the bounding box of the subject is divided into grids and the angle histograms of all grids are concatenated to obtain the final motion feature vector. The motion features are supplied to three different classification system alternatives: clustering combined with HMMs, clustering with K-nearest neighbours, and average histograms. All three methods are implemented and the results are evaluated on the Weizmann and KTH datasets.
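As a sketch of the shared motion-feature stage described above: dense optical flow is computed between consecutive frames, flow angles inside the subject's bounding box are histogrammed per grid cell (weighted by flow magnitude here), and the cell histograms are concatenated. It uses OpenCV's Farnebäck flow; the grid size, bin count, and magnitude weighting are illustrative choices rather than the thesis settings.

```python
import cv2
import numpy as np

def hoof_descriptor(prev_gray, curr_gray, bbox, grid=(3, 3), bins=8):
    """Histogram-of-oriented-optical-flow feature for one frame pair over a grid inside bbox."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = bbox
    flow = flow[y:y + h, x:x + w]
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in [0, 2*pi)

    cells = []
    gh, gw = h // grid[0], w // grid[1]
    for r in range(grid[0]):
        for c in range(grid[1]):
            a = ang[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw].ravel()
            m = mag[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 2 * np.pi), weights=m)
            cells.append(hist / (hist.sum() + 1e-9))  # normalise each cell histogram
    return np.concatenate(cells)  # final motion feature vector for the frame

# Usage with two consecutive grayscale frames and an (x, y, w, h) subject bounding box:
# feat = hoof_descriptor(frame_t0, frame_t1, bbox=(80, 40, 120, 240))
```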
5

Autonomous Morphometrics using Depth Cameras for Object Classification and Identification / Autonom Morphometri med Djupkameror för Objektklassificering och Identifiering

Björkeson, Felix January 2013 (has links)
Identification of individuals has been solved with many different solutions around the world, either using biometric data or external means of verification such as ID cards or RFID tags. The advantage of using biometric measurements is that they are directly tied to the individual and are usually unalterable. Acquiring dependable measurements is however challenging when the individuals are uncooperative. A dependable system should be able to deal with this and produce reliable identifications. The system proposed in this thesis can autonomously classify uncooperative specimens from depth data. The data is acquired from a depth camera mounted in an uncontrolled environment, where it was allowed to record continuously for two weeks. This requires stable data extraction and normalization algorithms to produce good representations of the specimens. Robust descriptors can therefore be extracted from each sample of a specimen and, together with different classification algorithms, the system can be trained or validated. Even with as many as 138 different classes the system achieves high recognition rates. Inspired by the research field of face recognition, the best classification algorithm, the method of fisherfaces, was able to accurately recognize 99.6% of the validation samples. It was followed by two variations of the method of eigenfaces, which achieved recognition rates of 98.8% and 97.9%. These results affirm that the capabilities of the system are adequate for a commercial implementation.
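For reference, the fisherfaces approach mentioned above amounts to a PCA projection followed by Fisher's linear discriminant analysis. The sketch below shows a scikit-learn pipeline of this kind on placeholder descriptors; the component counts and the random data stand in for the thesis' depth-based descriptors and 138 classes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder data: each row is a flattened, normalised descriptor of one sample,
# each label one individual (numbers here are made up for illustration).
rng = np.random.default_rng(0)
n_classes, samples_per_class, dim = 20, 30, 400
X = np.vstack([rng.normal(loc=c, scale=5.0, size=(samples_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), samples_per_class)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                  stratify=y, random_state=0)

# Fisherfaces-style pipeline: PCA first avoids the singular within-class scatter
# problem, then LDA projects onto the discriminant directions and classifies.
fisherfaces = make_pipeline(PCA(n_components=100), LinearDiscriminantAnalysis())
fisherfaces.fit(X_train, y_train)
print("validation accuracy:", fisherfaces.score(X_val, y_val))
```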
6

Lip Detection and Adaptive Tracking

Wang, Benjamin 01 January 2017 (has links)
Performance of automatic speech recognition (ASR) systems utilizing only acoustic information degrades significantly in noisy environments such as car cabins. Incorporating audio and visual information together can improve performance in these situations. This work proposes a lip detection and tracking algorithm to serve as a visual front end to an audio-visual automatic speech recognition (AVASR) system. Several color spaces are examined that are effective for segmenting lips from skin pixels. These color components and several features are used to characterize lips and to train cascaded lip detectors. Pre- and post-processing techniques are employed to maximize detector accuracy. The trained lip detector is incorporated into an adaptive mean-shift tracking algorithm for tracking lips in a car cabin environment. The resulting detector achieves 96.8% accuracy, and the tracker is shown to recover and adapt in scenarios where mean shift alone fails.
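For orientation, lip tracking with mean shift can be sketched with OpenCV as follows: the detector (not shown) supplies an initial lip window, a hue histogram of that window is back-projected into each new frame, and cv2.meanShift relocates the window. The colour model and parameters are placeholders, not the trained cascaded detector or the adaptive scheme from this work.

```python
import cv2
import numpy as np

def init_lip_model(frame_bgr, lip_window):
    """Build a hue histogram of the detected lip region (lip_window = (x, y, w, h))."""
    x, y, w, h = lip_window
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track_lips(frames_bgr, lip_window, lip_hist):
    """Track the lip window through a sequence of frames with mean shift."""
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    windows = []
    for frame in frames_bgr:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], lip_hist, [0, 180], 1)
        _, lip_window = cv2.meanShift(back_proj, lip_window, term_crit)
        windows.append(lip_window)
    return windows

# Usage: lip_window would come from the cascaded lip detector on the first frame.
# hist = init_lip_model(first_frame, lip_window)
# tracked = track_lips(video_frames, lip_window, hist)
```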
7

REM: Relational Entropy-Based Measure of Saliency

Duncan, Kester 07 May 2010 (has links)
The incredible ability of human beings to quickly detect the prominent or salient regions in an image is often taken for granted. Reproducing this intelligent ability in computer vision systems remains quite a challenge. This ability is of paramount importance to perception and image understanding since it accelerates the image analysis process, thereby allowing higher vision processes such as recognition to have a focus of attention. In addition, human eye fixation points occurring during the early stages of visual processing often correspond to the loci of salient image regions. These regions assist in determining the interesting parts of an image and they also lend support to our ability to discriminate between different objects in a scene. Salient regions attract our immediate attention without requiring an exhaustive scan of a scene. In essence, saliency can be defined as the quality of an image region that enables it to stand out in relation to its neighbors. Saliency is often approached in one of two ways. The bottom-up approach refers to mechanisms which are image-driven and independent of the knowledge in an image, whereas the top-down approach refers to mechanisms which are task-oriented and make use of prior knowledge about a scene. In this thesis, we present a bottom-up measure of saliency based on the relationships exhibited among image features. The perceived structure in an image is determined more by the relationships among features than by the individual feature attributes. From this standpoint, we aim to capture the organization within an image by employing relational distributions derived from distance and gradient direction relationships exhibited between image primitives. The Rényi entropy of the relational distribution tends to be lower if saliency is exhibited for some image region in the local pixel neighborhood over which the distribution is defined. This notion forms the foundation of our measure. Correspondingly, results of our measure are presented in the form of a saliency map, highlighting salient image regions. We show results on a variety of real images from various datasets. We evaluate the performance of our measure in relation to a dominant saliency model and obtain comparable results. We also investigate the biological plausibility of our method by comparing our results to those captured by human fixation maps. In an effort to derive meaningful information from an image, we investigate the significance of scale relative to our saliency measure, and attempt to determine optimal scales for image analysis. In addition, we extend a perceptual grouping framework by using our measure as an optimization criterion for determining the organizational strength of edge groupings. As a result, the use of ground truth images is circumvented.
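To make the central quantity concrete, the sketch below builds a relational distribution from pairwise distance and gradient-direction-difference relations among edge pixels in a local neighborhood and computes its Rényi entropy, H_alpha = log(sum p^alpha) / (1 - alpha). The bin counts, alpha, and the random neighborhood are illustrative assumptions rather than the parameters used in the thesis.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy of a discrete distribution p (alpha != 1)."""
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def relational_entropy(points, grad_dirs, dist_bins=16, ang_bins=16, alpha=2.0):
    """Entropy of the joint distribution of pairwise distances and gradient-direction
    differences between image primitives (edge pixels) in a local neighborhood."""
    n = len(points)
    i, j = np.triu_indices(n, k=1)                       # all unordered pixel pairs
    dists = np.linalg.norm(points[i] - points[j], axis=1)
    dangs = np.abs(grad_dirs[i] - grad_dirs[j]) % np.pi  # direction difference in [0, pi)

    hist, _, _ = np.histogram2d(dists, dangs,
                                bins=(dist_bins, ang_bins),
                                range=((0, dists.max() + 1e-9), (0, np.pi)))
    p = hist.ravel() / hist.sum()
    # Lower entropy -> more structured relations -> higher saliency for this neighborhood.
    return renyi_entropy(p, alpha)

# Hypothetical neighborhood: edge pixel coordinates and their gradient directions (radians).
rng = np.random.default_rng(0)
pts = rng.uniform(0, 32, size=(200, 2))
dirs = rng.uniform(0, np.pi, size=200)
print("Renyi entropy:", relational_entropy(pts, dirs))
```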
8

Detecção de movimentos não usuais no espaço-fase / Detection of unusual motion in phase space

Hennemann, Luciano 22 February 2008 (has links)
This work presents a model for the detection of unusual motion based on trajectories. The model relates to the current research field of intelligent cameras and surveillance systems, which competes with the enormous range of hardware-based devices available on the market today. The main idea of the proposed approach is to analyze pedestrian or object trajectories acquired from footage of trafficked environments. The first step of the algorithm is a training period, in which it learns the profile of the trajectories, selecting, grouping and then storing them in a database. After that, the algorithm compares them with new trajectories that are acquired continuously during the test (operation) period. In the test period, a given trajectory is classified as usual if it is compatible with the trajectories acquired during training, or unusual otherwise. This work therefore presents algorithms that detect patterns of similarity between the set of training trajectories and each new trajectory acquired in the test period.
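As a generic illustration of the usual/unusual decision described above (not this work's exact algorithm), a new trajectory can be resampled to a fixed length and declared unusual when its distance to every stored training trajectory exceeds a threshold; the resampling length, distance measure, and threshold below are assumptions.

```python
import numpy as np

def resample(traj, n=32):
    """Resample a trajectory (sequence of (x, y) points) to n evenly spaced points by arc length."""
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    targets = np.linspace(0, s[-1], n)
    x = np.interp(targets, s, traj[:, 0])
    y = np.interp(targets, s, traj[:, 1])
    return np.column_stack([x, y])

def is_unusual(new_traj, training_trajs, threshold=25.0):
    """Classify a trajectory as unusual if no training trajectory lies within the threshold
    (mean point-to-point distance after resampling)."""
    q = resample(new_traj)
    dists = [np.mean(np.linalg.norm(q - resample(t), axis=1)) for t in training_trajs]
    return min(dists) > threshold

# Usage with trajectories stored during the training period:
# unusual = is_unusual(candidate_trajectory, trajectory_database)
```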
9

Going further with direct visual servoing / Aller plus loin avec les asservissements visuels directs

Bateux, Quentin 12 February 2018 (has links)
In this thesis we focus on visual servoing (VS) techniques, critical for many robotic vision applications, with an emphasis on direct VS. In order to improve the state of the art of direct methods, we tackle several components of traditional VS control laws. We first propose a method to consider histograms as a new visual servoing feature. It allows the definition of efficient control laws by making it possible to choose from any type of histogram to describe images, from intensity and color histograms to Histograms of Oriented Gradients. A novel direct visual servoing control law is then proposed, based on a particle filter that performs the optimization part of visual servoing tasks, allowing tasks associated with highly non-linear and non-convex cost functions to be accomplished. The particle filter estimate can be computed in real time through the use of image transfer techniques to evaluate the camera motions associated with suitable displacements of the considered visual features in the image. Lastly, we present a novel way of modeling the visual servoing problem through the use of deep learning and Convolutional Neural Networks, to alleviate the difficulty of modeling non-convex problems with classical analytic methods. By using image transfer techniques, we propose a method to quickly generate large training datasets in order to fine-tune existing network architectures to solve VS tasks. We show that this method can be applied both to model known static scenes and, more generally, to model relative pose estimation between pairs of viewpoints from arbitrary scenes.
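As a small illustration of using histograms as a visual servoing feature, the sketch below computes a scalar error between the current and desired images from their intensity histograms (here with the Matusita distance). In a direct VS scheme this cost would drive the control law, or be the quantity a particle filter evaluates for each candidate camera motion; the bin count and the choice of distance are illustrative assumptions.

```python
import numpy as np

def intensity_histogram(image, bins=64):
    """Normalised grey-level histogram of an image (pixel values assumed in [0, 255])."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 256))
    return hist / hist.sum()

def histogram_error(current, desired, bins=64):
    """Scalar cost between current and desired images based on their intensity histograms
    (Matusita distance); a direct VS scheme would drive the camera to minimise it."""
    p = intensity_histogram(current, bins)
    q = intensity_histogram(desired, bins)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# In a particle-filter VS loop, each particle is a candidate camera motion; the image
# predicted for that motion (via image transfer) is scored with histogram_error against
# the desired image, and low-cost particles are resampled.
```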
10

[en] ENHANCEMENT OF IMAGES IN THE TRANSFORM DOMAIN / [pt] REALCE DE IMAGENS NO DOMÍNIO DA TRANSFORMADA

EDUARDO ESTEVES VALE 03 May 2006 (has links)
[en] This Dissertation is aimed at the development of new enhancement techniques applied in the transform domain. The study of two-dimensional transforms motivated the development of techniques based on these mathematical tools. A comparative analysis between enhancement methods in the spatial domain and in the transform domain revealed the advantages of using transforms. A new enhancement technique in the Discrete Cosine Transform (DCT) domain is proposed and analysed. The results showed that this new proposal is less affected by noise and enhances the image more than other techniques reported in the literature. In addition, a strategy to eliminate the darkening effect of enhancement by Alpha-rooting is considered. A new enhancement proposal in the Discrete Wavelet Transform (DWT) domain is also presented. Simulation results showed that the enhanced images have better visual quality than those presented in the literature and are less affected by noise. Moreover, the choice of the enhancement parameter is simplified.
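For background on the Alpha-rooting technique cited above, classical alpha-rooting rescales the magnitudes of the 2D DCT coefficients, boosting detail relative to the DC term; a minimal sketch with SciPy is given below. The normalisation and the value of alpha are illustrative choices, and this is the classical method, not the new technique proposed in the Dissertation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def alpha_rooting(image, alpha=0.9):
    """Classical alpha-rooting enhancement in the 2D DCT domain.
    Each coefficient is scaled by (|C| / |DC|)^(alpha - 1); with alpha < 1 the
    smaller (high-frequency) coefficients are boosted, sharpening detail."""
    C = dctn(image.astype(float), norm='ortho')
    dc = np.abs(C[0, 0])
    scale = np.power(np.abs(C) / (dc + 1e-12), alpha - 1.0)
    scale[0, 0] = 1.0  # leave the DC term unchanged to limit global darkening
    enhanced = idctn(C * scale, norm='ortho')
    return np.clip(enhanced, 0, 255)

# Usage on an 8-bit grayscale image array:
# sharpened = alpha_rooting(gray_image, alpha=0.85)
```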
