  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Global Depth Perception from Familiar Scene Structure

Torralba, Antonio; Oliva, Aude. 01 December 2001
In the absence of cues for absolute depth measurement such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene, but it will not convey the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, this is computationally complex owing to the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: a procedure for absolute depth estimation based on recognition of the whole scene. The shape of the space of the scene and the structures present in it are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene, and therefore its absolute mean depth. We illustrate the usefulness of computing the mean depth of the scene with applications to scene recognition and object detection.
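The abstract's core idea — a recognized scene category implies a typical scale, which converts relative depth into absolute metres — can be sketched as a depth-prior lookup. The categories, prior values, and function names below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: rescale a relative depth map to absolute metres using
# a mean-depth prior per scene category. All prior values are assumed.
MEAN_DEPTH_PRIOR_M = {
    "office": 3.0,       # assumed typical mean depth for indoor office scenes
    "street": 20.0,      # assumed for street-level urban scenes
    "landscape": 500.0,  # assumed for open outdoor panoramas
}

def to_absolute_depth(relative_depth, scene_category):
    """Rescale a relative depth map so its mean matches the scene's prior."""
    prior = MEAN_DEPTH_PRIOR_M[scene_category]
    mean_rel = sum(relative_depth) / len(relative_depth)
    return [d / mean_rel * prior for d in relative_depth]

abs_depth = to_absolute_depth([0.5, 1.0, 1.5], "street")  # [10.0, 20.0, 30.0]
```

Once a scene classifier supplies the category, the same relative depth map yields very different absolute depths, which is exactly the scale inference the abstract describes.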
22

Monocular Vision based Particle Filter Localization in Urban Environments

Leung, Keith Yu Kit. 17 September 2007
This thesis presents the design and experimental results of a monocular-vision-based particle filter localization system for urban settings that uses aerial orthoimagery as a reference map. The topics of perception and localization are reviewed, along with their modelling in a probabilistic framework. The computer vision techniques used to create the feature map and to extract features from camera images are discussed. Localization results indicate that the design is viable.
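The predict-weight-resample cycle at the heart of any particle filter localizer can be sketched in one dimension. This is a generic textbook step under assumed noise models, not the system described in the thesis:

```python
import math
import random

def particle_filter_step(particles, control, observation, rng, obs_sigma=1.0):
    """One predict-weight-resample cycle for 1-D localization."""
    # Predict: propagate each particle through a (noisy) motion model.
    predicted = [p + control + rng.gauss(0, 0.1) for p in particles]
    # Weight: Gaussian likelihood of the map-based observation per particle.
    weights = [math.exp(-0.5 * ((observation - p) / obs_sigma) ** 2)
               for p in predicted]
    total = sum(weights)
    # Resample in proportion to weight, focusing particles on likely poses.
    return rng.choices(predicted, weights=[w / total for w in weights],
                       k=len(particles))

rng = random.Random(0)
particles = [rng.uniform(0.0, 10.0) for _ in range(500)]   # uniform prior
particles = particle_filter_step(particles, 0.0, 5.0, rng)  # observe pose ~5.0
estimate = sum(particles) / len(particles)                  # concentrates near 5.0
```

In the thesis's setting the observation likelihood would come from matching camera features against the orthoimagery map rather than a direct pose measurement.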
24

Optical Design And Analysis Of A Riflescope System

Bayar, Cevdet. 01 September 2009
Today, riflescope systems are widely used, mostly by military forces. In this study, a riflescope working in the visible range (400-700 nm) is designed. The riflescope has a 3° field of view and a total track of at most 15 cm; the design length is limited to 15 cm because a short riflescope is more stable than a long one with respect to thermal effects and vibration. To keep costs down, only two types of glass are used in the design: N-BK7, a crown glass, and N-F2, a flint glass. Moreover, a Schmidt-Pechan prism is used to produce an erect image. The optical performance analysis of the design is also carried out for a production-ready riflescope system.
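As a sanity check on the stated specification, the linear extent covered by a 3° full field of view at a given range follows directly from the viewing geometry (the 100 m range below is an arbitrary illustrative choice):

```python
import math

def linear_fov(full_angle_deg, distance_m):
    """Linear field of view (metres) subtended at a given distance."""
    return 2 * distance_m * math.tan(math.radians(full_angle_deg / 2))

width = linear_fov(3.0, 100.0)  # ~5.24 m visible at 100 m
```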
25

Monocular depth perception for a computer vision system

Rosenberg, David. January 1981
No description available.
26

Exploration of the crosslinks between saccadic and vergence eye movement pathways using motor and visual perturbations

Schultz, Kevin P. 2010
Thesis (Ph.D.), University of Alabama at Birmingham, 2010. Includes bibliographical references (pp. 169-183).
27

Caminhamento fotogramétrico utilizando o fluxo óptico filtrado (Photogrammetric traverse using filtered optical flow)

Barbosa, Ricardo Luís [UNESP]. January 2006
Under certain conditions, the positioning and orientation sensors (INS and GPS) of a land-based mobile mapping system may fail for a certain time interval, so that the images captured during that interval lose their position and orientation. This thesis proposes a solution for orienting the images based only on image processing and a photogrammetric technique, via the optical flow, without any external sensors. A land-based mobile mapping system with a pair of video cameras, mounted 0.94 m apart with their optical axes parallel to the axis of motion (Y), was used to test the proposed methodology on a flat urban road. The methodology rests on estimating the vehicle's velocity in two steps. First, the dense optical flow is computed; the velocity estimate is then refined by a filtering strategy that keeps only the vectors with radial behaviour in the lower half of the images, as detected by the Canny algorithm, followed by a second estimation pass that eliminates gross errors. With the estimated velocity and the known video sampling time, the displacement of each image with respect to the previous one in the sequence is determined and used as the initial approximation for the camera positions. The results show that the estimated velocity is close to the true velocity and that the quality of the least-squares adjustment is acceptable, considering that no external sensors or control points were used. (Funded by CAPES and FAPESP.)
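The displacement step described above — mean filtered flow magnitude times ground resolution, divided by the frame interval — can be sketched as follows. The MAD-based outlier rejection here stands in for the thesis's two-stage filtering, and all numbers are illustrative:

```python
import statistics

def displacement_from_flow(flow_speeds_px, metres_per_px, frame_dt, k=2.0):
    """Per-frame displacement and velocity from optical-flow magnitudes,
    rejecting gross outliers more than k median-absolute-deviations out."""
    med = statistics.median(flow_speeds_px)
    mad = statistics.median([abs(s - med) for s in flow_speeds_px]) or 1.0
    inliers = [s for s in flow_speeds_px if abs(s - med) <= k * mad]
    mean_px = sum(inliers) / len(inliers)
    displacement_m = mean_px * metres_per_px          # movement between frames
    return displacement_m, displacement_m / frame_dt  # (metres, metres/second)

# 100 px/frame is a gross outlier and is discarded before averaging.
disp, vel = displacement_from_flow([10, 11, 9, 10, 100], 0.05, 1 / 30)
```

Accumulating these per-frame displacements gives the initial camera positions that the photogrammetric adjustment then refines.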
28

Long range monocular SLAM

Frost, Duncan. January 2017
This thesis explores approaches to two problems in the frame-rate computation of a priori unknown 3D scene structure and camera pose using a single camera, or monocular simultaneous localisation and mapping (SLAM). The thesis reflects two trends in vision in general, and in structure from motion in particular: (i) the move away from directly recovered geometry and towards learnt geometry; and (ii) the sparsification of otherwise dense direct methods. The first contributions mitigate scale drift. Beyond the inevitable accumulation of random error, monocular SLAM accumulates error via the depth/speed scaling ambiguity. Three solutions are investigated. The first detects objects of known class and size using fixed descriptors, and incorporates their measurements into the 3D map. Experiments using databases with ground truth show that metric accuracy can be restored over kilometre distances, and similar gains are made using a hand-held camera. The second method avoids explicit feature choice, instead employing a deep convolutional neural network to yield depth priors; relative depths are learnt well, but absolute depths less so, and recourse to database-wide scaling is investigated. The third approach uses a novel trained network to infer speed from imagery. The second part of the thesis develops sparsified direct methods for monocular SLAM. The first contribution here is a novel camera tracker that operates directly using affine image warping, but on patches around sparse corners. Camera pose is recovered with an accuracy at least equal to the state of the art, while requiring only half the computational time. The second introduces a least-squares adjustment to sparsified direct map refinement, again using patches around sparse corners. The accuracy of its 3D structure estimation is compared with that of the widely used depth-filtering method; empirically, the new method's accuracy is often higher than that of its filtering counterpart, but it is more troubled by occlusion.
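The first scale-drift remedy — metric depth from a detected object of known class and size — reduces, under a pinhole camera model, to Z = fH/h; a median of such metric-to-SLAM depth ratios can then rescale a drifted map. The sketch below is a generic illustration of that geometry, not the thesis's implementation, and all numbers are assumed:

```python
def depth_from_known_size(focal_px, real_height_m, pixel_height):
    """Pinhole model: Z = f * H / h gives metric depth from an object of known size."""
    return focal_px * real_height_m / pixel_height

def scale_correction(metric_depths, slam_depths):
    """Median ratio between metric and (drifted) SLAM depths rescales the map."""
    ratios = sorted(m / s for m, s in zip(metric_depths, slam_depths))
    return ratios[len(ratios) // 2]

# e.g. an object of assumed height 1.5 m imaged 50 px tall with f = 700 px
z = depth_from_known_size(700.0, 1.5, 50.0)  # 21.0 m
```

Using the median of many detections makes the rescaling robust to individual misdetections, in the same spirit as the thesis's incorporation of object measurements into the map.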
29

Exploração autônoma utilizando SLAM monocular esparso (Autonomous exploration using sparse monocular SLAM)

Pittol, Diego. January 2018
In recent years, we have seen the dawn of a large number of applications that use autonomous robots. For a robot to be considered truly autonomous, it must be able to learn about the environment in which it operates. SLAM (Simultaneous Localization and Mapping) methods build a map of the environment while estimating the robot's correct trajectory. However, to obtain a complete map of the environment autonomously, the robot must be guided through the whole environment, which is the exploration problem. Cameras are inexpensive sensors that can be used to build 3D maps, but exploration over maps generated by monocular SLAM methods (i.e. methods that extract information from a single camera) is still an open problem, since such methods generate sparse or semi-dense maps that are ill-suited to navigation and exploration. For such a situation, exploration methods are needed that can cope with the limitations of cameras and with the lack of information in maps generated by monocular SLAM. We propose an exploration strategy that uses local volumetric maps, generated from the lines of sight, allowing the robot to navigate safely. In these local maps, goals are defined that lead the robot to explore the environment while avoiding obstacles. The proposed approach aims to answer the fundamental question in exploration: "Where to go?". In addition, it seeks to determine correctly when the environment has been sufficiently explored and exploration should stop. The effectiveness of the proposed approach is evaluated in experiments on single- and multi-room environments.
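The question "Where to go?" is commonly answered by frontier-based exploration: head for the nearest known-free cell that borders unknown space, and declare exploration finished when no such cell remains. A minimal grid sketch follows; the flat-grid representation and names are assumptions, not the thesis's local volumetric maps:

```python
def pick_frontier_goal(grid, robot):
    """Pick the nearest free cell bordering unknown space ('Where to go?').

    grid maps (x, y) -> 'free' | 'occupied'; cells absent from the map are
    unknown. Returns None when no frontier remains (exploration can stop)."""
    def neighbours(c):
        x, y = c
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    # A frontier is a free cell with at least one unknown (unmapped) neighbour.
    frontiers = [c for c, state in grid.items()
                 if state == "free" and any(n not in grid for n in neighbours(c))]
    if not frontiers:
        return None  # environment sufficiently explored
    # Choose the frontier closest to the robot (squared Euclidean distance).
    return min(frontiers,
               key=lambda c: (c[0] - robot[0]) ** 2 + (c[1] - robot[1]) ** 2)
```

The same stopping rule — no frontiers left — gives a concrete criterion for "the environment has been sufficiently explored".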
