31

Sensor fusion to detect scale and direction of gravity in monocular SLAM systems

Tucker, Seth C. January 2017 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Monocular simultaneous localization and mapping (SLAM) is an important technique that enables very inexpensive environment mapping and pose estimation in small systems such as smartphones and unmanned aerial vehicles (UAVs). However, the odometry generated by monocular SLAM has an arbitrary and unobservable scale, leading to drift and making it difficult to combine with other sources of odometry for control or navigation. To correct this, the odometry must be aligned with metric-scale odometry from another device, or scale must be recovered from known features in the environment. Typically, known environmental features are not available, and for systems such as cellphones or UAVs, which may experience sustained, small-scale, irregular motion, an inertial measurement unit (IMU) is often the only practical option. Because accelerometers measure both linear acceleration and gravity, an IMU must filter out gravity and track orientation with complex algorithms in order to provide a linear acceleration measurement that can be used to recover SLAM scale. This thesis proposes an alternative method that detects and removes gravity from the accelerometer measurement using the unscaled direction of acceleration derived from the SLAM odometry.
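As a rough illustration of how such an alignment could be posed, the sketch below jointly fits a scalar scale s and a gravity vector g so that the IMU readings match the SLAM-derived accelerations, a_imu ≈ s·a_slam + g, by linear least squares. It assumes both signals have already been rotated into a common frame and time-aligned; this is an illustrative formulation, not the thesis's actual algorithm.

```python
import numpy as np

def estimate_scale_and_gravity(a_slam, a_imu):
    """Fit a scalar scale s and gravity vector g such that
    a_imu[i] ~= s * a_slam[i] + g, by linear least squares.

    a_slam: (N, 3) unscaled accelerations differentiated from SLAM poses.
    a_imu:  (N, 3) accelerometer readings in the same frame (gravity included).
    Frame and time alignment are assumed; a sketch, not the thesis's method.
    """
    n = a_slam.shape[0]
    A = np.zeros((3 * n, 4))
    A[:, 0] = a_slam.reshape(-1)   # column multiplying the unknown scale s
    A[0::3, 1] = 1.0               # rows for the x-component of g
    A[1::3, 2] = 1.0               # rows for the y-component of g
    A[2::3, 3] = 1.0               # rows for the z-component of g
    b = a_imu.reshape(-1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1:]             # scale s, gravity vector g
```

Once s and g are estimated, subtracting g from the IMU measurements yields gravity-free linear acceleration, and multiplying the SLAM trajectory by s yields metric odometry.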
32

Monocular depth perception for a computer vision system

Rosenberg, David. January 1981 (has links)
No description available.
33

Self-supervised monocular image depth learning and confidence estimation

Chen, L., Tang, W., Wan, Tao Ruan, John, N.W. 17 June 2020 (has links)
No / We present a novel self-supervised framework for monocular image depth learning and confidence estimation. Our framework reduces the amount of ground-truth annotation data required for training Convolutional Neural Networks (CNNs), which is often a challenge for the fast deployment of CNNs in many computer vision tasks. Our DepthNet adopts a novel, fully differentiable, patch-based cost function built on the Zero-Mean Normalized Cross-Correlation (ZNCC), using multi-scale patches as the matching and learning strategy. This approach greatly increases the accuracy and robustness of depth learning. Because ZNCC is a normalized measure of similarity, the proposed patch-based cost function naturally provides a 0-to-1 score that can be interpreted as the confidence of the depth estimate; this score is used to self-supervise the training of a parallel network for confidence map learning and estimation. The confidence map learning and estimation therefore operate in a self-supervised manner, in a network parallel to the DepthNet. Evaluation on the KITTI depth prediction dataset and the Make3D dataset shows that our method outperforms the state-of-the-art results.
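For reference, ZNCC between two equally sized patches can be computed as in the sketch below; the epsilon guard and the exact mapping to a 0-to-1 score are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def zncc(p, q, eps=1e-8):
    """Zero-Mean Normalized Cross-Correlation of two same-shape patches.
    Returns a value in [-1, 1]; higher means more similar."""
    p = p - p.mean()
    q = q - q.mean()
    denom = np.sqrt((p ** 2).sum() * (q ** 2).sum()) + eps  # guard against flat patches
    return float((p * q).sum() / denom)

def confidence(p, q):
    """Map ZNCC to a 0-to-1 score, usable as a matching confidence."""
    return 0.5 * (zncc(p, q) + 1.0)
```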
34

Eyewear for rugby union: wearer characteristics and experience with rugby goggles

Little, J-A., Eckert, F., Douglas, M., Barrett, Brendan T. 27 January 2020 (has links)
Yes / Unlike many other sports, Rugby Union has not permitted players to wear spectacles or eye protection. With an industrial partner, World Rugby developed goggles suitable for use while playing rugby, with the aim of growing participation among those who need to wear corrective lenses. This study reports on the profile and experiences of goggle wearers. 387 players received the goggles, and data were obtained from 188 (49%) using an online, 75-item questionnaire. 87% "strongly agreed/agreed" that the goggles are beneficial, and 75% were happy with goggle performance. The most common problems were fogging-up (reported by 49.7% of respondents) and the goggles getting dirty (32.6%). Fifteen players (8%) stopped wearing the goggles because of fogging-up, limits to peripheral vision, and poor comfort/fit. Injuries were reported by 3% of respondents; in none of these cases did the player stop wearing the goggles. Following the positive experience of players in the trial, the goggles were adopted into the Laws of the game on July 1, 2019. As the need to correct vision with spectacles is common, and more than 80% of spectacle wearers do not wear contact lenses, the new Rugby goggles will widen participation for those who need to wear refractive correction or have an existing/increased risk of uniocular visual impairment.
35

An Analysis of Head Movements and Binocular/Monocular Viewing Conditions in Visual Position Discrimination

Keller, William F. 10 1900 (has links)
The study concerns how human observers judge the relative position of successively presented points of light in an otherwise dark field. In particular, the possible roles of involuntary head movements and of binocular/monocular viewing conditions are considered. The data are analysed in terms of a mathematical model of the perceptual process which deals with short-term memory for visual position. Contrary to previous suggestions in the literature, neither of the viewing variables proved to have a significant effect. In addition, the results provide a strong test of the theoretical model and appear to confirm its validity. The results are shown to suggest a particular direction for future experimentation. / Thesis / Master of Arts (MA)
36

Neuronal basis of horizontal eye velocity-to-position integration

Debowy, Owen G. 20 January 2007 (has links)
Motion of an image across the retina degrades visual accuracy, so eye position must be held stationary. The horizontal eye velocity-to-position neural integrator (PNI), located in the caudal hindbrain of vertebrates, is believed to be responsible, since its neuronal firing rate is sustained and proportional to eye position. The physiological mechanism of PNI function has been envisioned as either (1) network dynamics within or between the bilateral PNI, including brainstem/cerebellar pathways, or (2) cellular properties of PNI neurons. These hypotheses were investigated by recording PNI neuronal activity in goldfish during experimental paradigms consisting of disconjugacy, commissurectomy and cerebellectomy.

In goldfish, the eye position time constant (τ) is modifiable by short-term (~1 hr) visual feedback training to either drift away from, or towards, the center of the oculomotor range. Although eye movements are yoked in direction and timing, disconjugate motion during τ modification suggested that separate PNIs exist for each eye. Correlation of PNI neural activity with eye position during disconjugacy demonstrated the presence of two discrete neuronal populations exhibiting ipsilateral and conjugate eye sensitivity. During monocular PNI plasticity, τ was differentially modified for each eye, corroborating the coexistence of distinct neuronal populations within the PNI.

The hypothesized role of reciprocal inhibitory feedback between the bilateral PNI was tested by commissurectomy. Both sustained PNI activity and τ remained, with a concurrent nasal shift in eye position and a decrease in oculomotor range. τ modification was also unaffected, suggesting that PNI function is independent of midline connections.

The mammalian cerebellum has been suggested to play a dominant role in both τ and τ modification. In goldfish, cerebellar inactivation, by either aspiration or pharmacology, both prevented and abolished τ modifications, but did not affect eye position holding. PNI neurons still exhibited eye-position-related firing and modulation during training.

By excluding all network circuitry either intrinsic or extrinsic to the PNI, these results favor a cellular mechanism as the major determinant of sustained neural activity and eye position holding. By contrast, while cerebellar pathways are important for sustaining a large τ (>20 s), they are unequivocally essential for τ modification.
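The role of τ can be illustrated with the standard leaky-integrator description of position holding, dE/dt = -E/τ: a large τ holds an eccentric eye position nearly constant, while a small τ lets it drift back toward the center. The sketch below is this generic textbook model, not the thesis's specific circuit model.

```python
import numpy as np

def simulate_fixation(e0, tau, dt=0.001, duration=5.0):
    """Forward-Euler simulation of dE/dt = -E / tau, starting at
    eccentricity e0 (degrees). Returns the eye-position trace."""
    steps = int(duration / dt)
    e = np.empty(steps)
    e[0] = e0
    for i in range(1, steps):
        e[i] = e[i - 1] - dt * e[i - 1] / tau
    return e

# With tau = 20 s the eye barely drifts over 5 s of fixation;
# with tau = 1 s it decays almost fully toward center (E = 0).
stable = simulate_fixation(10.0, tau=20.0)
drifting = simulate_fixation(10.0, tau=1.0)
```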
37

Obstacle detection and avoidance for unmanned aerial vehicles using monocular vision

Chiaramonte, Rodolfo Barros 21 November 2018 (has links)
Autonomous vehicles are important for carrying out missions of many kinds, reducing risks to humans and executing missions more efficiently. In this context, unmanned aerial vehicles are increasingly used in surveillance, reconnaissance and rescue missions, among others. A key characteristic of these vehicles is that they carry out missions autonomously, without the intervention of human operators. It is therefore necessary to detect dangerous approaches to other aircraft and to objects that pose a collision risk, and hence the loss of high-value assets or even human lives, and then to perform the necessary avoidance maneuver. In this scenario, MOSAIC was proposed: an obstacle detection and avoidance system using monocular vision for small aerial vehicles. To this end, a method was developed to estimate the three-dimensional position of obstacles from monocular images, and improvements to detection algorithms were proposed. The system was validated through simulated and real experiments on each module, and the results were promising, showing an error of only 9.75% in unconstrained environments at distances of up to 20 meters. These results are better than the other algorithms found in the state of the art, whose error is below 10% only in controlled environments and at distances of up to 5 meters.
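The three-dimensional position estimate ultimately rests on the pinhole camera model; a minimal sketch of the back-projection step is shown below. The function and the source of the depth value are hypothetical stand-ins for whatever estimation MOSAIC actually performs, included only to illustrate the geometry.

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with an estimated depth (meters) into a
    3D point in the camera frame, using calibrated pinhole intrinsics
    fx, fy (focal lengths) and cx, cy (principal point)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth
```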
39

Monocular depth estimation by deep learning for autonomous vehicles: influence of depth-map sparsity in supervised training

Rosa, Nícolas dos Santos 24 June 2019 (has links)
This work addresses the problem of single image depth estimation (SIDE), focusing on improving the quality of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light depth sensors (e.g., Kinect) can provide dense, albeit short-range, depth maps. For outdoor scenes, LiDAR is considered the standard sensor, but it provides comparatively much sparser measurements, especially in more distant regions. Rather than modifying the neural network architecture to deal with sparse depth maps, this work introduces a novel densification method for depth maps using the Hilbert Maps framework. A continuous occupancy map is produced from the 3D points of LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map of arbitrary resolution. Experiments conducted on various subsets of the KITTI dataset show a significant improvement produced by the proposed Sparse-to-Continuous technique, without introducing extra information into the training stage.
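For context, the conventional projection of LiDAR points into a sparse depth map, the kind of sparse label that the Sparse-to-Continuous technique replaces with a dense one, might look like the sketch below; the Hilbert Maps densification itself is not reproduced here.

```python
import numpy as np

def project_to_depth_map(points_cam, K, h, w):
    """Project 3D points (camera frame, z forward) into an h x w depth
    map, keeping the nearest depth per pixel; 0 marks missing depth."""
    depth = np.full((h, w), np.inf)
    z = points_cam[:, 2]
    valid = z > 0                                # points in front of the camera
    uvw = points_cam[valid] @ K.T                # homogeneous pixel coordinates
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], z[valid][inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)   # keep nearest LiDAR return
    depth[np.isinf(depth)] = 0.0
    return depth
```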
40

Localization of Combat Aircraft at High Altitude using Visual Odometry

Nilsson Boij, Jenny January 2022 (has links)
Most of the navigation systems used in today's aircraft rely on Global Navigation Satellite Systems (GNSS). However, GNSS is not fully reliable: it can be jammed by attacks on the space or ground segments of the system, or denied in inaccessible areas. Hence, to ensure successful navigation, it is important to be able to continuously establish the aircraft's location without relying on external reference systems. Localization is one of many sub-problems in navigation and is the focus of this thesis. This brings us to the field of visual odometry (VO), which involves determining position and orientation from the images of one or more camera sensors. To date, however, most VO systems have been developed for ground vehicles and low-flying multi-rotor systems. This thesis seeks to extend VO to new applications by exploring it in a fairly new context: a piloted fixed-wing combat aircraft, for vision-only pose estimation at extremely large scene depth. A major part of this research is the data gathering, with data collected using the flight simulator X-Plane 11. Three flight routes are flown (a straight line, a curve and a loop) under two visual conditions: clear daylight and sunset. The method used is ORB-SLAM3, an open-source library for visual simultaneous localization and mapping (SLAM). It has shown excellent results in previous work and has become a benchmark method in the field of visual pose estimation. ORB-SLAM3 tracks the 78 km straight line very well at an altitude above 2700 m: the absolute trajectory error (ATE) is 0.072% of the total distance traveled in daylight and 0.11% at sunset. These results are of the same magnitude as those of ORB-SLAM3 on the EuRoC MAV dataset. For the 79 km curved trajectory, the ATE is 2.0% and 1.2% of total distance traveled in daylight and at sunset, respectively. The longest flight route, 258 km, shows the challenges of visual pose estimation: although the system manages to close loops in daylight, its ATE is 3.6% of total distance traveled. At sunset, the features lack sufficiently invariant characteristics to close loops, resulting in an even larger ATE of 14%. Hence, to properly rely on vision for localization, more sensor information is needed; since all aircraft already carry an inertial measurement unit (IMU), future work naturally includes IMU data in the system. Nevertheless, the results of this research show that vision is useful even at the high altitudes and speeds of a combat aircraft.
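The ATE percentages quoted above can be reproduced with a computation like the one below, assuming the estimated and ground-truth trajectories are already time-associated and registered (e.g., by a similarity alignment, as is standard for monocular ATE evaluation); the exact alignment used in the thesis is not specified here.

```python
import numpy as np

def ate_percent(est, gt):
    """ATE as a percentage of distance traveled: RMSE of position error
    between aligned (N, 3) trajectories over the ground-truth path length."""
    rmse = np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1)))
    path_len = np.sum(np.linalg.norm(np.diff(gt, axis=0), axis=1))
    return 100.0 * rmse / path_len
```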
