111

Vizuální detekce elektronických součástek / Visual detection of electronic devices

Juhas, Miroslav January 2010 (has links)
This thesis describes the application of image processing to precise distance measurement in the automated production of tips for AFM microscopes. The main goal is to measure distances between assembly parts during the fabrication process, providing data for an automated assembly line that is intended to replace the inaccurate and poorly repeatable manual assembly process. The assembly process consists of three technological steps. In the first two steps a tungsten wire is glued to the cantilever; distance measurement is necessary in all axes for proper alignment of the parts. In the third step the sharp tip is etched in a KOH solution, and the correct distance between the liquid level and the cantilever must be maintained. A high-resolution camera with a macro lens is used to acquire the images. The camera image is calibrated to suppress distortions and to account for the scene position with respect to the camera; a length conversion coefficient is also computed. Object recognition and distance measurement are based on standard computer vision methods, mainly adaptive thresholding, moments, image statistics, the Canny edge detector, and the Hough transform. The proposed algorithms have been implemented in C++ using the Intel OpenCV library. The final achieved distance resolution is about 10 µm per pixel. The algorithm output was successfully used to assemble several test tips.
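For illustration, a minimal OpenCV sketch of the kind of pipeline named in the abstract (adaptive thresholding, Canny edges, Hough lines, pixel-to-micrometre conversion). The file name, the thresholds and the assumption that the measured distance lies between the first two detected line segments are not taken from the thesis:

```cpp
// Minimal sketch (not the thesis code): locating two straight edges with OpenCV
// and converting the pixel distance between them to micrometres.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    const double umPerPixel = 10.0;          // ~10 um per pixel, as reported in the abstract
    cv::Mat gray = cv::imread("assembly.png", cv::IMREAD_GRAYSCALE);  // assumed input
    if (gray.empty()) return 1;

    // Adaptive thresholding followed by Canny edge detection.
    cv::Mat bin, edges;
    cv::adaptiveThreshold(gray, bin, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C,
                          cv::THRESH_BINARY, 31, 5);
    cv::Canny(bin, edges, 50, 150);

    // Straight edges found with the probabilistic Hough transform.
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 40, 5);
    if (lines.size() < 2) return 1;

    // Distance between the midpoints of the first two detected segments, in micrometres.
    auto mid = [](const cv::Vec4i& l) {
        return cv::Point2d((l[0] + l[2]) / 2.0, (l[1] + l[3]) / 2.0);
    };
    cv::Point2d a = mid(lines[0]), b = mid(lines[1]);
    double distUm = std::hypot(a.x - b.x, a.y - b.y) * umPerPixel;
    std::cout << "distance: " << distUm << " um\n";
    return 0;
}
```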
112

Měření výšky postavy v obraze / Height Measurement in Digital Image

Olejár, Adam January 2015 (has links)
The aim of this thesis is to summarize the theory necessary for detecting a person in an image and calculating the detected person's height. This theory was then used to implement the algorithm. The first half covers the theoretical problems and their solutions: it presents the basic methods of image preprocessing, discusses the fundamental concepts of planar and projective geometry and their transformations, describes the distortions introduced into the image by imperfections of camera optical systems and the possibilities for removing them, and explains the HOG algorithm and the actual method of calculating the height of a person detected in the image. The second half describes the structure of the algorithm and its statistical evaluation.
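A minimal sketch of person detection with OpenCV's default HOG pedestrian detector, a standard implementation of the HOG algorithm mentioned above; the input file name is assumed, and the metric height computation from the bounding box (which needs the projective-geometry step) is not shown:

```cpp
// Minimal sketch (not the thesis implementation): detecting people with OpenCV's
// built-in HOG + linear SVM pedestrian detector.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat img = cv::imread("scene.png");   // assumed input image
    if (img.empty()) return 1;

    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> people;
    hog.detectMultiScale(img, people, 0, cv::Size(8, 8), cv::Size(32, 32), 1.05, 2);

    for (const cv::Rect& r : people) {
        // r.height is in pixels; converting it to metres requires camera calibration
        // and knowledge of the ground plane, as discussed in the abstract.
        std::cout << "person bounding box height: " << r.height << " px\n";
    }
    return 0;
}
```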
113

Dokumentace historických artefaktů s využitím blízké fotogrammetrie / Use of Close Range Photogrammetry for Documentation of Historical Artefacts.

Naništa, Jiří January 2013 (has links)
This diploma thesis deals with the design and implementation of a suitable close-range photogrammetry procedure for producing technical documentation of selected historical artifacts (gauges). During processing, the camera was calibrated, the historical gauges were photographed and metrologically documented, and a model visualization was created.
114

Určení pozice mobilního zařízení v prostoru / Localization of Mobile Device in Space

Komár, Michal January 2013 (has links)
This thesis focuses on the localization options currently available on the Android mobile phone platform. It explores the possibilities of locating a mobile device not only with inertial sensors but also with the integrated video camera. The work describes measurements made with the available inertial sensors, introduces a visual localization algorithm, and designs a system combining the two approaches.
115

Lokalizace objektů v prostoru / Object Localisation in 3D Space

Šolony, Marek Unknown Date (has links)
Virtual reality systems are nowadays a common part of many research institutes thanks to their low cost and effective visualization of data. They mostly allow visualization and exploration of virtual worlds, but many lack user interaction. In this work we propose a multi-camera optical system that allows effective user interaction and thereby increases the immersion of the virtual system. The thesis describes the calibration of multiple cameras using point correspondences.
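A minimal sketch of the classic two-view step behind calibration from point correspondences: estimating the essential matrix and recovering the relative camera pose with OpenCV. The intrinsics and the matched points are assumed inputs, not values from the thesis:

```cpp
// Minimal sketch (not the thesis code): relative pose of two cameras from point
// correspondences via the essential matrix, a standard building block of
// multi-camera calibration.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Assumed intrinsics (fx, fy, cx, cy) and matched image points.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320,
                                           0, 800, 240,
                                           0, 0, 1);
    std::vector<cv::Point2f> pts1, pts2;     // filled from feature matching in practice
    if (pts1.size() < 5) return 1;           // essential matrix estimation needs >= 5 points

    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0);
    cv::Mat R, t;
    cv::recoverPose(E, pts1, pts2, K, R, t); // rotation and unit-norm translation
    // R and t give the second camera's pose relative to the first, up to scale.
    return 0;
}
```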
116

Geometric model of a dual-fisheye system composed of hyper-hemispherical lenses

Castanheiro, Letícia Ferrari January 2020 (has links)
Advisor: Antonio Maria Garcia Tommaselli / Abstract: The arrangement of two hyper-hemispherical fisheye lenses in opposite positions can produce a lightweight, compact and low-cost omnidirectional system (360° FOV), e.g. the Ricoh Theta S and the GoPro Fusion. However, only a few techniques for calibrating a dual-fisheye system are presented in the literature. In this research, a geometric model for the calibration of omnidirectional systems composed of two hyper-hemispherical lenses is evaluated and defined, and some applications of this type of system are presented. The calibration bundle adjustment was performed in the CMC (calibration of multiple cameras) software using frames from Ricoh Theta S videos of a 360° calibration field. The Ricoh Theta S is composed of two hyper-hemispherical fisheye lenses covering 190° each. To evaluate the improvement gained by using points in the hyper-hemispherical image field, two point data sets were considered: (1) points only in the hemispherical field, and (2) points in the whole image field, i.e. also including points in the hyper-hemispherical part of the image. First, one sensor of the Ricoh Theta S was calibrated in a bundle adjustment based on the equidistant, equisolid-angle, stereographic and orthogonal projection models combined with the Conrady-Brown distortion model. The equisolid-angle and stereographic models provided better results than the other projection models. Therefore, these two projection models were implemented in a simultaneous camera calibration in which both Ricoh Theta S sensors were considered... (Complete abstract: click electronic access below) / Master's
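For reference, the four projection models compared in the abstract are the standard fisheye mappings between the incidence angle and the radial image distance (textbook formulations, not equations reproduced from the thesis):

```latex
% r: radial distance in the image, \theta: incidence angle, f: focal length
\begin{align*}
  r &= f\,\theta                                 && \text{(equidistant)}\\
  r &= 2f\,\sin\!\left(\tfrac{\theta}{2}\right)  && \text{(equisolid-angle)}\\
  r &= 2f\,\tan\!\left(\tfrac{\theta}{2}\right)  && \text{(stereographic)}\\
  r &= f\,\sin\theta                             && \text{(orthogonal)}
\end{align*}
```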
117

Calibrage de caméra fisheye et estimation de la profondeur pour la navigation autonome / Fisheye camera calibration and depth estimation for autonomous navigation

Brousseau, Pierre-André 08 1900 (has links)
This thesis addresses the problems of calibrating wide-angle cameras and of estimating depth from a single camera, either stationary or in motion. The work lies at the intersection of classical 3D vision and newer deep-learning methods in the field of autonomous navigation, and aims at enabling obstacle detection by a moving drone equipped with a single camera with a very wide field of view. First, a new calibration method is proposed for fisheye cameras with a very wide field of view, based on planar calibration with dense correspondences obtained by structured light; such cameras can be modelled by a set of central virtual generic cameras. We demonstrate that this approach allows axial cameras to be modelled directly, and validate it on synthetic and real data. Then, a method is proposed to estimate depth from a single image using only strong depth cues, namely T-junctions. We demonstrate that deep-learning methods tend to learn the biases of their data sets and show shortcomings in invariance. Finally, we propose a method to estimate depth from a camera in free motion with six degrees of freedom, which involves calibrating the fisheye camera mounted on the drone, visual odometry, and depth recovery. The proposed methods enable obstacle detection for a drone.
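As a point of comparison only, a minimal sketch of a conventional checkerboard fisheye calibration with OpenCV's cv::fisheye model; this is not the generic-camera, structured-light method proposed in the thesis, and the board size, square size, file names and flags are assumptions:

```cpp
// Minimal sketch (assumptions, not the thesis method): standard fisheye calibration
// with OpenCV's cv::fisheye model from checkerboard views.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    const cv::Size board(9, 6);               // assumed inner-corner count
    const float square = 0.025f;              // assumed square size in metres
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    // One reference 3D corner grid, reused for every view.
    std::vector<cv::Point3f> grid;
    for (int y = 0; y < board.height; ++y)
        for (int x = 0; x < board.width; ++x)
            grid.emplace_back(x * square, y * square, 0.0f);

    for (int i = 0; i < 20; ++i) {             // assumed 20 checkerboard images
        cv::Mat img = cv::imread(cv::format("calib_%02d.png", i), cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imageSize = img.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, board, corners)) {
            imagePoints.push_back(corners);
            objectPoints.push_back(grid);
        }
    }
    if (imagePoints.size() < 3) return 1;

    cv::Mat K, D;                              // intrinsics and fisheye distortion (k1..k4)
    std::vector<cv::Mat> rvecs, tvecs;
    double rms = cv::fisheye::calibrate(objectPoints, imagePoints, imageSize,
                                        K, D, rvecs, tvecs,
                                        cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC);
    (void)rms;                                 // RMS reprojection error in pixels
    return 0;
}
```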
118

Rekonstrukce 3D objektu z obrazových dat / 3D Objects Reconstruction from Image Data

Cír, Filip January 2008 (has links)
This thesis deals with the 3D reconstruction of objects from image data. It describes the theoretical basis of 3D optical scanning and presents a handheld 3D optical scanner composed of a single camera and a line laser whose position is fixed with respect to the camera. A set of image markers and a simple real-time detection algorithm are proposed; the detected markers are used to estimate the position and orientation of the camera. Finally, laser detection and the triangulation of points lying on the object surface are discussed.
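A minimal sketch of the laser-triangulation step described above: per-column detection of the laser stripe and intersection of the viewing ray with the known laser plane. The intrinsics, plane parameters, file name and intensity threshold are assumptions, not values from the thesis:

```cpp
// Minimal sketch (assumptions, not the thesis code): triangulating points on the
// object surface by intersecting camera rays with the laser plane n . X = d,
// expressed in camera coordinates and established beforehand.
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main() {
    // Assumed calibration: pinhole intrinsics and laser plane in the camera frame.
    const double fx = 800, fy = 800, cx = 320, cy = 240;
    const cv::Vec3d n(0.0, -0.707, 0.707);    // laser plane normal (assumed)
    const double d = 0.12;                     // plane offset in metres (assumed)

    cv::Mat gray = cv::imread("scan_frame.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    std::vector<cv::Point3d> surface;
    for (int x = 0; x < gray.cols; ++x) {
        // Brightest pixel in the column is taken as the laser stripe.
        int bestY = -1, bestVal = 0;
        for (int y = 0; y < gray.rows; ++y) {
            int v = gray.at<uchar>(y, x);
            if (v > bestVal) { bestVal = v; bestY = y; }
        }
        if (bestVal < 128) continue;           // assumed intensity threshold

        // Viewing ray through the pixel, then ray-plane intersection.
        cv::Vec3d ray((x - cx) / fx, (bestY - cy) / fy, 1.0);
        double denom = n.dot(ray);
        if (std::abs(denom) < 1e-9) continue;
        double t = d / denom;
        surface.emplace_back(t * ray[0], t * ray[1], t * ray[2]);
    }
    // 'surface' now holds 3D points on the object surface in camera coordinates.
    return 0;
}
```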
119

Kalibrace snímačů pro multispektrální datovou fúzi v mobilní robotice / Sensor Calibration for Multispectral Data Fusion in Mobile Robotics

Kalvodová, Petra January 2015 (has links)
This thesis deals with data fusion and the calibration of the sensory systems of the Orpheus-X3 robot and the EnvMap mapping robot. These robots are part of the Cassandra robotic system, which is used for the exploration of hazardous or inaccessible areas. Corrections of measured distances are determined for the Velodyne HDL-64 and Velodyne HDL-32 laser scanners and the SwissRanger SR4000 range camera. The MultiSensCalib software, created as part of this work, is described; it is used to determine the intrinsic parameters of the heterogeneous cameras of the sensory head and the mutual position and orientation of these sensors. An algorithm for the data fusion of a CCD camera stereo pair, a thermal imager stereo pair, and the range camera is proposed. The achieved calibration and data-fusion parameters are evaluated in several experiments.
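A minimal sketch of one standard way to obtain the mutual position and orientation of two pre-calibrated sensors, using OpenCV's cv::stereoCalibrate; this is not the MultiSensCalib implementation, and the image size, intrinsics and corner lists are assumed inputs:

```cpp
// Minimal sketch (assumptions, not the MultiSensCalib code): extrinsic calibration
// of two sensors from chessboard corners observed by both in the same views.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Assumed to be filled beforehand: one 3D corner grid per view plus the
    // corresponding detections in camera 1 (e.g. CCD) and camera 2 (e.g. thermal).
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints1, imagePoints2;
    cv::Size imageSize(1280, 960);

    // Intrinsics of both sensors, assumed known from a previous calibration step.
    cv::Mat K1 = cv::Mat::eye(3, 3, CV_64F), D1 = cv::Mat::zeros(5, 1, CV_64F);
    cv::Mat K2 = cv::Mat::eye(3, 3, CV_64F), D2 = cv::Mat::zeros(5, 1, CV_64F);

    if (objectPoints.empty()) return 1;        // nothing to calibrate in this sketch

    cv::Mat R, T, E, F;                        // rotation, translation, essential, fundamental
    double rms = cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                     K1, D1, K2, D2, imageSize,
                                     R, T, E, F, cv::CALIB_FIX_INTRINSIC);
    (void)rms;                                 // R, T map camera-1 coordinates to camera 2
    return 0;
}
```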
120

Calibration de systèmes de caméras et projecteurs dans des applications de création multimédia / Calibration of camera and projector systems in multimedia creation applications

Bélanger, Lucie 12 1900 (has links)
This thesis focuses on computer vision applications for technological art projects. Camera and projector calibration is discussed in the context of tracking and 3D reconstruction applications in the visual and performing arts. The thesis is based on two collaborations with the Québécois artists Daniel Danis and Nicolas Reeves. Projective geometry and classical camera calibration techniques, such as planar calibration and calibration from epipolar geometry, are presented to introduce the techniques implemented in both artistic projects. The project realized in collaboration with Nicolas Reeves consists of calibrating a pan-tilt camera-projector system in order to adapt videos projected in real time onto mobile cubic screens. To carry out the project, we combined classical camera calibration techniques with our proposed camera pose calibration technique for pan-tilt systems. This technique uses elliptic planes, generated by observing a single point in the scene while the camera pans, to compute the camera pose with respect to the rotation centre of the pan-tilt system. The project developed in collaboration with Daniel Danis is based on multi-camera calibration: for this studio theatre project, we developed a calibration algorithm for a network of wiimote cameras. The technique, based on epipolar geometry, allows 3D reconstruction of a trajectory in a large environment at low cost. The results obtained from the implemented calibration techniques are presented alongside their application in real public performance contexts.
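A minimal sketch of the two-view triangulation that underlies epipolar-geometry-based trajectory reconstruction, using OpenCV; the intrinsics, relative pose and image observations are assumed values, not data from the project:

```cpp
// Minimal sketch (assumptions, not the project code): reconstructing one 3D point of a
// tracked trajectory from two calibrated views with cv::triangulatePoints.
#include <opencv2/opencv.hpp>

int main() {
    // Assumed intrinsics shared by both cameras and the relative pose [R|t] of camera 2.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 1000, 0, 512, 0, 1000, 384, 0, 0, 1);
    cv::Mat Rt1 = cv::Mat::eye(3, 4, CV_64F);             // camera 1 at the origin
    cv::Mat Rt2 = (cv::Mat_<double>(3, 4) << 1, 0, 0, -0.5,   // camera 2 shifted 0.5 m sideways
                                             0, 1, 0, 0,
                                             0, 0, 1, 0);
    cv::Mat P1 = K * Rt1, P2 = K * Rt2;                    // 3x4 projection matrices

    // One matched observation of the tracked marker in each view (assumed values).
    cv::Mat x1 = (cv::Mat_<double>(2, 1) << 600.0, 350.0);
    cv::Mat x2 = (cv::Mat_<double>(2, 1) << 420.0, 350.0);

    cv::Mat X;                                             // 4x1 homogeneous result
    cv::triangulatePoints(P1, P2, x1, x2, X);
    double w = X.at<double>(3, 0);
    cv::Point3d p(X.at<double>(0, 0) / w, X.at<double>(1, 0) / w, X.at<double>(2, 0) / w);
    // p is the 3D position of the marker for this frame of the trajectory.
    return 0;
}
```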
