  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
501

Fusão de informações obtidas a partir de múltiplas imagens visando à navegação autônoma de veículos inteligentes em ambiente agrícola / Data fusion obtained from multiple images aiming the navigation of autonomous intelligent vehicles in agricultural environment

Vítor Manha Utino 08 April 2015 (has links)
This work presents a support system for the autonomous navigation of ground vehicles, focused on structured environments in an agricultural scenario. Obstacle position estimates are generated by fusing the detections produced from the data of two cameras, one stereo and one thermal. Three obstacle detection modules were developed. The first module uses monocular images from the stereo camera to detect novelties in the environment by comparing the current state with the previous state. The second module uses the Stixel technique to delimit the obstacles above the ground plane. Finally, the third module uses the thermal images to find signatures that reveal the presence of obstacles. The detection modules are fused using Dempster-Shafer theory, which provides an estimate of the presence of obstacles in the environment. The experiments were carried out in a real agricultural environment. The system was validated in well-lit scenarios with uneven terrain and diverse obstacles. It showed satisfactory performance considering that the approach relies on only three detection modules whose methods do not prioritize the confirmation of obstacles but rather the search for new ones. This dissertation presents the main components of an obstacle detection system and the steps required to design it, as well as results of experiments using a real vehicle.
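To illustrate the fusion step described in this abstract, below is a minimal sketch of Dempster's rule of combination over a two-element frame {obstacle, free}. The mass values and module names are hypothetical placeholders; this is not code from the thesis.

```python
# Minimal sketch of Dempster's rule of combination for fusing obstacle
# evidence from two independent detection modules (hypothetical masses).
from itertools import product

def combine(m1, m2):
    """Combine two mass functions defined over frozensets of {'obstacle', 'free'}."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to contradictory hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

OBST, FREE = frozenset({'obstacle'}), frozenset({'free'})
THETA = OBST | FREE                      # full frame: total ignorance

# Hypothetical masses from two detection modules for one map cell.
m_stereo  = {OBST: 0.6, FREE: 0.1, THETA: 0.3}
m_thermal = {OBST: 0.5, FREE: 0.2, THETA: 0.3}

fused = combine(m_stereo, m_thermal)
print(fused)   # fused belief masses; support for 'obstacle' increases
```

A third module would be fused the same way, by applying `combine` again to the result, since Dempster's rule is associative.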
502

3D Printing for Computer Graphics Industry

Granath, Victor January 2011 (has links)
Rapid prototyping is a relatively new technology based on layered manufacturing, which works in a way similar to an ordinary desktop paper printer. This research aims to obtain a better understanding of how to use computer graphics software, in this particular case Autodesk Maya, to create a model. The goal is to understand how to create a mesh of a 3D model suitable for use with a 3D printer and to produce a printed model that is equivalent to the CAD software's 3D model. This specific topic has not previously been scientifically documented; the work has resulted in an actual printed 3D model.
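As a hedged illustration of what "a suitable mesh for a 3D printer" means in practice, the sketch below checks and repairs a mesh with the open-source trimesh library before exporting it to STL. The thesis itself works in Autodesk Maya; trimesh and the file names here are assumptions for illustration only.

```python
# Sketch: checking and repairing a mesh exported from a modelling package
# before 3D printing. Uses the trimesh library; 'model.obj' is a placeholder.
import trimesh

mesh = trimesh.load('model.obj', force='mesh')

# A printable mesh should be a closed (watertight) surface with consistent normals.
if not mesh.is_watertight:
    trimesh.repair.fill_holes(mesh)      # try to close small gaps in the surface
    trimesh.repair.fix_normals(mesh)     # make face winding/normals consistent

print('watertight:', mesh.is_watertight)
print('volume:', mesh.volume if mesh.is_watertight else 'undefined')

mesh.export('model_printable.stl')       # STL is accepted by most slicers
```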
503

Evaluating Vivado High-Level Synthesis on OpenCV Functions for the Zynq-7000 FPGA

Johansson, Henrik January 2015 (has links)
More complex and intricate Computer Vision algorithms combined with higher-resolution image streams put bigger and bigger demands on processing power. CPU clock frequencies are now pushing the limits of possible speeds and have instead started growing in number of cores. The performance of most Computer Vision algorithms responds well to parallel solutions. Dividing the algorithm over 4-8 CPU cores can give a good speed-up, but using chips with Programmable Logic (PL) such as FPGAs can give even more. An interesting recent addition to the FPGA family is a System on Chip (SoC) that combines a CPU and an FPGA in one chip, such as the Zynq-7000 series from Xilinx. This tight integration between the Programmable Logic and the Processing System (PS) opens up designs where C programs can use the programmable logic to accelerate selected parts of the algorithm while still behaving like a C program. On that subject, Xilinx has introduced a new High-Level Synthesis Tool (HLST) called Vivado HLS, which can accelerate C code by synthesizing it to Hardware Description Language (HDL) code. This potentially bridges two otherwise very separate worlds: the ever-popular OpenCV library and FPGAs. This thesis focuses on evaluating Vivado HLS from Xilinx, primarily with image processing in mind, for potential use on GIMME-2: a system with a Zynq-7020 SoC and two high-resolution image sensors, tailored for stereo vision.
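For context on what gets offloaded in such a flow, here is a small sketch (in Python rather than the HLS C/C++ evaluated in the thesis) of the kind of OpenCV kernel one might time on the CPU as a baseline before deciding which stages to synthesize into the programmable logic. The file name and kernel choice are illustrative assumptions, not taken from the thesis.

```python
# Sketch: timing a simple OpenCV image-processing kernel as a CPU baseline.
# In an HLS flow, such a stage would be rewritten in C/C++ and synthesized to
# the Zynq programmable logic; here we only measure the software cost.
import time
import cv2

img = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)   # placeholder input

t0 = time.perf_counter()
for _ in range(100):
    blurred = cv2.GaussianBlur(img, (5, 5), 0)
    edges = cv2.Sobel(blurred, cv2.CV_16S, 1, 0, ksize=3)
t1 = time.perf_counter()

print(f'mean kernel time: {(t1 - t0) / 100 * 1e3:.2f} ms')
```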
504

Recalage hétérogène pour la reconstruction 3D de scènes sous-marines / Heterogeneous Registration for 3D Reconstruction of Underwater Scene

Mahiddine, Amine 30 June 2015 (has links)
The survey and 3D reconstruction of underwater scenes are becoming ever more indispensable, given our growing interest in studying the seabed. Most existing work in this area is based on acoustic sensors, with images often serving only as illustration. The objective of this thesis is to develop techniques for fusing heterogeneous data from a photogrammetric system and an acoustic system. The work presented in this manuscript is organized in three parts. The first is devoted to the processing of 2D data to improve the colors of underwater images, in order to increase the repeatability of the feature descriptors at each 2D point. We then propose a system for creating mosaics in order to visualize the scene in 2D. In the second part, a 3D reconstruction method from an unordered set of several images is proposed. The computed 3D data are then merged with data from the acoustic system in order to reconstruct the underwater site. In the last part of this thesis, we propose an original 3D registration method that is distinguished by the nature of the descriptor extracted at each point. The proposed descriptor is invariant to isometric transformations (rotation, translation) and addresses the multi-resolution problem. We validate our approach with a study on synthetic and real data, where we show the limits of the registration methods existing in the literature. Finally, we propose an application of our method to the recognition of 3D objects.
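For the color-correction step mentioned in the first part, the sketch below shows one standard baseline, a gray-world white balance in NumPy/OpenCV. It is an assumption for illustration, not the specific enhancement method developed in the thesis, and the file names are placeholders.

```python
# Sketch: gray-world white balance, a common baseline for reducing the strong
# blue/green cast of underwater images before feature extraction.
import cv2
import numpy as np

img = cv2.imread('underwater.png').astype(np.float32)   # placeholder input

means = img.reshape(-1, 3).mean(axis=0)                  # per-channel mean (B, G, R)
gray = means.mean()                                      # target gray level
balanced = img * (gray / means)                          # rescale each channel
balanced = np.clip(balanced, 0, 255).astype(np.uint8)

cv2.imwrite('underwater_balanced.png', balanced)
```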
505

Estimation de cartes de profondeur à partir d’images stéréo et morphologie mathématique / Depth map estimation from stereo images and mathematical morphology

Bricola, Jean-Charles 19 October 2016 (has links)
In this thesis, we introduce new approaches dedicated to the computation of depth maps associated with a pair of stereo images. The main difficulty of this problem resides in the establishment of correspondences between the two stereoscopic images. Indeed, it is difficult to ascertain the relevance of matches occurring in homogeneous areas, whilst matches are infeasible for pixels occluded in one of the stereo views. In order to handle these two problems, our methods proceed in two steps. First, we search for reliable depth measures by comparing the two images of the stereo pair with the help of their associated segmentations. The analysis of image superimposition costs, on a regional basis and across multiple scales, allows us to perform relevant cost aggregations, from which we deduce accurate disparity measures. Furthermore, this analysis facilitates the detection of the reference-image areas that are potentially occluded in the other image of the stereo pair. Second, an estimation mechanism finds the most plausible depth values where no correspondence could be established. The manuscript is divided into two parts: the first allows the reader to become familiar with the problems and issues frequently encountered when analysing stereo images, and also provides a brief introduction to morphological image processing. In the second part, our operators for computing depth maps are introduced, detailed and evaluated.
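As a reference point for the matching problem described above, the following sketch computes a baseline disparity map on a rectified stereo pair with OpenCV's semi-global block matcher. The parameter values and file names are illustrative assumptions; this is the standard OpenCV baseline, not the regional, morphology-based cost aggregation proposed in the thesis.

```python
# Sketch: baseline disparity map on a rectified stereo pair using OpenCV's
# semi-global block matching (SGBM).
import cv2
import numpy as np

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)    # placeholder inputs,
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)  # assumed already rectified

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be a multiple of 16
    blockSize=5,
    P1=8 * 5 * 5,                # smoothness penalty for small disparity jumps
    P2=32 * 5 * 5,               # smoothness penalty for large disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# SGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0
# Depth follows once calibration is known: Z = f * B / d.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('disparity.png', vis)
```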
506

3D rekonstrukce z více pohledů kamer / 3D reconstruction from multiple views

Sládeček, Martin January 2019 (has links)
This thesis deals with the task of three-dimensional scene reconstruction using image data obtained from multiple views. It is assumed that the intrinsic parameters of the utilized cameras are known. The theoretical chapters describe the basic principles of the individual reconstruction steps. Various possible implementations of a data model suitable for this task are also described. The practical part includes a comparison of methods for filtering false keypoint correspondences, an implementation of polar stereo rectification, and a comparison of the disparity map calculation methods that are bundled with the OpenCV library. In the final portion of the thesis, examples of reconstructed 3D models are presented and discussed.
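For the false-correspondence filtering step mentioned above, a common baseline is Lowe's ratio test followed by RANSAC on the fundamental matrix. The sketch below illustrates that baseline with OpenCV; the detector choice, thresholds, and file names are assumptions, not necessarily the configurations compared in the thesis.

```python
# Sketch: filtering false keypoint correspondences with the ratio test and a
# RANSAC-estimated fundamental matrix (a common baseline).
import cv2
import numpy as np

img1 = cv2.imread('view1.png', cv2.IMREAD_GRAYSCALE)   # placeholder inputs
img2 = cv2.imread('view2.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep matches clearly better than the second-best candidate.
good = [pair[0] for pair in knn
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# The epipolar constraint, estimated robustly with RANSAC, removes remaining outliers.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
print(f'{len(inliers)} inlier matches out of {len(good)} ratio-test survivors')
```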
507

Zpracování obrazu pro golfový simulátor / Image Processing for Golf Simulator

Heřman, Zdeněk January 2016 (has links)
This thesis describes the design and practical realization of a golf simulator. It includes the specification of the hardware necessary for such a simulator and the implementation of the detection of a golf swing and ball flight. The simulator has to fulfill several conditions that were stated at the beginning of the design. One of the most important conditions was a low purchase price, and therefore the simulator is based on common PlayStation 3 Eye USB cameras. The main goal was to create a user-friendly simulator appropriate for both indoor and outdoor conditions. The final solution was compared with the Full Swing simulator. The accuracy of our simulator was far better than the compared one for the short game and putting, thanks to scanning the ball right after the start of its flight. Putting accuracy was within: ball speed +/- 0.2 m/s, launch angle +/- 1 degree, and flight angle +/- 0.8 degrees.
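To make the ball-detection part concrete, here is a sketch of one simple way to localize a ball in a single camera frame using OpenCV's Hough circle transform. The abstract does not state that this is the simulator's actual method, and the parameters and file name are illustrative assumptions.

```python
# Sketch: locating a golf ball in a single camera frame with the Hough circle
# transform (an illustrative baseline, not necessarily the simulator's method).
import cv2
import numpy as np

frame = cv2.imread('frame.png')                          # placeholder input
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                           # suppress sensor noise

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=120, param2=30, minRadius=5, maxRadius=40)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)        # strongest candidate
    cv2.circle(frame, (x, y), r, (0, 255, 0), 2)
    print(f'ball candidate at ({x}, {y}), radius {r} px')

cv2.imwrite('detection.png', frame)
```

Tracking the detected ball position across consecutive frames, together with the camera geometry and frame rate, is what would then yield the speed and angle figures reported in the abstract.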
508

Building Information Extraction and Refinement from VHR Satellite Imagery using Deep Learning Techniques

Bittner, Ksenia 26 March 2020 (has links)
Building information extraction and reconstruction from satellite images is an essential task for many applications related to 3D city modeling, planning, disaster management, navigation, and decision-making. Building information can be obtained and interpreted from several data sources, such as terrestrial measurements, airplane surveys, and space-borne imagery. However, the latter acquisition method outperforms the others in terms of cost and worldwide coverage: space-borne platforms can provide imagery of remote places, which are inaccessible to other missions, at any time. Because the manual interpretation of high-resolution satellite images is tedious and time-consuming, their automatic analysis continues to be an intense field of research. At times, however, it is difficult to understand complex scenes with dense placement of buildings, where parts of buildings may be occluded by vegetation or other surrounding constructions, making their extraction or reconstruction even more difficult. Incorporating several data sources representing different modalities may ease the problem. The goal of this dissertation is to integrate multiple high-resolution remote sensing data sources for automatic satellite imagery interpretation, with emphasis on building information extraction and refinement; the associated challenges are addressed in the following: Building footprint extraction from Very High-Resolution (VHR) satellite images is an important but highly challenging task, due to the large diversity of building appearances and the relatively low spatial resolution of satellite data compared to airborne data. Many algorithms are built on spectral-based or appearance-based criteria from single or fused data sources to perform the building footprint extraction. The input features for these algorithms are usually manually extracted, which limits their accuracy. Based on the advantages of recently developed Fully Convolutional Networks (FCNs), i.e., the automatic extraction of relevant features and dense classification of images, an end-to-end framework is proposed which effectively combines the spectral and height information from red, green, and blue (RGB), pan-chromatic (PAN), and normalized Digital Surface Model (nDSM) image data and automatically generates a full-resolution binary building mask. The proposed architecture consists of three parallel networks merged at a late stage, which helps in propagating fine, detailed information from earlier layers to higher levels, in order to produce an output with high-quality building outlines. The performance of the model is examined on new unseen data to demonstrate its generalization capacity. The availability of detailed Digital Surface Models (DSMs) generated by dense matching and representing the elevation surface of the Earth can improve the analysis and interpretation of complex urban scenarios. The generation of DSMs from VHR optical stereo satellite imagery leads to high-resolution DSMs which often suffer from mismatches, missing values, or blunders, resulting in coarse building shape representation. To overcome these problems, a methodology based on a conditional Generative Adversarial Network (cGAN) is developed for generating a good-quality, Level of Detail (LoD) 2-like DSM with enhanced 3D object shapes directly from the low-quality photogrammetric half-meter resolution satellite DSM input.
Various deep learning applications benefit from multi-task learning with multiple regression and classification objectives by taking advantage of the similarities between individual tasks. Therefore, this work examines such influences for important remote sensing applications such as realistic elevation model generation and roof type classification from stereo half-meter resolution satellite DSMs. Recently published deep learning architectures for both tasks are investigated, and a new end-to-end cGAN-based network is developed which combines different models that provide the best results for their individual tasks. To benefit from information provided by multiple data sources, a different cGAN-based work-flow is proposed where the generative part consists of two encoders and a common decoder which blends the intensity and height information within one network for the DSM refinement task. The inputs to the introduced network are single-channel photogrammetric DSMs with continuous values and pan-chromatic half-meter resolution satellite images. Information fusion from different modalities helps in propagating fine details, completes inaccurate or missing 3D information about building forms, and improves the building boundaries, making them more rectilinear. Lastly, an additional comparison between the proposed methodologies for DSM enhancement is made to discuss and verify the most beneficial work-flow and the applicability of the resulting DSMs for different remote sensing approaches.
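To make the late-fusion idea in the building-mask part concrete, below is a heavily simplified PyTorch sketch of three parallel encoders (RGB, PAN, nDSM) merged at a late stage into a per-pixel building mask. The layer sizes and structure are arbitrary assumptions for illustration; this is not the architecture evaluated in the dissertation.

```python
# Heavily simplified sketch of late fusion of RGB, PAN and nDSM inputs into a
# binary building mask. Layer sizes are arbitrary; not the dissertation's network.
import torch
import torch.nn as nn

def encoder(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
    )

class LateFusionFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_branch = encoder(3)   # red, green, blue
        self.pan_branch = encoder(1)   # pan-chromatic intensity
        self.dsm_branch = encoder(1)   # normalized DSM (height)
        self.head = nn.Sequential(     # the three streams are merged at a late stage
            nn.Conv2d(96, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),       # per-pixel building logit
        )

    def forward(self, rgb, pan, ndsm):
        feats = torch.cat([self.rgb_branch(rgb),
                           self.pan_branch(pan),
                           self.dsm_branch(ndsm)], dim=1)
        return torch.sigmoid(self.head(feats))   # full-resolution building mask

net = LateFusionFCN()
mask = net(torch.rand(1, 3, 128, 128), torch.rand(1, 1, 128, 128),
           torch.rand(1, 1, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```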
509

3D Building Model Reconstruction from Very High Resolution Satellite Stereo Imagery

Partovi, Tahmineh 02 October 2019 (has links)
Automatic three-dimensional (3D) building model reconstruction using remote sensing data is crucial in applications which require large-scale and frequent building model updates, such as disaster monitoring and urban management, to avoid huge manual efforts and costs. Recent advances in the availability of very high-resolution satellite data, together with efficient data acquisition and large area coverage, have led to an upward trend in their applications for 3D building model reconstruction. In this dissertation, a novel multistage hybrid automatic 3D building model reconstruction approach is proposed which reconstructs building models in Level of Detail 2 (LOD2) based on digital surface model (DSM) data generated from the very high-resolution stereo imagery of the WorldView-2 satellite. This approach uses DSM data in combination with orthorectified panchromatic (PAN) and pan-sharpened data of multispectral satellite imagery to overcome the drawbacks of DSM data, such as blurred building boundaries, rough building shapes, and unwanted failures in the roof geometries. In the first stage, the rough building boundaries in the DSM-based building masks are refined by classifying the geometrical features of the corresponding PAN images. The refined boundaries are then simplified in the second stage through a parameterization procedure which represents the boundaries by a set of line segments. The main orientations of buildings are then determined, and the line segments are regularized accordingly. The regularized line segments are then connected to each other based on a rule-based method to form polygonal building boundaries. In the third stage, a novel technique is proposed to decompose the building polygons into a number of rectangles, under the assumption that buildings are usually composed of rectangular structures. In the fourth stage, a roof model library is defined, which includes flat, gable, half-hip, hip, pyramid and mansard roofs. These primitive roof types are then assigned to the rectangles based on a deep learning-based classification method. In the fifth stage, a novel approach is developed to reconstruct watertight parameterized 3D building models based on the results of the previous stages and the normalized DSM (nDSM) of the satellite imagery. In the final stage, a novel approach is proposed to optimize the building parameters based on an exhaustive search, so that the two-dimensional (2D) distance between the 3D building models and the building boundaries (obtained from building masks and the PAN image) as well as the 3D normal distance between the 3D building models and the 3D point clouds (obtained from the nDSM) are minimized. Different parts of the building blocks are then merged through a newly proposed intersection and merging process. All corresponding experiments were conducted on four areas of the city of Munich, including 208 buildings, and the results were evaluated qualitatively and quantitatively. According to the results, the proposed approach could accurately reconstruct 3D models of buildings, even complex ones with several inner yards and multiple orientations. Furthermore, the proposed approach provided a high level of automation through the limited number of primitive roof model types required and by performing automatic parameter initialization. In addition, the proposed boundary refinement method improved the area accuracy of the DSM-based building masks by 8 %. Furthermore, the ridge line directions and roof types were detected accurately for most of the buildings.
The combination of the first three stages improved the accuracy of the building boundaries by 70 % in comparison to using line segments extracted from building masks without refinement. Moreover, the proposed optimization approach could achieve in most cases the best combinations of 2D and 3D geometrical parameters of roof models. Finally, the intersection and merging process could successfully merge different parts of the complex building models.
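As a small illustration of the boundary simplification and orientation-regularization idea in the second stage, the toy sketch below simplifies a rasterized building outline with the Douglas-Peucker algorithm and reads the dominant orientation off the minimum-area bounding rectangle. The thresholds and file name are assumptions; this is not the rule-based regularization procedure of the dissertation.

```python
# Toy sketch of the boundary simplification step: simplify a building mask's
# ragged outline with Douglas-Peucker and estimate the dominant orientation
# from the minimum-area bounding rectangle (OpenCV 4 API).
import cv2

mask = cv2.imread('building_mask.png', cv2.IMREAD_GRAYSCALE)  # placeholder input
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outline = max(contours, key=cv2.contourArea)                  # largest building blob

# Simplify the ragged DSM-derived boundary into a small set of line segments.
epsilon = 0.01 * cv2.arcLength(outline, True)
polygon = cv2.approxPolyDP(outline, epsilon, True)

# Main building orientation from the minimum-area rectangle; a regularization
# step would snap segment directions to this angle and its perpendicular.
rect = cv2.minAreaRect(outline)
angle = rect[-1]
print(f'{len(polygon)} boundary vertices, dominant orientation {angle:.1f} deg')
```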
510

Design of a Novel Wearable Ultrasound Vest for Autonomous Monitoring of the Heart Using Machine Learning

Goodman, Garrett G. January 2020 (has links)
No description available.
