221

Cellular GPU Models to Euclidean Optimization Problems : Applications from Stereo Matching to Structured Adaptive Meshing and Traveling Salesman Problem

ZHANG, Naiyu 02 December 2013 (has links) (PDF)
This PhD thesis studies and proposes cellular parallel computation models able to address different types of NP-hard optimization problems defined in the Euclidean space, together with their implementation on the graphics processing unit (GPU) platform. The goal is both to deal with large-size problems and to provide substantial acceleration factors through massive parallelism. The fields of application are vehicle-embedded systems for stereovision, as well as transportation problems in the plane such as vehicle routing. The main characteristic of the cellular model is that it decomposes the plane into an appropriate number of cellular units, each responsible for a constant part of the input data, such that each cell corresponds to a single processing unit. Hence, the number of processing units and the required memory grow linearly with the problem size, which enables the model to handle very large instances. The effectiveness of the proposed cellular models has been tested on the GPU parallel platform on four applications. The first application is a stereo-matching problem in color stereovision: the input is a stereo image pair, and the output a disparity map that represents depths in the 3D scene. The goal is to implement and compare GPU/CPU winner-takes-all local dense stereo-matching methods dealing with CFA (color filter array) image pairs. The second application focuses on GPU improvements able to reach near real-time stereo-matching computation. The third and fourth applications deal with a cellular GPU implementation of the self-organizing map neural network in the plane. The third application concerns structured mesh generation according to the disparity map, to allow a compressed representation of the 3D surface.
The fourth application addresses large-size Euclidean traveling salesman problems (TSP) with up to 33,708 cities. In all applications, the GPU implementations yield substantial acceleration factors over the CPU versions as the problem size increases, for similar or higher quality results. The GPU speedup over the CPU is about 20 times for the CFA image pairs, with a GPU computation time of about 0.2 s for a small image pair from the Middlebury database. The near real-time stereovision algorithm takes about 0.017 s for a small image pair, one of the fastest entries in the Middlebury benchmark at moderate quality. The structured mesh generation is evaluated on the Middlebury data set to gauge the GPU acceleration factor and the quality obtained. On the largest TSP instance, with 33,708 cities, the GPU parallel self-organizing map is 30 times faster than the CPU version.
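For context, the self-organizing map applied to the TSP can be sketched serially: neurons form a ring in the plane, and each presented city pulls its winning neuron and that neuron's ring neighbours toward it. This is a simplified sequential illustration, not the thesis's cellular GPU implementation; the ring size, learning rate and decay schedule below are illustrative assumptions.

```python
import numpy as np

def som_tsp_step(ring, city, lr, radius):
    """One SOM update: the neuron closest to the presented city
    (winner-takes-all) and its ring neighbours move toward it."""
    d = np.linalg.norm(ring - city, axis=1)
    w = int(np.argmin(d))                       # winner index
    n = len(ring)
    idx = np.arange(n)
    # Circular topological distance along the ring of neurons.
    topo = np.minimum(np.abs(idx - w), n - np.abs(idx - w))
    # Gaussian neighbourhood: nearby neurons move more.
    h = np.exp(-(topo.astype(float) ** 2) / (2.0 * radius ** 2))
    return ring + lr * h[:, None] * (city - ring)

# Tiny illustration: 8 random cities, a ring of 16 neurons.
rng = np.random.default_rng(0)
cities = rng.random((8, 2))
ring = rng.random((16, 2))
for epoch in range(300):
    lr = 0.8 * 0.99 ** epoch                    # decaying learning rate
    radius = max(4.0 * 0.97 ** epoch, 0.4)      # shrinking neighbourhood
    for c in rng.permutation(len(cities)):
        ring = som_tsp_step(ring, cities[c], lr, radius)

# Read off a tour: order cities by their winner's position on the ring.
tour = np.argsort([int(np.argmin(np.linalg.norm(ring - c, axis=1)))
                   for c in cities])
```

After training, the ring has stretched into a closed curve passing near every city, which is what makes the neuron order usable as a tour.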
222

Linear, Discrete, and Quadratic Constraints in Single-image 3D Reconstruction

Ecker, Ady 14 February 2011 (has links)
In this thesis, we investigate the formulation, optimization and ambiguities of single-image 3D surface reconstruction from geometric and photometric constraints. We examine linear, discrete and quadratic constraints for shape from planar curves, shape from texture, and shape from shading. The problem of recovering 3D shape from the projection of planar curves on a surface is strongly motivated by perception studies. Applications include single-view modeling and uncalibrated structured light. When the curves intersect, the problem leads to a linear system for which a direct least-squares method is sensitive to noise. We derive a more stable solution and show examples where the same method produces plausible surfaces from the projection of parallel (non-intersecting) planar cross-sections. The problem of reconstructing a smooth surface under constraints that have discrete ambiguities arises in areas such as shape from texture, shape from shading, photometric stereo and shape from defocus. While the problem is computationally hard, heuristics based on semidefinite programming may reveal the shape of the surface. Finally, we examine the shape from shading problem without boundary conditions as a polynomial system. This formulation allows, in generic cases, a complete solution for ideal polyhedral objects. For the general case we propose a semidefinite programming relaxation procedure, and an exact line-search iterative procedure with a new smoothness term that favors folds at edges. We use this numerical technique to inspect shading ambiguities.
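The abstract notes that a direct least-squares solve is sensitive to noise. One standard way to stabilize such a system, sketched below on a deliberately ill-conditioned toy problem, is Tikhonov (ridge) regularization; this is a generic illustration of the failure mode and a common remedy, not necessarily the stabilization derived in the thesis.

```python
import numpy as np

def ridge_lstsq(A, b, lam=1e-3):
    """Tikhonov-regularized least squares: minimize
    ||A z - b||^2 + lam * ||z||^2, damping the small singular
    values that amplify noise in a plain least-squares solve."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy system with nearly collinear columns plus measurement noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
A = np.column_stack([x, x + 1e-6 * rng.standard_normal(50)])
b = A @ np.array([1.0, 2.0]) + 1e-3 * rng.standard_normal(50)

z_plain = np.linalg.lstsq(A, b, rcond=None)[0]  # may be noise-dominated
z_ridge = ridge_lstsq(A, b, lam=1e-3)           # stays well-behaved
```

The regularized solution trades a small bias for a bounded, noise-robust estimate, which is the usual motivation for replacing a raw normal-equations solve in reconstruction problems.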
223

Performance Improvement Of A 3d Reconstruction Algorithm Using Single Camera Images

Kilic, Varlik 01 July 2005 (has links) (PDF)
This study aims to improve a set of image processing techniques used in a previously developed method for reconstructing the 3D parameters of a secondary passive target from single-camera images. This 3D reconstruction method was developed and implemented on a setup consisting of a digital camera, a computer, and a positioning unit. Some automatic target recognition techniques were also included in the method. The passive secondary target used is a circle with two internal spots. In order to achieve real-time target detection, the existing binarization, edge detection, and ellipse detection algorithms are debugged, modified, or replaced to increase speed, eliminate run-time errors, and make them compatible with target tracking. An overall speed of 20 Hz is achieved for 640x480-pixel, 8-bit grayscale images on a 2.8 GHz computer. A novel target tracking method with various tracking strategies is introduced to reduce the search area for target detection and to achieve detection and reconstruction at the maximum frame rate of the hardware. Based on the previously suggested lens distortion model, methods for distortion measurement, distortion parameter determination, and distortion correction are developed for both radial and tangential distortions. The implementation of this distortion correction method enhances the accuracy of the 3D reconstruction. The overall 3D reconstruction method is implemented in an integrated software and hardware environment as a combination of the best-performing alternatives of each method. This autonomous, real-time system is able to detect the secondary passive target and reconstruct its 3D configuration parameters at a rate of 25 Hz. Even under extreme conditions in which the target is difficult or impossible to detect, no run-time failures are observed.
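For reference, the widely used radial-plus-tangential (Brown-Conrady) lens distortion model, and its inversion by fixed-point iteration, can be sketched as follows. The parameter values and the simple iterative inversion are illustrative assumptions; the thesis's own measurement and correction procedures may differ.

```python
import numpy as np

def distort(x, y, k1, k2, p1, p2):
    """Map ideal normalized image coordinates to distorted ones using
    the standard radial (k1, k2) + tangential (p1, p2) model."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort(xd, yd, k1, k2, p1, p2, iters=10):
    """Invert the model by fixed-point iteration: start from the
    distorted point and repeatedly re-estimate the distortion terms."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y

# Round trip with small, plausible distortion coefficients.
params = (-0.1, 0.01, 0.001, -0.002)
xd, yd = distort(0.3, -0.2, *params)
xu, yu = undistort(xd, yd, *params)
```

For moderate distortion the fixed-point iteration converges in a handful of steps, which is why this correction is cheap enough for real-time pipelines.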
224

New Methods for Triangulation-based Shape Acquisition using Laser Scanners

Forest Collado, Josep 11 December 2004 (has links)
Traditionally, the reproduction of the real world has been shown to us by means of flat images. These images used to be materialised in paint on canvas or in drawings. Today, fortunately, we still see hand-made pictures, although most images are acquired by cameras and are either shown directly to an audience, as in cinema, television or photography exhibitions, or processed by a computer system to obtain a particular result, as in industrial quality assurance or cutting-edge artificial intelligence research. By applying mid-level processing algorithms, 3D images can be obtained from 2D ones, using well-known techniques called Shape From X, where X denotes the method used to obtain the third dimension. While the evolution toward the 3D camera began in the 1990s, the techniques for obtaining accurate 3D shape need to keep improving. The application of 3D scanners has spread significantly in recent years, especially in fields like entertainment, assisted diagnosis/surgery, robotics, etc. One of the most widely used techniques for obtaining 3D information from a scene is triangulation, and more concretely, triangulation-based laser scanners. Since their formal appearance in scientific publications in 1971 [SS71], there have been contributions addressing inherent problems such as occlusion avoidance, accuracy improvement, acquisition speed, shape description, etc. Every method for obtaining 3D points of a scene is accompanied by a calibration procedure, and this procedure plays a decisive role in the performance of the acquisition device. The goal of this thesis is to provide a holistic approach to the problem of shape acquisition: it reports a wide survey of triangulation-based laser scanners, tests the performance of different systems, and contributes both to improving laser-stripe detection accuracy under adverse conditions and to solving the calibration problem. The calibration approach builds on previous works that used projective geometry to this end.
225

The God-like Interaction Framework: tools and techniques for communicating in mixed-space collaboration

Stafford, Aaron January 2008 (has links)
This dissertation presents the god-like interaction framework, consisting of tools and techniques for the remote communication of situational and navigational information. The framework aims to facilitate intuitive and effective communication between a group of experts and remote field workers in military, fire-fighting, and search-and-rescue contexts.
226

Improving Conventional Image-based 3D Reconstruction of Man-made Environments Through Line Cloud Integration

Gråd, Martin January 2018 (has links)
Image-based 3D reconstruction refers to the capture and virtual reconstruction of real scenes, through the use of ordinary camera sensors. A common approach is the use of the algorithms Structure from Motion, Multi-view Stereo and Poisson Surface Reconstruction, that fares well for many types of scenes. However, a problem that this pipeline suffers from is that it often falters when it comes to texture-less surfaces and areas, such as those found in man-made environments. Building facades, roads and walls often lack detail and easily trackable feature points, making this approach less than ideal for such scenes. To remedy this weakness, this thesis investigates an expanded approach, incorporating line segment detection and line cloud generation into the already existing point cloud-based pipeline. Texture-less objects such as building facades, windows and roofs are well-suited for line segment detection, and line clouds are fitting for encoding 3D positional data in scenes consisting mostly of objects featuring many straight lines. A number of approaches have been explored in order to determine the usefulness of line clouds in this context, each of them addressing different aspects of the reconstruction procedure.
227

Analysis of optical stereo Time Lapse and radar satellite images: application to the measurement of glacier displacement

Pham, Ha Thai 24 February 2015 (has links)
Earth observation by image acquisition systems allows the survey of the temporal evolution of natural phenomena such as earthquakes, volcanoes or gravitational movements. Various techniques exist, including satellite imagery, terrestrial photogrammetry and in-situ measurements. Image time series from automatic cameras (time lapse) are a growing source of information, since they offer an interesting compromise in terms of spatial coverage and observation frequency for measuring surface motion in specific areas. This PhD thesis is devoted to the analysis of image time series from terrestrial photography and satellite radar imagery to measure the displacement of Alpine glaciers. We are particularly interested in time-lapse stereo processing problems for monitoring geophysical objects under field conditions unfavorable to photogrammetry. We propose a single-camera processing chain that includes the steps of automatic photograph selection, coregistration and computation of two-dimensional (2D) displacement fields. The information provided by the stereo pairs is then processed using the MICMAC software to reconstruct the relief and obtain the three-dimensional (3D) displacement. Several pairs of synthetic aperture radar (SAR) images were also processed with the EFIDIR tools to obtain 2D displacement fields in the radar geometry on ascending or descending orbits. The combination of measurements obtained almost simultaneously on these two types of orbits allows the reconstruction of the 3D displacement. These methods have been applied to series of stereo pairs acquired by two automatic cameras installed on the right bank of the Argentière glacier, and to TerraSAR-X satellite images covering the Mont-Blanc massif. The results are presented on data acquired during a multi-instrument experiment conducted in collaboration with the French National Geographic Institute (IGN) in the fall of 2013, including the deployment of a network of Géocubes which provided GPS measurements. These are used to evaluate the accuracy of the results obtained by proximal and spaceborne remote sensing on this type of glacier.
228

Durability of solid oxide cells: an experimental and modelling investigation based on synchrotron X-ray nano-tomography characterization

Hubert, Maxime 24 May 2017 (has links)
This work aims at a better understanding of the degradation of high-temperature solid oxide cells. An approach combining electrochemical tests, advanced post-test characterizations and multi-scale modelling has been used to investigate the links between the performances, the electrode microstructures and their degradation. To this end, long-term durability tests have been performed over more than a thousand hours under different operating conditions. The electrode microstructures have been reconstructed by X-ray nano-holotomography for the pristine and the aged cells. Special attention has been paid to improving both the reliability of the tomographic experimental protocol and the spatial resolution of the 3D reconstructed images. Thanks to these 3D volumes, the microstructural properties of the Ni-YSZ H2 electrode have been quantified for the fresh and the aged samples. A physically based model of nickel particle agglomeration has then been fitted to the microstructural parameters obtained from the 3D analysis and implemented in an in-house multi-scale modelling framework. Beforehand, it was necessary to extend the numerical tool with a specific module dedicated to the oxygen electrode, which is made of mixed ionic-electronic conducting materials. Once validated on experimental polarisation curves, the completed model has been used to quantify the contribution of nickel agglomeration to the performance losses measured in fuel cell and electrolysis modes.
229

Living in a dynamic world : semantic segmentation of large scale 3D environments

Miksik, Ondrej January 2017 (has links)
As we navigate the world, for example when driving a car from home to the workplace, we continuously perceive the 3D structure of our surroundings and intuitively recognise the objects we see. Such capabilities help us in our everyday lives and enable free and accurate movement even in completely unfamiliar places. We largely take these abilities for granted, but for robots, the task of understanding large outdoor scenes remains extremely challenging. In this thesis, I develop novel algorithms for (near) real-time dense 3D reconstruction and semantic segmentation of large-scale outdoor scenes from passive cameras. Motivated by "smart glasses" for partially sighted users, I show how such modelling can be integrated into an interactive augmented reality system which puts the user in the loop and allows her to physically interact with the world to learn personalized, semantically segmented, dense 3D models. In the next part, I show how sparse but very accurate 3D measurements can be incorporated directly into the dense depth estimation process and propose a probabilistic model for incremental dense scene reconstruction. To relax the assumption of a stereo camera, I address dense 3D reconstruction in its monocular form and show how the local model can be improved by joint optimization over depth and pose. The world around us is not stationary, yet reconstructing dynamically moving and potentially non-rigidly deforming texture-less objects typically requires "contour correspondences" for shape-from-silhouettes. Hence, I propose a video segmentation model which encodes a single object instance as a closed curve, maintains correspondences across time and provides very accurate segmentation close to object boundaries.
Finally, instead of evaluating the performance in an isolated setup (IoU scores) which does not measure the impact on decision-making, I show how semantic 3D reconstruction can be incorporated into standard Deep Q-learning to improve decision-making of agents navigating complex 3D environments.
230

Widening the basin of convergence for the bundle adjustment type of problems in computer vision

Hong, Je Hyeong January 2018 (has links)
Bundle adjustment is the process of simultaneously optimizing camera poses and 3D structure given image point tracks. In structure-from-motion, it is typically used as the final refinement step due to the nonlinearity of the problem, meaning that it requires sufficiently good initialization. Contrary to this belief, recent literature has shown that useful solutions can be obtained even from arbitrary initialization for fixed-rank matrix factorization problems, including bundle adjustment with affine cameras. This wide basin of convergence to high-quality optima is desirable for any nonlinear optimization algorithm, since obtaining good initial values can often be non-trivial. The aim of this thesis is to find the key factor behind the success of these recent matrix factorization algorithms and to explore the applicability of the findings to bundle adjustment, which is closely related to matrix factorization. The thesis begins by unifying a handful of matrix factorization algorithms and comparing their similarities and differences. The theoretical analysis shows that the successful algorithms all stem from the same root: the optimization method called variable projection (VarPro). The investigation then extends to why VarPro outperforms the joint optimization technique widely used in computer vision. This algorithmic comparison yields a larger unification, leading to the conclusion that VarPro benefits from an unequal trust-region assumption between the two matrix factors. The thesis then explores ways to incorporate VarPro into bundle adjustment problems with projective and perspective cameras. Unfortunately, the added nonlinearity causes a substantial decrease in the convergence basin of VarPro, and a bootstrapping strategy is therefore proposed to bypass this issue.
Experimental results show that it is possible to yield feasible metric reconstructions and pose estimations from arbitrary initialization given relatively clean point tracks, taking one step towards initialization-free structure-from-motion.
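The elimination at the heart of VarPro is easy to state: with one factor of a rank-r factorization held fixed, the other has a closed-form least-squares optimum, so the objective reduces to a function of a single factor. The sketch below shows that elimination embedded in a crude alternation for illustration only; actual VarPro takes a Gauss-Newton step on the reduced problem, and this toy ignores the missing data and camera models of real bundle adjustment.

```python
import numpy as np

def eliminate_v(U, M):
    """Variable-projection step: for fixed left factor U, the optimal
    right factor is the least-squares solution V = U^+ M, so the
    rank-r objective ||M - U V||_F becomes a function of U alone."""
    V = np.linalg.lstsq(U, M, rcond=None)[0]
    return M - U @ V, V

# Toy rank-2 factorization recovered from a random starting point.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
U = rng.standard_normal((20, 2))
for _ in range(50):
    # Alternation on the reduced problem (a simplification: real
    # VarPro differentiates through the elimination instead).
    _, V = eliminate_v(U, M)
    U = np.linalg.lstsq(V.T, M.T, rcond=None)[0].T
residual, _ = eliminate_v(U, M)
```

Even from an arbitrary starting U, the residual of this noiseless low-rank toy drops to numerical zero, which mirrors the wide-basin behaviour the thesis investigates.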
