101

Robust Self-Calibration and Fundamental Matrix Estimation in 3D Computer Vision

Rastgar, Houman January 2013 (has links)
Recent advances in the field of computer vision have brought many laboratory algorithms into the realm of industry. However, one problem that remains open in 3D vision is the problem of noise. The challenging task of recovering 3D structure from images is highly sensitive to input data contaminated by errors that do not conform to ideal assumptions. Tackling the problem of extreme data, or outliers, has led to many robust methods that can handle moderate outlier levels and still provide accurate outputs. The problem remains open, however, especially at higher noise levels, and so the goal of this thesis is to address the issue of robustness with respect to two central problems in 3D computer vision. The two problems are closely related, and they are presented together within a Structure from Motion (SfM) context. The first is the problem of robustly estimating the fundamental matrix from images whose correspondences contain high outlier levels. Although this area has been studied extensively, two algorithms are proposed that significantly speed up the computation of the fundamental matrix and achieve accurate results in scenarios containing more than 50% outliers. The presented algorithms draw on ideas from robust statistics to develop guided sampling techniques based on information inferred from residual analysis. The second problem addressed in this thesis is the robust estimation of camera intrinsic parameters from fundamental matrices, or self-calibration. Self-calibration algorithms are notoriously unreliable in the general case, and it is shown that existing methods are highly sensitive to noise. In spite of this, robustness in self-calibration has received little attention in the literature. Experimental results show that robustness is essential for a real-world self-calibration algorithm.
In order to introduce robustness to existing methods, three robust algorithms are proposed that utilize existing constraints for self-calibration from the fundamental matrix. The resulting algorithms are less affected by noise than existing algorithms based on these constraints. This is an important milestone, since self-calibration offers many possibilities by providing estimates of camera parameters without requiring access to the image acquisition device. The proposed algorithms rely on perturbation theory, guided sampling methods, and a robust root-finding method for systems of higher-order polynomials. By adding robustness to self-calibration, it is hoped that this idea is one step closer to being a practical method of camera calibration rather than merely a theoretical possibility.
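The guided-sampling estimators build on the classic combination of the normalized eight-point algorithm with robust sampling. As a baseline illustration only (plain uniform-sampling RANSAC, not the thesis's guided-sampling algorithms; all function names are ours), a minimal sketch in Python:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of F from (N,2) correspondences."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return ph @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    A = np.array([np.kron(q, p) for q, p in zip(p2, p1)])  # rows encode x2' F x1 = 0
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt                # enforce rank 2
    return T2.T @ F @ T1                                   # undo normalization

def sampson(F, x1, x2):
    """First-order geometric (Sampson) error for each correspondence."""
    p1 = np.column_stack([x1, np.ones(len(x1))])
    p2 = np.column_stack([x2, np.ones(len(x2))])
    Fp1, Ftp2 = p1 @ F.T, p2 @ F
    num = np.sum(p2 * Fp1, axis=1) ** 2
    return num / (Fp1[:, 0]**2 + Fp1[:, 1]**2 + Ftp2[:, 0]**2 + Ftp2[:, 1]**2)

def ransac_fundamental(x1, x2, iters=1000, thresh=1e-6, seed=None):
    """Plain (unguided) RANSAC over random 8-point samples."""
    rng = np.random.default_rng(seed)
    best_F, best_inl = None, np.zeros(len(x1), bool)
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        inl = sampson(F, x1, x2) < thresh
        if inl.sum() > best_inl.sum():
            best_F, best_inl = F, inl
    if best_inl.sum() >= 8:
        best_F = eight_point(x1[best_inl], x2[best_inl])   # refit on consensus set
    return best_F, best_inl
```

Guided sampling replaces the uniform `rng.choice` with a distribution biased by residual analysis, which is where the speed-ups described above come from.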
102

Heterogeneous Finite Element Stress Analysis of Abdominal Aortic Aneurysms : Comparison Between Ruptured and Unruptured Lesions

Chung, Timothy Kwang-Joon 01 July 2013 (has links)
Abdominal Aortic Aneurysm (AAA) rupture remains a leading cause of death in westernized countries. Much remains to be understood about the biomechanics of rupture. It is not clear whether rupture is predominantly a phenomenon at the material level (aneurysm wall weakening) or due to abnormally elevated tissue wall tension (stress resultant). A computational study involving 4 ruptured and 9 unruptured abdominal aortic aneurysms was conducted to test whether ruptured aneurysms were subject to higher pressure-induced wall tension than unruptured aneurysms. The unique aspect of this study is that regional variations in material properties (thickness, stiffness, failure strength) were documented in all aneurysms in the study population. In addition, AAA geometry was documented using photographs from multiple rotational angles. Novel methods were developed for 3D reconstruction from photographs using voxel carving, for precise spatial mapping of measured properties onto the reconstructed 3D models, and for scattered data interpolation of sparsely measured parameters to the entire finite element model. Heterogeneous, variable wall thickness models of patient-specific AAA were developed and the tension distribution under normal systolic pressure computed. Peak wall tension was the primary metric studied. Other indices found in the literature (peak wall stress, peak regional tension to failure tension ratio, and peak regional stress to failure stress ratio) were also compared. The peak wall tension in the ruptured aneurysm group was not higher than in the unruptured group with statistical significance, but there was a trend toward it (p = 0.053). Among the other metrics, the peak regional tension and stress ratios (with their respective failure counterparts) were higher in the ruptured group (p = 0.038 for both) but peak wall stress was not (p = 0.099). Although rigorously conducted, the study's small population does not warrant definitive conclusions.
The methods developed here, however, will permit larger studies of this nature to better investigate the mechanisms of AAA rupture.
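The scattered-data interpolation step, spreading sparsely measured wall properties (thickness, stiffness, failure strength) over every node of the finite element model, can be sketched with simple inverse-distance weighting. This is an illustrative stand-in, not the study's actual interpolant; all names are ours:

```python
import numpy as np

def idw_interpolate(nodes, sample_pts, sample_vals, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of sparse measurements
    (e.g. wall thickness at test sites) onto finite element mesh nodes."""
    # pairwise distances, shape (n_nodes, n_samples)
    d = np.linalg.norm(nodes[:, None, :] - sample_pts[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)            # closer measurements dominate
    return (w @ sample_vals) / w.sum(axis=1)
```

Nodes coincident with a measurement site reproduce that measurement; elsewhere the value is a distance-weighted blend of nearby samples, so interpolated values never leave the measured range.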
103

Observation and Analysis of Leather Structure Based on Nano-CT

Zhang, Huayong, Cheng, Jinyong, Li, Tianduo, Lu, Jianmei, Hua, Yuai 26 June 2019 (has links)
The composition, working principle, and image acquisition procedure of nano-CT are introduced. A dried piece of chrome-tanned cattle-hide blue stock was chosen for this work, and a sequence of 2356 images was obtained. 3D visible digital models (5 mm × 3.5 mm × 3.5 mm) of the leather fiber-bundle braided network (Figure 1) and of the interspace between fiber bundles (Figure 2) were reconstructed. The inner structure and composition of leather are shown accurately and intuitively in the form of 2D sectional images and a 3D image. Based on the 3D model, the diameter, volume, surface area, and other parameters of the fiber bundles, the pore structure, and the inclusions were measured and calculated. Take-away: (1) a 3D visible digital model of the leather fiber-bundle braided network was reconstructed; (2) the inner structure and composition of leather are shown accurately and intuitively in 2D sectional images and a 3D image.
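Given a binary voxel model such as one reconstructed from the 2356 nano-CT slices, bundle volume and an exposed-surface estimate can be obtained by voxel counting and face counting. A minimal sketch of that idea (our own illustration, not the paper's software):

```python
import numpy as np

def voxel_metrics(vol, voxel_size):
    """Volume and exposed-surface area of a binary voxel model.
    Surface is estimated by counting solid/empty face transitions."""
    v = np.pad(np.asarray(vol, bool), 1)          # pad so boundary faces count
    volume = v.sum() * voxel_size ** 3
    faces = sum(np.count_nonzero(np.diff(v.astype(np.int8), axis=ax))
                for ax in range(3))
    return volume, faces * voxel_size ** 2
```

Face counting overestimates the area of smooth surfaces (it measures the staircase), but it is a standard first pass before marching-cubes-style meshing.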
104

Měření okamžité rychlosti vozidel / Vehicles speed measurement

Schoř, Zdeněk January 2020 (has links)
This master's thesis deals with vehicle speed measurement. The first part is a theoretical analysis with an overview of the measurement devices in use today. The second part focuses on creating a method for measuring vehicle speed in a common traffic situation. The measurement is based on digital image processing of camera footage and on determining motion in the image. License plate detection is used to determine motion; other possible methods are described in the thesis for future improvement. The proposed method uses a vehicle with known proportions to calibrate the visual scene. The MATLAB programming environment is used for license plate detection and for calibration. A test measurement is conducted in the thesis and compared with a reference speed-measurement method. The purpose of the thesis is to use the proposed method to reduce the cost of measuring vehicle speed and to contribute to safety and traffic flow on the roads.
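In the simplest fronto-parallel case, the scene-calibration idea (a vehicle of known proportions fixes the metres-per-pixel scale) reduces to one line of arithmetic. A hedged sketch in Python rather than the thesis's MATLAB; the names are ours and perspective effects are ignored:

```python
def vehicle_speed_kmh(pixel_disp, frame_dt, ref_len_m, ref_len_px):
    """Speed in km/h from plate displacement between two frames,
    calibrated by a reference object of known length in the same plane."""
    metres_per_px = ref_len_m / ref_len_px      # scale from the known vehicle
    return pixel_disp * metres_per_px / frame_dt * 3.6
```

For example, a plate moving 100 px between consecutive frames of 25 fps video, with a 4.5 m car spanning 300 px, gives 135 km/h.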
105

Depth Estimation Using Adaptive Bins via Global Attention at High Resolution

Bhat, Shariq 21 April 2021 (has links)
We address the problem of estimating a high quality dense depth map from a single RGB input image. We start out with a baseline encoder-decoder convolutional neural network architecture and pose the question of how the global processing of information can help improve overall depth estimation. To this end, we propose a transformer-based architecture block that divides the depth range into bins whose center value is estimated adaptively per image. The final depth values are estimated as linear combinations of the bin centers. We call our new building block AdaBins. Our results show a decisive improvement over the state-of-the-art on several popular depth datasets across all metrics. We also validate the effectiveness of the proposed block with an ablation study.
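The AdaBins mechanism, adaptively predicted bin widths with depth recovered as a linear combination of bin centers weighted by softmax probabilities, can be sketched per pixel as follows (a simplified numpy illustration of the mechanics, not the actual transformer block):

```python
import numpy as np

def adabins_depth(logits, bin_widths, d_min=0.001, d_max=10.0):
    """Per-pixel depth from N bin scores and N adaptively predicted widths."""
    w = bin_widths / bin_widths.sum()                  # normalize predicted widths
    edges = d_min + (d_max - d_min) * np.concatenate([[0.0], np.cumsum(w)])
    centers = 0.5 * (edges[:-1] + edges[1:])           # adaptive bin centers
    p = np.exp(logits - logits.max())
    p /= p.sum()                                       # softmax over bins
    return float(p @ centers)                          # linear combination
```

Because both `bin_widths` and `logits` are predicted per image, the depth range is partitioned differently for each input, which is the adaptive part of the design.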
106

Zpracování dat pro 3D / 3D data processing

Babinec, Tomáš January 2009 (has links)
The thesis falls within the field of computer vision. It describes the development of a software environment for 3D data processing. The thesis deals with the design of C++ classes suitable for scene description and for representing binding conditions between scene elements. It also discusses camera calibration, geometric distortion identification, and 3D coordinate reconstruction. The solution of these tasks is augmented with planarity and linearity conditions, which supplement the projection equations and extend the computational capability of the environment.
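The 3D coordinate reconstruction such an environment performs can be illustrated with linear (DLT) triangulation from two views. This sketch is in Python rather than the thesis's C++, and the names are ours:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection
    matrices and its image coordinates (x, y) in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],     # each observation gives two linear
        x1[1] * P1[2] - P1[1],     # constraints on the homogeneous point
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]    # null-space vector of A
    return X[:3] / X[3]            # dehomogenize
```

Planarity or linearity constraints of the kind the thesis describes would add further rows to `A`, over-determining the system and stabilizing the estimate.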
107

Super résolution de texture pour la reconstruction 3D fine / Texture Super Resolution for 3D Reconstruction

Burns, Calum 23 March 2018 (has links)
La reconstruction 3D multi-vue atteint désormais un niveau de maturité industrielle : des utilisateurs non-experts peuvent produire des modèles 3D large-échelle de qualité à l'aide de logiciels commerciaux. Ces reconstructions utilisent des capteurs haut de gamme comme des LIDAR ou des appareils photos de type DSLR, montés sur un trépied et déplacés autour de la scène. Ces protocoles d'acquisition sont mal adaptés à l'inspection d'infrastructures de grande taille, à géométrie complexe. Avec l'évolution rapide des capacités des micro-drones, il devient envisageable de leur confier ce type de tâche. Un tel choix modifie les données d'acquisition : on passe d'un ensemble restreint de photos de qualité, soigneusement acquises par l'opérateur, à une séquence d'images à cadence vidéo, sujette à des variations de qualité image dues, par exemple, au bougé et au défocus. Les données vidéo posent problème aux logiciels de photogrammétrie du fait de la combinatoire élevée engendrée par le grand nombre d'images. Nous proposons d'exploiter l'intégralité des images en deux étapes. Au cours de la première, la reconstruction 3D est obtenue en sous-échantillonnant temporellement la séquence ; lors de la seconde, la restitution haute résolution de texture est obtenue en exploitant l'ensemble des images. L'intérêt de la texture est de permettre de visualiser des détails fins du modèle numérisé qui ont été perdus dans le bruit géométrique de la reconstruction. Cette augmentation de qualité se fait via des techniques de Super Résolution (SR). Pour atteindre cet objectif nous avons conçu et réalisé une chaîne algorithmique prenant, en entrée, la séquence vidéo acquise et fournissant, en sortie, un modèle 3D de la scène avec une texture sur-résolue.
Cette chaîne est construite autour d'un algorithme de reconstruction 3D multi-vues de l'état de l'art pour la partie géométrique. Une contribution centrale de notre chaîne est la méthode de recalage employée afin d'atteindre la précision sub-pixellique requise pour la SR. Contrairement aux données classiquement utilisées en SR, nos prises de vues sont affectées par un mouvement 3D, face à une scène à géométrie 3D, ce qui entraîne des mouvements image complexes. La précision intrinsèque des méthodes de reconstruction 3D est insuffisante pour effectuer un recalage purement géométrique, ainsi nous appliquons un raffinement supplémentaire par flot optique. Le résultat de cette méthode de restitution de texture SR est d'abord comparé qualitativement à une approche concurrente de l'état de l'art. Ces appréciations qualitatives sont renforcées par une évaluation quantitative de qualité image. Nous avons à cet effet élaboré un protocole d'évaluation quantitatif de techniques de SR appliquées sur des surfaces 3D. Il est fondé sur l'utilisation de mires fractales binaires, initialement proposées par S. Landeau. Nous avons étendu ces idées au contexte de SR sur des surfaces courbes. Cette méthode est employée ici pour valider les choix de notre méthode de SR, mais elle s'applique à l'évaluation de toute texturation de modèle 3D. Enfin, les surfaces spéculaires présentes dans les scènes induisent des artefacts au niveau des résultats de SR en raison de la perte de photoconsistence des pixels au travers des images à fusionner. Pour traiter ce problème nous avons proposé deux méthodes correctives permettant de recaler photométriquement nos images et restaurer la photoconsistence. La première méthode est basée sur une modélisation des phénomènes d'illumination dans un cas d'usage particulier, la seconde repose sur une égalisation photométrique locale.
Les deux méthodes testées sur des données polluées par une illumination variable s'avèrent effectivement capables d'éliminer les artefacts. / Multi-view 3D reconstruction techniques have reached industrial-level maturity: non-expert users are now able to use commercial software to produce quality, large-scale 3D models. These reconstructions use top-of-the-line sensors such as LIDAR or DSLR cameras, mounted on tripods and moved around the scene. Such protocols are not designed to efficiently inspect large infrastructures with complex geometry. As the capabilities of micro-drones progress at a fast rate, it is becoming possible to delegate such tasks to them. This choice induces changes in the acquired data: rather than a set of carefully acquired images, micro-drones produce a video sequence with varying image quality, due to such flaws as motion blur and defocus. Processing video data is challenging for photogrammetry software, due to the high combinatorial cost induced by the large number of images. We use the full image sequence in two steps. First, a 3D reconstruction is obtained using a temporal sub-sampling of the data; then a high-resolution texture is built from the full sequence. Texture allows the inspector to visualize small details that may be lost in the noise of the geometric reconstruction. We apply Super Resolution techniques to achieve this texture quality augmentation. To reach this goal we developed an algorithmic pipeline that processes the video input and outputs a 3D model of the scene with super-resolved texture. This pipeline uses state-of-the-art 3D reconstruction software for the geometric reconstruction step. The main contribution of this pipeline is the image registration method used to achieve the sub-pixel accuracy required for Super Resolution.
Unlike the data on which Super Resolution is generally applied, our viewpoints are subject to relative 3D motion and face a scene with 3D geometry, which makes the motion field all the more complex. The intrinsic precision of current 3D reconstruction algorithms is insufficient to perform a purely geometric registration; instead, we refine the geometric registration with an optical flow algorithm. This approach is first compared qualitatively to a competing state-of-the-art method. These qualitative comparisons are reinforced by a quantitative evaluation of the resulting image quality. For this we developed a quantitative evaluation protocol for Super Resolution techniques applied to 3D surfaces, based on the binary fractal targets proposed by S. Landeau, whose ideas we extended to the context of curved surfaces. This method has been used to validate our choice of Super Resolution algorithm. Finally, specularities present on the scene surfaces induce artefacts in our Super Resolution results, due to the loss of photoconsistency among the set of images to be fused. To address this problem we propose two corrective methods designed to achieve photometric registration of our images and restore photoconsistency. The first method is based on a model of the illumination phenomena, valid in a specific setting; the second relies on local photometric equalization among the images. When tested on data polluted by varying illumination, both methods were able to eliminate these artefacts.
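Once frames are registered to sub-pixel accuracy, the core of super-resolved texture restitution is the fusion of many low-resolution observations onto a finer grid. A deliberately minimal shift-and-add sketch with known shifts and nearest-cell accumulation (the actual pipeline uses 3D-aware optical-flow registration and more careful fusion; names are ours):

```python
import numpy as np

def shift_and_add_sr(images, shifts, scale):
    """Fuse low-res frames with known sub-pixel shifts (in LR pixels)
    onto a scale-times-finer grid by nearest-cell accumulation."""
    H, W = images[0].shape
    acc = np.zeros((H * scale, W * scale))
    wgt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:H, 0:W]
    for img, (dy, dx) in zip(images, shifts):
        hy = np.rint((ys + dy) * scale).astype(int).clip(0, H * scale - 1)
        hx = np.rint((xs + dx) * scale).astype(int).clip(0, W * scale - 1)
        np.add.at(acc, (hy, hx), img)          # accumulate observed samples
        np.add.at(wgt, (hy, hx), 1.0)          # and their counts
    return np.divide(acc, wgt, out=np.zeros_like(acc), where=wgt > 0)
```

When the shifts tile the high-resolution grid exactly, this recovers the underlying image; with real data, registration error and blur make the fusion step noisier, which is why sub-pixel accuracy matters.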
108

Fusing Stereo Measurements into a Global 3D Representation

Blåwiik, Per January 2021 (has links)
The report describes a thesis project aimed at fusing an arbitrary sequence of stereo measurements into a global 3D representation in real time. The proposed method uses an octree-based signed distance function to represent the 3D environment; the geometric data are fused together using a cumulative weighted update function and finally rendered by incremental mesh extraction using the marching cubes algorithm. The result of the project was a prototype system, integrated into a real-time stereo reconstruction system, which was evaluated by benchmark tests as well as qualitative comparisons with an older method of overlapping meshes. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
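The cumulative weighted update is, in its simplest form, a running weighted average per voxel, in the style of Curless and Levoy's volumetric fusion. A hedged sketch with the octree bookkeeping omitted; names are ours:

```python
import numpy as np

def fuse_sdf(sdf, weight, new_d, new_w):
    """Cumulative weighted update of a signed-distance voxel grid:
    each voxel stores the weight-averaged distance of all measurements."""
    w_sum = weight + new_w
    fused = np.where(w_sum > 0,
                     (weight * sdf + new_w * new_d) / np.maximum(w_sum, 1e-12),
                     sdf)                       # untouched voxels keep old value
    return fused, w_sum
```

Voxels keep a running average, so later stereo measurements refine rather than overwrite earlier ones; incremental marching cubes then extracts the mesh at the zero crossing of the fused field.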
109

3D Reconstruction in Scattering Media / 散乱媒体下での三次元復元

Fujimura, Yuki 23 March 2021 (has links)
Kyoto University / New-system doctoral program / Doctor of Informatics / 甲第23312号 / 情博第748号 / 新制||情||128 (University Library) / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Examining committee) Associate Professor Masaaki Iiyama (chair), Professor Ko Nishino, Professor Yuichi Nakamura, Professor Michihiko Minoh (Professor Emeritus, Kyoto University) / Fulfills Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
110

Études et conception d'algorithmes de reconstruction 3D sur tablettes : génération automatique de modèles 3D éditables de bâtiments existants / Study and Conception of 3D Reconstruction Algorithms on Tablets : Automatic Generation of 3D Editable Models of Existing Buildings

Arnaud, Adrien 03 December 2018 (has links)
L'objectif de ces travaux de thèse consiste à mettre en place des solutions algorithmiques permettant de reconstruire un modèle 3D éditable d'un environnement intérieur à l'aide d'une tablette équipée d'un capteur de profondeur. Ces travaux s'inscrivent dans le contexte de la rénovation d'intérieur. Les normes européennes poussent à la rénovation énergétique et à la modélisation 3D des bâtiments existants. Des outils professionnels utilisant des capteurs de type LIDAR permettent de reconstruire des nuages de points de très grande qualité, mais sont coûteux et longs à mettre en œuvre. De plus, il est très difficile d'identifier automatiquement les constituants d'un bâtiment pour en exporter un modèle 3D éditable complet. Dans le cadre de la rénovation d'intérieur, il n'est pas nécessaire de disposer des informations sur l'ensemble du bâtiment, seules les principales dimensions et surfaces sont nécessaires. Nous pouvons alors envisager d'automatiser complètement le processus de modélisation 3D. La mise sur le marché de capteurs de profondeur intégrables sur tablettes, et l'augmentation des capacités de calcul de ces dernières, nous permet d'envisager l'adaptation d'algorithmes de reconstruction 3D classiques à ces supports. Au cours de ces travaux, nous avons envisagé deux approches de reconstruction 3D différentes. La première approche s'appuie sur des méthodes de l'état de l'art. Elle consiste à générer un maillage 3D d'un environnement intérieur en temps réel sur tablette, puis à utiliser ce maillage 3D pour identifier la structure globale du bâtiment (murs, portes et fenêtres). La deuxième approche envisagée consiste à générer un modèle 3D éditable en temps réel, sans passer par un maillage intermédiaire. De cette manière beaucoup plus d'informations sont disponibles pour pouvoir détecter les éléments structuraux. Nous avons en effet à chaque instant donné un nuage de points complet ainsi que l'image couleur correspondante.
Nous avons dans un premier temps mis en place deux algorithmes de segmentation planaire en temps réel. Puis, nous avons mis en place un algorithme d'analyse de ces plans permettant d'identifier deux plans identiques sur plusieurs prises de vue différentes. Nous sommes alors capables d'identifier les différents murs contenus dans l'environnement capturé, et nous pouvons mettre à jour leurs informations géométriques en temps réel. / This thesis work consisted of implementing algorithmic solutions to reconstruct an editable 3D model of an indoor environment using a tablet equipped with a depth sensor. This work is part of the context of interior renovation. European standards push for energy renovation and 3D modeling of existing buildings. Professional tools using LIDAR-type sensors make it possible to reconstruct high-quality point clouds, but are costly and time-consuming to use. In addition, it is very difficult to automatically identify the constituents of a building in order to export a complete editable 3D model. For interior renovation it is not necessary to have information on the whole building; only the main dimensions and surfaces are needed. We can then consider completely automating the 3D modeling process. The recent development of depth sensors that can be integrated into tablets, and the improvement of tablets' computational capabilities, allow us to consider adapting classical 3D reconstruction algorithms to these devices. During this work, we considered two different 3D reconstruction approaches. The first approach is based on state-of-the-art methods. It consists of generating a 3D mesh of an interior environment in real time on a tablet, then using this 3D mesh to identify the overall structure of the building (walls, doors and windows). The second approach is to generate an editable 3D model in real time, without passing through an intermediate mesh.
In this way much more information is available for detecting the structural elements: at each instant we have a complete point cloud as well as the corresponding color image. We first implemented two real-time planar segmentation algorithms. We then developed an analysis algorithm for these planes that identifies the same plane across several different views. We are thus able to identify the different walls contained in the captured environment and to update their geometric information in real time.
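The real-time planar segmentation at the heart of the second approach can be illustrated with a basic RANSAC plane fit on a depth-sensor point cloud. This is a generic sketch, not the thesis's tablet implementation; names are ours:

```python
import numpy as np

def ransac_plane(pts, iters=300, thresh=0.01, seed=None):
    """RANSAC plane fit on an (N,3) point cloud.
    Returns unit normal n, offset d (n . p + d = 0 on the plane), inlier mask."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_inl = None, None, np.zeros(len(pts), bool)
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        nn = np.linalg.norm(n)
        if nn < 1e-9:                          # skip (near-)collinear samples
            continue
        n = n / nn
        d = -n @ p0
        inl = np.abs(pts @ n + d) < thresh     # point-to-plane distance test
        if inl.sum() > best_inl.sum():
            best_n, best_d, best_inl = n, d, inl
    return best_n, best_d, best_inl
```

Running this repeatedly, removing inliers after each detected plane, yields the set of dominant planes (walls, floor, ceiling); matching planes across views by normal and offset then allows wall parameters to be refined incrementally.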
