
Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data

Kulkarni, Amey S. 13 May 2020 (has links)
Achieving robot autonomy is an extremely challenging task, and it starts with developing algorithms that help the robot understand how humans perceive the environment around them. Once the robot understands how to make sense of its environment, it can make efficient decisions about safe movement. It is hard for robots to perform tasks that come naturally to humans, like reading signboards, classifying traffic lights, and planning paths around dynamic obstacles. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying each point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they are the areas of interest that provide cues for the ego-vehicle to plan its motion. Motion segmentation algorithms segregate moving cars from static cars to give more importance to dynamic obstacles. In contrast to the usual LiDAR scan representations such as range images and regular grids, this work uses a modern representation of LiDAR scans based on permutohedral lattices, which makes it easy to store unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to motion segmentation. The network takes in two sequential point clouds and performs convolutions on them to estimate whether 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn what features constitute motion. We have trained and tested our learning algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
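As a minimal illustration of the task (not the thesis's learned permutohedral-lattice network, just a classical baseline sketch), a point can be flagged as moving when its nearest neighbour in the next scan lies farther away than a threshold:

```python
import numpy as np

def motion_labels(cloud_t0, cloud_t1, threshold=0.2):
    """Label each point in cloud_t0 as moving (1) or static (0) by its
    nearest-neighbour displacement to cloud_t1 (brute force, O(N*M))."""
    # pairwise distances between the two scans
    d = np.linalg.norm(cloud_t0[:, None, :] - cloud_t1[None, :, :], axis=2)
    nearest = d.min(axis=1)  # distance to the closest point in the next scan
    return (nearest > threshold).astype(int)

static = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
moving = np.array([[5.0, 0.0, 0.0]])
t0 = np.vstack([static, moving])
t1 = np.vstack([static, moving + np.array([1.0, 0.0, 0.0])])  # moving point shifts 1 m
print(motion_labels(t0, t1))  # → [0 0 1]
```

Such a baseline fails for occlusions and fast ego-motion, which is precisely what motivates a learned approach on temporal point-cloud pairs.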

Tvorba 3D modelů / 3D reconstruction

Musálek, Martin January 2014 (has links)
This thesis addresses 3D reconstruction of an object by the method of structured-light (pattern) illumination. A projector illuminates the measured object with a defined pattern, and two cameras measure 2D points on it. The pedestal holding the object rotates, so data are acquired from different angles during the measurement. Points are identified in the measured images, transformed to 3D using stereo vision, connected into a 3D model, and displayed.
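The stereo-vision step can be sketched with textbook linear (DLT) triangulation; the projection matrices and the test point below are illustrative values, not data from the thesis:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel
    observations x1, x2 and 3x4 camera projection matrices P1, P2."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# two hypothetical rectified cameras with a 1 m baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers X_true
```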

Entwicklung eines iterativen 3D-Rekonstruktionsverfahrens für die Kontrolle der Tumorbehandlung mit Schwerionen mittels der Positronen-Emissions-Tomographie / Development of an iterative 3D reconstruction method for monitoring heavy-ion tumour treatment by means of positron emission tomography

Lauckner, Kathrin January 1999 (has links)
At the Gesellschaft für Schwerionenforschung in Darmstadt a therapy unit for heavy ion cancer treatment has been established in collaboration with the Deutsches Krebsforschungszentrum Heidelberg, the Radiologische Universitätsklinik Heidelberg and the Forschungszentrum Rossendorf. For quality assurance the dual-head positron camera BASTEI (Beta Activity meaSurements at the Therapy with Energetic Ions) has been integrated into this facility. It measures ß+-activity distributions generated via nuclear fragmentation reactions within the target volume. BASTEI has about 4 million coincidence channels. The emission data are acquired in a 3D regime and stored in a list-mode data format. The counting statistics are typically two to three orders of magnitude lower than those of typical PET scans in nuclear medicine. Two iterative 3D reconstruction algorithms, based on ISRA (Image Space Reconstruction Algorithm) and MLEM (Maximum Likelihood Expectation Maximization) respectively, have been adapted to this imaging geometry. The major advantage of the developed approaches is the use of run-time Monte Carlo simulations to calculate the transition matrix. The influences of detector sensitivity variations, randoms, activity from outside the field of view, and attenuation are corrected for the individual coincidence channels. Performance studies show that the implementation based on MLEM is the algorithm of choice. Since 1997 it has been applied successfully to patient data. The localization of distal and lateral gradients of the ß+-activity distribution is guaranteed in the longitudinal sections. Outside the longitudinal sections, the lateral gradients of the ß+-activity distribution should be interpreted using a priori knowledge.
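The MLEM iteration mentioned above has a standard multiplicative form, sketched here on a toy system matrix (the real system has about 4 million coincidence channels and Monte Carlo-derived matrix elements):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Standard MLEM scheme: x <- x * A^T(y / Ax) / (A^T 1), elementwise.
    The iterate stays nonnegative, matching the physics of activity."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])    # per-voxel sensitivity A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12) # forward projection, kept positive
        x *= (A.T @ (y / proj)) / sens
    return x

# toy system: 2 voxels seen by 3 coincidence channels
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                          # noise-free counts
print(np.round(mlem(A, y), 4))          # recovers x_true = [2, 5]
```

With noise-free, consistent data the iteration converges to the exact activity; with low counting statistics the same update converges toward the maximum-likelihood estimate instead.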

Generating 3D Scenes From Single RGB Images in Real-Time Using Neural Networks

Grundberg, Måns, Altintas, Viktor January 2021 (has links)
The ability to reconstruct 3D scenes of environments is of great interest in a number of fields such as autonomous driving, surveillance, and virtual reality. However, traditional methods often rely on multiple cameras or sensor-based depth measurements to accurately reconstruct 3D scenes. In this thesis we propose an alternative, deep learning-based approach to 3D scene reconstruction for objects of interest, using nothing but single RGB images. We evaluate our approach using the Deep Object Pose Estimation (DOPE) neural network for object detection and pose estimation, and the NVIDIA Deep learning Dataset Synthesizer for synthetic data generation. Using two unique objects, our results indicate that it is possible to reconstruct 3D scenes from single RGB images with an error margin of a few centimeters.
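Once a pose estimator such as DOPE has produced a rotation and translation for a detected object, placing the object's model into the scene is a rigid transform; the rotation, translation, and model below are illustrative values, not results from the thesis:

```python
import numpy as np

def place_object(model_points, R, t):
    """Transform object-frame model points into the camera/scene frame
    using an estimated pose (rotation matrix R, translation t)."""
    return model_points @ R.T + t

# unit-cube model, hypothetical estimated pose
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
R = np.array([[0.0, -1.0, 0.0],   # 90 degree rotation about z
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.0, 2.0])     # 2 m in front of the camera
scene = place_object(cube, R, t)
# corner (0,0,0) maps to (0,0,2); corner (1,1,1) maps to (-1,1,3)
print(scene[0], scene[7])
```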

Resection Process Map: A novel dynamic simulation system for pulmonary resection / 解剖学的肺切除における新しいシミュレーションシステム、RPMの開発

Tokuno, Junko 23 March 2023 (has links)
Kyoto University / Doctoral course (new system) / Doctor of Medical Science / 甲第24477号 / 医博第4919号 / 新制||医||1062 (University Library) / Kyoto University Graduate School of Medicine, Medical Science major / (Chief examiner) Prof. 中本 裕士; examiners Prof. 波多野 悦朗, Prof. 万代 昌紀 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM

Comparison of Image Generation and Processing Techniques for 3D Reconstruction of the Human Skull

Marinescu, Ruxandra 03 December 2001 (has links)
No description available.

A PDE method for patchwise approximation of large polygon meshes

Sheng, Y., Sourin, A., Gonzalez Castro, Gabriela, Ugail, Hassan January 2010 (has links)
Three-dimensional (3D) representations of complex geometric shapes, especially when they are reconstructed from magnetic resonance imaging (MRI) and computed tomography (CT) data, often result in large polygon meshes which require substantial storage for their handling, and normally have only one fixed level of detail (LOD). This can be an obstacle to efficient data exchange and interactive work with such objects. We propose to replace such large polygon meshes with a relatively small set of coefficients of the patchwise partial differential equation (PDE) function representation. With this model, approximations of the original shapes can be rendered at any desired resolution at interactive rates. Our approach can work directly with any common 3D reconstruction pipeline, which we demonstrate by applying it to a large reconstructed medical data set with irregular geometry.
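The core idea of a PDE patch, namely that an entire interior is encoded by its boundary, can be hinted at with a much simpler second-order stand-in: solving Laplace's equation on a grid patch (the paper's actual patches use a different, higher-order PDE formulation with analytic coefficients):

```python
import numpy as np

def pde_patch(boundary, n_iter=2000):
    """Fill the interior of a patch by Jacobi relaxation of Laplace's
    equation: each interior height converges to the average of its
    four neighbours, so the boundary alone determines the surface."""
    z = boundary.copy()
    for _ in range(n_iter):
        z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] +
                                z[1:-1, :-2] + z[1:-1, 2:])
    return z

# 5x5 patch: boundary height 1.0 along the left edge, 0.0 elsewhere
b = np.zeros((5, 5))
b[:, 0] = 1.0
patch = pde_patch(b)
print(np.round(patch, 3))  # interior blends smoothly from 1.0 toward 0.0
```

Because only the boundary data need to be stored, the interior can be regenerated at any grid resolution, which is the storage and LOD advantage the abstract describes.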

An improved effective method for generating 3D printable models from medical imaging

Rathod, Gaurav Dilip 16 November 2017 (has links)
Medical practitioners rely heavily on visualization of medical imaging to get a better understanding of the patient's anatomy. Most cancer treatment and surgery today is performed with the aid of medical imaging, which is therefore of great importance to the medical industry. Medical imaging continues to depend heavily on series of 2D scans, displayed as 2D photographs on light boxes and/or computer monitors. Today, these 2D images are increasingly combined into 3D solid models using software. These 3D models can be used for improved visualization and understanding of the problem at hand, including fabricating physical 3D models using additive manufacturing technologies. Generating precise 3D solid models automatically from 2D scans is non-trivial. Geometric and/or topological errors are common, and costly manual editing is often required to produce 3D solid models that sufficiently reflect the actual underlying human geometry. These errors arise from the ambiguity of converting 2D data to 3D data, and also from inherent limitations of the .STL file format used in additive manufacturing. This thesis proposes a new, robust method for automatically generating 3D models from 2D scanned data (e.g., computed tomography (CT) or magnetic resonance imaging (MRI)), where the resulting 3D solid models are generated specifically for use with additive manufacturing. This new method does not rely on complicated procedures such as contour evolution and geometric spline generation, but uses volume reconstruction instead. The advantage of this approach is that the original scan data values are kept intact longer, so that the resulting surface is more accurate. The new method is demonstrated using medical CT data of the human nasal airway system, resulting in physical 3D models fabricated via additive manufacturing. / Master of Science
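One .STL limitation the abstract alludes to is visible in the binary format itself: every facet is stored independently, so shared vertices are duplicated and no connectivity survives, which is one way topological errors creep in. A minimal writer (an illustrative sketch, not the thesis's pipeline):

```python
import os
import struct

def write_binary_stl(path, triangles):
    """Write triangles (each: three (x, y, z) vertex tuples) as binary STL.
    Each facet carries its own normal, three vertices, and a 2-byte
    attribute field; vertices shared between facets are simply repeated."""
    with open(path, "wb") as f:
        f.write(b"\0" * 80)                             # 80-byte header
        f.write(struct.pack("<I", len(triangles)))      # facet count
        for v0, v1, v2 in triangles:
            f.write(struct.pack("<3f", 0.0, 0.0, 0.0))  # normal (left zero here)
            for v in (v0, v1, v2):
                f.write(struct.pack("<3f", *v))
            f.write(struct.pack("<H", 0))               # attribute byte count

tri = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
write_binary_stl("facet.stl", tri)
# 80-byte header + 4-byte count + one 50-byte facet record
print(os.path.getsize("facet.stl"))  # → 134
```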

Investigations of stereo setup for Kinect

Manuylova, Ekaterina January 2012 (has links)
The main purpose of this work is to investigate the behavior of the recently released Microsoft Kinect sensor, whose properties go beyond those of ordinary cameras. Normally, two cameras are required to create a 3D reconstruction of a scene; the Kinect, thanks to its infrared projector and sensor, can create the same type of reconstruction using only one device. However, the depth images generated by the Kinect's infrared laser projector and monochrome sensor can contain undefined values. Therefore, in addition to the other investigations, this project includes an approach to improving the quality of the depth images. The central aim of this work, however, is to reconstruct the scene from the color images of a pair of Kinects and to compare the result with the reconstruction obtained from the depth information of a single Kinect. The report also describes how to verify that the performed calculations are correct. All the algorithms used in the project, as well as the achieved results, are described and discussed in separate chapters of this report.
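One simple way to fill the undefined depth values mentioned above (an illustrative sketch, not necessarily the method used in the thesis) is a neighbourhood-median pass over the depth image:

```python
import numpy as np

def fill_depth_holes(depth, invalid=0):
    """Fill undefined depth values with the median of the valid values
    in each pixel's 3x3 neighbourhood (one pass; repeat for larger holes)."""
    filled = depth.astype(float).copy()
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            if depth[i, j] == invalid:
                patch = depth[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                valid = patch[patch != invalid]
                if valid.size:
                    filled[i, j] = np.median(valid)
    return filled

d = np.array([[800, 800, 810],
              [805,   0, 812],   # 0 marks an undefined depth reading (mm)
              [807, 809, 815]])
print(fill_depth_holes(d))  # the hole becomes 808.0
```

The median is preferred over the mean here because it does not smear depth across object boundaries when the neighbourhood straddles two surfaces.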

Étude de l'apparence physique de surfaces opaques, analyse photométrique et reconstruction 3D / Study of opaque surface physical appearance, photometric analysis and 3D reconstruction

Tauzia, Emmanuelle 30 June 2016 (has links)
L'étude de l'apparence de surfaces par analyse photométrique est un domaine de recherche actif, avec de nombreuses applications par exemple pour étudier de la qualité de surfaces, la rugosité des objets, leur apparence, etc. Le sujet de cette thèse concerne plus particulièrement l'étude de surfaces opaques, par l'acquisition de la géométrie et de la réflectance. Cela nous a conduit à une analyse des modèles mathématique de réflectance, permettant de représenter les matériaux. Afin d'offrir une description physiquement plausible des matériaux opaques, notre première contribution principale concerne la mise en oeuvre d'un modèle à base de microfacettes Lambertiennes interfacées. Il généralise différents modèles de la littérature incluant des surfaces planes diffuses ou spéculaires et rugueuses diffuses ou spéculaires grâce à trois paramètres physiques : couleur, rugosité et indice de réfraction. Il permet de prendre en compte la transmission des flux lumineux pénétrant sous la surface ainsi que les réflexions multiples entre microfacettes et de restituer les effets de rétrodiffusion lumineuse et d’anisotropie. Notre seconde contribution principale concerne la réalisation d'un système complet d'acquisition de la géométrie et de la réflectance d'objets à partir d'images HDR. Notre méthodologie correspond à une chaîne de reconstruction complète et automatique, uniquement à partir d'images, permettant d'obtenir un niveau de précision intéressant et un faible coût de mise en place et de temps de traitement comparé aux méthodes existantes. Notre méthode permet d'extraire des échantillons de réflectance suffisamment nombreux pour identifier les paramètres de modèles de réflectance avec les données acquises. / The study of surface appearance by photometric analysis is an active area of research, with various applications concerning the analysis of surface roughness or appearance ... 
The subject of this PhD dissertation relates to the study of opaque surfaces, through the acquisition of their geometry and reflectance. Our study leads us to an analysis of mathematical reflectance models for representing material appearance. To provide a physically plausible description of opaque surfaces, the first major contribution concerns the implementation of a model based on Lambertian interfaced microfacets. This model generalizes several approaches often referenced in the literature, and includes flat diffuse or specular surfaces as well as diffuse or specular microfacets, with three physically based parameters: color, roughness, and refractive index. It makes it possible to take into account the transmission of the light flux entering below the surface as well as multiple reflections between microfacets, while handling backscattering and anisotropy. The second main contribution of this work concerns the implementation of a complete acquisition system for estimating geometry and reflectance from HDR images. Our methodology is based on a complete and automatic reconstruction framework, achieving a good level of precision, a lower cost of implementation, and a shorter processing time compared to existing photometry-based methods.
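The role of the refractive-index parameter can be illustrated with the standard unpolarized Fresnel reflectance at a dielectric interface (a textbook formula, not the dissertation's full interfaced-microfacet model, where it would govern each microfacet's interface):

```python
import numpy as np

def fresnel_unpolarized(cos_i, n):
    """Unpolarized Fresnel reflectance for light hitting a dielectric
    with relative refractive index n (air -> material), given the
    cosine of the incidence angle."""
    sin_t = np.sqrt(max(0.0, 1.0 - cos_i ** 2)) / n   # Snell's law
    if sin_t >= 1.0:
        return 1.0                                    # total internal reflection
    cos_t = np.sqrt(1.0 - sin_t ** 2)
    r_s = (cos_i - n * cos_t) / (cos_i + n * cos_t)   # s-polarized amplitude
    r_p = (n * cos_i - cos_t) / (n * cos_i + cos_t)   # p-polarized amplitude
    return 0.5 * (r_s ** 2 + r_p ** 2)

# at normal incidence this reduces to ((n - 1) / (n + 1))^2
print(round(fresnel_unpolarized(1.0, 1.5), 4))  # → 0.04
```

This single parameter controls how much light reflects at the interface versus entering below the surface, which is exactly the split between the specular and subsurface (Lambertian) terms the model balances.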
