31

Motion Segmentation for Autonomous Robots Using 3D Point Cloud Data

Kulkarni, Amey S. 13 May 2020 (has links)
Achieving robot autonomy is an extremely challenging task, and it starts with developing algorithms that help the robot understand how humans perceive the environment around them. Once the robot understands how to make sense of its environment, it can make efficient decisions about safe movement. It is hard for robots to perform tasks that come naturally to humans, like understanding signboards, classifying traffic lights, and planning paths around dynamic obstacles. In this work, we take up one such challenge: motion segmentation using Light Detection and Ranging (LiDAR) point clouds. Motion segmentation is the task of classifying a point as either moving or static. As the ego-vehicle moves along the road, it needs to detect moving cars with very high certainty, as they are the areas of interest that provide cues for the ego-vehicle to plan its motion. Motion segmentation algorithms segregate moving cars from static cars to give more importance to dynamic obstacles. In contrast to the usual LiDAR scan representations like range images and regular grids, this work uses a modern representation of LiDAR scans based on permutohedral lattices, which makes it possible to represent unstructured LiDAR points in an efficient lattice structure. We propose a machine learning approach to motion segmentation. The network architecture takes in two sequential point clouds and performs convolutions on them to estimate whether 3D points from the first point cloud are moving or static. Using two temporal point clouds helps the network learn what features constitute motion. We have trained and tested our learning algorithm on the FlyingThings3D dataset and a modified KITTI dataset with simulated motion.
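To make the two-frame idea concrete, here is a minimal PyTorch sketch that labels each point of the first scan as moving or static using evidence from both scans. It is an illustration only, not the thesis architecture: the permutohedral-lattice convolutions are replaced by a plain per-point MLP, and the feature dimensions and point counts are assumed for the example.

    import torch
    import torch.nn as nn

    class TwoFrameMotionSegmenter(nn.Module):
        """Toy stand-in for a two-frame motion segmentation network.
        Lattice convolutions are replaced by a per-point MLP."""
        def __init__(self, in_dim=3, hidden=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(in_dim * 2, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 2),  # logits: static vs. moving
            )

        def forward(self, cloud_t, cloud_t1):
            # cloud_t: (N, 3) points at time t; cloud_t1: (M, 3) points at t+1.
            # Each point of frame t is paired with a pooled summary of frame
            # t+1, so the classifier sees evidence from both time steps.
            summary = cloud_t1.mean(dim=0, keepdim=True).expand(cloud_t.shape[0], -1)
            return self.mlp(torch.cat([cloud_t, summary], dim=1))

    net = TwoFrameMotionSegmenter()
    logits = net(torch.randn(1024, 3), torch.randn(1024, 3))  # (1024, 2)
    moving = logits.argmax(dim=1)  # 1 = moving, 0 = static (assumed convention)
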
32

Tvorba 3D modelů / 3D reconstruction

Musálek, Martin January 2014 (has links)
The thesis solves 3D reconstruction of an object by the structured-light method of illuminating it with a pattern. A projector lights the measured object with a defined pattern, and two cameras measure 2D points from it. The pedestal of the object rotates, and data are acquired from different angles during the measurement. Points are identified in the measured images, transformed to 3D using stereovision, connected into a 3D model, and displayed.
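The stereovision step described above, turning matched 2D pattern points from two calibrated cameras into 3D points, can be sketched with OpenCV's triangulation routine. The projection matrices and point correspondences are assumed to come from a prior calibration and pattern-decoding stage; the values below are placeholders.

    import numpy as np
    import cv2

    # 3x4 projection matrices of the two calibrated cameras (toy values)
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # Matched 2D pattern points seen by each camera, shape (2, N)
    pts1 = np.array([[100.0, 150.0], [120.0, 160.0]]).T
    pts2 = np.array([[95.0, 150.0], [114.0, 160.0]]).T

    # Triangulate to homogeneous 4xN coordinates, then dehomogenize
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
    X = (X_h[:3] / X_h[3]).T  # (N, 3) reconstructed 3D points

Repeating this for each rotation angle of the pedestal, with the known rotation applied, merges the per-view points into one model.
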
33

Entwicklung eines iterativen 3D-Rekonstruktionsverfahrens für die Kontrolle der Tumorbehandlung mit Schwerionen mittels der Positronen-Emissions-Tomographie / Development of an iterative 3D reconstruction method for monitoring heavy-ion tumour treatment by means of positron emission tomography

Lauckner, Kathrin January 1999 (has links)
At the Gesellschaft für Schwerionenforschung in Darmstadt, a therapy unit for heavy-ion cancer treatment has been established in collaboration with the Deutsches Krebsforschungszentrum Heidelberg, the Radiologische Universitätsklinik Heidelberg and the Forschungszentrum Rossendorf. For quality assurance, the dual-head positron camera BASTEI (Beta Activity meaSurements at the Therapy with Energetic Ions) has been integrated into this facility. It measures β+-activity distributions generated via nuclear fragmentation reactions within the target volume. BASTEI has about 4 million coincidence channels. The emission data are acquired in a 3D regime and stored in a list-mode data format. Typically, the counting statistics are two to three orders of magnitude lower than those of typical PET scans in nuclear medicine. Two iterative 3D reconstruction algorithms, based on ISRA (Image Space Reconstruction Algorithm) and MLEM (Maximum Likelihood Expectation Maximization) respectively, have been adapted to this imaging geometry. The major advantage of the developed approaches is the use of run-time Monte Carlo simulations to calculate the transition matrix. The influences of detector sensitivity variations, randoms, activity from outside the field of view and attenuation are corrected for the individual coincidence channels. Performance studies show that the implementation based on MLEM is the algorithm of merit. Since 1997 it has been applied successfully to patient data. The localization of distal and lateral gradients of the β+-activity distribution is guaranteed in the longitudinal sections. Outside of the longitudinal sections, the lateral gradients of the β+-activity distribution should be interpreted using a priori knowledge.
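For reference, the generic MLEM update that such implementations iterate is shown below, with a_{ij} the transition-matrix element (here computed by the run-time Monte Carlo simulations) giving the probability that an emission in voxel j is detected in coincidence channel i, y_i the measured counts in channel i, and lambda_j the activity estimate in voxel j. The per-channel corrections described above enter through a_{ij}; this is the textbook form, not a quotation from the thesis.

    \lambda_j^{(k+1)} = \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}
                        \sum_i a_{ij} \, \frac{y_i}{\sum_{j'} a_{ij'} \, \lambda_{j'}^{(k)}}
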
34

Generating 3D Scenes From Single RGB Images in Real-Time Using Neural Networks

Grundberg, Måns, Altintas, Viktor January 2021 (has links)
The ability to reconstruct 3D scenes of environments is of great interest in a number of fields, such as autonomous driving, surveillance, and virtual reality. However, traditional methods often rely on multiple cameras or sensor-based depth measurements to reconstruct 3D scenes accurately. In this thesis we propose an alternative, deep learning-based approach to 3D scene reconstruction for objects of interest, using nothing but single RGB images. We evaluate our approach using the Deep Object Pose Estimation (DOPE) neural network for object detection and pose estimation, and the NVIDIA Deep Learning Dataset Synthesizer for synthetic data generation. Using two unique objects, our results indicate that it is possible to reconstruct 3D scenes from single RGB images within an error margin of a few centimeters.
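A pose estimate of the kind DOPE produces (a translation plus an orientation quaternion per detected object) is typically turned into a scene placement by assembling a 4x4 homogeneous transform. The SciPy sketch below shows that step; the quaternion and translation values are placeholders, not outputs of the thesis experiments.

    import numpy as np
    from scipy.spatial.transform import Rotation

    # Hypothetical pose of one detected object in the camera frame:
    # position in meters, orientation as an (x, y, z, w) quaternion.
    t = np.array([0.10, -0.05, 0.80])
    q = np.array([0.0, 0.0, 0.7071, 0.7071])

    # Build the 4x4 camera-from-object transform used to place the
    # object's 3D model into the reconstructed scene.
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(q).as_matrix()
    T[:3, 3] = t

    # Any model vertex (homogeneous coordinates) maps into the scene:
    vertex = np.array([0.01, 0.02, 0.0, 1.0])
    scene_point = T @ vertex
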
35

Resection Process Map: A novel dynamic simulation system for pulmonary resection / 解剖学的肺切除における新しいシミュレーションシステム、RPMの開発

Tokuno, Junko 23 March 2023 (has links)
Kyoto University / New-system doctoral course / Doctor of Medical Science / Degree No. 甲第24477号 / 医博第4919号 / Call No. 新制||医||1062 (University Library) / Kyoto University Graduate School of Medicine, Major in Medicine / (Chief examiner) Prof. Yuji Nakamoto, Prof. Etsuro Hatano, Prof. Masaki Mandai / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Medical Science / Kyoto University / DFAM
36

Comparison of Image Generation and Processing Techniques for 3D Reconstruction of the Human Skull

Marinescu, Ruxandra 03 December 2001 (has links)
No description available.
37

A PDE method for patchwise approximation of large polygon meshes

Sheng, Y., Sourin, A., Gonzalez Castro, Gabriela, Ugail, Hassan January 2010 (has links)
Three-dimensional (3D) representations of complex geometric shapes, especially when they are reconstructed from magnetic resonance imaging (MRI) and computed tomography (CT) data, often result in large polygon meshes which require substantial storage for their handling, and normally have only one fixed level of detail (LOD). This can often be an obstacle to efficient data exchange and interactive work with such objects. We propose to replace such large polygon meshes with a relatively small set of coefficients of the patchwise partial differential equation (PDE) function representation. With this model, approximations of the original shapes can be rendered at any desired resolution at interactive rates. Our approach can work directly with any common 3D reconstruction pipeline, which we demonstrate by applying it to a large reconstructed medical data set with irregular geometry.
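The patchwise PDE representation in this line of work is commonly built on the elliptic Bloor-Wilson PDE; over a patch parameterized by (u, v), each surface patch X(u, v) satisfies the fourth-order equation below and is stored as the small set of coefficients of its analytic solution rather than as dense polygons. The equation is quoted as the customary form of the PDE method, with a a smoothing parameter weighting the two parametric directions, not as a formula from this specific paper.

    \left( \frac{\partial^2}{\partial u^2} + a^2 \frac{\partial^2}{\partial v^2} \right)^{2} \mathbf{X}(u, v) = \mathbf{0}
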
38

An improved effective method for generating 3D printable models from medical imaging

Rathod, Gaurav Dilip 16 November 2017 (has links)
Medical practitioners rely heavily on visualization of medical imaging to get a better understanding of the patient's anatomy. Most cancer treatment and surgery today are performed using medical imaging. Medical imaging is therefore of great importance to the medical industry. Medical imaging continues to depend heavily on a series of 2D scans, resulting in a series of 2D photographs being displayed using light boxes and/or computer monitors. Today, these 2D images are increasingly combined into 3D solid models using software. These 3D models can be used for improved visualization and understanding of the problem at hand, including fabricating physical 3D models using additive manufacturing technologies. Generating precise 3D solid models automatically from 2D scans is non-trivial. Geometric and/or topological errors are common, and often costly manual editing is required to produce 3D solid models that sufficiently reflect the actual underlying human geometry. These errors arise from the ambiguity of converting from 2D data to 3D data, and also from inherent limitations of the .STL file format used in additive manufacturing. This thesis proposes a new, robust method for automatically generating 3D models from 2D scanned data (e.g., computed tomography (CT) or magnetic resonance imaging (MRI)), where the resulting 3D solid models are specifically generated for use with additive manufacturing. This new method does not rely on complicated procedures such as contour evolution and geometric spline generation, but uses volume reconstruction instead. The advantage of this approach is that the original scan data values are kept intact longer, so that the resulting surface is more accurate. This new method is demonstrated using medical CT data of the human nasal airway system, resulting in physical 3D models fabricated via additive manufacturing. / Master of Science / Medical practitioners rely heavily on medical imaging to get a better understanding of the patient's anatomy. Most cancer treatment and surgery today are performed using medical imaging. Medical imaging is therefore of great importance to the medical industry. Medical imaging continues to depend heavily on a series of 2D scans, resulting in a series of 2D photographs being displayed using light boxes and/or computer monitors. With additive manufacturing technologies (also known as 3D printing), it is now possible to fabricate real-size physical 3D models of the human anatomy. These physical models enable surgeons to practice ahead of time, using a realistic, true-scale model, to increase the likelihood of a successful surgery. These physical models can potentially also be used to develop organ implants that are tailored specifically to each patient's anatomy. Generating precise 3D solid models automatically from 2D scans is non-trivial. Automated processing often causes geometric and topological (logical) errors, while manual editing is frequently too labor-intensive and time-consuming to be considered a practical solution. This thesis proposes a new, robust method for automatically generating 3D models from 2D scanned data (e.g., computed tomography (CT) or magnetic resonance imaging (MRI)), where the resulting 3D solid models are specifically generated for use with additive manufacturing. The advantage of this proposed method is that the resulting fabricated surfaces are more accurate.
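The volume-reconstruction route from a CT stack to a printable mesh can be sketched with off-the-shelf tools: extract an isosurface from the intensity volume with marching cubes, then export it as .STL. This is a generic illustration of that pipeline stage, not the thesis's own method; the volume and the isosurface threshold are assumed placeholders.

    import numpy as np
    from skimage import measure
    import trimesh

    # 3D intensity volume assembled from the 2D CT slices (toy data here)
    ct = np.random.rand(64, 64, 64)

    # Extract the surface at an assumed intensity threshold; operating on
    # the volume keeps the original scan values intact until this step.
    verts, faces, normals, values = measure.marching_cubes(ct, level=0.5)

    # Build a mesh and write the .STL consumed by additive manufacturing tools.
    mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
    mesh.export('airway_model.stl')
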
39

Development of Open-Source Gantry-Plus Robot Systems for Plant Science research

Kaundanya, Adwait Anand 19 December 2024 (has links)
Affordable and readily available automation options for plant research remain scarce; with such a system available, however, many research tasks could be streamlined. In this project, we demonstrate a prototype of such an open-source, low-cost, heterogeneous robotic system called Mini T-Rex. We combine two over-the-counter robots and leverage the ROS2 framework to control this heterogeneous system. This system provides the unique advantage of a sensor-to-plant method to capture multi-view images at any angle and distance within the workspace. We demonstrate how building a digital twin in ROS2 can help control a heterogeneous system via abstracted hardware control. We also describe I2GROW Oasis, a robotic system consisting of a remotely controlled robot with the ability to capture top-view images. In this thesis we describe the hardware and software design of both of these robotic systems. To use the system, plants can be grown on a growth bed or a hydroponic system below the Mini T-Rex robot, and the camera approaches each plant without contact thanks to the precise control of the robotic manipulator. We used the system to capture several large data sets of 3D phenotypic data for Solanum lycopersicum, Lactuca sativa, and Thlaspi. In conclusion, we have developed a 9-degree-of-freedom, fully open-source heterogeneous robotic system, called Mini T-Rex, capable of multi-view, camera-to-plant image capture for plant 3D model reconstruction. We show how to use gantry-like robots for phenotyping and how to create longitudinal datasets by automating these high-precision robotic systems. / Master of Science / Robots are widely used for automating tasks that are monotonous and require high precision. However, developing such application-specific robots is itself a complicated and tedious task. The many aspects involved (mechanical design, robot fabrication, and software design) make it difficult for any individual or small group to develop such robots. To support plant researchers who may not have any experience in designing robots, we have developed a general robotic system that can be easily assembled and adapted for applications. In this thesis, we discuss how this robotic system can be made using over-the-counter robots, and how the software makes it intelligent enough to navigate its course without collisions and to control the robots as parts of one system rather than as two robots controlled individually. This enables using the vendor-provided software rather than designing the entire robot from scratch. We also show another robot kit, the FarmBot, which can be assembled and adapted to the particular use case of monitoring hydroponically grown crops. We demonstrate how this robot can be used as part of complex systems and how it can be automated to collect images to monitor plant growth. We describe in detail how a user can go from computer-aided design (CAD) to hardware control of the robot, and how this system can be used for phenotyping of plants, namely Early Girl tomato, lettuce, and pennycress.
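Controlling two vendor-provided robots as one system, as described above, amounts to putting both behind a common ROS2 interface. The minimal rclpy sketch below publishes a joint-state target to a single abstracted topic that a driver node for either robot could subscribe to; the topic and joint names are illustrative assumptions, not the Mini T-Rex's actual interface.

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import JointState

    class GantryCommander(Node):
        """Publishes abstracted joint targets for a heterogeneous gantry+arm system."""
        def __init__(self):
            super().__init__('gantry_commander')
            # Hypothetical topic; a driver node maps it to vendor commands.
            self.pub = self.create_publisher(JointState, '/mini_t_rex/joint_targets', 10)

        def send_target(self, names, positions):
            msg = JointState()
            msg.header.stamp = self.get_clock().now().to_msg()
            msg.name = names
            msg.position = positions
            self.pub.publish(msg)

    def main():
        rclpy.init()
        node = GantryCommander()
        # Move gantry axes and one arm joint together, as one logical robot.
        node.send_target(['gantry_x', 'gantry_y', 'arm_joint_1'], [0.25, 0.40, 1.57])
        rclpy.shutdown()

    if __name__ == '__main__':
        main()
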
40

Investigations of stereo setup for Kinect

Manuylova, Ekaterina January 2012 (has links)
The main purpose of this work is to investigate the behavior of the recently released Microsoft Kinect sensor, whose properties go beyond those of ordinary cameras. Normally, two cameras are required to create a 3D reconstruction of a scene; the Kinect device, thanks to its infrared projector and sensor, allows the same type of reconstruction with only one device. However, the depth images generated by Kinect's infrared laser projector and monochrome sensor can contain undefined values. Therefore, in addition to other investigations, this project presents an idea for improving the quality of the depth images. The main aim of this work, however, is to perform a reconstruction of the scene based on the color images from a pair of Kinects, which is compared with the results generated using depth information from a single Kinect. In addition, the report describes how to check that all the performed calculations were done correctly. All the algorithms used in the project, as well as the results achieved, are described and discussed in separate chapters of this report.
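One common way to repair the undefined values in a Kinect depth image, in the spirit of the improvement mentioned above, is to inpaint the holes from their neighborhoods. The OpenCV sketch below treats zero-depth pixels as a mask and fills them; reading the depth map from a file and the 8-bit conversion (required by cv2.inpaint) are assumptions of this illustration, not the method used in the thesis.

    import numpy as np
    import cv2

    # 16-bit depth map; zeros mark undefined measurements (assumed file name)
    depth = cv2.imread('kinect_depth.png', cv2.IMREAD_UNCHANGED).astype(np.uint16)

    # Mask of invalid pixels to be filled
    holes = (depth == 0).astype(np.uint8)

    # cv2.inpaint expects 8-bit input, so scale the depth range down
    depth8 = cv2.convertScaleAbs(depth, alpha=255.0 / max(int(depth.max()), 1))
    filled8 = cv2.inpaint(depth8, holes, 5, cv2.INPAINT_TELEA)

    # Rescale to the original range (coarse, adequate for visual checks)
    filled = (filled8.astype(np.float32) * depth.max() / 255.0).astype(np.uint16)
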
