181 |
Nástroj pro 3D rekonstrukci z dat z více typů senzorů / Scalable Multisensor 3D Reconstruction Framework
Šolony, Marek, January 2017 (has links)
Realistic 3D models of environments are useful in many fields, from the inspection of natural structures or buildings, robot navigation, and map building to the film industry, where they support scene surveying and the integration of special effects. When capturing such a scene, it is common to use several types of sensors, such as monocular, stereoscopic, or spherical cameras and 360° laser scanners, to achieve broad coverage of the scene. The advantage of laser scanners and spherical cameras lies precisely in capturing the entire surroundings in a single image. Conventional monocular cameras, by contrast, make it easy to cover occluded parts of the scene or to capture details. The 3D reconstruction process consists of three steps: data acquisition, data processing and registration, and reconstruction refinement. The contribution of this dissertation is a detailed analysis of image registration methods for spherical and planar cameras and the implementation of a unified representation of sensors and measurements for 3D reconstruction that enables reconstruction from all available data. The main advantage of the proposed unified representation is that it allows all sensor poses and scene points to be optimized jointly by applying nonlinear optimization methods, achieving better reconstruction accuracy without a significant increase in computational cost.
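A minimal sketch, in Python, of the kind of unified joint refinement of sensor poses and scene points described above: a single residual function handles both planar (pinhole) and spherical observations, and a generic nonlinear least-squares solver optimizes everything together. The function names, observation format, and fixed focal length are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of a unified bundle adjustment over mixed sensor types.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def transform(pose, points):
    """Apply a 6-DoF pose (Rodrigues rotation + translation) to Nx3 world points."""
    rvec, tvec = pose[:3], pose[3:]
    return Rotation.from_rotvec(rvec).apply(points) + tvec


def project_pinhole(p_cam, f):
    """Planar camera: perspective division with an assumed focal length f."""
    return f * p_cam[:, :2] / p_cam[:, 2:3]


def project_spherical(p_cam):
    """Spherical camera: longitude/latitude of the unit bearing vector."""
    d = p_cam / np.linalg.norm(p_cam, axis=1, keepdims=True)
    return np.column_stack([np.arctan2(d[:, 0], d[:, 2]),
                            np.arcsin(np.clip(d[:, 1], -1.0, 1.0))])


def residuals(x, n_cams, n_pts, observations, f=500.0):
    """observations: iterable of (cam_id, pt_id, sensor_kind, measured_2d)."""
    poses = x[:6 * n_cams].reshape(n_cams, 6)
    points = x[6 * n_cams:].reshape(n_pts, 3)
    res = []
    for cam_id, pt_id, kind, measured in observations:
        p_cam = transform(poses[cam_id], points[pt_id][None, :])
        predicted = (project_pinhole(p_cam, f) if kind == "pinhole"
                     else project_spherical(p_cam))[0]
        res.append(predicted - measured)
    return np.concatenate(res)


# Usage: pack initial poses and points into one vector and refine them jointly.
# A real system would exploit the sparse Jacobian structure; the point here is
# that a single residual vector covers all sensor types at once.
# x0 = np.hstack([initial_poses.ravel(), initial_points.ravel()])
# solution = least_squares(residuals, x0, args=(n_cams, n_pts, observations))
```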
|
182 |
Change is Deep: A Remote Sensing Perspective
Wold, Simon; Sandin, Simon, January 2023 (has links)
Change detection (CD) has, in recent years, shown promising results in remote sensing (RS). The development of deep learning for CD (DLCD) has, in even more recent years, taken change detection to another level, and it has become more widely researched. However, this research depends on publicly available datasets that have been manually annotated for the task of CD. Manual annotation is cumbersome, and the resulting datasets often do not include all types of change. In this thesis, the generalizability of a model trained on a widely used public dataset to different areas and different change types is analyzed. The thesis also investigates how 3D information from Maxar Technologies 3D models can be used to automatically create new, more general datasets for CD with either binary or non-binary outputs. Access to large amounts of satellite imagery together with 3D information enables the creation of more general datasets that can capture more types of change. The thesis concludes that a model trained on the publicly available dataset does not generalize to other areas or other types of change. Models trained on the automatically generated datasets yield relatively good results, which indicates that using 3D information to automatically create large datasets is a valid method for CD. Even non-binary approaches show promising results, which makes it possible to gain more practical information about the change in an area. While the thesis presents encouraging results, further work could improve both the generalization of the models and the dataset generation.
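A minimal sketch, under assumptions, of how change labels could be derived automatically from 3D information: two co-registered digital surface models (DSMs), the kind of height data a Maxar-style 3D model can provide, are differenced and thresholded. The thresholds and the three-class scheme below are illustrative choices, not the thesis pipeline.

```python
# Hypothetical DSM-difference labeling for automatic change-detection datasets.
import numpy as np


def change_labels(dsm_before, dsm_after, binary_thresh=2.0, class_edges=(-2.0, 2.0)):
    """Return a binary change mask and a coarse non-binary label map
    (0 = height loss, 1 = unchanged, 2 = height gain) from per-pixel
    height differences in metres."""
    diff = dsm_after - dsm_before
    binary = (np.abs(diff) > binary_thresh).astype(np.uint8)   # 1 = changed
    nonbinary = np.digitize(diff, class_edges).astype(np.uint8)
    return binary, nonbinary


# Usage: pair the label maps with the corresponding before/after satellite
# images to assemble a training set without manual annotation.
# mask, classes = change_labels(dsm_t0, dsm_t1)
```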
|
183 |
3D-Reconstruction of the Common Murre / 3D-Rekonstruering av Sillgrissla
Hägerlind, Johannes, January 2023 (has links)
Automatic 3D reconstruction of birds can aid researchers in studying their behavior. Recently, there have been attempts to reconstruct a variety of birds from single-view images. However, the common murre's appearance differs from that of the birds studied so far, and recent studies have focused on side views. This thesis studies the 3D reconstruction of the common murre from single top-view images. A template mesh is first optimized to fit a 3D scan, and the result is used to optimize a species-specific mean shape from side-view images annotated with keypoints and silhouettes. The resulting mean mesh is then used to initialize the optimization for top-view images. Using a mask loss, a pose prior loss, and a bone-length loss based on a mean vector from the side-view images improves the 3D reconstruction as rated by human evaluators. Furthermore, the intersection over union (IoU) and percentage of correct keypoints (PCK) metrics, although used by other authors, are insufficient in a single-view top-view setting.
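For reference, the two metrics mentioned above in their standard form, as a short Python sketch; the PCK threshold alpha and the bounding-box normalization are assumed conventions and may differ from those used in the thesis.

```python
# Standard-definition sketches of silhouette IoU and PCK.
import numpy as np


def iou(pred_mask, gt_mask):
    """Intersection over union of two boolean silhouette masks."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else 1.0


def pck(pred_kps, gt_kps, visible, bbox_size, alpha=0.1):
    """Fraction of visible keypoints whose 2D error is below alpha * bbox_size."""
    errors = np.linalg.norm(pred_kps - gt_kps, axis=1)
    correct = (errors < alpha * bbox_size) & visible
    return correct.sum() / max(int(visible.sum()), 1)
```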
|
184 |
Automatic Registration of Optical Aerial Imagery to a LiDAR Point Cloud for Generation of Large Scale City Models
Abayowa, Bernard Olushola, 30 August 2013 (has links)
No description available.
|
185 |
Application Of In Vivo Flow Profiling To Stented Human Coronary Arteries
Nanda, Hitesh, 01 January 2004 (has links)
The study applies an in vivo technique for profiling hemodynamics and the wall shear stress (WSS) distribution in human coronary arteries. The methodology fuses 2D intravascular ultrasound (IVUS) images with biplane angiograms to reconstruct the 3D arterial geometry. This geometry is then used in a computational fluid dynamics (CFD) module for flow modeling, with the Walburn-Schneck constitutive relation representing the non-Newtonian blood rheology. The methodology is applied to study the relationship between WSS and neointimal hyperplasia (NIH) in two groups of diabetic patients treated with bare-metal stents (BMS) and sirolimus-eluting stents (SES), respectively. The stent assignments were blinded until the end of the study, and the analysis was repeated for the same patients after 9 months. The predicted WSS ranged from 0.1 to 8 N/m² and was categorized into five classes: low (< 1 N/m²), low-normal (1-2 N/m²), normal (2-3 N/m²), high-normal (3-4 N/m²), and high (> 4 N/m²). The results indicate NIH in 5 of the patients treated with BMS and none in the SES cases, and these findings correlate with the predicted WSS distribution.
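A small illustrative helper, not part of the study, mapping predicted WSS values onto the five reported bands.

```python
# Hypothetical helper that bins WSS values (N/m^2) into the five classes above.
import numpy as np

WSS_BANDS = ["low", "low-normal", "normal", "high-normal", "high"]
WSS_EDGES = [1.0, 2.0, 3.0, 4.0]  # class boundaries in N/m^2


def classify_wss(wss_values):
    """Return the band label for each WSS value."""
    indices = np.digitize(np.asarray(wss_values, dtype=float), WSS_EDGES)
    return [WSS_BANDS[i] for i in indices]


# classify_wss([0.4, 2.5, 6.0]) -> ['low', 'normal', 'high']
```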
|
186 |
3D Reconstruction of Sorghum Plants for High-Throughput Phenotyping
Mathieu Gaillard (14199137), 01 December 2022 (has links)
High-throughput phenotyping is a recent multidisciplinary research field that investigates the accurate acquisition and analysis of multidimensional phenotypes on large and diverse populations of plants. High-throughput phenotyping lies at the crossroads of plant biology and computer vision and profits from advances in plant modeling, plant reconstruction, and plant structure understanding. So far, most of the data analysis is done on 2D images, yet plants are inherently 3D shapes, and measurements made in 2D can be biased. For example, leaf angles change when they are reprojected into 2D images. Although some research works investigate the 3D reconstruction of plants, high-throughput phenotyping is still limited in its ability to automatically measure a large population of plants in 3D. In fact, plants are difficult to reconstruct in 3D because they are self-similar, have highly irregular geometries, and exhibit self-occlusion.

In this dissertation, we investigate the research question of whether we can design and validate high-throughput phenotyping algorithms that take advantage of the 3D nature of plants to outperform existing algorithms based on 2D images. We present four contributions that address this question. First, we show a voxel 3D reconstruction pipeline and measure phenotypic traits related to canopy architecture over a population of 351 sorghum plants. Second, we show a machine-learning-based skeletonization and segmentation algorithm for sorghum plants, which automatically learns from a set of 100 manually annotated plants. Third, we estimate individual leaf angles over a population of 1,098 sorghum plants. Finally, we present a sparse 3D reconstruction algorithm that can triangulate thousands of points of interest from up to 15 views without correspondences, even in the presence of noise and occlusion. We show that our approach outperforms single-view methods by using multiple views for sorghum leaf counting.

Progress made towards improving high-throughput phenotyping has the potential to benefit society through a better adaptation of crops to climate change, which will limit food insecurity in the world.
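As an illustration of the voxel 3D reconstruction pipeline mentioned in the first contribution, here is a minimal silhouette-based voxel carving sketch in Python; it assumes calibrated cameras given as 3x4 projection matrices and binary plant silhouettes, and it is not the dissertation's implementation.

```python
# Hypothetical silhouette-based voxel carving for plant reconstruction.
import numpy as np


def carve(voxel_centers, cameras, silhouettes):
    """Keep the voxels whose projection falls inside every silhouette.

    voxel_centers: (N, 3) world coordinates of candidate voxels.
    cameras:       list of 3x4 projection matrices P = K [R | t].
    silhouettes:   list of boolean masks (H x W), one per camera.
    """
    keep = np.ones(len(voxel_centers), dtype=bool)
    homogeneous = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for P, sil in zip(cameras, silhouettes):
        proj = homogeneous @ P.T                        # (N, 3) homogeneous pixels
        uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
        h, w = sil.shape
        inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = sil[uv[inside, 1], uv[inside, 0]]  # mask indexed (row, col)
        keep &= hit
    return voxel_centers[keep]
```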
|
187 |
Microstructure Changes In Solid Oxide Fuel Cell Anodes After Operation, Observed Using Three-Dimensional Reconstruction And Microchemical Analysis
Parikh, Harshil R., 09 February 2015 (links)
No description available.
|
188 |
Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models
Diskin, Yakov, 03 June 2015 (has links)
No description available.
|
189 |
DATA REGISTRATION WITHOUT EXPLICIT CORRESPONDENCE FOR ADJUSTMENT OF CAMERA ORIENTATION PARAMETER ESTIMATION
Barsai, Gabor, 20 October 2011 (has links)
No description available.
|
190 |
Fast and Scalable Structure-from-Motion for High-precision Mobile Augmented Reality Systems
Bae, Hyojoon, 24 April 2014 (has links)
A key problem in mobile computing is providing people access to the necessary cyber-information associated with their surrounding physical objects. Mobile augmented reality is one of the emerging techniques that address this problem by overlaying the cyber-information associated with real-world physical objects on those objects' imagery. As a consequence, many mobile augmented reality approaches have been proposed to identify and visualize relevant cyber-information on users' mobile devices by intelligently interpreting users' 3D positions and orientations and their associated surroundings. However, existing approaches for mobile augmented reality primarily rely on Radio Frequency (RF) based location tracking technologies (e.g., Global Positioning Systems or Wireless Local Area Networks), which typically do not provide sufficient precision in RF-denied areas or require additional hardware and custom mobile devices.
To remove the dependency on external location tracking technologies, this dissertation presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on top of imagery of the associated physical objects. The approach does not require RF-based location tracking modules, external hardware attachments on the mobile device, or optical/fiducial markers for localizing the user's position. Rather, the user's 3D location and orientation are derived automatically and purely by comparing images from the user's mobile device to a 3D point cloud model generated from a set of pre-collected photographs.
A further challenge of mobile augmented reality is creating 3D cyber-information and associating it with real-world physical objects, especially using the limited 2D user interfaces of standard mobile devices. To address this challenge, this research provides a new image-based 3D cyber-physical content authoring method designed specifically for the limited screen sizes and capabilities of commodity mobile devices. This new approach not only provides a method for creating 3D cyber-information with standard mobile devices but also automatically associates user-driven cyber-information with real-world physical objects in 3D.
Finally, this dissertation addresses a key challenge of scalability for mobile augmented reality. In general, mobile augmented reality must work regardless of users' location and environment, both in terms of physical scale, such as the size of objects, and in terms of cyber-information scale, such as the total number of cyber-information entities associated with physical objects. However, many existing mobile augmented reality approaches have mainly been tested on limited real-world use cases and face challenges in scaling. By designing fast direct 2D-to-3D matching algorithms for localization and applying a caching scheme, the proposed research consistently supports near-real-time localization and information association regardless of users' location, the size of physical objects, and the number of cyber-physical information items.
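A minimal sketch, using standard OpenCV calls, of the direct 2D-to-3D matching and pose recovery described above: local features from the query image are matched against descriptors attached to a pre-built 3D point cloud, and the camera pose is recovered with PnP inside RANSAC. The FLANN index below stands in for the dissertation's cached, indexed k-d tree, and all parameter values are assumptions rather than the actual HD4AR implementation.

```python
# Hypothetical direct 2D-to-3D localization pipeline (not the HD4AR code).
import cv2
import numpy as np


def localize(image, model_points, model_descriptors, K):
    """Estimate the camera pose of a query image against a 3D point cloud model.

    model_points:      (N, 3) 3D points of the reconstructed scene.
    model_descriptors: (N, 128) SIFT descriptors attached to those points.
    K:                 3x3 camera intrinsic matrix.
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)

    matcher = cv2.FlannBasedMatcher()                  # approximate k-d tree search
    matches = matcher.knnMatch(descriptors,
                               np.asarray(model_descriptors, dtype=np.float32), k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance]
    if len(good) < 4:
        return None                                    # not enough 2D-to-3D matches

    pts_2d = np.float32([keypoints[m.queryIdx].pt for m in good])
    pts_3d = np.float32([model_points[m.trainIdx] for m in good])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, None)
    return (rvec, tvec) if ok else None
```

Caching the search index across queries, rather than rebuilding it per image, is what keeps this step near real time as the model grows.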
To realize all of these research objectives, five research methods are developed and validated: 1) Hybrid 4-Dimensional Augmented Reality (HD4AR), 2) plane-transformation-based 3D cyber-physical content authoring from a single 2D image, 3) cached k-d tree generation for fast direct 2D-to-3D matching, 4) a double-stage matching algorithm with a single indexed k-d tree, and 5) k-means clustering of 3D physical models with geo-information. After discussing each solution in technical detail, the perceived benefits and limitations of the research are discussed along with validation results. / Ph. D.
|