101 |
THREE-DIMENSIONAL RECONSTRUCTION OF THE ALLOYING PROCESS OF GOLD-SILVER NANOPARTICLES BY SMALL-ANGLE X-RAY SCATTERING. Wu, Siyu (ORCID 0000-0002-0199-5471). January 2023.
Alloy nanoparticles have been extensively studied for decades. However, the synthesis and characterization of alloy nanoparticles still pose significant challenges, leading to an increasing demand for in situ characterization techniques. Small-angle X-ray scattering (SAXS) is a powerful method for structural analysis of nanoparticles. Because the SAXS signal is essentially the Fourier transform of the electron density distribution, it provides structural information for the entire ensemble of nanoparticles. The development of SAXS has been driven by significant advances in synchrotron X-ray sources and data processing methods, leading to the 3D-SAXS method, which enables the reconstruction of the 3D structures of particles from SAXS profiles.
Although SAXS has the potential to be a powerful tool for investigating the internal structures of alloy nanoparticles, its application is hindered by polydispersity, which causes smearing effects that complicate the geometry recovery process. This dissertation presents a novel approach to overcome the problem of polydispersity in SAXS data analysis, thus demonstrating the utility of SAXS in investigating the internal electron density distributions of alloy nanoparticles.
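For context on why polydispersity smears the signal, the sketch below illustrates, under simplifying assumptions (dilute, non-interacting, homogeneous spheres), how the scattered intensity follows from the Fourier transform of a uniform electron density and how averaging over a size distribution washes out the sharp form-factor minima. It is an illustrative sketch, not code from the dissertation.

```python
import numpy as np

def sphere_form_factor(q, R):
    """Scattering amplitude of a homogeneous sphere of radius R (the analytic
    Fourier transform of a uniform electron density)."""
    qR = np.maximum(q * R, 1e-12)          # guard against division by zero at q = 0
    return 3.0 * (np.sin(qR) - qR * np.cos(qR)) / qR**3

def polydisperse_intensity(q, radii, weights):
    """Intensity of a dilute ensemble: a number-weighted average of
    single-sphere intensities, which smears the sharp form-factor minima."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    I = np.zeros_like(q)
    for R, w in zip(radii, weights):
        V = (4.0 / 3.0) * np.pi * R**3      # particle volume enters the amplitude
        I += w * (V * sphere_form_factor(q, R))**2
    return I

q = np.linspace(0.005, 0.5, 500)                  # scattering vector, 1/angstrom
radii = np.linspace(40, 60, 21)                   # a 40-60 angstrom size spread
weights = np.exp(-0.5 * ((radii - 50) / 5)**2)    # Gaussian size distribution
I_smeared = polydisperse_intensity(q, radii, weights)
```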
In Chapter 2, the SharPy algorithm is introduced as a size-refocusing program that reduces the smearing effect caused by polydispersity in SAXS data. SharPy uses a penalized iterative regression approach to fit the pair distance distribution function (PDDF) together with an estimated size distribution, and it recovers detailed shape information about the nanoparticles from the smeared SAXS signal under a variety of conditions.
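SharPy itself is not reproduced here. As a hedged illustration of the underlying idea, penalized regression of a PDDF against the measured intensity, the sketch below fits p(r) on a grid by ridge-style least squares using the standard relation I(q) = 4π ∫ p(r) sin(qr)/(qr) dr with a second-derivative smoothness penalty. The grid size, penalty weight, and crude non-negativity step are illustrative choices, and the size-distribution refinement that distinguishes SharPy is omitted.

```python
import numpy as np

def fit_pddf(q, I_obs, d_max, n_r=60, lam=1e-3):
    """Penalized least-squares estimate of the pair distance distribution
    function p(r) from an intensity profile, assuming
    I(q) = 4*pi * integral_0^Dmax p(r) * sin(qr)/(qr) dr.
    lam weights a second-derivative (roughness) penalty."""
    r = np.linspace(1e-6, d_max, n_r)
    dr = r[1] - r[0]
    qr = np.outer(q, r)
    A = 4.0 * np.pi * (np.sin(qr) / qr) * dr      # design matrix, shape (n_q, n_r)

    # second-difference operator used as the roughness penalty
    D = np.diff(np.eye(n_r), n=2, axis=0)

    # ridge-style normal equations: (A^T A + lam D^T D) p = A^T I
    lhs = A.T @ A + lam * (D.T @ D)
    rhs = A.T @ I_obs
    p = np.linalg.solve(lhs, rhs)
    return r, np.clip(p, 0.0, None)               # crude non-negativity enforcement
```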
Chapter 3 investigates simulated SAXS profiles of AuAg core-shell nanoparticles with varying size distributions, core-shell ratios, and degrees of alloying. It demonstrates the capability of SAXS to observe the electron density distribution of AuAg core-shell structures. These findings highlight the potential of SAXS as a reliable method for investigating the internal structures of alloy nanoparticles.
Chapter 4 focuses on synthesizing and characterizing AuAg nanoparticles. Their SAXS profiles and PDDF analysis demonstrate that SAXS can distinguish between homogeneous and core-shell nanoparticle structures. In this chapter, the SharPy algorithm is applied to real experimental data for the first time, demonstrating its ability to reveal the core-shell structure of a polydisperse nanoparticle system.
Chapter 5 investigates the evolution of alloying AuAg nanoparticles through a combination of SAXS/PDDF analysis, 3D reconstruction, and molecular dynamics (MD) simulation. The resulting 3D reconstruction with electron density mapping provides a straightforward visualization of how the electron density distribution of the AuAg nanoparticles evolves during alloying.
The success of these SAXS experiments rests on the 3D-SAXS pipeline developed here, which combines SharPy with 3D reconstruction programs and makes 3D SAXS a promising alternative to electron microscopy for visualizing the morphology of nanoparticle systems. / Chemistry
|
102 |
Advanced Processing of Scanning Electron Microscopy Images in 2-D and 3-D Datasets / Advanced Electron Microscopy Techniques for Large-Area Stitching Applications. Khoonkari, Nasim. January 2023.
In this thesis, we present three novel algorithms. The first is a method for identifying numerical landmarks (a term coined in this thesis). The second projects image regions onto the x- and y-axes and matches the resulting 1D profiles to determine an overall 2D translation for use in registration. The third aligns SEM images of successive layers of a semiconductor device by first extracting the positions of vias in the lower layer and then searching for the translation under which all, or most, of the vias connect to metalization in the upper layer. / To acquire high-resolution Scanning Electron Microscopy (SEM) images over wide areas, we must acquire several images "tiling" the surface and assemble them into a single composite image, a process called image stitching. While stitching is now routine for some applications, SEM mosaics of semiconductors pose several challenges: (1) by design, the image features (wire, via, and dielectric) are highly repetitive, (2) the overlap between image tiles is small, (3) sample charging causes intensity variation between captures of the same region, and (4) machine instability causes non-linear deformation within and between tiles.
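As a hedged sketch of the second algorithm described above (not the thesis implementation), the snippet below sums each tile over its rows and columns and cross-correlates the resulting 1D profiles to recover an integer 2D translation between overlapping tiles.

```python
import numpy as np

def shift_1d(a, b):
    """Integer shift that best aligns profile b to profile a, estimated by
    FFT-based cross-correlation of mean-removed signals."""
    a = a - a.mean()
    b = b - b.mean()
    n = len(a) + len(b) - 1
    corr = np.fft.irfft(np.fft.rfft(a, n) * np.conj(np.fft.rfft(b, n)), n)
    lag = int(np.argmax(corr))
    return lag if lag < len(a) else lag - n       # wrap large indices to negative lags

def projection_register(img_a, img_b):
    """2D translation of img_b relative to img_a from two 1D matches:
    column sums give the x shift, row sums give the y shift, so only
    O(rows + cols) values are compared instead of every pixel."""
    dx = shift_1d(img_a.sum(axis=0), img_b.sum(axis=0))
    dy = shift_1d(img_a.sum(axis=1), img_b.sum(axis=1))
    return dx, dy
```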
In this study, we compare the accuracy and computational cost of three well-known pixel-based techniques: Fast Fourier Transform (FFT), Sum of Squared Differences (SSD), and Normalized Cross Correlation (NCC). We compare the well-known 2D algorithms as well as novel projection-onto-1D versions. The latter reduce the computational complexity from O(n^2) to O(n), where n is the number of pixels, without loss of accuracy and, in some cases, with greater accuracy. Another approach to reducing the computational complexity of image alignment is to compare isolated landmarks rather than pixels. Semiconductor images contain no natural fiducials, and adding them would destroy the information required to reconstruct the circuits, so we introduce a new class of landmarks which we call numerical landmarks. Related to Harris corners, these numerical landmarks are insensitive to brightness variations and noise. Finally, we consider the alignment problem between layers of image mosaics. Unlike the "horizontal" directions, the vertical dimension is only sparsely sampled, so image features and landmarks cannot be used for alignment. Instead, we rely on the relationship between vias (through-plane metalization) and wires (in-plane metalization), and we have developed a novel algorithm that matches vias in the lower layer with wires above and uses these matches to align subimages. / Thesis / Doctor of Philosophy (PhD) / Applications in materials science often require imaging semiconductor computer chips at very high resolution. Even cameras with tens of millions of pixels may not provide enough resolution over a wide field of view. One approach is to acquire several images of parts of the sample at high magnification and assemble them into a single composite image, preserving high resolution over a wide area. The algorithms that assemble the composite image are known as tiling or mosaicing, and the whole process, including image registration, is known as image stitching. In this thesis, we develop specialized algorithms suited to the 2D stitching of semiconductor images, including the generalization to 3D. This case is challenging because slight alignment errors may completely change the reconstructed circuit, and the images contain repeated patterns (such as many parallel wires) as well as brightness changes and distortions caused by the scanning device.
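The via-to-wire alignment can be pictured as scoring candidate translations by how many lower-layer via centers land on upper-layer metalization. The sketch below is one simple way to do that, with a hypothetical binary metal mask and an exhaustive integer search standing in for whatever search strategy the thesis actually uses.

```python
import numpy as np

def score_translation(via_xy, metal_mask, dx, dy):
    """Count how many via centers (x, y) from the lower layer land on
    metalization in the upper layer after shifting by (dx, dy)."""
    x = np.round(via_xy[:, 0] + dx).astype(int)
    y = np.round(via_xy[:, 1] + dy).astype(int)
    h, w = metal_mask.shape
    inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)
    return int(metal_mask[y[inside], x[inside]].sum())

def best_layer_translation(via_xy, metal_mask, search=20):
    """Exhaustive search over integer shifts in [-search, search]^2 for the
    translation that connects the most vias to upper-layer metal."""
    best = (0, 0, -1)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            s = score_translation(via_xy, metal_mask, dx, dy)
            if s > best[2]:
                best = (dx, dy, s)
    return best   # (dx, dy, number_of_connected_vias)
```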
|
103 |
Contributions To Automatic Particle Identification In Electron Micrographs: Algorithms, Implementation, And Applications. Singh, Vivek. 01 January 2005.
Three-dimensional reconstruction of large macromolecules such as viruses at resolutions below 8-10 Å requires a large set of projection images, and the particle identification step becomes a bottleneck. Several automatic and semi-automatic particle detection algorithms have been developed over the years. We present a general technique designed to automatically identify the projection images of particles. The method uses Markov random field modelling of the projected images and involves preprocessing of the electron micrographs, followed by image segmentation and post-processing to box the particle projections. Because extracting hundreds of thousands of particle projections is computationally demanding, parallel processing becomes essential, and we present parallel algorithms and load-balancing schemes for our methods. The lack of a standard benchmark for the relative performance analysis of particle identification algorithms has prompted us to develop a benchmark suite. We also present a collection of metrics for the relative performance analysis of particle identification algorithms on the micrograph images in the suite and discuss the design of the benchmark suite.
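The MRF segmentation itself is too involved for a short snippet, but the final boxing stage can be illustrated as follows: threshold a per-pixel particle probability map (here a hypothetical stand-in for the segmentation output), label the connected components, and emit bounding boxes for components of plausible size. The threshold and size limits are placeholders.

```python
import numpy as np
from scipy import ndimage

def box_particles(prob_map, threshold=0.5, min_area=200, max_area=20000):
    """Turn a per-pixel particle probability map into bounding boxes.
    Boxes far smaller or larger than a plausible particle projection
    are rejected as noise or aggregates (bounding-box area test)."""
    mask = prob_map > threshold
    labels, _ = ndimage.label(mask)                 # connected components
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        area = (sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start)
        if min_area <= area <= max_area:
            boxes.append((sl[1].start, sl[0].start,  # x, y of the box corner
                          sl[1].stop - sl[1].start,  # width
                          sl[0].stop - sl[0].start)) # height
    return boxes
```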
|
104 |
Evaluation of Digital Holographic Reconstruction Techniques for Use in One-shot Multi-angle Holographic Tomography. Liu, Haipeng. 26 August 2014.
No description available.
|
105 |
ANALYSIS OF VERY LARGE SCALE IMAGE DATA USING OUT-OF-CORE TECHNIQUE AND AUTOMATED 3D RECONSTRUCTION USING CALIBRATED IMAGES. Hassan Raju, Chandrashekara. 28 September 2007.
No description available.
|
106 |
Computer Vision for Quarry Applications. Christie, Gordon A. 11 June 2013.
This thesis explores the use of computer vision to facilitate three different processes of a quarry's operation. The first is the blasting process, in which operators determine where to drill in order to execute an efficient and safe blast. Having an operator manually determine the drilling angles and positions can lead to inefficient and dangerous blasts. Using two cameras oriented vertically and separated by a fixed baseline, Structure from Motion techniques can create a scaled 3D model of a bench, which can then be analyzed to provide operators with borehole locations and drilling angles relative to fixed reference targets.
The second process explored is crushing, where the rocks pass through different crushers that reduce them to smaller sizes before they are dropped onto a moving conveyor belt. The maximum dimension of the rocks exiting each crusher should not exceed a size threshold specific to that crusher. This thesis presents a 2D vision system that estimates the size distribution of the rocks by segmenting them in each image; the distribution is based on the maximum dimension of each rock, measured in pixels and converted to inches.
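As an illustration of the pixel-to-inch conversion (not the thesis implementation), one simple route is to take each segmented rock's minimum-area bounding rectangle and scale its longer side by a known inches-per-pixel factor, assumed here to come from the fixed camera geometry over the conveyor.

```python
import cv2
import numpy as np

def rock_max_dimensions(binary_mask, inches_per_pixel):
    """Maximum dimension (in inches) of each segmented rock in a binary mask,
    taken as the longer side of its minimum-area bounding rectangle.
    Uses the OpenCV 4.x findContours signature."""
    contours, _ = cv2.findContours(binary_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    sizes = []
    for c in contours:
        if cv2.contourArea(c) < 50:          # ignore tiny specks
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(c)
        sizes.append(max(w, h) * inches_per_pixel)
    return sizes
```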
The third process explored is stockpiling, where the final product is piled up to form stockpiles. For inventory purposes, operators often estimate the size of a stockpile manually. This thesis presents a vision system that provides a more accurate estimate of stockpile size by using Structure from Motion techniques to create a 3D reconstruction. User interaction helps identify the points in the resulting point cloud that belong to the stockpile, and these points are then used to estimate the volume. / Master of Science
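A minimal sketch of the volume step, assuming the user-selected stockpile points are already in metric units with the ground at z = 0: rasterize the footprint into square cells and sum cell area times the tallest point seen in each cell. This is an illustration, not the thesis implementation.

```python
import numpy as np

def stockpile_volume(points, cell=0.25):
    """Approximate volume (m^3) of a stockpile from its 3D points.
    Assumes metric units and z = 0 as the ground plane; the footprint is
    rasterized into square cells and each cell contributes
    cell_area * max height observed in that cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    heights = {}
    for i, j, h in zip(ix, iy, z):
        key = (i, j)
        if h > heights.get(key, 0.0):
            heights[key] = h                 # keep the tallest point per cell
    return cell * cell * sum(heights.values())
```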
|
107 |
Sequential Motion Estimation and Refinement for Applications of Real-time Reconstruction from Stereo Vision. Stefanik, Kevin Vincent. 10 August 2011.
This thesis presents a new approach to the feature-matching problem for 3D reconstruction that takes advantage of GPS and IMU data along with a pre-calibrated stereo camera system; the pose estimates and calibration are expected to increase feature-matching speed and accuracy. Given camera pose estimates and features extracted from the images, the algorithm first enumerates feature matches based on 2D stereo projection constraints and then backprojects them to 3D. A grid search over potential camera poses is then proposed to match the 3D features and find the largest group of 3D feature matches between pairs of stereo frames; this provides pose accuracy to within the space covered by each grid region. Relative camera poses are further refined with an iteratively re-weighted least squares (IRLS) method in order to reject outliers among the 3D matches. The algorithm is shown to run correctly in real time, with the majority of the processing time spent on feature extraction and description, and to outperform standard open-source software for reconstruction from imagery. / Master of Science
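The IRLS refinement can be pictured as a weighted Procrustes (Kabsch) alignment iterated with robust weights. The sketch below, an illustration rather than the thesis code, down-weights large 3D match residuals with a Huber-style rule so that outlying matches stop dominating the relative pose estimate; the iteration count and threshold are placeholder values.

```python
import numpy as np

def weighted_rigid_fit(P, Q, w):
    """Rotation R and translation t minimizing sum_i w_i ||R p_i + t - q_i||^2
    (weighted Kabsch alignment via SVD)."""
    w = w / w.sum()
    p0, q0 = (w[:, None] * P).sum(0), (w[:, None] * Q).sum(0)
    H = (w[:, None] * (P - p0)).T @ (Q - q0)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, q0 - R @ p0

def irls_rigid_fit(P, Q, iters=10, delta=0.05):
    """Iteratively re-weighted least squares: refit the rigid transform while
    down-weighting residuals larger than delta (Huber-style), so outlying
    3D feature matches contribute less to the estimate."""
    w = np.ones(len(P))
    for _ in range(iters):
        R, t = weighted_rigid_fit(P, Q, w)
        r = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
    return R, t, w
```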
|
108 |
Online 3D Reconstruction and Ground Segmentation using Drone based Long Baseline Stereo Vision System. Kumar, Prashant. 16 November 2018.
This thesis presents online 3D reconstruction and ground segmentation using unmanned aerial vehicle (UAV) based stereo vision. For this purpose, a long baseline stereo vision system has been designed and built. The system is intended to work as part of an air and ground based multi-robot autonomous terrain surveying project at the Unmanned Systems Lab (USL), Virginia Tech, acting as a first-responder robotic system in disaster situations. The areas covered by this thesis are the design of the long baseline stereo vision system, a study of raw stereo vision output, techniques to filter outliers from that output, a 3D reconstruction method, and a study of running-time improvements obtained by controlling the density of the point clouds. The presented work makes use of filtering methods and implementations in the Point Cloud Library (PCL) and of feature matching on the graphics processing unit (GPU) using OpenCV with CUDA. Besides 3D reconstruction, the main challenge in the project was speed, and several steps and ideas for achieving it are presented. The presented 3D reconstruction algorithm matches features in 2D images, converts the keypoints to 3D using disparity images, estimates the rigid body transformation between matched 3D keypoints, and fits the point clouds together. To correct and control orientation and localization errors, it fits re-projected UAV positions onto GPS-recorded UAV positions using the iterative closest point (ICP) algorithm as a correction step. A new but computationally intensive process that uses superpixel clustering and plane fitting to refine disparity images to sub-pixel resolution is also presented. The results section reports the accuracy of the 3D reconstruction: the presented process generates application-acceptable semi-dense 3D reconstruction and ground segmentation at 8-12 frames per second (fps), and in the 3D reconstruction of a 25 m x 40 m area, with a UAV flight altitude of 23 m, the average obstacle localization error and average obstacle size/dimension error are found to be 17 cm and 3 cm, respectively. / MS / This thesis presents near real-time ("online") visual reconstruction in three dimensions (3D) using a ground-facing camera system on an unmanned aerial vehicle. Another result of this thesis is the separation of the ground from obstacles on it. To do this, a stereo vision system was designed with two cameras positioned comparatively far apart, at a 60 cm baseline, and an algorithm and software for visual 3D reconstruction were developed. The system is intended to work as part of an air and ground based multi-robot autonomous terrain surveying project at the Unmanned Systems Lab, Virginia Tech, acting as a first-responder robotic system in disaster situations. The presented work makes use of the Point Cloud Library and GPU library functions through OpenCV with CUDA, both popular computer vision libraries. Besides 3D reconstruction, the main challenge in the project was speed, and several steps and ideas for achieving it are presented. The presented 3D reconstruction algorithm is based on feature matching, a popular way to identify distinctive pixels in an image, and it includes a correction step that controls orientation and localization errors using the iterative closest point algorithm.
A new but computationally intensive process that improves the resolution of the disparity images, an output of the developed stereo vision system, from single-pixel to sub-pixel accuracy is also presented. The results section reports the accuracy of the 3D reconstruction: the presented process generates application-acceptable 3D reconstruction and ground segmentation at 8-12 frames per second, and in the 3D reconstruction of a 25 m x 40 m area, with a UAV flight altitude of 23 m, the average obstacle localization error and average obstacle size/dimension error are found to be 17 cm and 3 cm, respectively.
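For readers new to the stereo geometry behind this pipeline, the short sketch below shows the standard disparity-to-depth relation Z = f*B/d used to lift pixels of a rectified disparity image into 3D camera coordinates; the calibration numbers in the usage comment are placeholders, not the parameters of the thesis hardware.

```python
import numpy as np

def disparity_to_points(disparity, f_px, baseline_m, cx, cy):
    """Reproject a rectified disparity image into 3D camera coordinates using
    Z = f * B / d, X = (u - cx) * Z / f, Y = (v - cy) * Z / f.
    Pixels with non-positive disparity are discarded."""
    v, u = np.nonzero(disparity > 0)          # row (v) and column (u) indices
    d = disparity[v, u].astype(float)
    Z = f_px * baseline_m / d
    X = (u - cx) * Z / f_px
    Y = (v - cy) * Z / f_px
    return np.column_stack([X, Y, Z])

# placeholder calibration values, for illustration only:
# pts = disparity_to_points(disp, f_px=1400.0, baseline_m=0.60, cx=640.0, cy=360.0)
```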
|
109 |
Autonomous Sample Collection Using Image-Based 3D Reconstructions. Torok, Matthew M. 14 May 2012.
Sample collection is a common task for mobile robots, and a variety of manipulators are available to perform this operation. This thesis presents a novel scoop sample collection system that can both collect and contain a sample using the same hardware. To ease the operator burden during sampling, the scoop system is paired with new semi-autonomous and fully autonomous collection techniques derived from colored 3D point clouds produced via image-based 3D reconstructions. A custom robotic mobility platform, the Scoopbot, is introduced to perform completely automated imaging of the sampling area and to pick up the desired sample. The Scoopbot is wirelessly controlled by a base station computer running software that creates and analyzes the 3D point cloud models. Relevant sample parameters, such as dimensions and volume, are calculated from the reconstruction and reported to the operator. During tests of the system in full (48 images) and fast (6-8 images) modes, the Scoopbot was able to identify and retrieve a sample without any human intervention. Finally, a new building crack detection algorithm (CDA) is created to use the 3D point cloud outputs from image sets gathered by a mobile robot. The CDA was shown to successfully identify and color-code several cracks in a full-scale concrete building element. / Master of Science
|
110 |
Semi-supervised learning for joint visual odometry and depth estimation. Papadopoulos, Kyriakos. January 2024.
Autonomous driving has seen huge interest and improvement in the last few years. Two important functions of autonomous driving are depth estimation and visual odometry. Depth estimation refers to determining the distance from the camera to each point in the scene it captures, while visual odometry refers to estimating ego motion from the images recorded by the camera. The algorithm presented by Zhou et al. [1] is a completely unsupervised algorithm for depth and ego-motion estimation, and this thesis sets out to minimize its ambiguity and enhance its performance. The purpose of that algorithm is to estimate the depth map of an image from a camera attached to the agent, together with the ego motion of the agent; in this thesis, the agent is a vehicle. The algorithm cannot make predictions at true scale in either depth or ego motion; said differently, it suffers from scale ambiguity. Two extensions of the method were developed by changing the loss function of the algorithm and supervising the ego motion. Both extensions show a remarkable improvement in performance and reduced ambiguity while utilizing only ego-motion ground-truth data, which is significantly easier to obtain than depth ground-truth data.
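The thesis's actual loss formulation is not reproduced here. As a hedged sketch of the general idea of supervising ego motion to resolve scale ambiguity, the snippet below adds a translation-supervision term, computed against ground-truth ego motion, to an otherwise self-supervised objective; the term names and weights are illustrative only.

```python
import numpy as np

def pose_supervised_loss(photometric, smoothness, t_pred, t_gt,
                         w_smooth=0.1, w_pose=1.0):
    """Combine the usual self-supervised terms with an ego-motion supervision
    term. Because t_gt carries metric scale, penalizing ||t_pred - t_gt||
    pushes the predicted pose, and through the reprojection term also the
    depth, toward the true scale."""
    pose_term = np.mean(np.linalg.norm(t_pred - t_gt, axis=-1))
    return photometric + w_smooth * smoothness + w_pose * pose_term
```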
|