1 |
Area based stereo : modelling, estimation and integration using a Bayesian approach
Lim, Kok Guan January 1994 (has links)
No description available.
|
2 |
Smoothing Wavelet Reconstruction
Garg, Deepak 03 October 2013 (has links)
This thesis presents a new algorithm for creating high-quality surfaces from large data sets of oriented points, sampled using a laser range scanner. The method works in two phases. In the first phase, using a wavelet surface reconstruction method, we calculate a rough estimate of the surface in the form of Haar wavelet coefficients, stored in an octree. In the second phase, we modify these coefficients to obtain a higher-quality surface.
We cast this method as a gradient minimization problem in the wavelet domain. We show that the solution to the gradient minimization problem, in the wavelet domain, is a sparse linear system with dimensionality roughly proportional to the surface area of the model in question. We introduce a fast in-place method, which uses various properties of Haar wavelets, to solve the linear system, and demonstrate the results of the algorithm.
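The Haar decomposition behind the first phase can be illustrated with a minimal 1D sketch (illustrative only, not the thesis's octree implementation): pairwise averaging produces the coarse coefficients and pairwise differencing produces the detail coefficients that the second phase would later adjust.

```python
import numpy as np

def haar_1d(signal):
    """Full 1D orthonormal Haar decomposition, illustrating the
    averaging/differencing that underlies wavelet surface coefficients.
    `signal` length must be a power of two."""
    data = np.asarray(signal, dtype=float).copy()
    n = len(data)
    while n > 1:
        half = n // 2
        avg = (data[0:n:2] + data[1:n:2]) / np.sqrt(2.0)
        diff = (data[0:n:2] - data[1:n:2]) / np.sqrt(2.0)
        data[:half] = avg    # coarse (approximation) coefficients
        data[half:n] = diff  # detail coefficients
        n = half
    return data

# A constant signal has zero detail everywhere: only the coarsest
# coefficient is non-zero.
coeffs = haar_1d([4.0, 4.0, 4.0, 4.0])  # -> [8.0, 0.0, 0.0, 0.0]
```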
|
3 |
A Look Into Human Brain Activity with EEG Data-Surface Reconstruction
Pothayath, Naveen 23 April 2018 (has links)
EEG has been used to explore the electrical activity of the brain for many decades. During that time, different components of the EEG signal have been isolated, characterized, and associated with a variety of brain activities. However, no widely accepted model characterizing the spatio-temporal structure of the full-brain EEG signal exists to date.
Modeling the spatio-temporal nature of the EEG signal is a daunting task. The spatial component of EEG is defined by the locations of recording electrodes (ranging between 2 and 256 in number) placed on the scalp, while its temporal component is defined by the electrical potentials the electrodes detect. The EEG signal is generated by the composite electrical activity of large neuron assemblies in the brain. These neuronal units often perform independent tasks, giving the EEG signal a highly dynamic and non-linear character. These characteristics make the raw EEG signal challenging to work with. Thus, most research focuses on extracting and isolating targeted spatial and temporal components of interest. While component isolation strategies like independent component analysis are useful, their effectiveness is limited by noise contamination and poor reproducibility. These drawbacks to feature extraction could be improved significantly if they were informed by a global spatio-temporal model of EEG data.
The aim of this thesis is to introduce a novel data-surface reconstruction (DSR) technique for EEG which can model the integrated spatio-temporal structure of EEG data. To produce physically intuitive results, we utilize a hyper-coordinate transformation which integrates both spatial and temporal information of the EEG signal into a unified coordinate system. We then apply a non-uniform rational B-spline (NURBS) fitting technique which minimizes the point distance from the computed surface to each element of the transformed data.
To validate the effectiveness of the proposed method, we conduct an evaluation using a 5-state classification problem, with 1 baseline and 4 meditation states, comparing the classification accuracy of the raw EEG data against the surface-reconstructed data in the broadband range and in the alpha, beta, delta, gamma and higher-gamma frequency bands. Results demonstrate that the fitted data consistently outperform the raw data in the broadband spectrum and in all frequency bands.
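The least-squares machinery behind spline fitting can be sketched in 1D with a non-rational uniform cubic B-spline. This is a deliberate simplification: true NURBS carry rational weights and knot vectors, and the function and parameter choices here are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def bspline_design_matrix(ts, n_ctrl):
    """Design matrix of a uniform cubic B-spline with n_ctrl control
    points, evaluated at parameter values ts in [0, 1)."""
    n_seg = n_ctrl - 3  # number of cubic segments
    A = np.zeros((len(ts), n_ctrl))
    for row, t in enumerate(ts):
        seg = min(int(t * n_seg), n_seg - 1)
        u = t * n_seg - seg  # local parameter within the segment
        blend = np.array([(1 - u)**3,
                          3*u**3 - 6*u**2 + 4,
                          -3*u**3 + 3*u**2 + 3*u + 1,
                          u**3]) / 6.0
        A[row, seg:seg + 4] = blend
    return A

# Fit control points so the spline passes near samples of sin(2*pi*t):
# the normal-equations solve minimizes the summed squared point distance,
# the same objective a surface fit minimizes in higher dimensions.
ts = np.linspace(0.0, 0.999, 200)
ys = np.sin(2 * np.pi * ts)
A = bspline_design_matrix(ts, n_ctrl=10)
ctrl, *_ = np.linalg.lstsq(A, ys, rcond=None)
residual = np.max(np.abs(A @ ctrl - ys))  # small: the fit tracks the data
```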
|
4 |
Geometric representation of neuroanatomical data observed in mouse brain at cellular and gross levels
Koh, Wonryull 15 May 2009 (has links)
This dissertation studies two problems related to geometric representation of neuroanatomical data: (i) spatial representation and organization of individual neurons, and (ii) reconstruction of three-dimensional neuroanatomical regions from sparse two-dimensional drawings. This work has been motivated by the recent development of a new technology, Knife-Edge Scanning Microscopy (KESM), that images a whole mouse brain at the cellular level in less than a month.
A method is introduced to represent neuronal data observed in the mammalian brain at the cellular level using geometric primitives and spatial indexing. A data representation scheme is defined that captures the geometry of individual neurons using traditional geometric primitives: points and cross-sectional areas along a trajectory. This representation captures inferred synapses as directed links between primitives and spatially indexes observed neurons based on the locations of their cell bodies. This method provides a set of rules for acquisition, representation, and indexing of KESM-generated data.
Neuroanatomical data observed at the gross level provide the underlying regional framework for neuronal circuits. Accumulated expert knowledge on neuroanatomical organization is usually given as a series of sparse two-dimensional contours. A data structure and an algorithm are described to reconstruct separating surfaces among multiple regions from these sparse cross-sectional contours. A topology graph is defined for each region that describes the topological skeleton of the region's boundary surface and that shows between which contours the surface patches should be generated. A graph-directed triangulation algorithm is provided to reconstruct surface patches between contours. This graph-directed triangulation algorithm, combined with a piecewise parametric curve-fitting technique, ensures that abutting or shared surface patches are precisely coincident. This method overcomes limitations in (i) traditional surfaces-from-contours algorithms, which assume binary rather than multiple regionalization of space, and (ii) the few existing separating-surface algorithms, which assume conversion of the input into a regular volumetric grid, something that is not possible with sparse inter-planar resolution.
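The per-patch step of triangulating between contours can be sketched as a greedy stitch between two closed contours, advancing on whichever contour yields the shorter new diagonal. This is a standard surfaces-from-contours heuristic offered for illustration, not the dissertation's graph-directed algorithm.

```python
import numpy as np

def stitch_contours(lower, upper):
    """Greedily triangulate the band between two closed 3D contours.
    Each step emits one triangle and advances an index on one contour;
    a closed band over n and m points yields exactly n + m triangles."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n, m = len(lower), len(upper)
    i = j = 0
    tris = []
    while i < n or j < m:
        # Candidate diagonals created by advancing on either contour.
        adv_lower = np.linalg.norm(upper[j % m] - lower[(i + 1) % n])
        adv_upper = np.linalg.norm(lower[i % n] - upper[(j + 1) % m])
        if j >= m or (i < n and adv_lower <= adv_upper):
            tris.append((('L', i % n), ('L', (i + 1) % n), ('U', j % m)))
            i += 1
        else:
            tris.append((('L', i % n), ('U', (j + 1) % m), ('U', j % m)))
            j += 1
    return tris

# Two parallel unit squares: 4 + 4 = 8 triangles form the closed band.
lower = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
upper = lower + [0, 0, 1]
tris = stitch_contours(lower, upper)
```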
|
5 |
Efficient rendering of real-world environments in a virtual reality application, using segmented multi-resolution meshes
Chiromo, Tanaka Alois January 2020 (has links)
Virtual reality (VR) applications are becoming increasingly popular across many domains. They can simulate large real-world landscapes in a computer program for purposes such as entertainment, education or business.
Typically, 3-dimensional (3D) and VR applications use environments that are made up of meshes of relatively small size. As the size of the meshes increases, the applications start experiencing lag and run-time memory errors. It is therefore inefficient to load large meshes into a VR application directly. Manually modelling an accurate real-world environment is also a complicated task, due to the large size and complex nature of the landscapes. In this research, a method is proposed to automatically convert 3D point clouds of any size and complexity into a format that can be efficiently rendered in a VR application. Apart from reducing the performance cost, the solution also reduces the risk of virtual-reality-induced motion sickness.
The pipeline of the system incorporates three main steps: a surface reconstruction step, a texturing step and a segmentation step. The surface reconstruction step is necessary to convert the 3D point clouds into 3D triangulated meshes. Texturing is required to add a realistic feel to the appearance of the meshes. Segmentation is used to split large meshes into smaller components that can be rendered individually without overflowing the memory.
A novel mesh segmentation algorithm, the Triangle Pool Algorithm (TPA), is designed to segment the mesh into smaller parts. To avoid relying on the complex geometric and surface features of natural scenes, the TPA algorithm uses the colour attribute of the natural scenes for segmentation. The TPA algorithm produces results comparable to those of state-of-the-art 3D segmentation algorithms when segmenting regular 3D objects, and outperforms the state-of-the-art algorithms when segmenting meshes of real-world natural landscapes.
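Colour-driven grouping of triangles can be sketched loosely as a greedy pooling by colour similarity. This is an assumption about the general approach, not the actual TPA algorithm, and all names here are hypothetical.

```python
import numpy as np

def pool_by_colour(tri_colours, threshold):
    """Greedy colour pooling: each triangle joins the existing pool whose
    running mean colour is within `threshold`, otherwise it starts a new
    pool. Returns one pool label per input triangle."""
    pools = []   # per pool: [colour sum, member count] for a running mean
    labels = []
    for c in np.asarray(tri_colours, dtype=float):
        best, best_d = -1, threshold
        for k, (s, n) in enumerate(pools):
            d = np.linalg.norm(c - s / n)  # distance to pool mean colour
            if d <= best_d:
                best, best_d = k, d
        if best < 0:
            pools.append([c.copy(), 1])
            labels.append(len(pools) - 1)
        else:
            pools[best][0] += c
            pools[best][1] += 1
            labels.append(best)
    return labels

# Two near-black triangles pool together; the white one starts a new pool.
labels = pool_by_colour([[0, 0, 0], [0.05, 0, 0], [1, 1, 1]], 0.2)
```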
The VR application is designed using the Unreal and Unity 3D engines. Its principle of operation is to render regions close to the user with multiple highly detailed mesh segments, while regions further away from the user are represented by a lower-detail mesh. Segments that are not rendered at a particular time are stored in external storage. This approach frees up memory and reduces the computational power required to render highly detailed meshes. / Dissertation (MEng)--University of Pretoria, 2020. / Electrical, Electronic and Computer Engineering / MEng / Unrestricted
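The distance-based segment selection described above can be sketched as follows (function and parameter names are hypothetical, not taken from the dissertation):

```python
import numpy as np

def select_lod(segment_centers, user_pos, near_radius):
    """Choose a detail level per mesh segment: segments whose centre lies
    within near_radius of the user get the highly detailed mesh, the rest
    fall back to the low-detail mesh (or remain in external storage)."""
    d = np.linalg.norm(np.asarray(segment_centers, float)
                       - np.asarray(user_pos, float), axis=1)
    return ["high" if dist <= near_radius else "low" for dist in d]

# Nearby segment rendered in detail, distant one at low detail.
lods = select_lod([[0, 0, 0], [100, 0, 0]], [1, 0, 0], 10.0)  # ['high', 'low']
```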
|
6 |
Reconstructing and analyzing surfaces in 3-space
Sun, Jian 17 July 2007 (has links)
No description available.
|
7 |
SLAM-based Dense Surface Reconstruction in Monocular Minimally Invasive Surgery and its Application to Augmented Reality
Chen, L., Tang, W., John, N.W., Wan, Tao Ruan, Zhang, J.J. 08 February 2018 (links)
While Minimally Invasive Surgery (MIS) offers considerable benefits to patients, it also imposes big challenges on a surgeon's performance due to well-known issues and restrictions associated with the field of view (FOV), hand-eye misalignment and disorientation, as well as the lack of stereoscopic depth perception in monocular endoscopy. Augmented Reality (AR) technology can help to overcome these limitations by augmenting the real scene with annotations, labels, tumour measurements or even a 3D reconstruction of anatomical structures at the target surgical locations. However, previous attempts to use AR technology in monocular MIS surgical scenes have focused mainly on information overlay without addressing correct spatial calibration, which can lead to incorrect localization of annotations and labels, and inaccurate depth cues and tumour measurements. In this paper, we present a novel intra-operative dense surface reconstruction framework that is capable of providing geometry information from only monocular MIS videos for geometry-aware AR applications such as site measurements and depth cues. We address a number of compelling issues in augmenting a scene for a monocular MIS environment, such as drifting and inaccurate planar mapping.
Methods: A state-of-the-art Simultaneous Localization And Mapping (SLAM) algorithm used in robotics has been extended to deal with monocular MIS surgical scenes for reliable endoscopic camera tracking and salient point mapping. A robust global 3D surface reconstruction framework has been developed to build a dense surface using only the unorganized sparse point clouds extracted from the SLAM. The 3D surface reconstruction framework employs the Moving Least Squares (MLS) smoothing algorithm and Poisson surface reconstruction for real-time processing of the point-cloud data set. Finally, the 3D geometric information of the surgical scene allows better understanding and accurate placement of AR augmentations based on a robust 3D calibration.
Results: We demonstrate the clinical relevance of our proposed system through two examples: (a) measurement of the surface; (b) depth cues in monocular endoscopy. The performance and accuracy evaluation of the proposed framework consists of two steps. First, we created a computer-generated endoscopy simulation video to quantify the accuracy of the camera tracking by comparing the results of the video camera tracking with the recorded ground-truth camera trajectories. The accuracy of the surface reconstruction is assessed by evaluating the Root Mean Square Distance (RMSD) between the surface vertices of the reconstructed mesh and those of the ground-truth 3D models. An error of 1.24 mm for the camera trajectories has been obtained, and the RMSD for surface reconstruction is 2.54 mm, which compares favourably with previous approaches. Second, in vivo laparoscopic videos are used to examine the quality of accurate AR-based annotation and measurement, and the creation of depth cues. These results show the promise of our geometry-aware AR technology for use in MIS surgical scenes.
Conclusions: The results show that the new framework is robust and accurate in dealing with challenging situations such as rapid endoscope camera movements in monocular MIS scenes. Both camera tracking and surface reconstruction based on a sparse point cloud are effective and operate in real time. This demonstrates the potential of our algorithm for accurate AR localization and depth augmentation with geometric cues and correct surface measurements in MIS with monocular endoscopes.
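The RMSD metric used in the surface evaluation has a direct per-vertex form. This is a sketch of the standard metric under the assumption of known vertex correspondence between the reconstructed and ground-truth meshes:

```python
import numpy as np

def rmsd(reconstructed, ground_truth):
    """Root Mean Square Distance between corresponding surface vertices:
    the square root of the mean squared Euclidean vertex-to-vertex
    distance."""
    diff = np.asarray(reconstructed, float) - np.asarray(ground_truth, float)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))

# A mesh uniformly offset by 2 units along z has an RMSD of exactly 2.0.
recon = np.array([[0, 0, 2], [1, 0, 2]], float)
gt = np.array([[0, 0, 0], [1, 0, 0]], float)
err = rmsd(recon, gt)  # 2.0
```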
|
8 |
Multimodal Bioinspired Artificial Skin Module for Tactile Sensing
Alves de Oliveira, Thiago Eustaquio 30 January 2019 (links)
Tactile sensors are the last frontier to robots that can handle everyday objects and interact with humans through contact. Robots are expected to recognize the properties of objects in order to handle them safely and efficiently in a variety of applications, such as health and elder care, manufacturing, or high-risk environments. To be effective, such sensors have to sense the geometry of touched surfaces and objects, as well as any other information relevant to their tasks, such as forces, vibrations, and temperature, that allows them to interact safely and securely within an environment. Given the capability of humans to easily capture and interpret tactile data, one promising direction for producing enhanced robotic tactile sensors is to explore and imitate human tactile sensing capabilities. In this context, this thesis presents the design and hardware implementation issues related to the construction of a novel multimodal bio-inspired skin module for dynamic and static tactile surface characterization. Drawing inspiration from the type, functionality, and organization of cutaneous tactile elements in the human skin, the proposed solution determines the placement of two shallow sensors (a tactile array and a nine-DOF magnetic, angular rate, and gravity system) and a deep pressure sensor within a flexible compliant structure, similar to the receptive field of the Pacinian mechanoreceptor. The benefit of using a compliant structure is threefold. First, the module has the capability of performing touch tasks on unknown surfaces, tackling the tactile inversion problem. The compliant structure guides deforming forces from its surface to the deep pressure sensor, while keeping track of the deformation of the structure using advantageously placed shallow sensors.
Second, the module’s compliant structure and its embedded sensor placement provide useful data to overcome the problem of estimating non-normal forces, a significant challenge for the current generation of tactile sensing technologies. This capability allows accommodating sensing modalities essential for acquiring tactile images and classifying surfaces by vibrations and accelerations. Third, the compliant structure of the module also contributes to the relaxation of orientation constraints of end-effectors or other robotic parts carrying the module to contact surfaces of unknown objects. Issues related to the module calibration, its sensing capabilities and possible real-world applications are also presented.
|
9 |
Temporal Surface Reconstruction
Heel, Joachim 01 May 1991 (links)
This thesis investigates the problem of estimating the three-dimensional structure of a scene from a sequence of images. Structure information is recovered from images continuously using shading, motion or other visual mechanisms. A Kalman filter represents structure in a dense depth map. With each new image, the filter first updates the current depth map by a minimum variance estimate that best fits the new image data and the previous estimate. Then the structure estimate is predicted for the next time step by a transformation that accounts for relative camera motion. Experimental evaluation shows the significant improvement in quality and computation time that can be achieved using this technique.
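The per-pixel minimum-variance update can be sketched in scalar form. This is an illustrative simplification of the dense depth-map Kalman filter described above, not the thesis's implementation:

```python
def fuse_depth(depth_prev, var_prev, depth_meas, var_meas):
    """Minimum-variance (Kalman) update for one depth value: blend the
    predicted depth and the new measurement, weighting each by the
    inverse of its variance. The fused variance is always smaller than
    either input variance."""
    gain = var_prev / (var_prev + var_meas)          # Kalman gain
    depth = depth_prev + gain * (depth_meas - depth_prev)
    var = (1.0 - gain) * var_prev                    # reduced uncertainty
    return depth, var

# Equal confidence in prediction (2.0) and measurement (4.0):
# the fused estimate is their average and the variance halves.
d, v = fuse_depth(2.0, 1.0, 4.0, 1.0)  # -> (3.0, 0.5)
```

Applied elementwise over a dense depth map (e.g. with NumPy arrays in place of scalars), this is the update step; the prediction step then warps the map according to the relative camera motion.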
|
10 |
Inference-based Geometric Modeling for the Generation of Complex Cluttered Virtual Environments
Biggers, Keith Edward 2011 May 1900 (links)
As the use of simulation increases across many different application domains, the need for high-fidelity three-dimensional virtual representations of real-world environments has never been greater. This need has driven the research and development of both faster and easier methodologies for creating such representations. In this research, we present two different inference-based geometric modeling techniques that support the automatic construction of complex cluttered environments.
The first method we present is a surface reconstruction-based approach that is capable of reconstructing solid models from a point-cloud capture of a cluttered environment. Our algorithm is capable of identifying objects of interest amongst a cluttered scene, and reconstructing complete representations of these objects even in the presence of occluded surfaces. This approach incorporates a predictive modeling framework that uses a set of user-provided models for prior knowledge, and applies this knowledge to the iterative identification and construction process. Our approach uses a local-to-global construction process guided by rules for fitting high-quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several synthetic and real-world datasets containing heavy clutter and occlusion.
The second method we present is a generative modeling-based approach that can construct a wide variety of diverse models based on user-provided templates. This technique leverages an inference-based construction algorithm for developing solid models from these template objects. The algorithm samples and extracts surface patches from the input models, and develops a Petri net structure that it uses to fit these patches together in a consistent fashion. Our approach uses this generated structure, along with a defined parameterization (either user-defined through a simple sketch-based interface or algorithmically defined through various methods), to automatically construct objects of varying sizes and configurations. These variations can include arbitrary articulation, and repetition and interchanging of parts sampled from the input models.
Finally, we affirm our motivation by showing an application of these two approaches. We demonstrate how the constructed environments can easily be used within a physically based simulation capable of supporting many different application domains.
|