231. Unfolding and Reconstructing Polyhedra. Lucier, Brendan. January 2006.
This thesis covers work on two topics: unfolding polyhedra into the plane and reconstructing polyhedra from partial information. For each topic, we describe previous work in the area and present an array of new research and results.
Our work on unfolding is motivated by the problem of characterizing precisely when overlaps occur when a polyhedron is cut along edges and unfolded. In contrast to previous work, we begin by classifying overlaps according to a notion of locality. This classification enables us to focus on particular types of overlaps, and to use the results to construct examples of polyhedra with interesting unfolding properties.
The research on unfolding is split into convex and non-convex cases. In the non-convex case, we construct a polyhedron for which every edge unfolding has an overlap, with fewer faces than all previously known examples. We also construct a non-convex polyhedron for which every edge unfolding has a particularly trivial type of overlap. In the convex case, we construct a series of example polyhedra, each of which has an overlap in every unfolding of a particular restricted type. These examples disprove some existing conjectures regarding algorithms to unfold convex polyhedra without overlaps.
The work on reconstruction centers on analyzing the computational complexity of a number of reconstruction questions. We consider two classes of reconstruction problems. The first is as follows: given a collection of edges in space, determine whether they can be rearranged by translation only to form a polygon or polyhedron. We consider variants of this problem with restrictions such as convexity, orthogonality, and non-degeneracy. All of these problems are NP-complete, though some are proved to be only weakly NP-complete. We then consider a second, more classical problem: given a collection of edges in space, determine whether they can be rearranged by translation and/or rotation to form a polygon or polyhedron. This problem is NP-complete for orthogonal polygons, but polynomial-time algorithms exist for non-orthogonal polygons. For polyhedra, we show that the problem is NP-hard if degeneracies are allowed; the complexity remains open for non-degenerate polyhedra.
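To make the translation-only setting concrete: translations preserve every edge vector, so a necessary (but far from sufficient) condition for forming a closed polygon is that the edge vectors sum to zero. The sketch below is a hypothetical illustration of that filter, not the thesis' algorithm; the full decision problem, which must also find an ordering yielding a simple polygon, is what is proved NP-complete.

```python
import numpy as np

def can_close_by_translation(edges):
    """Necessary condition for edge vectors to form a closed polygon
    under translation only: the vectors must sum to zero, since
    translation changes neither direction nor length. This is only a
    quick filter; the full (NP-complete) problem must also verify that
    some ordering of the edges yields a simple polygon."""
    return np.allclose(np.sum(np.asarray(edges, dtype=float), axis=0), 0.0)

# The four edge vectors of a unit square close up ...
print(can_close_by_translation([(1, 0), (0, 1), (-1, 0), (0, -1)]))  # True
# ... but an unbalanced set cannot form a closed polygon.
print(can_close_by_translation([(1, 0), (0, 1), (-1, 0)]))           # False
```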

232. General Geometry Computed Tomography Reconstruction. Ramotar, Alexei. January 2006.
The discovery of carbon nanotubes and their ability to produce X-rays could usher in a new era in Computed Tomography (CT) technology. These devices will be lightweight, flexible, and portable. The proposed device, currently under development, is envisioned as a flexible band of tiny X-ray emitters and detectors that is wrapped around an appendage to obtain a CT image. However, current CT reconstruction algorithms can only be used if the geometry of the CT device is regular (usually circular). We present an efficient and accurate reconstruction technique that is unconstrained by the geometry of the CT device: the geometry may be regular or highly irregular. To evaluate the feasibility of reconstructing a CT image from such a device, we built a simulated test bed that generates CT ray sums of an image. Our reconstruction method resamples these ray sums onto the grid expected from a parallel-beam CT scanner, after which filtered back projection can be used to perform the reconstruction. We also included data inaccuracies, as expected in "real world" situations. Observations of the reconstructions, as well as quantitative results, suggest that this simple method is efficient and accurate.
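As a concrete (hypothetical) illustration of the regrid-then-reconstruct idea, the sketch below interpolates ray sums measured at irregular (angle, offset) positions onto a regular parallel-beam sinogram and applies filtered back projection; the function and grid choices are assumptions, not the thesis' implementation.

```python
import numpy as np
from scipy.interpolate import griddata
from skimage.transform import iradon

def reconstruct_from_irregular(thetas, offsets, ray_sums, n_bins, n_angles):
    """thetas (degrees), offsets, ray_sums: 1-D arrays of irregularly
    sampled measurements. Resamples onto a regular parallel-beam
    sinogram, then reconstructs with filtered back projection."""
    theta_grid = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    offset_grid = np.linspace(offsets.min(), offsets.max(), n_bins)
    T, S = np.meshgrid(theta_grid, offset_grid)
    # Scattered-data interpolation onto the regular (offset, angle) grid.
    sinogram = griddata((thetas, offsets), ray_sums, (T, S), method='linear')
    sinogram = np.nan_to_num(sinogram)  # fill gaps left by the interpolation
    return iradon(sinogram, theta=theta_grid, filter_name='ramp')
```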

233. Reconstruction of Orthogonal Polyhedra. Genc, Burkay. January 2008.
In this thesis I study reconstruction of orthogonal polyhedral surfaces
and orthogonal polyhedra from partial information about their
boundaries. There are three main questions for which I provide novel
results. The first question is "Given the dual graph, facial angles and
edge lengths of an orthogonal polyhedral surface or polyhedron, is it
possible to reconstruct the dihedral angles?" The second question is
"Given the dual graph, dihedral angles and edge lengths of an
orthogonal polyhedral surface or polyhedron, is it possible to
reconstruct the facial angles?" The third question is "Given the
vertex coordinates of an orthogonal polyhedral surface or polyhedron, is
it possible to reconstruct the edges and faces, possibly after
rotating?"
For the first two questions, I show that the answer is "yes" for
genus-0 orthogonal polyhedra and polyhedral surfaces under some
restrictions, and provide linear-time algorithms. For the third
question, I provide results and algorithms for orthogonally convex
polyhedra. Many related problems are studied as well.
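As a hypothetical sketch of the input to the first question (not the thesis' notation), an orthogonal polyhedron can be handed to the algorithm as a dual graph whose nodes are faces and whose arcs are shared edges, annotated with the known facial angles and edge lengths; the dihedral angles, each of which must come out to 90 or 270 degrees for an orthogonal polyhedron, are the unknowns.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Face:
    face_id: int
    facial_angles: List[float]        # known interior angles, 90 or 270

@dataclass
class SharedEdge:
    face_a: int                       # the two faces meeting at this edge
    face_b: int
    length: float                     # known edge length
    dihedral: Optional[float] = None  # unknown; 90 or 270 once reconstructed

@dataclass
class DualGraph:
    faces: Dict[int, Face] = field(default_factory=dict)
    edges: List[SharedEdge] = field(default_factory=list)
```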

234. The Study of Synthetic Aperture Sonar System. Sung, Chen-Hung. 31 August 2010.
This research studies the fundamental theory of Synthetic Aperture Sonar (SAS) through numerical simulation and experimental analysis. The basic principle of SAS is to enhance spatial resolution by moving the transducer element to synthesize a larger aperture. The factors affecting resolution include the physical size of the transducers, the frequency and its bandwidth, the pulse length, and the moving speed. The effects of these factors on resolution were examined through numerical simulation. The results show that the smaller the physical size of the transducer, the better the resolution; moreover, increasing the bandwidth improves the resolution. SAS is sensitive to the platform speed because the data acquisition rate is limited, so the speed cannot be too high (e.g., less than 1.5 m/s). The experiment was carried out in a water tank of size 4 m × 3.5 m × 2 m. AST MK VI 192 kHz transducers were employed to transmit and receive signals, and copper spheres of various sizes (3 cm, 6 cm, and 8 cm in diameter) were used as targets. The data were acquired and analyzed, and the results show that the expected resolution can be achieved by SAS analysis, confirming the fundamental principle and opening opportunities for future study.
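For orientation, the standard synthetic-aperture relations behind these observations can be computed directly; the parameter values below are illustrative assumptions, not the settings of this experiment.

```python
# Standard synthetic-aperture relations (textbook formulas); all
# parameter values here are assumed for illustration only.
c = 1500.0        # sound speed in water, m/s
B = 20e3          # transmit bandwidth, Hz
D = 0.05          # physical transducer length, m
R_max = 50.0      # maximum imaging range, m

range_res = c / (2 * B)      # finer with larger bandwidth
along_track_res = D / 2      # finer with smaller transducer, range-independent
prf_max = c / (2 * R_max)    # ping rate limited by two-way travel time
v_max = (D / 2) * prf_max    # advance at most D/2 per ping to avoid aliasing

print(f"range resolution      : {range_res:.4f} m")
print(f"along-track resolution: {along_track_res:.4f} m")
print(f"max platform speed    : {v_max:.3f} m/s")
```

The last line is the source of the speed sensitivity noted above: a faster platform undersamples the synthetic aperture once it advances more than half an element length between pings.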

235. Geometric representation of neuroanatomical data observed in mouse brain at cellular and gross levels. Koh, Wonryull. 15 May 2009.
This dissertation studies two problems related to geometric representation of
neuroanatomical data: (i) spatial representation and organization of individual neurons,
and (ii) reconstruction of three-dimensional neuroanatomical regions from sparse two-dimensional
drawings. This work was motivated by the recent development of a new
technology, Knife-Edge Scanning Microscopy (KESM), which can image a whole
mouse brain at the cellular level in less than a month.
A method is introduced to represent neuronal data observed in the mammalian brain at
the cellular level using geometric primitives and spatial indexing. A data representation
scheme is defined that captures the geometry of individual neurons using
traditional geometric primitives: points and cross-sectional areas along a trajectory. This
representation captures inferred synapses as directed links between primitives and
spatially indexes observed neurons based on the locations of their cell bodies. This
method provides a set of rules for acquisition, representation, and indexing
of KESM-generated data.
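A hypothetical sketch of this scheme (not the dissertation's code) might store each neuron as a chain of point-plus-radius primitives, record inferred synapses as directed links, and index somata with a k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

class Neuron:
    def __init__(self, neuron_id, soma_xyz):
        self.neuron_id = neuron_id
        self.soma = np.asarray(soma_xyz, dtype=float)
        self.trace = []      # (xyz, cross-sectional radius) along trajectory
        self.synapses = []   # directed links: (trace index, target neuron id)

    def add_sample(self, xyz, radius):
        self.trace.append((np.asarray(xyz, dtype=float), radius))

def build_soma_index(neurons):
    """k-d tree over cell-body locations for fast spatial queries."""
    return cKDTree(np.array([n.soma for n in neurons]))

# e.g., all neurons whose somata lie within 50 um of a query point p:
#   hits = build_soma_index(neurons).query_ball_point(p, r=50.0)
```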
Neuroanatomical data observed at the gross level provides the underlying regional
framework for neuronal circuits. Accumulated expert knowledge on neuroanatomical organization is usually given as a series of sparse two-dimensional contours. A data
structure and an algorithm are described to reconstruct separating surfaces among
multiple regions from these sparse cross-sectional contours. A topology graph is defined
for each region that describes the topological skeleton of the region’s boundary surface
and that shows between which contours the surface patches should be generated. A
graph-directed triangulation algorithm is provided to reconstruct surface patches
between contours. This graph-directed triangulation algorithm, combined with
a piecewise parametric curve fitting technique, ensures that abutting or shared
surface patches are precisely coincident. This method overcomes limitations in
(i) traditional surfaces-from-contours algorithms, which assume a binary rather
than a multi-region partition of space, and (ii) the few existing
separating-surface algorithms, which assume the input can be converted to a
regular volumetric grid, something that is not possible with sparse
inter-planar resolution.
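The elementary step of any such triangulation, stitching a band of triangles between two consecutive contours, can be sketched as follows; this greedy shortest-bridge version is an illustrative assumption, not the graph-directed algorithm itself.

```python
import numpy as np

def stitch_contours(lower, upper):
    """lower, upper: (n, 3) and (m, 3) arrays of consistently oriented
    contour points. Greedily advances along whichever contour creates
    the shorter bridging edge; returns triangles as ('L'/'U', index)
    vertex references into the two contours."""
    i, j, tris = 0, 0, []
    while i < len(lower) - 1 or j < len(upper) - 1:
        advance_lower = i < len(lower) - 1 and (
            j == len(upper) - 1
            or np.linalg.norm(lower[i + 1] - upper[j])
               <= np.linalg.norm(upper[j + 1] - lower[i]))
        if advance_lower:
            tris.append((('L', i), ('L', i + 1), ('U', j)))
            i += 1
        else:
            tris.append((('L', i), ('U', j), ('U', j + 1)))
            j += 1
    return tris
```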

236. Towards the Development of Training Tools for Face Recognition. Rodriguez, Jobany. May 2011.
Distinctiveness plays an important role in the recognition of faces, i.e., a distinctive face is usually easier to remember than a typical face in a recognition task. This distinctiveness effect explains why caricatures are recognized faster and more accurately than unexaggerated (i.e., veridical) faces. Furthermore, using caricatures during training can facilitate recognition of a person’s face at a later time. The objective of this thesis is to determine the extent to which photorealistic computer-generated caricatures may be used in training tools to improve recognition of faces by humans. To pursue this objective, we developed a caricaturization procedure for three-dimensional (3D) face models, and characterized face recognition performance (by humans) through a series of perceptual studies.
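The caricaturization procedure for 3D models follows the classic idea of exaggerating a face's deviation from an average face; the sketch below states that idea in its simplest form, and the thesis' actual procedure may differ in parameterization and detail.

```python
import numpy as np

def caricature(face_vertices, mean_face_vertices, alpha=1.5):
    """face_vertices, mean_face_vertices: (n, 3) arrays of corresponding
    3D vertex positions. alpha > 1 exaggerates distinctive traits,
    alpha = 1 returns the veridical face, and alpha < 1 moves the face
    toward the average."""
    face = np.asarray(face_vertices, dtype=float)
    mean = np.asarray(mean_face_vertices, dtype=float)
    return mean + alpha * (face - mean)
```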
The first study focused on 3D shape information without texture. Namely, we tested whether exposure to caricatures during an initial familiarization phase would aid in the recognition of their veridical counterparts at a later time. We examined whether this effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces during the recognition phase. Results indicate that, even under these difficult training conditions, people are more accurate at recognizing unaltered faces if they are first familiarized with caricatures of the faces, rather than with the unaltered faces. These preliminary findings support the use of caricatures in new training methods to improve face recognition.
In the second study, we incorporated texture into our 3D models, which allowed us to generate photorealistic renderings. Here we sought to determine the extent to which familiarization with caricaturized faces could also reduce other-race effects (i.e., the phenomenon whereby faces from other races appear less distinct than faces from our own race). Using an old/new face recognition paradigm, Caucasian participants were first familiarized with a set of faces from multiple races, and then asked to recognize those faces among a set of confounders. Participants who were familiarized with and then asked to recognize veridical versions of the faces showed a significant other-race effect on Indian faces. In contrast, participants who were familiarized with caricaturized versions of the same faces, and then asked to recognize their veridical versions, showed no other-race effect on Indian faces. This result suggests that caricaturization may be used to help individuals focus their attention on features that are useful for recognizing other-race faces.
The third and final experiment investigated the practical application of our earlier results. Since 3D facial scans are not generally available, here we also sought to determine whether 3D reconstructions from 2D frontal images could be used for the same purpose. Using the same old/new face recognition paradigm, participants who were familiarized with reconstructed faces and then asked to recognize the ground truth versions of the faces showed a significant reduction in performance compared to the previous study. In addition, participants who were familiarized with caricatures of reconstructed versions, and then asked to recognize their corresponding ground truth versions, showed a larger reduction in performance. Our results suggest that, despite the high level of photographic realism achieved by current 3D facial reconstruction methods, additional research is needed in order to reduce reconstruction errors and capture the distinctive facial traits of an individual. These results are critical for the development of training tools based on computer-generated photorealistic caricatures from “mug shot” images.

237. Dose Reconstruction Using Computational Modeling of Handling a Particular Arsenic-73/Arsenic-74 Source. Stallard, Alisha M. May 2010.
A special work evolution was performed at Los Alamos National Laboratory (LANL) with a particular 73As/74As source, but the worker's extremity dosimeter did not appear to provide appropriate dosimetric information for the tasks performed. This prompted a reconstruction of the dose to the worker's hands. The computer code MCNP was chosen to model the tasks the worker performed and to evaluate the potentially nonuniform hand dose distribution. A model resembling the worker's hands, including the thumb, index finger, middle finger, and palm, was constructed to represent the handling tasks. The dose was calculated at the 7 mg cm⁻² skin depth. To comply with the Code of Federal Regulations (10 CFR 835), the dose to the 100 cm² of skin that received the highest dose must be calculated; from this it could be determined whether the dose received by the worker exceeded any regulatory limit. The computer code VARSKIN was also used to provide results for comparison with those from MCNP where applicable.
The results from the MCNP calculations showed that the dose to the worker’s hands did not exceed the regulatory limit of 0.5 Sv (50 rem). The equivalent nonuniform dose was 0.126 Sv (12.6 rem) to the right hand and 0.082 Sv (8.2 rem) to the left hand.
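As a hypothetical illustration of the 10 CFR 835 bookkeeping step described above (not the thesis' MCNP workflow), the maximally exposed contiguous 100 cm² average can be extracted from a gridded dose map with a sliding-window mean:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def max_100cm2_average(dose_map_sv):
    """dose_map_sv: 2-D array of shallow dose (Sv) on an assumed
    1 cm x 1 cm grid, so a 100 cm^2 window spans 10 x 10 cells.
    Returns the highest 100 cm^2 average dose."""
    window_avg = uniform_filter(np.asarray(dose_map_sv, dtype=float),
                                size=10, mode='constant')
    return window_avg.max()

LIMIT_SV = 0.5  # extremity limit cited in the abstract (50 rem)
# compliant = max_100cm2_average(dose_map) <= LIMIT_SV
```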

238. Guaranteed Verification of Finite Element Solutions of Heat Conduction. Wang, Delin. May 2011.
This dissertation addresses the accuracy of a-posteriori error estimators for finite element solutions of problems with high orthotropy, especially for the rather coarse meshes often encountered in engineering computations. We present sample computations indicating that all standard residual estimators lack robustness with respect to high orthotropy. The investigation shows that the main culprit behind this lack of robustness is the coarseness of the finite element mesh relative to the thickness of the boundary and interface layers in the solution.
With the introduction of an elliptic reconstruction procedure, a new error estimator based on the solution of the elliptic reconstruction problem is derived to estimate the exact error, measured in the space-time C-norm, for both semi-discrete and fully discrete finite element solutions of a linear parabolic problem. For a fully discrete solution, a temporal error estimator is also introduced to evaluate the discretization error in time. In addition, the implicit Neumann subdomain residual estimator for elliptic equations, which involves the solution of a local residual problem, is combined with the elliptic reconstruction procedure to carry out a-posteriori error estimation for the linear parabolic problem. Numerical examples are presented to illustrate the superconvergence properties of the elliptic reconstruction and the performance of the bounds based on the space-time C-norm.
The results show that, in the L^2 norm, there is no superconvergence in the elliptic reconstruction for linear elements with smooth solutions, and no superconvergence for elements of any order with singular solutions, whereas in the energy norm the elliptic reconstruction is always superconvergent. The research also shows that the performance of the bounds based on the space-time C-norm is robust, and that for fully discrete finite element solutions the bounds on the temporal error are sharp.
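For context, the elliptic reconstruction device (in the spirit of Makridakis and Nochetto) can be summarized as below; this is a standard sketch and may differ from the dissertation's exact formulation.

```latex
% For the parabolic problem $u_t + A u = f$ with finite element solution
% $u_h$, the elliptic reconstruction $w(t)$ is the exact solution of an
% elliptic problem built from $u_h$, so that $u_h$ is its Ritz projection:
\[
  a\bigl(w(t), v\bigr) = \bigl(A_h u_h(t), v\bigr) \qquad \forall v \in V,
\]
% after which the error splits into a parabolic part and an elliptic part,
% the latter controlled by standard elliptic a-posteriori estimators:
\[
  u - u_h \;=\; \underbrace{(u - w)}_{\text{parabolic part}}
          \;+\; \underbrace{(w - u_h)}_{\text{elliptic part}}.
\]
```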

239. AUTOGENOUS BULK STRUCTURAL BONE GRAFTING FOR RECONSTRUCTION OF THE ACETABULUM IN PRIMARY TOTAL HIP ARTHROPLASTY: AVERAGE 12-YEAR FOLLOW-UP. Masui, Tetsuo; Iwase, Toshiki; Kouyama, Atsushi; Shidou, Tetsuro.
No description available.

240. Structural studies of the SARS virus Nsp15 endonuclease and the human innate immunity receptor TLR3. Sun, Jingchuan. 16 August 2006.
Three-dimensional (3D) structure determination of biological macromolecules is not only critical to understanding their mechanisms but also has practical applications. By combining the high-resolution imaging of transmission electron microscopy (TEM) with efficient computer processing, protein structures in solution or in two-dimensional (2D) crystals can be determined. The lipid monolayer technique uses the high-affinity binding of 6His-tagged proteins to a Ni-nitrilotriacetic acid (NTA) lipid to create high local protein concentrations, which facilitates 2D crystal formation. In this study, several proteins have been crystallized using this technique, including the SARS virus Nsp15 endonuclease and the human Toll-like receptor (TLR) 3 extracellular domain (ECD). Single particle analysis can determine protein structures in solution without the need for crystals. 3D structures of several protein complexes have been solved by the single particle method, including IniA from Mycobacterium tuberculosis, Nsp15, and TLR3 ECD. Determining the structures of these proteins is an important step toward understanding pathogenic microbes and our immune system.