About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

General Geometry Computed Tomography Reconstruction

Ramotar, Alexei January 2006 (has links)
The discovery of Carbon Nanotubes and their ability to produce X-rays can usher in a new era in Computed Tomography (CT) technology. These devices will be lightweight, flexible and portable. The proposed device, currently under development, is envisioned as a flexible band of tiny X-ray emitters and detectors. The device is wrapped around an appendage and a CT image is obtained. However, current CT reconstruction algorithms can only be used if the geometry of the CT device is regular (usually circular). We present an efficient and accurate reconstruction technique that is unconstrained by the geometry of the CT device; the geometry may be regular or highly irregular. To evaluate the feasibility of reconstructing a CT image from such a device, a simulated test bed was built to generate CT ray sums of an image. These data were then used in our reconstruction method: the ray sums are regridded to match the geometry expected from a parallel-beam CT scanner, after which Filtered Back Projection performs the reconstruction. We also included data inaccuracies, as expected in "real world" situations. Observations of reconstructions, as well as quantitative results, suggest that this simple method is efficient and accurate.
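Once irregular-geometry ray sums have been regridded onto a parallel-beam sinogram, standard Filtered Back Projection applies directly. Below is a minimal sketch of that final step, assuming scikit-image (>= 0.19, for the filter_name argument); it is illustrative only, not the thesis's test bed.

```python
# Minimal sketch: parallel-beam FBP on a simulated sinogram (illustrative,
# not the thesis code). Assumes scikit-image >= 0.19.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)        # test image
theta = np.linspace(0.0, 180.0, max(phantom.shape))  # projection angles (degrees)

# Simulated parallel-beam ray sums; in the thesis, irregular-geometry ray
# sums would first be regridded onto such a sinogram.
sinogram = radon(phantom, theta=theta)
sinogram += np.random.normal(0.0, 0.01, sinogram.shape)  # "real world" inaccuracies

reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
print("RMS error:", np.sqrt(np.mean((reconstruction - phantom) ** 2)))
```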
232

Reconstruction of Orthogonal Polyhedra

Genc, Burkay January 2008 (has links)
In this thesis I study reconstruction of orthogonal polyhedral surfaces and orthogonal polyhedra from partial information about their boundaries. There are three main questions for which I provide novel results. The first question is "Given the dual graph, facial angles and edge lengths of an orthogonal polyhedral surface or polyhedron, is it possible to reconstruct the dihedral angles?" The second question is "Given the dual graph, dihedral angles and edge lengths of an orthogonal polyhedral surface or polyhedron, is it possible to reconstruct the facial angles?" The third question is "Given the vertex coordinates of an orthogonal polyhedral surface or polyhedron, is it possible to reconstruct the edges and faces, possibly after rotating?" For the first two questions, I show that the answer is "yes" for genus-0 orthogonal polyhedra and polyhedral surfaces under some restrictions, and provide linear time algorithms. For the third question, I provide results and algorithms for orthogonally convex polyhedra. Many related problems are studied as well.
233

The Study of Synthetic Aperture Sonar System

Sung, Chen-Hung 31 August 2010 (has links)
This research studies the fundamental theory of Synthetic Aperture Sonar (SAS) through numerical simulation and experimental analysis. The basic principle of SAS is to enhance spatial resolution by moving the transducer element to synthesize a larger aperture. The factors affecting resolution include the actual size of the transducers, the frequency and its bandwidth, the pulse length, and the moving speed. The effects of these factors on resolution were examined through numerical simulation. The results show that the smaller the true size of the transducer, the better the resolution; moreover, increasing the bandwidth also improves the resolution. SAS is sensitive to the speed of movement because data acquisition may be limited; therefore the speed cannot be too high (e.g., it should remain below 1.5 m/s). The experiment was carried out in a water tank of size 4 m × 3.5 m × 2 m. AST MK VI 192 kHz transducers were employed to transmit and receive signals, and copper spheres of various sizes (3 cm, 6 cm, 8 cm diameter) were used as targets. The data were obtained and analyzed, and the results show that improved resolution can be achieved by SAS analysis, establishing the fundamental principle and offering opportunities for future study.
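For context, these observations match the standard synthetic-aperture resolution relations (textbook results, not derived in the abstract itself):

```latex
\delta_{\text{along-track}} \approx \frac{D}{2},
\qquad
\delta_{\text{range}} = \frac{c}{2B}
```

where D is the physical transducer length, B the signal bandwidth, and c the speed of sound in water: a smaller transducer and a wider bandwidth both yield finer resolution, independent of range.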
234

Geometric representation of neuroanatomical data observed in mouse brain at cellular and gross levels

Koh, Wonryull 15 May 2009 (has links)
This dissertation studies two problems related to geometric representation of neuroanatomical data: (i) spatial representation and organization of individual neurons, and (ii) reconstruction of three-dimensional neuroanatomical regions from sparse two-dimensional drawings. This work has been motivated by the recent development of a new technology, Knife-Edge Scanning Microscopy (KESM), that images a whole mouse brain at the cellular level in less than a month. A method is introduced to represent neuronal data observed in the mammalian brain at the cellular level using geometric primitives and spatial indexing. A data representation scheme is defined that captures the geometry of individual neurons using traditional geometric primitives: points and cross-sectional areas along a trajectory. This representation captures inferred synapses as directed links between primitives and spatially indexes observed neurons based on the locations of their cell bodies (see the sketch below). This method provides a set of rules for acquisition, representation, and indexing of KESM-generated data. Neuroanatomical data observed at the gross level provides the underlying regional framework for neuronal circuits. Accumulated expert knowledge on neuroanatomical organization is usually given as a series of sparse two-dimensional contours. A data structure and an algorithm are described to reconstruct separating surfaces among multiple regions from these sparse cross-sectional contours. A topology graph is defined for each region that describes the topological skeleton of the region's boundary surface and shows between which contours the surface patches should be generated. A graph-directed triangulation algorithm is provided to reconstruct surface patches between contours. This algorithm, combined with a piecewise parametric curve fitting technique, ensures that abutting or shared surface patches are precisely coincident. This method overcomes limitations in (i) traditional surfaces-from-contours algorithms, which assume binary rather than multiple regionalization of space, and (ii) the few existing separating-surface algorithms, which assume conversion of the input into a regular volumetric grid, something that is not possible with sparse inter-planar resolution.
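As one concrete illustration of spatial indexing by cell-body location (a hypothetical sketch, not the dissertation's actual scheme), a k-d tree over soma coordinates supports efficient neighborhood queries:

```python
# Hypothetical sketch: index neurons by soma location for fast range queries.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
somata = rng.uniform(0.0, 1000.0, size=(100_000, 3))  # (x, y, z) soma positions, micrometers

index = cKDTree(somata)                          # spatial index over cell-body locations
query = np.array([500.0, 500.0, 500.0])
nearby = index.query_ball_point(query, r=50.0)   # indices of somata within 50 um
print(f"{len(nearby)} somata within 50 um of the query point")
```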
235

Towards the Development of Training Tools for Face Recognition

Rodriguez, Jobany May 2011 (has links)
Distinctiveness plays an important role in the recognition of faces, i.e., a distinctive face is usually easier to remember than a typical face in a recognition task. This distinctiveness effect explains why caricatures are recognized faster and more accurately than unexaggerated (i.e., veridical) faces. Furthermore, using caricatures during training can facilitate recognition of a person's face at a later time. The objective of this thesis is to determine the extent to which photorealistic computer-generated caricatures may be used in training tools to improve recognition of faces by humans. To pursue this objective, we developed a caricaturization procedure for three-dimensional (3D) face models, and characterized face recognition performance (by humans) through a series of perceptual studies. The first study focused on 3D shape information without texture. Namely, we tested whether exposure to caricatures during an initial familiarization phase would aid in the recognition of their veridical counterparts at a later time. We examined whether this effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces during the recognition phase. Results indicate that, even under these difficult training conditions, people are more accurate at recognizing unaltered faces if they are first familiarized with caricatures of the faces, rather than with the unaltered faces. These preliminary findings support the use of caricatures in new training methods to improve face recognition. In the second study, we incorporated texture into our 3D models, which allowed us to generate photorealistic renderings. In this study, we sought to determine the extent to which familiarization with caricaturized faces could also be used to reduce other-race effects (i.e., the phenomenon whereby faces from other races appear less distinct than faces from our own race). Using an old/new face recognition paradigm, Caucasian participants were first familiarized with a set of faces from multiple races, and then asked to recognize those faces among a set of confounders. Participants who were familiarized with and then asked to recognize veridical versions of the faces showed a significant other-race effect on Indian faces. In contrast, participants who were familiarized with caricaturized versions of the same faces, and then asked to recognize their veridical versions, showed no other-race effects on Indian faces. This result suggests that caricaturization may be used to help individuals focus their attention on features that are useful for recognition of other-race faces. The third and final experiment investigated the practical application of our earlier results. Since 3D facial scans are not generally available, here we also sought to determine whether 3D reconstructions from 2D frontal images could be used for the same purpose. Using the same old/new face recognition paradigm, participants who were familiarized with reconstructed faces and then asked to recognize the ground truth versions of the faces showed a significant reduction in performance compared to the previous study. In addition, participants who were familiarized with caricatures of reconstructed versions, and then asked to recognize their corresponding ground truth versions, showed a larger reduction in performance.
Our results suggest that, despite the high level of photographic realism achieved by current 3D facial reconstruction methods, additional research is needed in order to reduce reconstruction errors and capture the distinctive facial traits of an individual. These results are critical for the development of training tools based on computer-generated photorealistic caricatures from “mug shot” images.
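The core caricaturization idea can be stated compactly: exaggerate a face's deviation from an average face. The sketch below (with illustrative names; the thesis's 3D procedure with texture is more elaborate) shows the operation on mesh vertices:

```python
# Illustrative sketch of caricaturization: push a face's vertices away from
# the mean face by an exaggeration factor. level = 1.0 is the veridical face.
import numpy as np

def caricature(face_vertices: np.ndarray, mean_vertices: np.ndarray,
               level: float = 1.5) -> np.ndarray:
    """Exaggerate per-vertex deviations of a 3D face from the mean face."""
    return mean_vertices + level * (face_vertices - mean_vertices)

# Toy example with a three-vertex "mesh":
mean_face = np.zeros((3, 3))
face = np.array([[1.0, 0.0, 0.2],
                 [0.0, 1.0, 0.1],
                 [0.5, 0.5, 0.3]])
print(caricature(face, mean_face, level=1.5))
```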
236

Dose Reconstruction Using Computational Modeling of Handling a Particular Arsenic-73/Arsenic-74 Source

Stallard, Alisha M. May 2010 (has links)
A special work evolution was performed at Los Alamos National Laboratory (LANL) with a particular 73As/74As source, but the worker's extremity dosimeter did not appear to provide appropriate dosimetric information for the tasks performed. This prompted a reconstruction of the dose to the worker's hands. The computer code MCNP was chosen to model the tasks that the worker performed and to evaluate the potentially nonuniform hand dose distribution. A model resembling the worker's hands, including the thumb, index finger, middle finger, and palm, was constructed to represent the handling tasks performed. The dose was calculated at the 7 mg cm⁻² skin depth. To comply with the Code of Federal Regulations (10 CFR 835), the dose to the 100 cm² area that received the highest dose must be calculated; from this, it could be determined whether the dose received by the worker exceeded any regulatory limit. The computer code VARSKIN was also used to provide results for comparison with those from MCNP where applicable. The MCNP calculations showed that the dose to the worker's hands did not exceed the regulatory limit of 0.5 Sv (50 rem); the equivalent nonuniform dose was 0.126 Sv (12.6 rem) to the right hand and 0.082 Sv (8.2 rem) to the left hand.
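For context, the 7 mg cm⁻² assessment depth corresponds, assuming unit-density soft tissue (the usual convention, not stated in the abstract), to a physical depth of

```latex
d = \frac{7\ \text{mg}\,\text{cm}^{-2}}{1.0\ \text{g}\,\text{cm}^{-3}}
  = 7 \times 10^{-3}\ \text{cm} = 70\ \mu\text{m},
```

i.e., the nominal depth of the basal layer of the skin.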
237

Guaranteed Verification of Finite Element Solutions of Heat Conduction

Wang, Delin May 2011 (has links)
This dissertation addresses the accuracy of a posteriori error estimators for finite element solutions of problems with high orthotropy, especially for the rather coarse meshes that are often encountered in engineering computations. We present sample computations which indicate a lack of robustness of all standard residual estimators with respect to high orthotropy. The investigation shows that the main culprit behind this lack of robustness is the coarseness of the finite element meshes relative to the thickness of the boundary and interface layers in the solution. By introducing an elliptic reconstruction procedure, a new error estimator based on the solution of the elliptic reconstruction problem is developed to estimate the exact error, measured in the space-time C-norm, for both semi-discrete and fully discrete finite element solutions of linear parabolic problems. For a fully discrete solution, a temporal error estimator is also introduced to evaluate the discretization error in the temporal field. In addition, the implicit Neumann subdomain residual estimator for elliptic equations, which involves the solution of a local residual problem, is combined with the elliptic reconstruction procedure to carry out a posteriori error estimation for the linear parabolic problem. Numerical examples are presented to illustrate the superconvergence properties of the elliptic reconstruction and the performance of the bounds based on the space-time C-norm. The results show that, in the L² norm, there is no superconvergence in the elliptic reconstruction for linear elements when the solution is smooth, and no superconvergence for elements of any order when the solution is singular, whereas in the energy norm the elliptic reconstruction is always superconvergent. The research also shows that the performance of the bounds based on the space-time C-norm is robust, and that in the case of fully discrete finite element solutions the bounds for the temporal error are sharp.
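For readers unfamiliar with the technique: in one common form (following Makridakis and Nochetto; the notation here is generic, not necessarily the dissertation's), the elliptic reconstruction Ru_h(t) of a semi-discrete solution u_h(t) of a linear parabolic problem u_t + Au = f is defined by

```latex
a\bigl(\mathcal{R}u_h(t),\, v\bigr) \;=\; \bigl(f(t) - \partial_t u_h(t),\, v\bigr)
\qquad \forall\, v \in H^1_0(\Omega),
```

so that u_h(t) is exactly the finite element approximation of Ru_h(t), and well-understood elliptic a posteriori estimators can then be applied to Ru_h - u_h at each time.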
238

Autogenous Bulk Structural Bone Grafting for Reconstruction of the Acetabulum in Primary Total Hip Arthroplasty: Average 12-Year Follow-Up

Masui, Tetsuo; Iwase, Toshiki; Kouyama, Atsushi; Shidou, Tetsuro 09 1900 (has links)
No description available.
239

Structural studies of the SARS virus Nsp15 endonuclease and the human innate immunity receptor TLR3

Sun, Jingchuan 16 August 2006 (has links)
Three-dimensional (3D) structural determination of biological macromolecules is not only critical to understanding their mechanisms but also has practical applications. By combining the high-resolution imaging of transmission electron microscopy (TEM) with efficient computer processing, protein structures in solution or in two-dimensional (2D) crystals can be determined. The lipid monolayer technique uses the high-affinity binding of 6His-tagged proteins to a Ni-nitrilotriacetic acid (NTA) lipid to create high local protein concentrations, which facilitates 2D crystal formation. In this study, several proteins have been crystallized using this technique, including the SARS virus Nsp15 endonuclease and the human Toll-like receptor (TLR) 3 extracellular domain (ECD). Single particle analysis can determine protein structures in solution without the need for crystals. 3D structures of several protein complexes have been solved by the single particle method, including IniA from Mycobacterium tuberculosis, Nsp15, and TLR3 ECD. Determining the structures of these proteins is an important step toward understanding pathogenic microbes and our immune system.
240

Adaptive finite element methods for fluorescence enhanced optical tomography

Joshi, Amit 30 October 2006 (has links)
Fluorescence-enhanced optical tomography is a promising molecular imaging modality which employs a near-infrared fluorescent molecule as an imaging agent and time-dependent measurements of fluorescent light propagation and generation. In this dissertation a novel fluorescence tomography algorithm is proposed to reconstruct images of targets contrasted by fluorescence within tissues from boundary fluorescence emission measurements. An adaptive finite element based reconstruction algorithm for high-resolution fluorescence tomography was developed and validated with non-contact, plane-wave frequency-domain fluorescence measurements on a tissue phantom. The image reconstruction problem was posed as an optimization problem that seeks the fluorescence optical property map minimizing the difference between the experimentally observed boundary fluorescence and that predicted by the diffusion model. A regularized Gauss-Newton algorithm was derived, and dual adaptive meshes were employed for the solution of the coupled photon diffusion equations and for updating the fluorescence optical property map in the tissue phantom. The algorithm was developed in a continuous function space setting in a mesh-independent manner. This allowed the meshes to adapt during the tomography process to yield high-resolution images of fluorescent targets and to accurately simulate light propagation in tissue phantoms under area illumination. Frequency-domain fluorescence data collected at the illumination surface were used to reconstruct the fluorescence yield distribution in a 512 cm³ tissue phantom filled with 1% Liposyn solution. Fluorescent targets containing 1 micromolar Indocyanine Green solution in 1% Liposyn were suspended at depths of up to 2 cm from the illumination surface. Fluorescence measurements at the illumination surface were acquired by a gain-modulated, image-intensified CCD camera system outfitted with holographic band-rejection and optical band-pass filters. Excitation light at the phantom surface was quantified by utilizing cross polarizers. Rayleigh resolution studies to determine the minimum detectable separation of two embedded fluorescent targets were attempted, and in the absence of measurement noise, resolution down to the transport limit of 1 mm was attained. The results of this work demonstrate the feasibility of high-resolution molecular tomography in the clinic with rapid non-contact area measurements.
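The regularized Gauss-Newton iteration mentioned above has the generic form (stated for context; the dissertation's exact formulation, regularization, and dual-mesh treatment may differ):

```latex
x_{k+1} = x_k + \bigl(J_k^{\top} J_k + \lambda I\bigr)^{-1} J_k^{\top} \bigl(y - F(x_k)\bigr),
```

where F is the diffusion-model forward map from the fluorescence optical property map x to the predicted boundary fluorescence, J_k its Jacobian at x_k, y the measured boundary data, and λ the regularization parameter.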
