221

Structural studies of the SARS virus Nsp15 endonuclease and the human innate immunity receptor TLR3

Sun, Jingchuan 16 August 2006 (has links)
Three-dimensional (3D) structural determination of biological macromolecules is not only critical to understanding their mechanisms, but also has practical applications. By combining the high-resolution imaging of transmission electron microscopy (TEM) with efficient computer processing, protein structures can be determined in solution or in two-dimensional (2D) crystals. The lipid monolayer technique uses the high-affinity binding of 6His-tagged proteins to a Ni-nitrilotriacetic acid (NTA) lipid to create high local protein concentrations, which facilitates 2D crystal formation. In this study, several proteins have been crystallized using this technique, including the SARS virus Nsp15 endonuclease and the human Toll-like receptor (TLR) 3 extracellular domain (ECD). Single particle analysis can determine protein structures in solution without the need for crystals. 3D structures of several protein complexes were solved by the single particle method, including IniA from Mycobacterium tuberculosis, Nsp15, and TLR3 ECD. Determining the structures of these proteins is an important step toward understanding pathogenic microbes and our immune system.
222

Virtual reconstruction of a seventeenth-century Portuguese nau

Wells, Audrey Elizabeth 10 October 2008 (has links)
This interdisciplinary research project combines the fields of nautical archaeology and computer visualization to create an interactive virtual reconstruction of the 1606 Portuguese vessel Nossa Senhora dos Mártires, also known as the Pepper Wreck. Using reconstruction information provided by Dr. Filipe Castro (Texas A&M Department of Anthropology), a detailed 3D computer model of the ship was constructed and filled with cargo to demonstrate how the ship might have been loaded on the return voyage from India. The models are realistically shaded, lighted, and placed into an appropriate virtual environment. The scene can be viewed using the real-time immersive and interactive system developed by Dr. Frederic Parke (Texas A&M Department of Visualization). The process developed to convert the available information and data into a reconstructed 3D model is documented. This documentation allows future projects to adapt the process for other archaeological visualizations, as well as informing archaeologists about the type of data most useful for computer visualizations of this kind.
223

Memory and documentation in exhibition-making: A case study of the Protea Village exhibition, A History of Paradise 1829 - 2002.

Baduza, Uthando Lubabalo. January 2008 (has links)
This mini-thesis seeks to interrogate the interplay between memory and documentation in the process of exhibition-making, by looking at the preparation for and mounting of the exhibition by the Museum. This will be achieved by looking at the institutional methodologies employed by the Museum in dealing with ex-residents of District Six, their memories and artefacts, in the heritage practice of a museum as a forum. This practice was put into effect as the District Six Museum engaged ex-residents of other locations of removal.
224

Two Case Studies on Vision-based Moving Objects Measurement

Zhang, Ji August 2011 (has links)
In this thesis, we present two case studies on vision-based measurement of moving objects. In the first case, we used a monocular camera to perform ego-motion estimation for a robot in an urban area. We developed the algorithm based on vertical line features, such as the vertical edges of buildings and poles, because vertical lines are easy to extract, insensitive to lighting conditions and shadows, and sensitive to camera/robot movements on the ground plane. We derived an incremental estimation algorithm based on vertical line pairs. We analyzed how errors are introduced and propagated in the continuous estimation process by deriving a closed-form representation of the covariance matrix. We then formulated the minimum-variance ego-motion estimation problem as a convex optimization problem and solved it with the interior-point method. The algorithm was extensively tested in physical experiments and compared with two popular methods; our estimation results consistently outperformed both counterparts in robustness, speed, and accuracy. In the second case, we used a camera-mirror system to measure the swimming motion of a live fish, and the extracted motion data were used to drive an animation of fish behavior. The camera-mirror system captured three orthogonal views of the fish. We also built a virtual fish model to assist the measurement of the real fish. The fish model has a four-link spinal cord and meshes attached to the spinal cord. We projected the fish model into three orthogonal views and matched the projected views with the real views captured by the camera. Then, we maximized the overlapping area of the fish between the projected views and the real views. The maximization result gave us the position, orientation, and body bending angle of the fish model, which were used as the fish movement measurement. Part of this algorithm is still under construction and will be updated in the future.
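
The projection-matching step in the second case study can be illustrated with a small, self-contained sketch. The snippet below is not the thesis implementation: the fish model, its spinal-cord links, and the camera-mirror projection are replaced by a plain oriented rectangle fitted to a single binary view, but the core idea, maximizing the overlap between a projected model silhouette and the observed silhouette over the pose parameters, is the same.

```python
import numpy as np

def render_rectangle(shape, cx, cy, theta, half_w=20.0, half_h=8.0):
    """Rasterize an oriented rectangle as a binary silhouette
    (a stand-in for one projected view of the fish model)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xs - cx, ys - cy
    u = np.cos(theta) * dx + np.sin(theta) * dy       # model-frame coordinates
    v = -np.sin(theta) * dx + np.cos(theta) * dy
    return (np.abs(u) <= half_w) & (np.abs(v) <= half_h)

def overlap(model, observed):
    """Intersection-over-union of two binary silhouettes."""
    inter = np.logical_and(model, observed).sum()
    union = np.logical_or(model, observed).sum()
    return inter / max(union, 1)

# Synthetic "observed" view generated from a known pose, which we then recover
# by exhaustively maximizing the overlap over a coarse pose grid.
observed = render_rectangle((120, 160), cx=90.0, cy=60.0, theta=0.4)
best = max(
    ((overlap(render_rectangle(observed.shape, cx, cy, th), observed), cx, cy, th)
     for cx in range(70, 111, 5)
     for cy in range(40, 81, 5)
     for th in np.linspace(0.0, 0.8, 9)),
    key=lambda t: t[0],
)
print("best IoU %.2f at pose cx=%d cy=%d theta=%.1f" % best)
```

A grid search is used here only to keep the sketch short; in practice the same overlap objective would be handed to a numerical optimizer over the full pose and body-bend parameters.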
225

A survey of algebraic algorithms in computerized tomography

Brooks, Martin 01 August 2010 (has links)
X-ray computed tomography (CT) is a medical imaging framework. It takes measured projections of X-rays through two-dimensional cross-sections of an object from multiple angles and uses algorithms to build a sequence of two-dimensional reconstructions of the interior structure. This thesis comprises a review of the different types of algebraic algorithms used in X-ray CT. Using simulated test data, I evaluate the viability of algorithmic alternatives that could potentially reduce overexposure to radiation, as this is seen as a major health concern and the limiting factor in the advancement of CT [36, 34]. Most of the current evaluations in the literature [31, 39, 11] deal with low-resolution reconstructions and the results are impressive; however, modern CT applications demand very high-resolution imaging. Consequently, I selected five of the fundamental algebraic reconstruction algorithms (ART, SART, Cimmino's Method, CAV, DROP) for extensive testing, and the results are reported in this thesis. The quantitative numerical results obtained in this study confirm the qualitative suggestion that algebraic techniques are not yet adequate for practical use. However, as algebraic techniques can actually produce an image from corrupt and/or missing data, I conclude that further refinement of algebraic techniques may ultimately lead to a breakthrough in CT. / UOIT
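
For readers unfamiliar with the algebraic family surveyed here, a minimal ART (Kaczmarz) sketch follows. It is not the thesis code or its test bed: the system matrix and ray sums are a tiny synthetic placeholder, but the row-by-row projection update is the defining step that ART, SART, Cimmino's Method, CAV, and DROP share with different weightings and orderings.

```python
import numpy as np

def art(A, b, iterations=50, relaxation=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz): cycle through the rows of A,
    projecting the current estimate onto the hyperplane of each ray-sum equation."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(iterations):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relaxation * (residual / row_norms[i]) * A[i]
    return x

# Tiny synthetic system: recover a flattened 2x2 "image" from five ray sums
# (two row sums, two column sums, and one diagonal sum).
phantom = np.array([1.0, 0.0, 0.0, 1.0])
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 0, 1]], dtype=float)
b = A @ phantom
print(art(A, b).round(3))   # approaches the phantom [1, 0, 0, 1]
```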
226

Do Bank Bailouts Work? The Effect of Reconstruction Finance Corporation Aid During the Crisis of 1933

Bobroff, Katherine 24 April 2009 (has links)
Do bank bailouts work? Government aid initiatives implemented to stem the current crisis raise important questions about the role of monetary policy in preventing bank failures. The scale of this bailout program defies comparison with any other aid package implemented in the post-World War II period. Fortunately, the operations of the Reconstruction Finance Corporation (RFC) during the Great Depression provide a historical experiment for examining the effects of government rescue programs on financial institutions. This paper examines the effects of the RFC's loan and preferred stock programs on bank failure rates during the crisis of 1933. Using a new database on Michigan banks, I employ survival analysis to assess the effectiveness of the RFC's loans and preferred stock purchases. My analysis suggests that the loan program increased the failure rates of banks during the crisis by increasing the indebtedness of financial institutions. Conversely, I find that the RFC's purchases of preferred stock increased the chances that a bank survived the financial crisis. Injections of capital helped repair banks' balance sheets and restored confidence in the financial system. Ultimately, this historical experiment provides some insight into how government aid programs might curtail banking crises.
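
As a rough illustration of the kind of survival-analysis setup described above, and not the paper's actual specification or data, the sketch below fits a Cox proportional-hazards model to a handful of made-up bank records; the column names and values are hypothetical placeholders for the Michigan database.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical records standing in for the Michigan bank database: time observed until
# failure (or censoring at the end of the crisis window), whether the bank failed, and
# indicators for the two RFC programs discussed above.
banks = pd.DataFrame({
    "weeks_observed":  [8, 52, 30, 52, 12, 52, 20, 52],
    "failed":          [1, 0, 1, 0, 1, 0, 1, 0],    # 1 = failed, 0 = survived (censored)
    "rfc_loan":        [1, 0, 1, 1, 1, 0, 0, 0],    # received an RFC loan
    "preferred_stock": [0, 1, 0, 1, 1, 1, 0, 0],    # RFC purchased the bank's preferred stock
})

# A small penalizer keeps the fit stable on such a tiny toy sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(banks, duration_col="weeks_observed", event_col="failed")
cph.print_summary()   # hazard ratios above 1 indicate higher failure risk, below 1 protective
```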
227

Comparison of track reconstruction algorithms for the Moon Shadow Analysis in IceCube

Kim, Kwang Seong January 2013 (has links)
No description available.
228

Unfolding and Reconstructing Polyhedra

Lucier, Brendan January 2006 (has links)
This thesis covers work on two topics: unfolding polyhedra into the plane and reconstructing polyhedra from partial information. For each topic, we describe previous work in the area and present an array of new research and results. Our work on unfolding is motivated by the problem of characterizing precisely when overlaps will occur when a polyhedron is cut along edges and unfolded. In contrast to previous work, we begin by classifying overlaps according to a notion of locality. This classification enables us to focus on particular types of overlaps, and we use the results to construct examples of polyhedra with interesting unfolding properties. The research on unfolding is split into convex and non-convex cases. In the non-convex case, we construct a polyhedron for which every edge unfolding has an overlap, with fewer faces than all previously known examples. We also construct a non-convex polyhedron for which every edge unfolding has a particularly trivial type of overlap. In the convex case, we construct a series of example polyhedra for which every unfolding of various types has an overlap. These examples disprove some existing conjectures regarding algorithms to unfold convex polyhedra without overlaps. The work on reconstruction is centered around analyzing the computational complexity of a number of reconstruction questions. We consider two classes of reconstruction problems. The first problem is as follows: given a collection of edges in space, determine whether they can be rearranged by translation only to form a polygon or polyhedron. We consider variants of this problem by introducing restrictions like convexity, orthogonality, and non-degeneracy. All of these problems are NP-complete, though some are proved to be only weakly NP-complete. We then consider a second, more classical problem: given a collection of edges in space, determine whether they can be rearranged by translation and/or rotation to form a polygon or polyhedron. This problem is NP-complete for orthogonal polygons, but polynomial algorithms exist for non-orthogonal polygons. For polyhedra, it is shown that if degeneracies are allowed then the problem is NP-hard, but the complexity is still unknown for non-degenerate polyhedra.
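
The flavour of the translation-only reconstruction problem can be seen in a toy planar special case, included here for intuition only and making no claim about the variants studied in the thesis (which concern edges in space): fixed-direction edge vectors in the plane can be chained into a convex polygon exactly when they sum to zero, and sorting them by angle produces the polygon. The sketch below is an illustrative aside under that assumption.

```python
import math

def convex_polygon_from_edges(edges, tol=1e-9):
    """Given 2D edge vectors (directions and lengths fixed; translation only), return the
    vertices of a convex polygon using them all, or None if they cannot close up.
    Assumes a non-degenerate input (not all vectors parallel)."""
    if abs(sum(dx for dx, _ in edges)) > tol or abs(sum(dy for _, dy in edges)) > tol:
        return None                                  # edge vectors must sum to zero
    ordered = sorted(edges, key=lambda e: math.atan2(e[1], e[0]))   # chain edges by angle
    vertices, x, y = [(0.0, 0.0)], 0.0, 0.0
    for dx, dy in ordered[:-1]:
        x, y = x + dx, y + dy
        vertices.append((x, y))
    return vertices

print(convex_polygon_from_edges([(2, 0), (0, 3), (-2, 0), (0, -3)]))   # a 2-by-3 rectangle
print(convex_polygon_from_edges([(2, 0), (0, 3), (-1, 0)]))            # cannot close: None
```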
229

General Geometry Computed Tomography Reconstruction

Ramotar, Alexei January 2006 (has links)
The discovery of carbon nanotubes and their ability to produce X-rays can usher in a new era in Computed Tomography (CT) technology. These devices will be lightweight, flexible, and portable. The proposed device, currently under development, is envisioned as a flexible band of tiny X-ray emitters and detectors: the device is wrapped around an appendage and a CT image is obtained. However, current CT reconstruction algorithms can only be used if the geometry of the CT device is regular (usually circular). We present an efficient and accurate reconstruction technique that is unconstrained by the geometry of the CT device; the geometry can be regular or highly irregular. To evaluate the feasibility of reconstructing a CT image from such a device, a simulated test bed was built to generate simulated CT ray sums of an image. This data was then used in our reconstruction method: we take the output data and grid it according to what we would expect from a parallel-beam CT scanner, after which Filtered Back Projection can be used to perform the reconstruction. We have also included data inaccuracies, as expected in real-world situations. Observations of reconstructions, as well as quantitative results, suggest that this simple method is efficient and accurate.
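
A rough sketch of the gridding-then-FBP pipeline is given below. It is an assumed workflow, not the thesis implementation or its simulated test bed: scattered (offset, angle) ray sums stand in for the irregular-geometry measurements, scipy's griddata resamples them onto a regular parallel-beam sinogram, and scikit-image's iradon performs the Filtered Back Projection.

```python
import numpy as np
from scipy.interpolate import griddata
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

# Reference parallel-beam data from a standard phantom.
phantom = resize(shepp_logan_phantom(), (128, 128))
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)                 # shape: (detector offsets, angles)

# Simulate an irregular acquisition: ray sums sampled at scattered (offset, angle) positions.
rng = np.random.default_rng(0)
n_rays = 30000
off = rng.uniform(0, sinogram.shape[0] - 1, n_rays)
ang = rng.uniform(0.0, 180.0, n_rays)
vals = sinogram[np.round(off).astype(int),
                np.minimum(np.searchsorted(angles, ang), len(angles) - 1)]

# Grid the scattered ray sums onto the regular parallel-beam lattice, then apply FBP.
grid_off, grid_ang = np.meshgrid(np.arange(sinogram.shape[0]), angles, indexing="ij")
regular = griddata((off, ang), vals, (grid_off, grid_ang), method="linear", fill_value=0.0)
reconstruction = iradon(regular, theta=angles)
print(reconstruction.shape)
```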
230

Reconstruction of Orthogonal Polyhedra

Genc, Burkay January 2008 (has links)
In this thesis I study reconstruction of orthogonal polyhedral surfaces and orthogonal polyhedra from partial information about their boundaries. There are three main questions for which I provide novel results. The first question is "Given the dual graph, facial angles and edge lengths of an orthogonal polyhedral surface or polyhedron, is it possible to reconstruct the dihedral angles?" The second question is "Given the dual graph, dihedral angles and edge lengths of an orthogonal polyhedral surface or polyhedron, is it possible to reconstruct the facial angles?" The third question is "Given the vertex coordinates of an orthogonal polyhedral surface or polyhedron, is it possible to reconstruct the edges and faces, possibly after rotating?" For the first two questions, I show that the answer is "yes" for genus-0 orthogonal polyhedra and polyhedral surfaces under some restrictions, and provide linear time algorithms. For the third question, I provide results and algorithms for orthogonally convex polyhedra. Many related problems are studied as well.
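
A toy 2D analogue of the third question, included only for intuition and making no claim about the thesis's polyhedral algorithms: for a simple orthogonal polygon whose collinear edges do not overlap, the edge set can be recovered from the unordered vertex set by pairing consecutive vertices along each horizontal and vertical line.

```python
from collections import defaultdict

def orthogonal_polygon_edges(vertices):
    """Recover (horizontal_edges, vertical_edges) of a simple axis-aligned polygon from its
    unordered vertex set. Assumes no two collinear edges overlap, so on each grid line the
    vertices pair up consecutively into edges."""
    by_y, by_x = defaultdict(list), defaultdict(list)
    for x, y in vertices:
        by_y[y].append((x, y))
        by_x[x].append((x, y))
    horizontal, vertical = [], []
    for pts in by_y.values():
        pts.sort()                                   # sort by x along each horizontal line
        horizontal += [(pts[i], pts[i + 1]) for i in range(0, len(pts), 2)]
    for pts in by_x.values():
        pts.sort(key=lambda p: p[1])                 # sort by y along each vertical line
        vertical += [(pts[i], pts[i + 1]) for i in range(0, len(pts), 2)]
    return horizontal, vertical

# An L-shaped polygon given only as an unordered set of corner coordinates.
h, v = orthogonal_polygon_edges([(2, 1), (0, 0), (1, 2), (2, 0), (0, 2), (1, 1)])
print(h)   # horizontal edges
print(v)   # vertical edges
```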
