41.
GPU-friendly marching cubes. January 2008 (has links)
Xie, Yongming. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2008. / Includes bibliographical references (leaves 77-85). / Abstracts in English and Chinese. / Contents:
Abstract --- p.i
Acknowledgement --- p.ii
1 Introduction --- p.1
1.1 Isosurfaces --- p.1
1.2 Graphics Processing Unit --- p.2
1.3 Objective --- p.3
1.4 Contribution --- p.3
1.5 Thesis Organization --- p.4
2 Marching Cubes --- p.5
2.1 Introduction --- p.5
2.2 Marching Cubes Algorithm --- p.7
2.3 Triangulated Cube Configuration Table --- p.12
2.4 Summary --- p.16
3 Graphics Processing Unit --- p.18
3.1 Introduction --- p.18
3.2 History of Graphics Processing Unit --- p.19
3.2.1 First Generation GPU --- p.20
3.2.2 Second Generation GPU --- p.20
3.2.3 Third Generation GPU --- p.20
3.2.4 Fourth Generation GPU --- p.21
3.3 The Graphics Pipeline --- p.21
3.3.1 Standard Graphics Pipeline --- p.21
3.3.2 Programmable Graphics Pipeline --- p.23
3.3.3 Vertex Processors --- p.25
3.3.4 Fragment Processors --- p.26
3.3.5 Frame Buffer Operations --- p.28
3.4 GPU CPU Analogy --- p.31
3.4.1 Memory Architecture --- p.31
3.4.2 Processing Model --- p.32
3.4.3 Limitation of GPU --- p.33
3.4.4 Input and Output --- p.34
3.4.5 Data Readback --- p.34
3.4.6 Framebuffer --- p.34
3.5 Summary --- p.35
4 Volume Rendering --- p.37
4.1 Introduction --- p.37
4.2 History of Volume Rendering --- p.38
4.3 Hardware Accelerated Volume Rendering --- p.40
4.3.1 Hardware Acceleration Volume Rendering Methods --- p.41
4.3.2 Proxy Geometry --- p.42
4.3.3 Object-Aligned Slicing --- p.43
4.3.4 View-Aligned Slicing --- p.45
4.4 Summary --- p.48
5 GPU-Friendly Marching Cubes --- p.49
5.1 Introduction --- p.49
5.2 Previous Work --- p.50
5.3 Traditional Method --- p.52
5.3.1 Scalar Volume Data --- p.53
5.3.2 Isosurface Extraction --- p.53
5.3.3 Flow Chart --- p.54
5.3.4 Transparent Isosurfaces --- p.56
5.4 Our Method --- p.56
5.4.1 Cell Selection --- p.59
5.4.2 Vertex Labeling --- p.61
5.4.3 Cell Indexing --- p.62
5.4.4 Interpolation --- p.65
5.5 Rendering Translucent Isosurfaces --- p.67
5.6 Implementation and Results --- p.69
5.7 Summary --- p.74
6 Conclusion --- p.76
Bibliography --- p.77
42.
Leveraging Text-to-Scene Generation for Language Elicitation and Documentation. Ulinski, Morgan Elizabeth. January 2019 (has links)
Text-to-scene generation systems take a natural language text as input and output a 3D scene illustrating the meaning of that text. A major benefit of text-to-scene generation is that it lets users create custom 3D scenes without a background in 3D graphics or knowledge of specialized software packages, making it useful in scenarios ranging from creative applications to education. The primary goal of this thesis is to explore how we can use text-to-scene generation in a new way: as a tool to facilitate the elicitation and formal documentation of language. In particular, we use text-to-scene generation (a) to assist field linguists studying endangered languages; (b) to provide a cross-linguistic framework for formally modeling spatial language; and (c) to collect language data using crowdsourcing. In pursuing these goals, we also explore the problem of multilingual text-to-scene generation, that is, systems for generating 3D scenes from languages other than English.
The contributions of this thesis are the following. First, we develop a novel tool suite (the WordsEye Linguistics Tools, or WELT) that uses the WordsEye text-to-scene system to assist field linguists with eliciting and documenting endangered languages. WELT allows linguists to create custom elicitation materials and to document semantics in a formal way. We test WELT with two endangered languages, Nahuatl and Arrernte. Second, we explore the question of how to learn a syntactic parser for WELT. We show that an incremental learning method using a small number of annotated dependency structures can produce reasonably accurate results. We demonstrate that using a parser trained in this way can significantly decrease the time it takes an annotator to label a new sentence with dependency information. Third, we develop a framework that generates 3D scenes from spatial and graphical semantic primitives. We incorporate this system into the WELT tools for creating custom elicitation materials, allowing users to directly manipulate the underlying semantics of a generated scene. Fourth, we introduce a deep semantic representation of spatial relations and use this to create a new resource, SpatialNet, which formally declares the lexical semantics of spatial relations for a language. We demonstrate how SpatialNet can be used to support multilingual text-to-scene generation. Finally, we show how WordsEye and the semantic resources it provides can be used to facilitate elicitation of language using crowdsourcing.
43.
Reconstructing specular objects with image based rendering using color caching / Chhabra, Vikram. January 2001 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: scene reconstruction, vision, image based rendering, graphics, color consistency, specular objects. Includes bibliographical references (p. 56-57).
44.
A framework for automatic creation of talking heads for multimedia applications / Choi, KyoungHo. January 2002 (has links)
Thesis (Ph. D.)--University of Washington, 2002. / Vita. Includes bibliographical references (leaves 88-92).
45.
Heart frontal section and hypertrophic cardiomyopathy / Kang, Robin. January 2010 (has links)
Thesis (M.F.A.)--Rochester Institute of Technology, 2010. / Typescript. Includes bibliographical references.
46.
Algorithmic approaches to finding cover in three-dimensional, virtual environments / Morgan, David J. January 2003 (has links) (PDF)
Thesis (M.S. in Modeling, Virtual Environments and Simulation)--Naval Postgraduate School, September 2003. / Thesis advisor(s): Christian J. Darken, Joseph A. Sullivan. Includes bibliographical references (p. 91-92). Also available online.
47.
Human model reconstruction from image sequence / Chang, Ka Kit. January 2003 (has links)
Thesis (Ph. D.)--Hong Kong University of Science and Technology, 2003. / Includes bibliographical references (leaves 124-134). Also available in electronic version. Access restricted to campus users.
48.
Design and evaluation of a multimedia computing architecture based on a 3D graphics pipeline / Chung, Chris Yoochang. January 2002 (has links)
Thesis (Ph. D.)--University of Washington, 2002. / Vita. Includes bibliographical references (leaves 114-123).
49.
Boundary/finite element meshing from volumetric data with applications / Zhang, Yongjie. 28 August 2008 (has links)
Not available.
50.
Interaction techniques for common tasks in immersive virtual environments : design, evaluation, and application / Bowman, Douglas A. January 1999 (has links)
No description available.