21

Probabilistic Roadmaps for Virtual Camera Pathing with Cinematographic Principles

Davis, Katherine 01 April 2017 (has links)
As technology use increases and inundates everyday life, the visual side of technology, computer graphics, becomes increasingly important. This thesis presents a system for the automatic generation of virtual camera paths for fly-throughs of a digital scene. The sample scene used in this work is an underwater setting featuring a shipwreck model with other virtual underwater elements such as rocks, bubbles, and caustics. The digital shipwreck model was reconstructed from an actual World War II shipwreck resting off the coast of Malta; video and sonar scans from an autonomous underwater vehicle were used in a photogrammetry pipeline to create the model. The thesis presents an algorithm to generate virtual camera paths automatically using a robotics motion-planning algorithm, specifically the probabilistic roadmap, which uses a rapidly-exploring random tree to cover a space quickly and generate small maps with good coverage. For this work, the camera pitch and height along a specified path were generated automatically using cinematographic and geometric principles, which were used to evaluate potential viewpoints and determine whether a view is used in the final path. A computational evaluation of the rule of thirds and an evaluation of the model normals relative to the camera viewpoint represent the cinematographic and geometric principles. In addition to the path-generation system, a user study is presented that evaluates ten videos produced from camera paths created with the system, using different viewpoint evaluation methods and different path generation characteristics. The user study indicates that users prefer paths generated by the system over flat and randomly generated paths. Specifically, users prefer paths generated using the computational evaluation of the rule of thirds, and paths that show the wreck from a wide variety of angles but without too much camera undulation.
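The abstract does not include its scoring code, but a computational evaluation of the rule of thirds for a candidate viewpoint could look like the following sketch (Python with numpy; the function name, normalization, and scoring scheme are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def rule_of_thirds_score(subject_px, frame_w, frame_h):
    """Score a candidate viewpoint by how close the projected subject
    centroid lands to one of the four rule-of-thirds power points.
    Returns a value in [0, 1]; 1.0 means the subject sits exactly on
    a power point."""
    # The four intersections of the horizontal and vertical third lines.
    thirds = np.array([(frame_w * x, frame_h * y)
                       for x in (1/3, 2/3) for y in (1/3, 2/3)])
    # Distance to the nearest power point, normalized by the frame
    # diagonal so the score is resolution-independent.
    d = np.linalg.norm(thirds - np.asarray(subject_px), axis=1).min()
    return 1.0 - d / np.hypot(frame_w, frame_h)

# A 1920x1080 frame with the wreck projected onto the lower-left
# power point scores 1.0.
print(rule_of_thirds_score((640, 720), 1920, 1080))
```

A score like this can be combined with the normal-based geometric term to rank sampled viewpoints before they are admitted to the final path.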
22

Methodology of Augmented Reality Chinese Language Articulatory Pronunciation Practice: Game and Study Design

Sinyagovskaya, Daria 01 January 2022 (has links) (PDF)
Learning a language can be hard; learning a language that uses tones to convey meaning is harder still. This dissertation presents a novel methodology for Chinese pronunciation practice using augmented reality. A new app, built in AR and non-AR versions, allows the same practice methodology to be evaluated under both conditions. Although the study results are inconclusive, progress has been made in answering research questions on the effectiveness of AR versus non-AR practice and on the reliability of peer assessment. This work lays a foundation for future language applications that draw on AR design, the methodology presented here, and peer evaluation.
23

Terrain Impostors

Hess, William Hamilton 01 December 2010 (has links) (PDF)
Interactive software applications that need to render large terrain meshes can suffer from slow frame rates if the geometry of the terrain is sufficiently dense. However, the viewing angle to many distant features of the terrain does not change rapidly over time. If the movement of the viewing position is limited to continuous motion and constrained to a known speed, many terrain features may be rendered once in high detail and reused for several frames. This thesis proposes a method to increase the rendering speed of large, complex terrains by splitting the terrain into contiguous chunks. If a given chunk is far enough away from the camera and its viewing angle will not change quickly, it is rendered into an image buffer. This buffer is then used to texture-map a simplified version of the terrain mesh, and the simplified, textured mesh is rendered in place of the original chunk of geometrically complex terrain. The simplified mesh approximates parallax effects as the viewing angle changes in small increments. This technique is shown to as much as double the rendering speed of large terrain meshes without reducing the quality of the final image.
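The refresh decision the abstract describes might, for illustration, be implemented along these lines (Python with numpy; the function name and the angular threshold are assumptions, not values from the thesis):

```python
import numpy as np

def impostor_needs_refresh(chunk_center, capture_eye, current_eye,
                           angle_threshold_deg=2.0):
    """Return True once the camera has swung far enough around a
    terrain chunk that its cached impostor should be re-rendered."""
    v_old = np.asarray(capture_eye) - np.asarray(chunk_center)
    v_new = np.asarray(current_eye) - np.asarray(chunk_center)
    cos_a = np.dot(v_old, v_new) / (np.linalg.norm(v_old) *
                                    np.linalg.norm(v_new))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return angle > angle_threshold_deg
```

Because the same camera translation changes the viewing angle to a distant chunk far less than to a nearby one, distant impostors are refreshed rarely, which is where the speedup comes from.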
24

Interactions Between Humans, Virtual Agent Characters and Virtual Avatars

Griffith, Tamara 01 January 2020 (has links)
Simulations allow people to experience events as if they were happening in the real world, in a way that is safer and less expensive than live training. Despite improvements in the realism of simulated environments, one area that still presents a challenge is interpersonal interaction. The subtleties of what makes an interaction rich are difficult to define; we may never fully understand the complexity of human interchanges, but there is value in building on existing research into how individuals react to virtual characters to inform future investments. Virtual characters can either be automated through computational processes, referred to as agents, or controlled by a human, referred to as an avatar. Knowledge of interactions with virtual characters will facilitate the building of simulated characters that support training tasks in a manner that appropriately engages learners. Ultimately, the goal is to understand what might cause people to engage or disengage with virtual characters. To answer that question, it is important to establish metrics that indicate when people believe their interaction partner is real, or has agency. This study makes use of three types of measures: objective, behavioral, and self-report. The objective measures were neural activity, galvanic skin response, and heart rate; the behavioral measures were gestures and facial expressions; surveys provided the self-report data. The objective of the study was to determine which metrics, collected during social interactions, indicate a sense of agency in an interactive partner. The results provide valuable feedback on how users need to see and be seen by their interaction partner so that non-verbal cues provide context and additional meaning to the dialog. The study offers a foundation of knowledge, lessons learned, and directions for future research, which can lead to more realistic experiences that open the door to human-dimension training.
25

Physics Engine on the GPU with OpenGL Compute Shaders

Bui, Quan Huy Minh 01 March 2021 (has links) (PDF)
Any kind of graphics simulation can be thought of as a fancy flipbook. This notion is, of course, nothing new. In a game, for instance, the central processing unit (CPU) processes the simulation frame by frame, figuring out what is happening, and finally issues draw calls to the graphics processing unit (GPU) to render the frame and display it on the monitor. Traditionally, the CPU has to handle a great deal: creating the window environment in which frames are displayed, running game logic, processing artificial intelligence (AI) for non-player characters (NPCs), simulating physics, and issuing draw calls; and all of this must be done within roughly 0.0167 seconds to maintain real-time performance of 60 frames per second (fps). The main goal of this thesis is to move the physics pipeline of a simulation from the CPU to the GPU. The main tool that makes this possible is the OpenGL compute shader. OpenGL is a high-performance graphics application programming interface (API) that serves as an abstraction layer through which the CPU communicates with the GPU. OpenGL was created by the Khronos Group primarily for graphics, that is, for drawing frames. In later versions of OpenGL, the Khronos Group introduced the compute shader, which can be used for general-purpose computing on the GPU (GPGPU). This means the GPU can process arbitrary math computations and is not limited to processing the vertices and fragments of polygons. This thesis implements the broad-phase and narrow-phase collision detection stages, and a collision resolution phase using sequential impulses, entirely on the GPU with real-time performance.
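As a flavor of the approach, a minimal compute-shader physics step might look like the sketch below (GLSL 4.30 dispatched from Python via the moderngl bindings; the integration-only shader, buffer layout, and workgroup size are illustrative assumptions and cover none of the thesis's collision stages):

```python
import numpy as np
import moderngl  # host-side bindings; any OpenGL 4.3+ loader would do

INTEGRATE_SRC = """
#version 430
layout(local_size_x = 64) in;
// One body per invocation; state lives in SSBOs, so the whole
// integration step runs on the GPU with no CPU round-trip.
layout(std430, binding = 0) buffer Pos { vec4 pos[]; };
layout(std430, binding = 1) buffer Vel { vec4 vel[]; };
uniform float dt;
void main() {
    uint i = gl_GlobalInvocationID.x;
    vel[i].y -= 9.81 * dt;          // apply gravity
    pos[i].xyz += vel[i].xyz * dt;  // explicit Euler position update
}
"""

ctx = moderngl.create_standalone_context()
n = 1024
pos = ctx.buffer(np.zeros((n, 4), dtype="f4").tobytes())
vel = ctx.buffer(np.zeros((n, 4), dtype="f4").tobytes())
pos.bind_to_storage_buffer(0)
vel.bind_to_storage_buffer(1)
shader = ctx.compute_shader(INTEGRATE_SRC)
shader["dt"].value = 1.0 / 60.0
shader.run(group_x=n // 64)  # one 64-wide workgroup per 64 bodies
```

In a full pipeline like the one the thesis describes, the broad-phase, narrow-phase, and sequential-impulse resolution stages would each be further dispatches over the same storage buffers.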
26

An Exploration of Tablet-Based Presentation Systems and Learning Styles

Phan, Ngan T 01 October 2008 (has links) (PDF)
Learning in the classroom can occur as a combination of students' personal effort to study class material, the instructor's presentation of that material, and the interaction that takes place between instructor and students. In a traditional setting, instructors lecture by writing notes on a chalkboard or whiteboard; if they want to display prepared lecture slides, they can use an overhead projector and write additional notes on the transparencies. With many technological advances, various researchers are advocating for the integration of technology and learning. With the advent of tablet PCs, researchers recognize the potential usefulness of their functions within the classroom. Not only can electronic materials be presented via the computer, but tablet PCs also allow instructors to handwrite notes on top of the slides, mimicking manual devices such as the overhead projector. Even though the use of tablet PCs can be advantageous to instructors and students, no research found so far has focused on how well tablet PC features address the varying learning styles of students (e.g., visually oriented vs. text-based learning). According to Felder, "understanding learning style differences is thus an important step in designing balanced instruction that is effective for all students" [22]. Hence, this research explores the correlation between tablet-based presentation systems and learning styles through two approaches: a pilot study and a survey. The results from these approaches are evaluated to yield statistically significant conclusions on how well tablet-based presentation systems encompass the different learning needs of students.
27

FlexRender: A Distributed Rendering Architecture for Ray Tracing Huge Scenes on Commodity Hardware

Somers, Robert Edward 01 June 2012 (has links) (PDF)
As the quest for more realistic computer graphics marches steadily on, the demand for rich and detailed imagery is greater than ever. However, the current "sweet spot" in terms of price, power consumption, and performance is in commodity hardware. If we desire to render scenes with tens or hundreds of millions of polygons as cheaply as possible, we need a way of doing so that maximizes the use of the commodity hardware we already have at our disposal. Techniques such as normal mapping and level of detail have attempted to address the problem by reducing the amount of geometry in a scene. This is problematic for applications that desire or demand access to the scene's full geometric complexity at render time. More recently, out-of-core techniques have provided methods for rendering large scenes when the working set is larger than the available system memory. We propose a distributed rendering architecture based on message-passing that is designed to partition scene geometry across a cluster of commodity machines in a spatially coherent way, allowing the entire scene to remain in-core and enabling the construction of hierarchical spatial acceleration structures in parallel. The results of our implementation show roughly an order of magnitude speedup in rendering time compared to the traditional approach, while keeping memory overhead for message queuing around 1%.
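The spatially coherent partitioning the abstract describes could, in simplified form, look like this (Python with numpy; slicing the bounding box along its longest axis is an illustrative stand-in, not FlexRender's actual partitioning scheme):

```python
import numpy as np

def assign_to_nodes(centroids, scene_min, scene_max, num_nodes):
    """Partition triangles across cluster machines by slicing the
    scene bounds along the longest axis, so each node owns a
    spatially coherent slab.  `centroids` is an (N, 3) array of
    triangle centroids."""
    extent = np.asarray(scene_max, float) - np.asarray(scene_min, float)
    axis = int(np.argmax(extent))            # slice the longest axis
    t = (centroids[:, axis] - scene_min[axis]) / extent[axis]
    return np.minimum((t * num_nodes).astype(int), num_nodes - 1)
```

Each node then builds its hierarchical acceleration structure over its own slab in parallel, and rays that leave a slab are forwarded to the owning node as messages.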
28

Bone Erosion Measurement in Subjects with Rheumatoid Arthritis Using Magnetic Resonance Imaging

Emond, Patrick D. 04 1900 (has links)
Rheumatoid arthritis (RA) is a systemic disease that can affect the nervous system, lungs, heart, skin, reticuloendothelium, and joints. Currently, the gold-standard measurement for tracking the progression of the disease involves a semi-quantitative assessment of bone erosion, bone marrow edema, and synovitis, as seen in magnetic resonance (MR) images, by a musculoskeletal radiologist. The work presented in this thesis shows how computer automation can be used to quantify bone erosion volumes in MR images without a radiologist's expert, time-consuming intervention. A new semi-automated hybrid segmentation algorithm that combines two established techniques, region growing and level-set segmentation, is described and evaluated for use in a clinical setting. A total of 40 participants with RA were scanned using a 1-Tesla peripheral MR scanner. Eight of the participant scans were used to train the algorithm, with the remainder used to determine the accuracy, precision, and speed of the technique. The reproducibility of the hybrid algorithm and of manual segmentation was defined in terms of intra-class correlation coefficients (ICCs); both techniques were equally precise, with ICC values greater than 0.9. According to a least-squares fit between erosion volumes obtained by the hybrid algorithm and those obtained from manual tracings drawn by a radiologist, the former was found to be highly accurate (m = 1.030, b = 1.385, r² = 0.923). The hybrid algorithm was also significantly faster than manual segmentation, which took two to four times longer to complete. In conclusion, computer automation shows promise as a means to quantitatively assess bone erosion volumes. The new hybrid segmentation algorithm described in this thesis could be used in a clinical setting to track the progression of RA and to evaluate the effectiveness of treatment. / Doctor of Philosophy (PhD)
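The region-growing half of such a hybrid could, for illustration, be sketched as follows (Python with numpy; the 6-connected flood fill and intensity tolerance are assumptions, and the level-set refinement stage is omitted entirely):

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tol):
    """Grow a 3D region from `seed`, accepting 6-connected voxels whose
    intensity lies within `tol` of the seed intensity.  In a hybrid
    scheme this coarse mask would then be refined by a level-set stage."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]
                    and abs(float(volume[nz, ny, nx]) - seed_val) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask  # erosion volume = mask.sum() * voxel volume
```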
29

Digital Forensics Tool Interface Visualization

Altiero, Roberto A. 15 January 2015 (has links)
Recent trends show that digital devices figure with increasing frequency in most crimes committed. Investigating crimes involving these devices is labor-intensive for the practitioner, whose digital forensics tools present possible evidence as tabular lists for manual review. This research investigates whether enhanced visualization techniques in digital forensics tool interfaces improve the investigator's cognitive capacity to discover criminal evidence more efficiently. The study presents visualization graphs and contrasts their properties with the textual output of The Sleuth Kit (TSK) digital forensics program, demonstrating the effectiveness of enhanced data presentation. It further demonstrates the potential of the computer interface to present the digital forensics practitioner with an abstract, graphic view of an entire dataset of computer files. Enhanced interface design in digital forensics tools means more rapidly linking suspicious evidence to a perpetrator. The study introduces a mixed methodology of ethnography and cognitive-load measurement: ethnographically defined tasks, developed from interviews with digital forensics subject matter experts (SMEs), shape the context for the cognitive measures. Cognitive-load testing of digital forensics first-responders using both a textual and a visualized application established a quantitative mean of the mental workload during operation of each application under test. A dependent-samples t-test then compared the operators' mean workloads across the two applications. The results indicate a statistically significant difference, affirming the hypothesis that a visualized application reduces the cognitive workload of the first-responder analyst. With the supported hypothesis, this work contributes to the body of knowledge by validating a method of measurement and by providing empirical evidence that a visualized digital forensics interface yields more efficient performance by the analyst, saving labor costs and compressing the time required for the discovery phase of a digital investigation.
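The dependent-samples comparison the abstract describes is a standard paired t-test; a minimal sketch looks like this (Python with scipy; the workload scores are fabricated solely to show the analysis and are not data from the study):

```python
from scipy import stats

# Hypothetical cognitive-load ratings for the same eight
# first-responders using each application (illustrative numbers only).
textual    = [72, 65, 80, 77, 69, 74, 81, 70]
visualized = [58, 60, 66, 63, 55, 61, 68, 59]

# Paired (dependent-samples) t-test across the two conditions.
t_stat, p_value = stats.ttest_rel(textual, visualized)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level supports the
# hypothesis that the visualized interface lowers workload.
```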
30

Hydrographic Surface Modeling Through A Raster Based Spline Creation Method

Alexander, Julie G 16 May 2014 (has links)
The United States Army Corps of Engineers relies on accurate and detailed surface models for various construction projects and preventative measures. To aid these efforts, advancements in surface model creation are necessary. Current methods for model creation include Delaunay triangulation, raster grid interpolation, and Hydraulic Spline grid generation. While these methods produce adequate surface models, there is still room for improvement. A method for raster-based spline creation is presented as a variation of the Hydraulic Spline algorithm. By implementing Hydraulic Splines on raster data instead of vector data, the model creation process is streamlined. This method is shown to be more efficient and less computationally expensive than previous methods of surface model creation due to the inherent advantages of raster data over vector data.
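A raster-based scheme of this kind can be pictured with a simple relaxation sketch (Python with numpy; iterative neighbor averaging pinned to surveyed soundings is only a stand-in for the idea of interpolating directly on the raster, not the Hydraulic Spline algorithm itself):

```python
import numpy as np

def relax_raster(depths, known_mask, iterations=500):
    """Fill unsurveyed raster cells by repeatedly averaging the four
    neighbors of every cell while holding surveyed soundings fixed.
    `depths` and `known_mask` are 2D arrays of the same shape."""
    grid = depths.copy()
    for _ in range(iterations):
        smoothed = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                    np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 4.0
        # Pin cells that carry actual survey data; edges wrap here,
        # which a real implementation would handle with padding.
        grid = np.where(known_mask, depths, smoothed)
    return grid
```

Working on the raster directly avoids a vector-to-grid conversion pass, which is the kind of streamlining the abstract credits for the method's efficiency.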
