21

Interactions Between Humans, Virtual Agent Characters and Virtual Avatars

Griffith, Tamara 01 January 2020 (has links)
Simulations allow people to experience events as if they were happening in the real world in a way that is safer and less expensive than live training. Despite improvements in realism in simulated environments, one area that still presents a challenge is interpersonal interactions. The subtleties of what makes an interaction rich are difficult to define. We may never fully understand the complexity of human interchanges; however, there is value in building on existing research into how individuals react to virtual characters to inform future investments. Virtual characters can either be automated through computational processes, referred to as agents, or controlled by humans, referred to as avatars. Knowledge of interactions with virtual characters will facilitate the building of simulated characters that support training tasks in a manner that appropriately engages learners. Ultimately, the goal is to understand what might cause people to engage or disengage with virtual characters. To answer that question, it is important to establish metrics that indicate when people believe their interaction partner is real, or has agency. This study makes use of three types of measures: objective, behavioral, and self-report. The objective measures were neural activity, galvanic skin response, and heart rate. The behavioral measures were gestures and facial expressions. Surveys provided the self-report data. The objective of this research was to determine which metrics could be used during social interactions to indicate a sense of agency in an interaction partner. The results provide valuable feedback on how users need to see and be seen by their interaction partner so that non-verbal cues provide context and additional meaning to the dialog. This study offers insight into areas of future research, providing a foundation of knowledge and lessons learned for further exploration. This can lead to more realistic experiences that open the door to human dimension training.
22

Physics Engine on the GPU with OpenGL Compute Shaders

Bui, Quan Huy Minh 01 March 2021 (has links) (PDF)
Any kind of graphics simulation can be thought of as a fancy flipbook. This notion is, of course, nothing new. In a game, for instance, the central processing unit (CPU) processes the simulation frame by frame, figuring out what is happening, and then issues draw calls to the graphics processing unit (GPU) to render the frame and display it on the monitor. Traditionally, the CPU has to handle a great deal: creating the window environment in which frames are displayed, running game logic, processing artificial intelligence (AI) for non-player characters (NPCs), simulating the physics, and issuing draw calls; all of this must be done within roughly 0.0167 seconds to maintain the real-time performance of 60 frames per second (fps). The main goal of this thesis is to move the physics pipeline of a simulation to the GPU instead of the CPU. The main tool that makes this possible is the OpenGL compute shader. OpenGL is a high-performance graphics application programming interface (API) used as an abstraction layer for the CPU to communicate with the GPU. OpenGL, now managed by the Khronos Group, was designed primarily for graphics, that is, for drawing frames. In later versions of OpenGL (4.3 and above), the Khronos Group introduced the compute shader, which can be used for general-purpose computing on the GPU (GPGPU). This means the GPU can perform arbitrary math computations and is not limited to processing the vertices and fragments of polygons. This thesis implements Broad Phase and Narrow Phase collision detection stages and a collision Resolution Phase with Sequential Impulses, all running entirely on the GPU with real-time performance.
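To make the GPGPU idea concrete, a minimal compute-shader dispatch might look like the sketch below. It uses the moderngl Python binding as a stand-in for the raw OpenGL API, and the kernel is a toy particle-integration step rather than the thesis's collision pipeline; the buffer layout, workgroup size, and particle count are all illustrative assumptions.

```python
import struct
import moderngl

# Headless OpenGL context (assumes a driver supporting GL 4.3+ compute).
ctx = moderngl.create_standalone_context(require=430)

# A toy kernel: advance each particle's position by velocity * dt. The
# thesis's real pipeline (broad phase, narrow phase, sequential impulses)
# would be a series of dispatches structured like this one.
shader_source = """
#version 430
layout(local_size_x = 64) in;
layout(std430, binding = 0) buffer Positions  { vec4 pos[]; };
layout(std430, binding = 1) buffer Velocities { vec4 vel[]; };
uniform float dt;
void main() {
    uint i = gl_GlobalInvocationID.x;
    pos[i].xyz += vel[i].xyz * dt;
}
"""
compute = ctx.compute_shader(shader_source)

n = 64  # illustrative particle count, one workgroup's worth
positions = ctx.buffer(struct.pack(f"{4 * n}f", *([0.0] * 4 * n)))
velocities = ctx.buffer(struct.pack(f"{4 * n}f", *([1.0] * 4 * n)))
positions.bind_to_storage_buffer(0)
velocities.bind_to_storage_buffer(1)

compute["dt"].value = 1.0 / 60.0   # one 60 fps timestep
compute.run(group_x=n // 64)       # dispatch one workgroup of 64 threads
print(struct.unpack("4f", positions.read()[:16]))  # first particle moved
```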
23

An Exploration of Tablet-Based Presentation Systems and Learning Styles

Phan, Ngan T 01 October 2008 (has links) (PDF)
Learning in the classroom can occur as a combination of students' personal effort to study class material, the instructor's attempt to present class material, and the interaction that takes place between instructor and students. In a more traditional setting, instructors lecture by writing notes on a chalkboard or a whiteboard. If instructors want to display prepared lecture slides, they can use an overhead projector and write additional notes on the transparencies. With many technological advances, researchers are advocating for the integration of technology into learning. With the advent of tablet PCs, researchers have recognized the potential usefulness of their functions within the classroom. Not only can electronic materials be presented via the computer, but tablet PCs also allow instructors to handwrite notes on top of the slides, mimicking manual devices such as the overhead projector. Even though the use of tablet PCs can be advantageous to instructors and students, no research found to date has focused on how well tablet PC features address the varying learning styles of students (e.g., visually oriented vs. text-based learning). According to Felder, "understanding learning style differences is thus an important step in designing balanced instruction that is effective for all students" [22]. Hence, this research explores the correlation between tablet-based presentation systems and learning styles by taking two approaches: performing a pilot study and distributing a survey. The results from these approaches are evaluated to yield statistically significant conclusions on how well tablet-based presentation systems encompass the different learning needs of students.
24

FlexRender: A Distributed Rendering Architecture for Ray Tracing Huge Scenes on Commodity Hardware

Somers, Robert Edward 01 June 2012 (has links) (PDF)
As the quest for more realistic computer graphics marches steadily on, the demand for rich and detailed imagery is greater than ever. However, the current "sweet spot" in terms of price, power consumption, and performance is in commodity hardware. If we desire to render scenes with tens or hundreds of millions of polygons as cheaply as possible, we need a way of doing so that maximizes the use of the commodity hardware we already have at our disposal. Techniques such as normal mapping and level of detail have attempted to address the problem by reducing the amount of geometry in a scene. This is problematic for applications that desire or demand access to the scene's full geometric complexity at render time. More recently, out-of-core techniques have provided methods for rendering large scenes when the working set is larger than the available system memory. We propose a distributed rendering architecture based on message-passing that is designed to partition scene geometry across a cluster of commodity machines in a spatially coherent way, allowing the entire scene to remain in-core and enabling the construction of hierarchical spatial acceleration structures in parallel. The results of our implementation show roughly an order of magnitude speedup in rendering time compared to the traditional approach, while keeping memory overhead for message queuing around 1%.
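The flavor of the architecture can be sketched in a few lines: spatial cells are assigned to cluster nodes, and a ray is forwarded as a small message to whichever node owns the region it currently occupies. The grid resolution, cell-to-node assignment, and message format below are illustrative assumptions, not FlexRender's actual design.

```python
from dataclasses import dataclass

GRID = 4        # space in [0,1)^3 split into GRID^3 axis-aligned cells
NUM_NODES = 8   # machines in the cluster

def owner_of(point):
    """Map a 3D point to the node owning its spatial cell. Contiguous
    runs of cell indices per node crudely stand in for the thesis's
    spatially coherent partition."""
    ix, iy, iz = (min(int(c * GRID), GRID - 1) for c in point)
    cell = (ix * GRID + iy) * GRID + iz
    return cell * NUM_NODES // GRID**3

@dataclass
class RayMessage:
    origin: tuple     # current ray origin
    direction: tuple  # normalized direction
    weight: float     # accumulated throughput for final shading

# Each node keeps only its own cells' geometry in-core; a ray leaving
# those cells is serialized onto the queue of the next owner (plain
# lists here stand in for network message queues).
queues = [[] for _ in range(NUM_NODES)]

def forward(ray):
    queues[owner_of(ray.origin)].append(ray)

forward(RayMessage(origin=(0.1, 0.5, 0.9), direction=(0.0, 0.0, -1.0), weight=1.0))
print([len(q) for q in queues])
```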
25

Balancing Darkness And Visibility: An Algorithmic Approach To Light Placement In Low-Light, Ray-Traced Scenes

Kuo, Briana 01 June 2024 (has links) (PDF)
In recent years, digital media has seen incredible advancements in rendering visually stunning computer graphics scenes. Photo-realistic games, animated films, and more leave viewers blown away by the sheer beauty of their graphics. However, challenges arise when depicting dark scenes, which often results in visual monotony and difficulty in comprehension due to insufficient detail. To enhance the readability and visual interest of a scene, additional artificial lights can be placed throughout it; these lights, however, must be strategically placed to retain an essence of darkness and maintain the delicate balance between light and dark. In this thesis, we explore an algorithm for light placement in low-light, ray-traced scenes that leverages a k-means layering scheme to partition a scene and place artificial lights for artistic enhancement. Multiple scenes were generated, and user feedback was collected comparing various lighting configurations for each scene, assessing the algorithm's effectiveness in improving readability and maintaining the desired level of darkness, as well as how additional lighting affects the user's perception of the scene.
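The k-means layering idea can be sketched as follows: cluster sampled surface points of the dark scene and place a small fill light at each centroid. The random sample points, cluster count, and light intensity are illustrative assumptions rather than the thesis's tuned algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
surface_points = rng.random((500, 3))  # stand-in for ray-hit positions

def kmeans(points, k, iters=20):
    """Basic Lloyd's algorithm: alternate nearest-center assignment
    with recomputing each center as the mean of its cluster."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers

# One small fill light per cluster centroid, dim enough to keep the
# scene's essence of darkness (intensity is an illustrative value).
light_positions = kmeans(surface_points, k=5)
fill_lights = [{"position": p, "intensity": 0.2} for p in light_positions]
print(fill_lights[0])
```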
26

Bone Erosion Measurement in Subjects with Rheumatoid Arthritis Using Magnetic Resonance Imaging

Emond, Patrick D. 04 1900 (has links)
Rheumatoid arthritis (RA) is a systemic disease that can affect the nervous system, lungs, heart, skin, reticuloendothelium, and joints. Currently, the gold-standard measurement for tracking the progression of the disease involves a semi-quantitative assessment of bone erosion, bone marrow edema, and synovitis, as seen in magnetic resonance (MR) images, by a musculoskeletal radiologist. The work presented in this thesis identifies how computer automation can be used to quantify bone erosion volumes in MR images without a radiologist's expert and time-consuming intervention. A new semi-automated hybrid segmentation algorithm that combines two established techniques, region growing and level-set segmentation, is described and evaluated for use in a clinical setting. A total of 40 participants with RA were scanned using a 1-Tesla peripheral MR scanner. Eight of the participant scans were used to train the algorithm, with the remainder used to determine the accuracy, precision, and speed of the technique. The reproducibility of the hybrid algorithm and that of manual segmentation were defined in terms of intra-class correlation coefficients (ICCs). Both techniques were equally precise, with ICC values greater than 0.9. According to a least-squares fit between erosion volumes obtained by the hybrid algorithm and those obtained from manual tracings drawn by a radiologist, the former was found to be highly accurate (m=1.030, b=1.385, r-squared=0.923). The hybrid algorithm was significantly faster than manual segmentation, which took two to four times longer to complete. In conclusion, computer automation shows promise as a means to quantitatively assess bone erosion volumes. The new hybrid segmentation algorithm described in this thesis could be used in a clinical setting to track the progression of RA and to evaluate the effectiveness of treatment. / Doctor of Philosophy (PhD)
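The region-growing half of the hybrid can be sketched as a flood fill from a seed voxel that accepts neighbors whose intensity stays within a tolerance of the seed; the thesis couples a stage like this with level-set refinement. The synthetic volume, seed, and tolerance below are illustrative assumptions.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    """Return a boolean mask of voxels 6-connected to `seed` whose
    intensity is within `tol` of the seed intensity."""
    mask = np.zeros(volume.shape, dtype=bool)
    target = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and abs(volume[n] - target) <= tol:
                mask[n] = True
                queue.append(n)
    return mask

# Synthetic stand-in for an MR volume, with a dark "erosion" pocket.
volume = np.random.default_rng(1).random((32, 32, 32))
volume[10:14, 10:14, 10:14] = 0.0
erosion = region_grow(volume, seed=(12, 12, 12), tol=0.05)
print("erosion volume (voxels):", erosion.sum())
```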
27

Digital Forensics Tool Interface Visualization

Altiero, Roberto A. 15 January 2015 (has links)
Recent trends show that digital devices are used with increasing frequency in the commission of crimes. Investigating crimes involving these devices is labor-intensive for the practitioner, as digital forensics tools present possible evidence in tabular lists for manual review. This research investigates how enhanced visualization techniques in digital forensics tool interfaces can improve the investigator's cognitive capacity to discover criminal evidence more efficiently. This paper presents visualization graphs and contrasts their properties with the outputs of The Sleuth Kit (TSK) digital forensics program; the comparison with the text-based interface demonstrates the effectiveness of enhanced data presentation. Further demonstrated is the potential of the computer interface to present the digital forensics practitioner with an abstract, graphic view of an entire dataset of computer files. Enhanced interface design of digital forensics tools means more rapidly linking suspicious evidence to a perpetrator. This study introduces a mixed methodology of ethnography and cognitive load measurement. Ethnographically defined tasks, developed from interviews with digital forensics subject matter experts (SMEs), shaped the context for the cognitive measures. Cognitive load testing of digital forensics first responders using both a text-based and a visualization-based application established a quantitative mean of the mental workload during operation of each application under test. A paired-samples t-test compared the operators' mean workloads across the two applications against the null hypothesis of no significant difference. Results of the study indicate a significant difference, affirming the hypothesis that a visualization-based application reduces the cognitive workload of the first-responder analyst. With the supported hypothesis, this work contributes to the body of knowledge by validating a method of measurement and by providing empirical evidence that a visualized digital forensics interface enables more efficient performance by the analyst, saving labor costs and compressing the time required for the discovery phase of a digital investigation.
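The core statistical comparison amounts to a paired (dependent-samples) t-test on per-operator workload scores, which can be sketched as below; the scores are fabricated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical workload scores for 12 operators on each interface;
# the visualized interface is simulated as lowering the load.
textual = rng.normal(loc=70, scale=8, size=12)
visualized = textual - rng.normal(loc=10, scale=4, size=12)

# Paired t-test: same operators measured under both conditions.
t_stat, p_value = stats.ttest_rel(textual, visualized)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: workloads differ significantly between interfaces.")
```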
28

Hydrographic Surface Modeling Through A Raster Based Spline Creation Method

Alexander, Julie G 16 May 2014 (has links)
The United States Army Corps of Engineers relies on accurate and detailed surface models for various construction projects and preventative measures. To aid in these efforts, advancements in surface model creation are needed. Current methods for model creation include Delaunay triangulation, raster grid interpolation, and Hydraulic Spline grid generation. While these methods produce adequate surface models, there is still room for improvement. A method for raster-based spline creation is presented as a variation of the Hydraulic Spline algorithm. By implementing Hydraulic Splines on raster data instead of vector data, the model creation process is streamlined. This method is shown to be more efficient and less computationally expensive than previous methods of surface model creation due to the inherent advantages of raster data over vector data.
29

Shark Sim: A Procedural Method of Animating Leopard Sharks Based on Raw Location Data

Blizard, Katherine S 01 June 2013 (has links)
Fish such as the leopard shark (Triakis semifasciata) can be tagged on the fin, released back into the wild, and tracked through technologies such as autonomous robots, which store timestamped location data about their target. We present a way to procedurally generate an animated simulation of T. semifasciata using only these timestamped location points. The simulation has several components. The input timestamps dictate a monotonic time-space curve mapping the simulation clock onto the space curve. The space curve connects all the location points as a spline without sharp folds that would be implausible for a shark to traverse. We create a model leopard shark with convincing kinematics that respond to the space curve. This is achieved by acquiring a skinned model and applying T. semifasciata motion kinematics that respond to velocity and turn commands. These kinematics affect the spine and all fins that control locomotion and direction. Kinematics-based procedural keyframes, added to a queue, are interpolated while the shark model traverses the path. This simulation tool generates animation sequences that can be viewed in real time. A user study of 27 individuals measured the perceived realism of the output by contrasting five different film sequences. Results of the study show that, on average, viewers perceive our simulation as more realistic than not.
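The path construction can be sketched by threading Catmull-Rom segments through the timestamped fixes and mapping the simulation clock monotonically onto the curve. The sample fixes and uniform per-segment parameterization are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

# Hypothetical timestamped 2D location fixes for a tracked shark.
times = np.array([0.0, 10.0, 25.0, 40.0])  # seconds
points = np.array([[0, 0], [5, 2], [9, 8], [15, 9]], dtype=float)

def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate between p1 and p2 at t in [0, 1]."""
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def position_at(clock):
    """Monotonic time-space mapping: clamp the clock into the data's
    time range, find the active segment, and evaluate the spline."""
    clock = np.clip(clock, times[0], times[-1])
    i = min(np.searchsorted(times, clock, side="right") - 1, len(times) - 2)
    t = (clock - times[i]) / (times[i + 1] - times[i])
    p = np.pad(points, ((1, 1), (0, 0)), mode="edge")  # duplicate endpoints
    return catmull_rom(p[i], p[i + 1], p[i + 2], p[i + 3], t)

print(position_at(17.5))  # position halfway between the 2nd and 3rd fixes
```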
30

Face Recognition: Study and Comparison of PCA and EBGM Algorithms

Katadound, Sachin 01 January 2004 (has links)
Face recognition is a complex and difficult process due to factors such as variability of illumination, occlusion, face-specific characteristics like hair, glasses, and beards, and other problems common to computer vision. With a system that offers robust and consistent face recognition results, applications such as identification for law enforcement, secure system access, and human-computer interaction can be automated successfully. Different methods exist to solve the face recognition problem. Principal component analysis (PCA), independent component analysis, and linear discriminant analysis are a few of the statistical techniques commonly used; genetic algorithms, elastic bunch graph matching, and artificial neural networks are among the other techniques that have been proposed and implemented. The objective of this thesis is to provide insight into the different methods available for face recognition and to explore the methods that provide an efficient and feasible solution. Factors affecting the result of face recognition and the preprocessing steps that eliminate such abnormalities are also discussed briefly. PCA has been the most efficient and reliable method known for at least the past eight years. Elastic bunch graph matching (EBGM) is one of the promising techniques studied in this thesis. Although the EBGM method took much longer than PCA to train and to generate distance measures for the given gallery images, we obtained better cumulative match score (CMS) results with EBGM than with PCA. We therefore recommend a hybrid technique involving the EBGM algorithm to obtain better results. Other promising techniques that can be explored in future work include genetic algorithm-based methods, mixtures of principal components, and Gabor wavelet techniques.
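The PCA (eigenfaces) baseline can be sketched as follows: project gallery images onto the top principal components and match a probe to its nearest neighbor in that subspace. The random stand-in images and component count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
gallery = rng.random((20, 32 * 32))  # 20 flattened 32x32 "face" images

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face

# Principal components are the top eigenvectors of the covariance,
# obtained here from the SVD of the centered data matrix.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:10]  # keep the 10 strongest directions

gallery_weights = centered @ components.T

def identify(probe):
    """Return the index of the gallery face nearest to `probe` in PCA space."""
    w = (probe - mean_face) @ components.T
    return int(np.linalg.norm(gallery_weights - w, axis=1).argmin())

probe = gallery[7] + rng.normal(scale=0.05, size=32 * 32)  # noisy copy
print("matched gallery index:", identify(probe))
```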
