221

Applying Facial Emotion Recognition to Usability Evaluations to Reduce Analysis Time

Chao, Gavin Kam 01 June 2021 (has links) (PDF)
Usability testing is an important part of product design that offers developers insight into a product's ability to help users achieve their goals. Despite its usefulness, human usability evaluations are costly and time-intensive processes. Developing methods to reduce the time and cost of usability evaluations is important for organizations that want to improve the usability of their products without expensive investments. One prospective solution is to apply facial emotion recognition (FER) to automate the collection of qualitative metrics normally identified by human usability evaluators. In this paper, FER was applied to mock usability recordings to evaluate how well it could parse moments of emotional significance. To determine the accuracy of FER in this context, the FER Python library created by Justin Shenk was compared against data tags produced by human reporters. This study found that the facial emotion recognizer matched its output to fewer than 40% of the human-reported emotion timestamps, and fewer than 78% of the emotion data tags were recognized at all. The current lack of consistency with the human-reported emotions found in this thesis makes it difficult to recommend FER over conventional human usability evaluators for parsing moments of emotional significance.
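To make the setup concrete, the sketch below shows one way the FER Python library could be pointed at a usability recording to extract candidate emotion timestamps for comparison against human tags. This is a minimal illustration, not the thesis code; the constructor options, the exact return format of `top_emotion`, and the thresholding/sampling choices here are assumptions that may differ from the library version and procedure used in the study.

```python
# Minimal sketch: sample frames from a recording with OpenCV, run the FER
# detector, and keep timestamps where a non-neutral emotion scores highly.
# Threshold, frame step, and the "ignore neutral" rule are illustrative choices.
import cv2
from fer import FER

def emotion_timestamps(video_path, threshold=0.6, step=15):
    """Return (seconds, emotion, score) tuples where a non-neutral emotion
    exceeds `threshold`, sampling every `step` frames."""
    detector = FER()                      # FER(mtcnn=True) may improve face detection
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    events, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            emotion, score = detector.top_emotion(frame)   # (None, None) if no face found
            if emotion and emotion != "neutral" and score and score >= threshold:
                events.append((frame_idx / fps, emotion, score))
        frame_idx += 1
    cap.release()
    return events
```

Timestamps produced this way would then be matched against the human-reported tags within some tolerance window, which is the kind of comparison the thesis reports percentages for.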
222

Exploring Material Representations for Sparse Voxel DAGs

Pineda, Steven 01 June 2021 (has links) (PDF)
Ray tracing is a popular technique used in movies and video games to create compelling visuals. Ray-traced computer images are becoming increasingly realistic and almost indistinguishable from real-world images. Due to the complexity of scenes and the desire for high-resolution images, ray tracing can become very expensive in terms of computation and memory. To address these concerns, researchers have examined data structures that efficiently store geometric and material information. Sparse voxel octrees (SVOs) and directed acyclic graphs (DAGs) have proven to be successful geometric data structures for reducing memory requirements. Moxel DAGs connect material properties to these geometric data structures, but suffer limitations related to memory, build times, and render times. This thesis examines the efficacy of connecting an alternative material data structure to existing geometric representations. The contributions of this thesis include the creation of a new material representation using hashing to accompany DAGs, a method to calculate surface normals using neighboring voxel data, and a demonstration and validation that DAGs can be used to supersample based on proximity. This thesis also validates the visual acuity of these methods via a user survey comparing different output images. In comparison to the Moxel DAG implementation, this work increases render time but reduces build times and memory use, and improves the visual quality of output images.
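Because identical subtrees in a sparse voxel DAG are merged and shared, the nodes themselves cannot carry per-voxel materials, which is why a separate keyed lookup is attractive. The sketch below is purely illustrative of that idea: a hash table keyed on voxel coordinates that stores material IDs alongside, but independent of, the geometry DAG. The abstract does not describe the thesis's actual hashing scheme, so the Morton keying, probing strategy, and names here are assumptions.

```python
# Illustrative sketch only: map voxel coordinates to material IDs in a hash
# table that lives beside the (merged, pointer-shared) geometry DAG.

def morton3(x, y, z, bits=10):
    """Interleave the low `bits` of x, y, z into a Morton (Z-order) key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

class VoxelMaterialHash:
    """Open-addressing hash table keyed by the Morton code of the voxel position."""
    def __init__(self, capacity=1 << 20):
        self.capacity = capacity
        self.keys = [None] * capacity
        self.values = [0] * capacity          # material IDs

    def _slot(self, key):
        h = (key * 0x9E3779B97F4A7C15) % self.capacity   # Fibonacci-style scramble
        while self.keys[h] is not None and self.keys[h] != key:
            h = (h + 1) % self.capacity                   # linear probing
        return h

    def insert(self, x, y, z, material_id):
        key = morton3(x, y, z)
        h = self._slot(key)
        self.keys[h], self.values[h] = key, material_id

    def lookup(self, x, y, z):
        h = self._slot(morton3(x, y, z))
        return self.values[h] if self.keys[h] is not None else None
```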
223

Tessellated Voxelization for Global Illumination Using Voxel Cone Tracing

Freed, Sam Thomas 01 June 2018 (has links) (PDF)
Modeling believable lighting is a crucial component of computer graphics applications, including games and modeling programs. Physically accurate lighting is complex and is not currently feasible to compute in real-time situations. Therefore, much research is focused on investigating efficient ways to approximate light behavior within these real-time constraints. In this thesis, we implement a general-purpose algorithm for real-time applications to approximate indirect lighting. Based on voxel cone tracing, we use a filtered representation of a scene to efficiently sample ambient light at each point in the scene. We present an approach to scene voxelization using hardware tessellation and compare it with an approach utilizing hardware rasterization. We also investigate possible methods of warped voxelization. Our contributions include a complete and open-source implementation of voxel cone tracing along with both voxelization algorithms. We find similar performance and quality with both voxelization algorithms.
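The core of voxel cone tracing is a march along each cone that samples a prefiltered (mipmapped) voxel representation at a level of detail matched to the cone's footprint, compositing front to back. The sketch below shows that loop in schematic form; it is not taken from the thesis implementation, and `sample(pos, lod)`, the step-size rule, and the termination threshold are assumptions for illustration.

```python
# Minimal sketch of a single cone trace over a mipmapped voxel scene.
# `sample(pos, lod)` is assumed to return (rgb, alpha) filtered to that mip level.
import math

def trace_cone(sample, origin, direction, aperture, voxel_size, max_dist):
    """March a cone with half-angle `aperture` (radians), compositing front to back."""
    color = [0.0, 0.0, 0.0]
    occlusion = 0.0
    dist = voxel_size                      # start one voxel out to avoid self-sampling
    while dist < max_dist and occlusion < 0.99:
        diameter = max(voxel_size, 2.0 * dist * math.tan(aperture))
        lod = math.log2(diameter / voxel_size)        # mip level matching the footprint
        pos = [origin[i] + dist * direction[i] for i in range(3)]
        rgb, alpha = sample(pos, lod)
        weight = (1.0 - occlusion) * alpha            # front-to-back compositing
        color = [c + weight * s for c, s in zip(color, rgb)]
        occlusion += weight
        dist += 0.5 * diameter                        # step proportional to cone width
    return color, occlusion
```

Ambient light at a surface point is then approximated by tracing a small bundle of such cones over the hemisphere and summing their weighted contributions.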
224

EnVRMent: Investigating Experience in a Virtual User-Composed Environment

Key, Matthew 01 December 2020 (has links) (PDF)
Virtual Reality is a technology that has long held society's interest, but has only recently begun to reach a critical mass of everyday consumers. The idea of modern VR can be traced back decades, but because of the limitations of the technology (both hardware and software), we are only now exploring its potential. At present, VR can be used for tele-surgery, PTSD therapy, social training, professional meetings, conferences, and much more. It is no longer just an expensive gimmick for a momentary field trip; it is a tool, and as with the automobile, personal computer, and smartphone, it will only evolve as more and more people adopt and utilize it in various ways. It can provide a three-dimensional interface where only two dimensions were previously possible. It can allow us to express ourselves to one another in new ways regardless of the distance between individuals. It has astronomical potential, but to realize that potential we must first understand what makes it adoptable and attractive to the average consumer. The interaction with technology is oftentimes the bottleneck through which the public either adopts or abandons that technology. The goal of this project is to explore user immersion and emotion during a VR experience centered around creating a virtual world. We also aimed to explore whether the naturalness of the user interface had any effect on user experience. Very limited user testing was available; however, a small user group conducted in-depth testing and provided feedback. While our sample size is small, the users were able to test the system and show that there is a positive correlation between influence on the virtual environment and a positive user emotional experience (immersion, empowerment, etc.), along with a few unexpected emotions (anxiety). We present the system developed, the user study, and proposed extensions as fruitful directions by which a future project may continue this work.
225

GPU High-Performance Framework for PIC-Like Simulation Methods Using the Vulkan® Explicit API

Yager, Kolton Jacob 01 March 2021 (has links) (PDF)
Within computational continuum mechanics there exists a large category of simulation methods which operate by tracking Lagrangian particles over an Eulerian background grid. These Lagrangian/Eulerian hybrid methods, descendants of the Particle-In-Cell (PIC) method, have proven highly effective at simulating a broad range of materials and mechanics including fluids, solids, granular materials, and plasma. These methods remain an area of active research after several decades, and their applications can be found across scientific, engineering, and entertainment disciplines. This thesis presents a GPU-driven PIC-like simulation framework created using the Vulkan® API. Vulkan is a cross-platform, open-standard, explicit API for graphics and GPU compute programming. Compared to its predecessors, Vulkan offers lower overhead, support for host parallelism, and finer-grained control over both device resources and scheduling. This thesis harnesses those advantages to create a programmable GPU compute pipeline backed by a Vulkan adaptation of the SPGrid data structure and multi-buffered particle arrays. The CPU host system works asynchronously with the GPU to maximize utilization of both the host and device. The framework is demonstrated to be capable of supporting Particle-In-Cell-like simulation methods, making it viable for GPU acceleration of many hybrid Lagrangian-particle/Eulerian-grid methods. This novel framework is the first of its kind to be created using Vulkan® and to take advantage of GPU sparse memory features for grid sparsity.
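The data flow that all PIC-descended methods share is a particle-to-grid scatter, a grid update, and a grid-to-particle gather. The 1D sketch below illustrates those transfers with linear weights; it is a textbook-style illustration of the pattern, not the thesis framework, which implements the equivalent stages as GPU compute passes over a sparse grid.

```python
# Illustrative 1D PIC transfers with linear (tent) weights.
import numpy as np

def p2g(xp, vp, mp, n_cells, dx):
    """Scatter particle mass and momentum to grid nodes; return nodal velocities."""
    grid_m = np.zeros(n_cells + 1)
    grid_mv = np.zeros(n_cells + 1)
    for x, v, m in zip(xp, vp, mp):
        i = min(int(x / dx), n_cells - 1)   # left grid node of the particle's cell
        w = (x / dx) - i                    # fractional offset within the cell
        grid_m[i]     += (1 - w) * m;  grid_mv[i]     += (1 - w) * m * v
        grid_m[i + 1] += w * m;        grid_mv[i + 1] += w * m * v
    return np.divide(grid_mv, grid_m, out=np.zeros_like(grid_mv), where=grid_m > 0)

def g2p(xp, grid_v, n_cells, dx):
    """Gather updated grid velocities back to the particles (pure PIC transfer)."""
    vp_new = np.empty_like(xp)
    for k, x in enumerate(xp):
        i = min(int(x / dx), n_cells - 1)
        w = (x / dx) - i
        vp_new[k] = (1 - w) * grid_v[i] + w * grid_v[i + 1]
    return vp_new
```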
226

Developing Digital Field Guides for Plants: A Study from the Perspective of Users

Schwarz, Emily Roseanne 01 June 2011 (has links) (PDF)
A field guide is a tool to identify an object of natural history. Field guides cover a wide range of topics from plants to fungi, birds to mammals, and shells to minerals. Traditionally, field guides are books, usually small enough to be carried outdoors. They enjoy wide popularity in modern life; almost every American home and library owns at least one field guide, and the same is also true for other areas of the world. At this time, companies, non-profits, and universities are developing computer technologies to replace printed field guides for identifying plants. This thesis examines the state of the art in field guides for plants. First, a framework is established for evaluating both printed and digital field guides. Second, four print and three digital field guides are evaluated against the criteria. Third, a novel digital field guide is presented and evaluated.
227

Out-of-Core GPU Path Tracing on Large Instanced Scenes via Geometry Streaming

Berchtold, Jeremy 01 June 2022 (has links) (PDF)
We present a technique for out-of-core GPU path tracing of arbitrarily large scenes that is compatible with hardware-accelerated ray tracing. Our technique improves upon previous works by subdividing the scene spatially into streamable chunks that are loaded using a priority system that maximizes ray throughput and minimizes GPU memory usage. This allows for arbitrarily large scaling of scene complexity. Our system required under 19 minutes to render a solid-color version of Disney's Moana Island scene (39.3 million instances, 261.1 million unique quads, and 82.4 billion instanced quads) at a resolution of 1024x429 and 1024 spp on an RTX 5000 (24GB memory total, 22GB used, 13GB geometry cache, with the remainder for temporary buffers and storage) (Wald et al.). As a scalability test, our system rendered 26 Moana Island scenes without multi-level instancing (1.02 billion instances, 2.14 trillion instanced quads, ~230GB if all resident) in under 1h:28m. Compared to state-of-the-art hardware-accelerated renders of the Moana Island scene, our system can render larger scenes on a single GPU. Our system is faster than the previous out-of-core approach and is able to render larger scenes than previous in-core approaches given the same memory constraints (Hellmuth; Zellman et al.; Wald).
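The sketch below illustrates the kind of priority-driven geometry cache the abstract describes: chunks are loaded in order of how many queued rays need them and evicted when a memory budget is exceeded. The names, eviction policy, and interface here are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch of a demand-driven, budget-limited geometry chunk cache.
import heapq

class ChunkCache:
    def __init__(self, budget_bytes, load_chunk):
        self.budget = budget_bytes
        self.load_chunk = load_chunk     # callable: chunk_id -> (geometry, size_bytes)
        self.resident = {}               # chunk_id -> (geometry, size_bytes)
        self.used = 0

    def service(self, ray_requests):
        """`ray_requests` maps chunk_id -> number of rays currently waiting on it."""
        # Load the most-demanded chunks first.
        order = heapq.nlargest(len(ray_requests), ray_requests, key=ray_requests.get)
        for cid in order:
            if cid in self.resident:
                continue
            geom, size = self.load_chunk(cid)
            # Evict the least-demanded resident chunks until the new one fits.
            while self.used + size > self.budget and self.resident:
                victim = min(self.resident, key=lambda c: ray_requests.get(c, 0))
                self.used -= self.resident.pop(victim)[1]
            if self.used + size <= self.budget:
                self.resident[cid] = (geom, size)
                self.used += size
        return self.resident
```

In an out-of-core path tracer, rays that miss resident geometry are queued against the chunks they need, and a pass like `service` runs between wavefront iterations to keep the highest-throughput chunks on the GPU.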
228

Evaluation of 2D and 3D Command Sources for Individuals with High Tetraplegia

Williams, Matthew R. 02 April 2009 (has links)
No description available.
229

Concurrent versus retrospective verbal protocol for comparing window usability

Bowers, Victoria A. 16 September 2005 (has links)
The measurement of software usability has become an important issue in recent years. Metrics of usability include time, errors, questionnaires, ratings, and results of verbal protocols. Concurrent verbal protocol, a method in which the user "thinks aloud" while completing given tasks, has been heavily employed by software usability researchers who want to know the reason a user is having difficulties. Possible problems associated with using concurrent verbal protocol are (1) that verbalization may interfere with the processing required to complete the task, and (2) that subjects may not be able to monitor and express the information of interest to the researcher. A relatively new approach which may avoid these problems is heavily cued retrospective verbal protocol, in which the user is subsequently presented with a representation (a videotape, for example) that helps him recall his thoughts during the task without interfering with task completion. This research compared the performance of subjects while completing tasks using both methods of verbal protocol. The verbal data collected by the two protocol techniques were compared to assess any information differences due to the methods of collection. No performance differences were found between the two protocol methods. Reasons for this lack of degradation due to concurrent verbalization are discussed. The kinds of information gathered were quite different for the two methods, with concurrent protocol subjects giving procedural information and retrospective protocol subjects giving explanations and design statements. Implications for usability testing are discussed. The two methods of protocol were employed in a comparison of two different monitor sizes, a 30.48 cm diagonal and a 53.34 cm diagonal. The subjects' performance, as measured by steps to completion, task completion time, and errors committed, was compared across the two monitors. Subjects were required to complete 12 tasks which varied in the difficulty of the windowing required. Subjective data were also collected in the form of task difficulty ratings, as well as a global measure of user satisfaction. These performance measures and subjective measures were compared across protocol methods as well as monitors. Performance data, as well as subjective data, indicate that on tasks that do not require extensive windowing, there is no difference between the two monitor sizes. As windowing difficulty increases, however, the large monitor's advantages become apparent. Tasks with a high level of windowing difficulty are judged to be easier and require fewer steps on the large monitor than on the small monitor. / Ph. D.
230

A Minimally Invasive High-Bandwidth Wireless Brain-Computer Interface Platform

Zeng, Nanyu January 2024 (has links)
Brain-computer interfaces (BCIs) provide direct access to the brain, serving crucial roles in treating neurological disorders and developing neural prostheses. Recent clinical successes include diagnosing and treating epilepsy and advancing prostheses for visual and limb impairments. Achieving high spatial and temporal resolution is essential for accurately localizing seizures, mapping brain functions, and controlling neuronal activity. However, existing solutions have substantial form factors, necessitating a large craniotomy, permanent removal of part of the skull, or wires running through the body, which limits real-world applicability and complicates post-surgery recovery. We present a minimally invasive, high-bandwidth, and fully wireless brain-machine interface platform that addresses these challenges through a combination of an implantable application-specific integrated circuit (ASIC) chip and a wearable relay station. The platform supports an aggregate sampling rate of 8.68 MSPS at 10-bit resolution and a 108.48/54.24 Mbps data rate using impulse radio ultra-wideband (IR-UWB). A high-density microelectrode array (HD-MEA) with configurable electrode options is integrated into the ASIC implant, enabling simultaneous readout of 1024/256 channels at 8.48/33.9 kSPS. By reducing the ASIC implant to a thickness of 25 µm, the total volume of the implant is only 3.6 mm³, making it thinner than a strand of human hair and occupying less than a third of the volume of a grain of rice. We conducted in-vivo experiments in the cortices of pigs and monkeys and successfully achieved ultra-high-resolution receptive field mapping. This work sets a new standard for volumetric efficiency in implantable brain-computer interfaces.
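The quoted figures are internally consistent, which the short check below makes explicit. This is reader's arithmetic on the numbers given in the abstract, not material from the thesis; the interpretation of the gap between raw payload and link rate as framing/protocol overhead is an assumption.

```python
# Consistency check: both electrode configurations yield the same aggregate
# sample rate, and the 10-bit payload fits within the quoted IR-UWB link rate.
configs = {"1024 ch @ 8.48 kSPS": (1024, 8.48e3), "256 ch @ 33.9 kSPS": (256, 33.9e3)}
for name, (channels, sps) in configs.items():
    aggregate = channels * sps          # samples per second
    payload = aggregate * 10            # bits per second at 10-bit resolution
    print(f"{name}: {aggregate / 1e6:.2f} MSPS, {payload / 1e6:.1f} Mbps raw payload")
# Both configurations give ~8.68 MSPS and ~86.8 Mbps, leaving headroom within
# the 108.48 Mbps link, presumably for framing and protocol overhead.
```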
