About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Juiciness in Citizen Science Computer Games: Analysis of a Prototypical Game

Buckthal, Eric D. 01 June 2014 (has links) (PDF)
Incorporating the collective problem-solving skills of non-experts could accelerate the advancement of scientific research. Citizen science games leverage puzzles to present computationally difficult problems to players. Such games typically map the scientific problem to game mechanics, and visual feedback helps players improve their solutions. Like games for entertainment, citizen science games intend to capture and retain player attention. “Juicy” game design refers to augmented visual feedback systems that give a game personality without modifying fundamental game mechanics. A “juicy” game feels alive and polished. This thesis explores the use of “juicy” game design applied to the citizen science genre. We present the results of a user study on its effect on player motivation in a prototypical citizen science game inspired by clustering-based E. coli bacterial strain analysis.
2

Real-Time Ray Traced Global Illumination Using Fast Sphere Intersection Approximation for Dynamic Objects

Garmsen, Reed Phillip 01 February 2019 (has links) (PDF)
Realistic lighting models are an important component of modern computer-generated, interactive 3D applications. One of the more difficult aspects of real-world lighting to emulate is indirect lighting, often referred to as global illumination in computer graphics. Balancing speed and accuracy requires carefully considered trade-offs to achieve plausible results and acceptable framerates. We present a novel technique for supporting global illumination within the constraints of the new DirectX Raytracing (DXR) API used with DirectX 12. By pre-computing spherical textures to approximate the diffuse color of dynamic objects, we build a smaller set of approximate geometry used for second-bounce lighting calculations for diffuse light rays. This both speeds up the necessary intersection tests and reduces the amount of geometry that needs to be updated within the GPU's acceleration structure. Our results show that our approach for diffuse bounced light is, in some cases, faster than using the conservative mesh for triangle-ray intersection. Since the technique is applied only to diffuse bounced light, the lower resolution of the spheres yields quality close to that of traditional ray-tracing techniques for most materials.
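The core saving comes from replacing triangle-ray tests with much cheaper analytic sphere-ray tests for second-bounce rays. The thesis's implementation targets DXR and GPU shaders; purely as a language-neutral illustration of the underlying intersection test (the function name and structure here are my own, not the author's), a minimal sketch might look like:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest non-negative hit distance t along the ray, or None.

    Solves |o + t*d - c|^2 = r^2 for t, assuming `direction` is normalized.
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = sum(d * e for d, e in zip(direction, oc))       # d . (o - c)
    c_term = sum(e * e for e in oc) - radius * radius   # |o - c|^2 - r^2
    disc = b * b - c_term
    if disc < 0.0:
        return None                                     # ray misses the sphere
    sqrt_disc = math.sqrt(disc)
    t = -b - sqrt_disc                                  # try the nearer root first
    if t < 0.0:
        t = -b + sqrt_disc                              # ray origin inside the sphere
    return t if t >= 0.0 else None

# Example: a ray from (0, 0, -5) along +z hits a unit sphere at the origin at t = 4.
print(ray_sphere_intersect((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))
```

A triangle mesh requires one such test (or a more expensive triangle test) per candidate primitive, while a sphere proxy needs only a handful per object, which is where the speedup for secondary rays comes from.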
3

Human Recognition Theory and Facial Recognition Technology: A Topic Modeling Approach to Understanding the Ethical Implication of a Developing Algorithmic Technologies Landscape on How We View Ourselves and Are Viewed by Others

Albalawi, Hajer 15 August 2023 (has links) (PDF)
The emergence of algorithmic-driven technology has significantly impacted human life in the current century. Algorithms, as versatile constructs, hold different meanings across various disciplines, including computer science, mathematics, social science, and human-artificial intelligence studies. This study defines algorithms from an ethical perspective as the foundation of an information society and focuses on their implications in the context of human recognition. Facial recognition technology, driven by algorithms, has gained widespread use, raising important ethical questions regarding privacy, bias, and accuracy. This dissertation aims to explore the impact of algorithms on machine perception of human individuals and how humans perceive one another and themselves. By analyzing publications from the National Institute of Standards and Technology (NIST) and employing topic modeling, this research identifies the ethical themes surrounding facial recognition technology. The findings contribute to a broader understanding of the ethical implications of algorithms in shaping human perception and interaction, with a focus on the multidimensional aspects of human recognition theory. The research also examines the ethical considerations in AI-AI interactions, human-AI interactions, and humans perceiving themselves in the context of facial recognition technology. The study establishes a framework of human recognition theory that encompasses the alteration and reshaping of fundamental human values and self-perception, highlighting the transformative effects of algorithmic-driven technologies on human identity and values. The dissertation chapters provide a comprehensive overview of the influence of AI on societal values and identity, the revolution of big data and Information and Communication Technology (ICT), the concept of digital identity in the fourth industrial revolution, and recognition theory in the era of algorithms. 
The research aims to inform discussions and policy decisions regarding the responsible development and deployment of algorithms in recognition processes, addressing the challenges and opportunities brought about by algorithmic systems in shaping human recognition, identity, and the social fabric of our increasingly algorithmic society.
4

Studying Memes During Covid Lockdown as a Lens Through Which to Understand Video-Mediated Communication Interactions

Claytor, Tatyana 15 August 2023 (has links) (PDF)
The purpose of this study is to analyze image macros about video-mediated communication (VMC) created during 2020-2021, when people all over the world started using Zoom and VMC for work and school. It is a unique opportunity to study how users' interactions with themselves and with others were affected at a time when many people adopted the technology simultaneously. Because the focus is on interactions, I narrowed the analysis of the memes to three topics: presence, self, and space and place. I chose memes relating to these topics from three popular meme databases: KnowYourMeme, Memedroid, and Memes.com. Utilizing visual analysis tools and Shifman's format for analyzing memes, each meme was placed in a group and analyzed. The research revealed that users experienced stressful situations regarding elements of presence, such as feeling isolated and embarrassed at times. Users were also distracted by seeing their own image, were overly focused on their appearance (particularly when on camera), and utilized virtual backgrounds for self-expression. Finally, users demonstrated that private and public space collided when family members or pets interrupted meetings. They also noted that privacy was often intruded upon when other users gained personal information not normally available in face-to-face gatherings, and some took advantage of the changed format to assert power. Most research concerning Zoom and other VMC focuses on how to use it effectively; there is very little research about creative reactions to the usage of this technology, and this study fills that gap.
5

tidyTouch: An Interactive Visualization Tool for Data Science Education

DeVaney, Jonah E. 01 May 2020 (has links)
The accessibility and usability of software largely determine which programs are used for professional and academic activities. While many proprietary tools are easy to grasp, more technical resources, such as the statistical programming language R, pose challenges. The creative project tidyTouch is a web application designed to help educate any user in basic R data visualization and transformation using the popular ggplot2 and dplyr packages. Providing point-and-click interactivity to explore potential modifications of graphics for data presentation, the application uses an intuitive interface to make R more accessible to those without programming experience. The project is in continual development and will expand to cover introductory data science topics relevant to academics and professionals alike. The code for tidyTouch and this document can be found at https://github.com/devaneyJE/tidyTouch_thesis (see the ui.R and server.R files for the application code).
6

Applications for Machine Learning on Readily Available Data from Virtual Reality Training Experiences

Moore, Alec 01 January 2022 (has links) (PDF)
The purpose of the research presented in this dissertation is to improve virtual reality (VR) training systems by enhancing their understanding of users. While the field of intelligent tutoring systems (ITS) has seen value in this approach, much of the research into using biometrics to improve user understanding, and subsequently training, relies on specialized hardware. Through the presented research, I show that with machine learning (ML), the VR system itself can serve as that specialized hardware for VR training systems. I begin by discussing my explorations into using an ecologically valid, specialized training simulation as a testbed to predict knowledge acquisition by users unfamiliar with the task being trained. Then I look at predicting the cognitive and psychomotor outcomes retained after a one-week period. Next, I describe our work towards using ML models to predict the transfer of skills from a non-specialized VR assembly training environment to the real world, based on recorded tracking data. I continue by examining the identifiability of participants in the specialized training task, allowing us to better understand the associated privacy concerns and how the representation of the data can affect identifiability. By using the same tasks separated temporally by a week, we expand our understanding of the diminishing identifiability of users' movements. Finally, I make use of the assembly training environment to explore the feasibility of across-task identifiability, using two different tasks with the same context.
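The dissertation's datasets, features, and models are its own; purely as an illustration of the general identifiability approach (training a classifier to recognize users from summary statistics of headset and controller tracking), here is a minimal sketch on synthetic data in which each simulated user has a characteristic movement signature (all names and numbers below are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-window tracking features (e.g. mean head height,
# controller velocity statistics): each user gets a fixed signature plus noise.
n_users, samples_per_user, n_features = 4, 50, 6
X, y = [], []
for user in range(n_users):
    base = rng.normal(size=n_features)                  # per-user movement signature
    X.append(base + 0.3 * rng.normal(size=(samples_per_user, n_features)))
    y.append(np.full(samples_per_user, user))
X, y = np.vstack(X), np.concatenate(y)

# Hold out a quarter of the windows and see how identifiable the users are.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy: {clf.score(X_te, y_te):.2f}")
```

High hold-out accuracy on data like this is exactly what motivates the privacy questions the dissertation examines: if routine tracking streams identify users this easily, the data representation matters.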
7

Authoring Tools for Augmented Reality Scenario Based Training Experiences

Vargas Gonzalez, Andres 01 January 2022 (has links) (PDF)
Augmented Reality's (AR) scope and capabilities have grown considerably in the last few years. AR applications can run across devices such as phones, wearables, and head-mounted displays (HMDs). Increasing research and commercial investment in HMD capabilities allows end users to map a 3D environment and interact with virtual objects that respond to the physical aspects of the scene. Within this context, AR is an ideal format for in-situ training scenarios. However, building such AR scenarios requires proficiency in game engine development environments and programming expertise. These difficulties can make it challenging for domain experts to create training content in AR. To address this problem, this thesis presents strategies and guidelines for building authoring tools that generate scenario-based training (SBT) experiences in AR. The authoring tools were built leveraging concepts from the 3D user interfaces and interaction techniques literature. We found from early research in the field and our own experimentation that scenario authoring and object behavior authoring are the substantial capabilities an author needs to create a training experience. This work also presents a technique for authoring object component behaviors with high usability scores, followed by an analysis of the different aspects of authoring object component behaviors across AR, VR, and desktop. User studies were run to evaluate authoring strategies, and the results provide insights into future directions for building AR/VR immersive authoring tools. Finally, we discuss how this knowledge can influence the development, guidelines, and strategies in the direction of a more compelling set of tools for authoring AR SBT experiences.
8

Navigating Immersive and Interactive VR Environments With Connected 360° Panoramas

Cosgrove, Samuel 01 January 2020 (has links) (PDF)
Emerging research is expanding the idea of using 360-degree spherical panoramas of real-world environments for use in "360 VR" experiences beyond video and image viewing. However, most of these experiences are strictly guided, with few opportunities for interaction or exploration. There is a desire to develop experiences with cohesive virtual environments created with 360 VR that allow for choice in navigation, versus scripted experiences with limited interaction. Unlike standard VR with the freedom of synthetic graphics, there are challenges in designing appropriate user interfaces (UIs) for 360 VR navigation within the limitations of fixed assets. To tackle this gap, we designed RealNodes, a software system that presents an interactive and explorable 360 VR environment. We also developed four visual guidance UIs for 360 VR navigation. The results of a pilot study showed that choice of UI had a significant effect on task completion times, showing one of our methods, Arrow, was best. Arrow also exhibited positive but non-significant trends in average measures with preference, user engagement, and simulator-sickness. RealNodes, the UI designs, and the pilot study results contribute preliminary information that inspire future investigation of how to design effective explorable scenarios in 360 VR and visual guidance metaphors for navigation in applications using 360 VR environments.
9

Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

Pfeil, Kevin 01 January 2022 (has links) (PDF)
The COVID-19 virus disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interactions when expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burn-out. It is clear that we need better solutions to address these issues, and one avenue showing promise is that of Interpersonal Telepresence. Interpersonal Telepresence is an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We discuss a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To combat this negative finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming value tensions that naturally arise between Viewer and Streamer. Expectedly, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices to facilitate this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but the lenses need to be embedded in wearable systems, which might affect the viewing experience.
We thus present two quantitative studies in which we examine the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor, meaning wearable cameras do not need to be positioned at the natural eye-level of the viewer; the streamer is able to place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype based on the co-design findings. Our participants preferred our prototype over simple video chat, even though it caused a somewhat increased sense of self-consciousness. Our participants indicated that they have their own preferences, even with simple design decisions such as the style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.
10

Moxel DAGs: Connecting Material Information to High Resolution Sparse Voxel DAGs

Williams, Brent Robert 01 June 2015 (has links) (PDF)
As time goes on, the demand for higher-resolution and more visually rich images only increases. Unfortunately, creating these more realistic computer graphics is pushing our computational resources to their limits. In realistic rendering, 3D objects are commonly represented as volumetric elements called voxels. Traditionally, voxel data structures are known for their high memory requirements, which are typically minimized by storing the voxels in a sparse voxel octree (SVO). Very recently, a method called High Resolution Sparse Voxel DAGs was presented that can store binary voxel data orders of magnitude more efficiently than SVOs. This memory efficiency is achieved by converting the tree into a directed acyclic graph (DAG). The method was also shown to have rendering performance competitive with recent GPU ray tracers. Unfortunately, it does not support storing collections of rendering attributes, commonly called materials, which represent a given object's reflectance properties and are necessary for calculating its perceived color. We present a method for connecting material information to High Resolution Sparse Voxel DAGs for mid-level scenes with multiple meshes and several different materials. This is achieved using an extended sparse voxel DAG, called a Moxel DAG, and an external data structure holding the material information, which we call a Moxel Table. Our method is much more memory efficient than traditional SVOs, and its relative efficiency only increases at higher resolutions. Because it stores information equivalent to an SVO, it achieves exactly the same visual quality at the same resolution.
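The memory savings of the underlying DAG representation come from merging identical subtrees of the octree into shared nodes. Purely as an illustration of that core idea (not the thesis's Moxel DAG construction or its Moxel Table), a minimal Python sketch of subtree deduplication might look like:

```python
def build_dag(tree, nodes, cache):
    """Recursively deduplicate identical subtrees, returning a node index.

    `tree` is a nested-tuple octree: a leaf is 0/1 (empty/solid); an inner
    node is a tuple of 8 children. Identical subtrees collapse to a single
    shared DAG node, which is where the memory savings come from.
    """
    if tree in (0, 1):
        key = ("leaf", tree)
    else:
        key = ("node", tuple(build_dag(c, nodes, cache) for c in tree))
    if key not in cache:            # first time we see this subtree shape
        cache[key] = len(nodes)
        nodes.append(key)
    return cache[key]

# A toy octree whose eight children are the identical half-solid subtree.
octree = ((1, 1, 1, 1, 0, 0, 0, 0),) * 8
nodes, cache = [], {}
root = build_dag(octree, nodes, cache)
# An SVO would store 8 copies of the child subtree; the DAG stores it once
# (here: the two leaves, one shared child node, and the root).
print(f"DAG nodes: {len(nodes)}")
```

Attaching materials is harder precisely because merged nodes no longer correspond to unique spatial locations, which is the gap the Moxel Table in this thesis is designed to fill.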
