  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Effective User Guidance through Augmented Reality Interfaces: Advances and Applications

Daniel S Andersen (8755488) 24 April 2020 (has links)
<div>Computer visualization can effectively deliver instructions to a user whose task requires understanding of a real world scene. Consider the example of surgical telementoring, where a general surgeon performs an emergency surgery under the guidance of a remote mentor. The mentor guidance includes annotations of the operating field, which conventionally are displayed to the surgeon on a nearby monitor. However, this conventional visualization of mentor guidance requires the surgeon to look back and forth between the monitor and the operating field, which can lead to cognitive load, delays, or even medical errors. Another example is 3D acquisition of a real-world scene, where an operator must acquire multiple images of the scene from specific viewpoints to ensure appropriate scene coverage and thus achieve quality 3D reconstruction. The conventional approach is for the operator to plan the acquisition locations using conventional visualization tools, and then to try to execute the plan from memory, or with the help of a static map. Such approaches lead to incomplete coverage during acquisition, resulting in an inaccurate reconstruction of the 3D scene which can only be addressed at the high and sometimes prohibitive cost of repeating acquisition.</div><div><br></div><div>Augmented reality (AR) promises to overcome the limitations of conventional out-of-context visualization of real world scenes by delivering visual guidance directly into the user's field of view, guidance that remains in-context throughout the completion of the task. In this thesis, we propose and validate several AR visual interfaces that provide effective visual guidance for task completion in the context of surgical telementoring and 3D scene acquisition.</div><div><br></div><div>A first AR interface provides a mentee surgeon with visual guidance from a remote mentor using a simulated transparent display. 
A computer tablet suspended above the patient captures the operating field with its on-board video camera, the live video is sent to the mentor who annotates it, and the annotations are sent back to the mentee where they are displayed on the tablet, integrating the mentor-created annotations directly into the mentee's view of the operating field. We show through user studies that surgical task performance improves when using the AR surgical telementoring interface compared to when using the conventional visualization of the annotated operating field on a nearby monitor. </div><div><br></div><div>A second AR surgical telementoring interface provides the mentee surgeon with visual guidance through an AR head-mounted display (AR HMD). We validate this approach in user studies with medical professionals in the context of practice cricothyrotomy and lower-limb fasciotomy procedures, and show improved performance over conventional surgical guidance. A comparison between our simulated transparent display and our AR HMD surgical telementoring interfaces reveals that the HMD has the advantages of reduced workspace encumbrance and of correct depth perception of annotations, whereas the transparent display has the advantage of reduced surgeon head and neck encumbrance and of annotation visualization quality. </div><div><br></div><div>A third AR interface provides operator guidance for effective image-based modeling and rendering of real-world scenes. During the modeling phase, the AR interface builds and dynamically updates a map of the scene that is displayed to the user through an AR HMD, which leads to the efficient acquisition of a five-degree-of-freedom image-based model of large, complex indoor environments. During rendering, the interface guides the user towards the highest-density parts of the image-based model which result in the highest output image quality. 
We show through a study that first-time users of our interface can acquire a quality image-based model of a 13 m × 10 m indoor environment in 7 minutes.</div><div><br></div><div>A fourth AR interface provides operator guidance for effective capture of a 3D scene in the context of photogrammetric reconstruction. The interface relies on an AR HMD with a tracked hand-held camera rig to construct a sufficient set of six-degrees-of-freedom camera acquisition poses and then to steer the user to align the camera with the prescribed poses quickly and accurately. We show through a study that first-time users of our interface are significantly more likely to achieve complete 3D reconstructions compared to conventional freehand acquisition. We then investigate the design space of AR HMD interfaces for mid-air pose alignment with an added ergonomics concern, resulting in five candidate interfaces that sample this design space. A user study identifies the aspects of AR interface design that influence ergonomics during extended use, informing AR HMD interface design for the important task of mid-air pose alignment.</div>
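The pose steering described above reduces to checking how far the current camera pose is from a prescribed six-degrees-of-freedom pose. A minimal sketch of that check follows; the function names and tolerance values are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def pose_alignment_error(cur_pos, cur_R, target_pos, target_R):
    """Position error (metres) and orientation error (degrees) between
    the current camera pose and a prescribed acquisition pose."""
    pos_err = float(np.linalg.norm(np.asarray(cur_pos, float) - np.asarray(target_pos, float)))
    # Relative rotation; its trace gives the angle of the residual rotation.
    R_rel = np.asarray(cur_R).T @ np.asarray(target_R)
    cos_angle = (np.trace(R_rel) - 1.0) / 2.0
    ang_err = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
    return pos_err, ang_err

def aligned(cur_pos, cur_R, target_pos, target_R, pos_tol=0.05, ang_tol=5.0):
    """True when both errors fall inside the (assumed) tolerances."""
    p, a = pose_alignment_error(cur_pos, cur_R, target_pos, target_R)
    return p <= pos_tol and a <= ang_tol
```

With a check like this, the interface can continuously color or animate the target pose indicator until `aligned` becomes true.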
22

Investigating the Self-tracking Use for Mental Wellness of New Parents

Eunkyung Jo (6633707) 15 May 2019 (has links)
<p>New parents often experience significant stress as they take on new roles and responsibilities. Personal informatics (PI) practices have increasingly gained attention because they support various aspects of individual wellness by providing data-driven self-insights. While several PI systems have been proposed to support mental wellness not only by providing self-knowledge but also by helping individuals deal with negative emotions, few studies have investigated how parenting stress can be managed through PI practices. In this paper, I set out to investigate how new parents make use of flexible self-tracking practices in the context of stress management. The findings of this study indicate that flexible self-tracking practices enable individuals to develop self-knowledge and to communicate better with their spouses through data. Based on the findings, I discuss how self-tracking experiences for the mental wellness of parents can be better designed and provide considerations for future research and design for parenting stress management.</p>
23

Evaluating the Effects of BKT-LSTM on Students' Learning Performance

Jianyao Li (11794436) 20 December 2021 (has links)
<div>Today, machine learning models and Deep Neural Networks (DNNs) are prevalent in many areas, and educational Artificial Intelligence (AI) is drawing increasing attention with the rapid development of online learning platforms. Researchers explore different types of educational AI to improve students’ learning performance and experience in online classes. Educational AIs can be categorized as “interactive” or “predictive.” Interactive AIs answer simple course questions for students, such as the due date of homework or the minimum page requirement for the final project. Predictive educational AIs predict students’ learning states, so that instructors can adjust the learning content based on those states. However, most such AIs are not evaluated in an actual class setting. Therefore, we evaluate the effects of a state-of-the-art educational AI model, BKT-LSTM (Bayesian Knowledge Tracing with Long Short-Term Memory), on students’ learning performance in an actual class setting. Data came from the course CNIT 25501, a large introductory Java programming class at Purdue University. Participants were randomly separated into a control group and an experimental (AI) group. Weekly quizzes measured participants’ learning performance; a pre-quiz and base quizzes estimated participants’ prior knowledge levels. Using BKT-LSTM, participants in the experimental group received questions on the knowledge they most lacked, whereas participants in the control group received questions on randomly picked knowledge. The results showed that both the experimental and control groups scored lower on review quizzes than on base quizzes. However, the score difference between base and review quizzes was significant for more quizzes in the experimental group (three) than in the control group (two), demonstrating the predictive capability of BKT-LSTM to some extent. Initially, we expected that BKT-LSTM would enhance students’ learning performance. However, on the post-quiz, participants in the control group scored significantly higher than those in the experimental group. This result suggests that a continuous stream of difficult questions may negatively affect students’ learning initiative, whereas relatively easy questions may improve it.</div>
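The experimental group's quizzes target the knowledge a student most lacks. A minimal sketch of the classic Bayesian Knowledge Tracing update and a weakest-skill picker follows; the parameter values and skill names are illustrative assumptions, and the actual BKT-LSTM model learns these quantities rather than using fixed parameters:

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One Bayesian Knowledge Tracing step: Bayes-rule posterior on
    mastery given the answer, then the learning transition.
    Parameter values here are illustrative, not fitted."""
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_learn

def weakest_skill(mastery):
    """Pick the knowledge component with the lowest mastery estimate,
    i.e. the one the next quiz question should target."""
    return min(mastery, key=mastery.get)
```

For example, a correct answer at mastery 0.4 raises the estimate to 0.7875 under these parameters, and `weakest_skill` then selects the component still furthest behind.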
24

An experimental study of the effects of a Bayesian knowledge tracing model on student-perceived engagement

Arjun Kramadhati Gopi (11799026) 20 December 2021 (has links)
<div>With the advent of Machine Learning and Deep Learning models, many avenues of development have opened. Today, these technologies are leveraged to perform a wide variety of tasks that were not possible with traditional systems. The power of Machine Learning and Artificial Intelligence makes it possible to carry out very complicated tasks at near real-time speeds. For example, Machine Learning models are used extensively in the retail industry to predict and analyze critical parameters such as sales, promotions, customer behavior, recommendations, and offers.</div><div><br></div><div><br></div><div>Today, it is increasingly common to see AI used across many of the biggest domains, such as health, the environment, the military, and business. Artificial Intelligence in educational settings has thus become a growing field of focus and study. For example, conversational AIs have been deployed as virtual tutors to answer student questions and concerns. Additionally, there are fill-in-the-gap AIs that help students learn tasks such as coding, either by showing them how to do it or by predicting where the student might go wrong and suggesting preemptive corrective steps. </div><div><br></div><div>As described, a great deal of literature exists about the use of Deep Learning and Machine Learning models in education. However, the existing tools and models act as external appendages that add to the course structure, thereby altering it. This study introduces a Bayesian Knowledge Tracing model based on the Long Short-Term Memory structure (BKT-LSTM) deployed in a live STEM (Science, Technology, Engineering, and Mathematics) classroom. The model discovers individual student learning profiles based on past quiz performance and customizes future quizzes based on the learned patterns. The BKT-LSTM model works in tandem with the existing course curriculum and only tests knowledge items that have already been covered in the classroom. The model does not change the course structure but rather aims to improve the student’s learning experience by focusing on areas of the student's knowledge that require more practice. </div><div><br></div><div><br></div><div>Within a live STEM classroom, the BKT-LSTM model acts as a herald of change in the way students interact with the curriculum, even though no major changes are made to the course structure. Students interacting with the model are given quizzes whose questions target the individual student’s gaps in particular knowledge areas. Students might therefore perceive the change as unwelcome because subsequent quizzes grow more difficult. The study measures the learning performance of the students: do students learn more in the new system? It also examines the students’ perception of engagement while interacting with the BKT-LSTM model, since the effectiveness of the new educational process is determined not only by increased learning performance but also by how engaged students feel: are they enjoying the new experience, and do they feel they are learning something? </div><div><br></div>
25

Investigating Cyber Performance: An Individual Differences Study

Kelly Anne Cole (10907916) 04 August 2021 (has links)
<div>The persistent issues identified in the cyber defense domain, such as information overload, burnout, and high turnover rates among cyber analysts, lead us to ask how cognitive ability contributes to successful cyber performance. Cyber defense researchers theorize that individual differences are determinants of cyber performance success but have yet to establish their role empirically. Therefore, this study uses an individual differences approach under a work performance framework to study the contribution of cognitive ability (i.e., attention control) to cyber performance success in a specific cyber work role (i.e., the Incident Responder), through its well-defined primary task (i.e., incident detection system performance). The sample included practicing network analysts with a wide range of incident detection expertise, ages, and education levels, for more reliable and valid scores. The results of the correlational analysis showed that individual differences in attention control (i.e., flexibility and spatial attention) contribute most to the differences in Incident Responder work performance. A linear regression model then demonstrated that spatial attention and flexibility predict 53 to 60 percent of the variance in cyber performance scores. It is suggested that the KSAs (knowledge, skills, and abilities) in the NICE framework be updated with the cognitive abilities that contribute to and/or predict cyber performance success, to support recruitment efforts towards a more efficient cyber defense workforce. </div><div><br></div>
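The reported 53 to 60 percent figure is the variance in performance scores explained by a linear regression on the attention-control measures. A hedged sketch of computing that quantity (the coefficient of determination, R²) with ordinary least squares follows; the variable names and synthetic data are illustrative, not the study's:

```python
import numpy as np

def r_squared(X, y):
    """Fraction of variance in y explained by an OLS fit on predictors X."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)                # residual sum of squares
    ss_tot = float(((y - y.mean()) ** 2).sum())  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Illustrative only: synthetic performance scores driven partly by
# two hypothetical attention-control predictors.
rng = np.random.default_rng(0)
flexibility = rng.normal(size=40)
spatial = rng.normal(size=40)
score = 2.0 * flexibility + 1.5 * spatial + rng.normal(scale=1.5, size=40)
r2 = r_squared(np.column_stack([flexibility, spatial]), score)
```

On real data, `r2` in the 0.53 to 0.60 range would correspond to the study's reported figure.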
26

MULTIMODAL DIGITAL IMAGE EXPLORATION WITH SYNCHRONOUS INTELLIGENT ASSISTANCE FOR THE BLIND

Ting Zhang (8636196) 16 April 2020 (has links)
Emerging haptic devices have granted individuals who are blind the capability to explore images in real time, which has always been a challenge for them. However, when only haptic-based interaction is available and no visual feedback is given, image comprehension demands time and major cognitive resources. This research developed an approach to improve blind people’s exploration performance by providing assisting strategies in various sensory modalities when certain exploratory behaviors are performed. There are three fundamental components developed in this approach: the user model, the assistance model, and the user interface. The user model recognizes users’ image exploration procedures. A learning framework utilizing a spike-timing neural network is developed to classify the frequently applied exploration procedures. The assistance model provides different assisting strategies when a certain exploration procedure is performed. User studies were conducted to understand the goals of each exploration procedure, and assisting strategies were designed based on the discovered goals. These strategies give users hints about objects’ locations and relationships. The user interface then determines the optimal sensory modality to deliver each assisting strategy. Within-participants experiments were performed to compare three sensory modalities for each assisting strategy: vibration, sound, and virtual magnetic force. A complete computer-aided system was developed by integrating all the validated assisting strategies. Experiments were conducted to evaluate the complete system with each assisting strategy expressed through the optimal modality. Performance metrics including task performance and workload assessment were applied for the evaluation.
27

Cognitive Load Estimation with Behavioral Cues in Human-Machine Interaction

Goeum Cha (9757181) 14 December 2020 (has links)
Detecting human cognitive load is an increasingly important issue in the interaction between humans and machines, computers, and robots. In the past decade, several studies have sought to distinguish the cognitive load, or workload, state of humans based on multiple observations, such as behavioral, physiological, or multi-modal data. In Human-Machine Interaction (HMI) settings, estimating human workload is essential because operators' performance can be adversely affected when they face many demanding tasks. If the workload level can be detected, tasks can be reallocated among operators to improve the productivity of HMI tasks. However, it remains an open question which cues can be used to estimate the degree of workload. In this research, eye blinking and mouse tracking are chosen as behavioral cues to explore the possibility of a non-intrusive, automated workload estimator. Behavioral cues are statistically analyzed to find differences among workload levels, using a dataset built around three difficulty levels of the dual n-back memory game, and the analyzed signals are then used to train a deep neural network model to classify workload level. The one-way repeated-measures analysis of variance showed that eye-blink durations on the dual 1-back and 3-back differ significantly; the mouse tracking data did not pass the statistical test. A three-dimensional convolutional deep neural network is used to train on visual data of human behavior. Classification accuracy on the dual 1-back versus 3-back data is 51%, with an F1-score of 0.66 on 1-back data and 0.14 on 3-back data. In conclusion, blinking and mouse tracking are unlikely to be helpful cues for estimating different levels of workload. <br>
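The asymmetric F1-scores (0.66 on 1-back, 0.14 on 3-back) follow from per-class precision and recall when a classifier favors one class. A small sketch of the per-class F1 computation on toy labels (the label names are illustrative):

```python
def f1_per_class(y_true, y_pred, label):
    """F1 for one class: harmonic mean of precision and recall."""
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A classifier biased toward predicting "1b" scores well on that
# class and poorly on "3b", mirroring the imbalance reported above.
y_true = ["1b"] * 5 + ["3b"] * 5
y_pred = ["1b"] * 8 + ["3b"] * 2
```

Here the "1b" class gets F1 ≈ 0.77 while "3b" gets F1 ≈ 0.57; a stronger bias drives the minority-class F1 toward values like the reported 0.14.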
28

Exploring Social Roles in Twitch Chatrooms

Qingheng Zhou (8085977) 06 December 2019 (has links)
<p>With the growing popularity of the gaming industry, game streaming has become a global phenomenon with high participation in recent years. Game streaming platforms such as Twitch have millions of active users who participate in the community by watching and chatting. Yet there has been little investigation of how chat behaviors connect with overall participation in the game streaming community. This study aims to describe and analyze the roles taken on by viewers as they engage in chat while watching game streams, and to identify how these roles influence participation. I designed a qualitative study with online observations of several Twitch channels streaming Overwatch. By analyzing the collected chat logs, I identified four social roles among chatters: Lurker, Troll, Collaborator, and Moderator. A discourse analysis was applied to further investigate the interactions among these roles and how they shape the conversation in chatrooms. With these findings, I generated a four-role model specific to chatters in Twitch personal channels. Limitations of this study and suggestions for future research are also provided.</p>
29

APPLYING MULTIMODAL SENSING TO HUMAN MOTION TRACKING IN MOBILE SYSTEMS

Siyuan Cao (9029135) 29 June 2020 (has links)
<div> <div> <div> <p>Billions of “smart” things in our lives have been equipped with various sensors. Current devices, such as smartphones, smartwatches, tablets, and VR/AR headsets, carry a variety of embedded sensors, e.g. accelerometer, gyroscope, magnetometer, camera, GPS sensor, etc. Based on these sensor data, many technologies have been developed to track human motion at different granularities and to enable new applications. This dissertation examines two challenging problems in human motion tracking. One problem is the ID association issue when utilizing external sensors to simultaneously track multiple people. Although an “outside” system can track all human movements in a designated area, it needs to digitally associate each tracking trajectory with the corresponding person (that is, the smart device carried by that person) to provide customized service based on the tracking results. Another problem is the inaccuracy caused by limited sensing information when merely using the embedded sensors located on the devices being tracked. Since sensor data may contain inevitable noise and there is no external beacon to serve as a reference point for calibration, it is hard to accurately track human motion with internal sensors alone.</p><p>In this dissertation, we focus on applying multimodal sensing to human motion tracking in mobile systems. To address the two problems above, we conduct the following research. (1) The first work seeks to enable public cameras to send personalized messages to people without knowing their phone addresses. We build a system which utilizes the users’ motion patterns captured by the cameras as their communication addresses, and depends on their smartphones to locally compare the sensor data with the addresses and to accept the correct messages. To protect user privacy, the system requires no data from the users and transforms the motion patterns into low-dimensional codes to prevent motion leaks. (2) To enhance the distinguishability and scalability of the camera-to-human communication system, we introduce context features which include both motion patterns and ambience features (e.g. magnetic field, Wi-Fi fingerprint, etc.) to identify people. The enhanced system achieves higher association accuracy and is demonstrated to work in a densely populated retail store, with a fixed-length packet overhead. The first two works explore the potential of widely deployed surveillance cameras and provide a generic underlay for various practical applications, such as automatic audio guides, indoor localization, and safety alerts. (3) We close this dissertation with a fine-grained motion tracking system which aims to track the positions of two hand-held motion controllers in a mobile VR system. To achieve high tracking accuracy without external sensors, we introduce new types of information, e.g. ultrasonic ranging among the headset and the controllers, and a kinematic arm model. Effectively fusing this additional information with inertial sensing generates accurate controller positions in real time. Compared with commodity mobile VR controllers which only support rotational tracking, our system provides an interactive VR experience by letting the user actually move the controllers’ positions in a VR scene. To summarize, this dissertation shows that multimodal sensing can further explore the potential power in sensor data and can take sensor-based applications to the next generation of innovation.</p><div><br></div></div></div></div>
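The ID-association idea in works (1) and (2) can be illustrated with a much-simplified sketch: compare the camera-observed motion trace against each device's inertially derived trace and associate the best match. The normalized-correlation similarity below is an illustrative stand-in for the system's actual low-dimensional motion codes, and the trace representation (per-step speed samples) is an assumption:

```python
import numpy as np

def similarity(camera_trace, device_trace):
    """Normalized correlation between two equal-length motion traces,
    in [-1, 1]; higher means the traces move alike."""
    a = np.asarray(camera_trace, float)
    b = np.asarray(device_trace, float)
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def associate(camera_trace, device_traces):
    """Associate the camera trajectory with the device whose
    inertial trace matches it best."""
    return max(device_traces,
               key=lambda d: similarity(camera_trace, device_traces[d]))
```

In the real system this comparison runs on the phone against broadcast codes, so the camera side never learns which device accepted the message.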
30

INTELLIGENT SELF ADAPTING APPAREL TO ADAPT COMFORT UTILITY

Minji Lee (10725849) 30 April 2021 (has links)
<div>By combining control over a tremendous range of physical actuators and sensors with wireless technology and the Internet of Things (IoT), apparel technologies play a significant role in supporting safe, comfortable, and healthy living, adapting to each wearer’s condition. As apparel technologies have advanced to enable humans to work as a team with the clothing they wear, the interaction between a human and apparel is further enhanced with the introduction of sensors, wireless networks, and artificial intelligence techniques. A variety of wearable technologies have been developed and spread to meet the needs of customers; however, some wearable devices are considered impractical because they are technology-oriented rather than consumer-oriented.</div><div>The purpose of this research is to develop an apparel system which integrates intelligent autonomous agents, human-based sensors, a wireless network protocol, a mobile application management system, and a zipper robot. This research augments the existing research and literature, which are limited to the zipping and unzipping process without much built-in intelligence. It addresses the challenges faced by the elderly and people with self-care difficulties. The intent is to provide a scientific path for intelligent zipper robot systems with the potential not only to help people but also to be commercialized.</div><div>The research develops an intelligent system to control zippers fixed on garments, based on the profile and desires of the wearer. It presents the theoretical and practical elements of developing small, integrated, intelligent zipper robots that interact with a mobile application using the lightweight MQTT protocol, for use in the daily lives of diverse populations of people with physical challenges. The system functions as an intelligent automated garment, ensuring that users can employ a zipper robot device to assist in putting on garments while feeling comfortable wearing and interacting with the system. This research is a step towards the “future of fashion”, and the goal is to incentivize and inspire others to develop new instances of wearable robots and sensors that help people with specific needs live a better life.</div>
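The zipper robot's command handling over MQTT can be sketched as a small state machine. The topic layout, payload format, and position model below are hypothetical, and the real MQTT transport (e.g., a client library's message callback) is replaced with a direct function call so the sketch stays self-contained:

```python
class ZipperRobot:
    """Toy model of a zipper robot: position runs from 0 (fully open)
    to 100 (fully closed), driven by commands that would arrive on a
    hypothetical MQTT topic such as 'apparel/<garment-id>/zipper'."""

    def __init__(self):
        self.position = 0  # percent closed

    def on_message(self, topic, payload):
        """Stand-in for an MQTT on_message callback: apply a command
        if the topic addresses this robot, then report the position."""
        if not topic.endswith("/zipper"):
            return self.position  # not for us; ignore
        if payload == "close":
            self.position = 100
        elif payload == "open":
            self.position = 0
        elif payload.startswith("set:"):
            # Clamp a requested percentage into the valid range.
            self.position = max(0, min(100, int(payload[4:])))
        return self.position
```

In the actual system, a lightweight broker relays these commands from the mobile application to the garment, which is what makes MQTT attractive for battery-powered wearables.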
