31 |
Investigating the Self-tracking Use for Mental Wellness of New Parents
Eunkyung Jo (6633707) 15 May 2019 (has links)
<p>New parents often experience significant stress as they take on new roles and responsibilities. Personal informatics (PI) practices have gained increasing attention because they support various aspects of individuals' wellness by providing data-driven self-insights. While several PI systems have been proposed to support individuals' mental wellness, not only by providing self-knowledge but also by helping individuals deal with negative emotions, few studies have investigated how parenting stress can be managed through PI practices. In this paper, I set out to investigate how new parents make use of flexible self-tracking practices in the context of stress management. The findings of this study indicate that flexible self-tracking practices enable individuals to develop self-knowledge and to communicate better with their spouses through data. Based on the findings, I discuss how self-tracking experiences for the mental wellness of parents can be better designed, and I offer considerations for future research and design for parenting stress management.</p>
|
32 |
Evaluating the Effects of BKT-LSTM on Students' Learning Performance
Jianyao Li (11794436) 20 December 2021 (has links)
<div>Today, machine learning models and Deep Neural Networks (DNNs) are prevalent in various areas. Educational Artificial Intelligence (AI) is also drawing increasing attention with the rapid development of online learning platforms. Researchers explore different types of educational AI to improve students' learning performance and experience in online classes. Educational AIs can be categorized as "interactive" or "predictive." Interactive AIs answer simple course questions for students, such as the due date of homework or the minimum page requirement for the final project. Predictive educational AIs predict students' learning states, and instructors can adjust the learning content based on those states. However, most such AIs are not evaluated in an actual class setting. Therefore, we evaluate the effects of a state-of-the-art educational AI model, BKT-LSTM (Bayesian Knowledge Tracing with Long Short-Term Memory), on students' learning performance in an actual class setting. Data came from CNIT 25501, a large introductory Java programming class at Purdue University. Participants were randomly assigned to a control group and an experimental group (AI group). Weekly quizzes measured participants' learning performance; a pre-quiz and base quizzes estimated their prior knowledge levels. Using BKT-LSTM, participants in the experimental group received questions on the knowledge components they most lacked, whereas participants in the control group received questions on randomly selected knowledge components. The results suggested that both groups scored lower on review quizzes than on base quizzes. However, the score difference between base and review quizzes was significant more often for the experimental group (three quizzes) than for the control group (two quizzes), demonstrating the predictive capability of BKT-LSTM to some extent.</div><div>Initially, we expected that BKT-LSTM would enhance students' learning performance. However, on the post-quiz, participants in the control group scored significantly higher than those in the experimental group. This result suggests that a continuous stream of difficult questions may negatively affect students' learning initiative, whereas relatively easy questions may improve it.</div>
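BKT-LSTM builds on classic Bayesian Knowledge Tracing. The thesis's exact model is not reproduced here, but the core BKT mastery update that underlies such systems, together with the "quiz the weakest skill" selection the experimental group experienced, can be sketched as follows (parameter values are illustrative assumptions, not the study's fitted values):

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One Bayesian Knowledge Tracing update for a single skill.

    p_know:  prior probability the student has mastered the skill.
    correct: whether the latest answer was correct.
    Returns the posterior mastery probability after a learning opportunity.
    """
    if correct:
        # P(known | correct answer): correct despite slip, or lucky guess
        cond = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # P(known | incorrect answer): a slip, or genuinely not known
        cond = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Account for learning between opportunities.
    return cond + (1 - cond) * p_transit

def weakest_skill(mastery):
    """Pick the skill with the lowest estimated mastery to quiz next."""
    return min(mastery, key=mastery.get)
```

A correct answer raises the mastery estimate and an incorrect one lowers it; the experimental group's quizzes would then draw questions from whatever `weakest_skill` returns.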
|
33 |
An experimental study of the effects of a bayesian knowledge tracing model on student perceived engagement
Arjun Kramadhati Gopi (11799026) 20 December 2021 (has links)
<div>With the advent of Machine Learning and Deep Learning models, many avenues of development have opened. Today, these technologies are leveraged to perform a wide variety of tasks that were not possible with traditional systems. The power of Machine Learning and Artificial Intelligence makes it possible to compute very complicated tasks at near real-time speeds. For example, Machine Learning models are used extensively in the retail industry to predict and analyze critical parameters such as sales, promotions, customer behavior, recommendations, and offers.</div><div><br></div><div>Today, it is increasingly common to see AI used across many of the biggest domains, such as health, the environment, the military, and business. Artificial Intelligence in educational settings has thus become a growing field of focus and study. For example, conversational AIs are deployed as virtual tutors to answer student questions and concerns. Additionally, there are "fill-in-the-gap" AIs that help students learn tasks such as coding, either by showing them how to do it or by predicting where the student might go wrong and suggesting preemptive corrective steps.</div><div><br></div><div>As described, a great deal of literature exists on the use of Deep Learning and Machine Learning models in education. However, the existing tools and models act as external appendages that add to the course structure, thereby altering it. This study introduces a Bayesian Knowledge Tracing model based on the Long Short-Term Memory structure (BKT-LSTM), utilized in a live STEM (Science, Technology, Engineering, and Mathematics) classroom. The model discovers individual student learning profiles from past quiz performance and customizes future quizzes based on the learned patterns. The BKT-LSTM model works in tandem with the existing course curriculum and tests only those knowledge items that have already been covered in the classroom. The model does not change the course structure; rather, it aims to improve the student's learning experience by focusing practice on the areas of the student's knowledge that need it most.</div><div><br></div><div>Within a live STEM classroom, the BKT-LSTM model changes how students interact with the curriculum, even though the course structure itself is unchanged. Students interacting with the model receive quizzes whose questions target the individual student's weakest knowledge areas, so students may perceive the change as unwelcome because subsequent quizzes become more difficult. The study therefore examines two questions: the students' learning performance (do students learn more in the new system?) and the students' perception of engagement while interacting with the BKT-LSTM model (are students enjoying the new experience, and do they feel they are learning something?). The effectiveness of the new educational process is determined not only by increased learning performance but also by the students' perceived engagement.</div><div><br></div>
|
34 |
Investigating Cyber Performance: An Individual Differences Study
Kelly Anne Cole (10907916) 04 August 2021 (has links)
<div>Persistent issues in the cyber defense domain, such as information overload, burnout, and high turnover rates among cyber analysts, lead us to ask what cognitive abilities contribute to more successful cyber performance. Cyber defense researchers theorize that individual differences are determinants of cyber performance success but have yet to establish the role of individual differences empirically. Therefore, this study uses an individual-differences approach within a work-performance framework to study the contribution of cognitive ability (i.e., attention control) to cyber performance success in a specific cyber work role (i.e., the Incident Responder) through its well-defined primary task (i.e., incident detection system performance). The sample included actual network analysts with a wide range of incident detection expertise, ages, and education levels, for more reliable and valid scores. The results of the correlational analysis showed that individual differences in attention control (i.e., flexibility and spatial attention) contribute most to differences in Incident Responder work performance. A linear regression model then demonstrated that spatial attention and flexibility predict 53 to 60 percent of the variance in cyber performance scores. It is suggested that the KSAs in the NICE framework be updated with the cognitive abilities that contribute to and/or predict cyber performance success, to support recruitment efforts toward a more efficient cyber defense workforce.</div><div><br></div>
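The claim that two predictors explain 53 to 60 percent of the variance in performance scores is a statement about R². As a minimal illustration of what "percent of variance explained" means (a single-predictor, pure-Python sketch, not the study's actual two-predictor analysis), the computation looks like:

```python
def r_squared(xs, ys):
    """Fraction of variance in ys explained by a least-squares line on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual and total sums of squares
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

An R² of 0.53 to 0.60 means just over half of the score variance is accounted for by the fitted model, with the remainder attributable to other factors and noise.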
|
35 |
MULTIMODAL DIGITAL IMAGE EXPLORATION WITH SYNCHRONOUS INTELLIGENT ASSISTANCE FOR THE BLIND
Ting Zhang (8636196) 16 April 2020 (has links)
Emerging haptic devices have given individuals who are blind the capability to explore images in real time, which has always been a challenge for them. However, when only haptic interaction is available and no visual feedback is given, image comprehension demands time and substantial cognitive resources. This research developed an approach to improve blind people's exploration performance by providing assisting strategies in various sensory modalities when certain exploratory behaviors are performed. The approach has three fundamental components: the user model, the assistance model, and the user interface. The user model recognizes users' image exploration procedures; a learning framework based on a spiking neural network is developed to classify the frequently applied exploration procedures. The assistance model provides different assisting strategies when particular exploration procedures are performed. User studies were conducted to understand the goals of each exploration procedure, and assisting strategies were designed based on the discovered goals. These strategies give users hints about objects' locations and relationships. The user interface then determines the optimal sensory modality to deliver each assisting strategy. Within-participants experiments compared three sensory modalities for each assisting strategy: vibration, sound, and virtual magnetic force. A complete computer-aided system was developed by integrating all the validated assisting strategies, and experiments evaluated the complete system with each assisting strategy expressed through its optimal modality. Performance metrics including task performance and workload assessment were applied in the evaluation.
|
36 |
Cognitive Load Estimation with Behavioral Cues in Human-Machine Interaction
Goeum Cha (9757181) 14 December 2020 (links)
Detecting human cognitive load is an increasingly important issue in the interaction between humans and machines, computers, and robots. Over the past decade, several studies have sought to distinguish the cognitive load, or workload, state of humans based on behavioral, physiological, or multi-modal observations. In Human-Machine Interaction (HMI), estimating human workload is essential because operators' performance can be adversely affected when they face many demanding tasks. If workload levels can be detected, tasks can be reallocated among operators to improve the productivity of HMI tasks. However, it remains unclear which cues can be used to estimate the degree of workload. In this research, eye blinking and mouse tracking are chosen as behavioral cues to explore the possibility of a non-intrusive, automated workload estimator. The behavioral cues are statistically analyzed to find differences among workload levels, using a dataset based on three levels of the dual n-back memory game, and the analyzed signals are then used to train a deep neural network model that classifies the workload level. A one-way repeated-measures analysis of variance showed that eye blinking duration differs significantly between the dual 1-back and dual 3-back conditions; the mouse tracking data did not pass the statistical test. A three-dimensional convolutional deep neural network is trained on visual data of human behavior. Classification accuracy for distinguishing dual 1-back from dual 3-back data is 51%, with an F1-score of 0.66 on 1-back data and 0.14 on 3-back data. In conclusion, blinking and mouse tracking are unlikely to be helpful cues for estimating different levels of workload. <br>
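The dual n-back task used to induce workload presents two simultaneous stimulus streams (e.g. spatial positions and letters); a trial is a target when the current stimulus matches the one n steps earlier. A minimal sketch of that target-and-scoring logic, under a simple hit/miss response model assumed for illustration:

```python
def nback_targets(stimuli, n):
    """Mark trials where the stimulus matches the one n steps earlier."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

def dual_nback_score(positions, letters, pos_resp, let_resp, n):
    """Fraction of correct target/non-target judgments across both streams."""
    hits = 0
    total = 0
    for truth, resp in ((nback_targets(positions, n), pos_resp),
                        (nback_targets(letters, n), let_resp)):
        for t, r in zip(truth, resp):
            total += 1
            hits += (t == r)  # correct if the response matches the ground truth
    return hits / total
```

Raising n from 1 to 3 increases how much must be held in working memory, which is how the study manipulates workload level.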
|
37 |
Exploring Social Roles in Twitch Chatrooms
Qingheng Zhou (8085977) 06 December 2019 (has links)
<p>With the growing popularity of the gaming industry, game streaming has emerged and become a global phenomenon with high participation in recent years. Game streaming platforms such as Twitch have millions of active users who participate in the community by watching and chatting. Yet there has been little investigation of how chat behaviors connect with overall participation in the game streaming community. This study aims to describe and analyze the roles viewers take on as they engage in chat while watching game streams, and to identify how these roles influence participation. I designed a qualitative study with online observations of several Twitch channels streaming Overwatch. By analyzing the collected chat logs, I identified four social roles among chatters: Lurker, Troll, Collaborator, and Moderator. A discourse analysis was applied to further investigate the interactions among these roles and how they shape the conversation in chatrooms. From these findings, I generated a four-role model specific to chatters in Twitch personal channels. Limitations of this study and suggestions for future research are also provided.</p>
|
38 |
APPLYING MULTIMODAL SENSING TO HUMAN MOTION TRACKING IN MOBILE SYSTEMS
Siyuan Cao (9029135) 29 June 2020 (links)
<div>
<div>
<div>
<p>Billions of “smart” things in our lives are equipped with various sensors. Current devices such as smartphones, smartwatches, tablets, and VR/AR headsets carry a variety of embedded sensors, e.g. accelerometer, gyroscope, magnetometer, camera, and GPS. Based on these sensor data, many technologies have been developed to track human motion at different granularities and to enable new applications. This dissertation examines two challenging problems in human motion tracking. One is the ID-association problem that arises when external sensors simultaneously track multiple people: although an “outside” system can track all human movements in a designated area, it must digitally associate each tracked trajectory with the corresponding person, or rather with the smart device carried by that person, to provide customized service based on the tracking results. The other is the inaccuracy caused by limited sensing information when using only the embedded sensors on the devices being tracked: since sensor data contain inevitable noise and there is no external beacon to serve as a reference point for calibration, it is hard to track human motion accurately with internal sensors alone.</p><p>In this dissertation, we focus on applying multimodal sensing to human motion tracking in mobile systems. To address these two problems, we conduct the following research. (1) The first work enables public cameras to send personalized messages to people without knowing their phone addresses. We build a system that uses the motion patterns captured by the cameras as the users' communication addresses and relies on their smartphones to locally compare sensor data against those addresses and accept the correct messages. To protect user privacy, the system requires no data from the users and transforms the motion patterns into low-dimensional codes to prevent motion leaks. (2) To enhance the distinguishability and scalability of the camera-to-human communication system, we introduce context features that combine motion patterns with ambience features (e.g. magnetic field, Wi-Fi fingerprint) to identify people. The enhanced system achieves higher association accuracy and is demonstrated to work in a crowded retail store, with a fixed-length packet overhead. The first two works explore the potential of widely deployed surveillance cameras and provide a generic underlay for practical applications such as automatic audio guides, indoor localization, and safety alerts. (3) We close this dissertation with a fine-grained motion tracking system that tracks the positions of two hand-held motion controllers in a mobile VR system. To achieve high tracking accuracy without external sensors, we introduce new types of information, e.g. ultrasonic ranging among the headset and the controllers, and a kinematic arm model. Effectively fusing this additional information with inertial sensing yields accurate controller positions in real time. Compared with commodity mobile VR controllers, which support only rotational tracking, our system provides an interactive VR experience by letting the user actually move the controllers' positions in a VR scene. To summarize, this dissertation shows that multimodal sensing can unlock further power in sensor data and take sensor-based applications to the next generation of innovation.</p><div><br></div></div></div></div><div><div><div>
</div>
</div>
</div>
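The ID-association idea in works (1) and (2), matching a camera-observed trajectory to the device that produced it, can be caricatured as a nearest-pattern search over motion features. This is a deliberately simplified sketch: in the actual system the comparison happens on the phone and uses low-dimensional codes for privacy, neither of which is modeled here.

```python
def normalized(seq):
    """Zero-mean, unit-norm copy of a motion feature sequence."""
    m = sum(seq) / len(seq)
    centered = [v - m for v in seq]
    norm = sum(v * v for v in centered) ** 0.5 or 1.0  # avoid division by zero
    return [v / norm for v in centered]

def correlation(a, b):
    """Cosine similarity between two equally long motion sequences."""
    return sum(x * y for x, y in zip(normalized(a), normalized(b)))

def best_match(camera_tracks, phone_pattern):
    """Return the camera track ID whose motion best matches the phone's pattern."""
    return max(camera_tracks,
               key=lambda tid: correlation(camera_tracks[tid], phone_pattern))
```

Normalizing before comparing makes the match insensitive to amplitude differences between camera-derived and inertial-sensor-derived motion, which is why two differently scaled recordings of the same walk can still be associated.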
|
39 |
INTELLIGENT SELF ADAPTING APPAREL TO ADAPT COMFORT UTILITY
Minji Lee (10725849) 30 April 2021 (links)
<div>Apparel technologies, combining a growing range of physical actuators and sensors with wireless technology and the Internet of Things (IoT), play a significant role in supporting safe, comfortable, and healthy living by observing each wearer's condition. As apparel technologies have advanced to let humans work as a team with the clothing they wear, the interaction between a human and apparel is further enhanced by sensors, wireless networking, and artificial intelligence techniques. A variety of wearable technologies have been developed and spread to meet customers' needs; however, some wearable devices are regarded as impractical because they are technology-oriented rather than consumer-oriented.</div><div>The purpose of this research is to develop an apparel system that integrates intelligent autonomous agents, human-based sensors, a wireless network protocol, a mobile application management system, and a zipper robot. This research augments the existing research and literature, which are limited to the zipping and unzipping process without much built-in intelligence. It addresses the challenges faced by the elderly and by people with self-care difficulties. The intent is to provide a scientific path for intelligent zipper robot systems with the potential not only to help people but also to be commercialized.</div><div>The research develops an intelligent system to control zippers fixed on garments, based on the profile and desires of the wearer. It covers the theoretical and practical elements of developing small, integrated, intelligent zipper robots that interact with an application using the lightweight MQTT protocol, for use in the daily lives of diverse populations of people with physical challenges. The system functions as an intelligent automated garment so that users can rely on a zipper robot device to assist in putting on garments while feeling comfortable wearing and interacting with the system. This research is a step toward the “future of fashion,” and its goal is to incentivize and inspire others to develop new wearable robots and sensors that help people with specific needs live a better life.</div>
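The zipper robot and mobile app communicate over the lightweight MQTT publish/subscribe protocol. The thesis's actual topic layout and message fields are not given in this abstract, so the topic string and field names in the following command-builder sketch are assumptions made purely for illustration:

```python
import json

def zipper_command(garment_id, action, position=None):
    """Build an MQTT-style (topic, payload) pair for a zipper robot.

    The topic layout and payload fields are hypothetical, not the
    thesis's actual protocol.
    """
    if action not in ("zip", "unzip", "stop"):
        raise ValueError("unknown action: " + action)
    topic = f"apparel/{garment_id}/zipper/cmd"
    payload = {"action": action}
    if position is not None:
        # Clamp the target zipper position to the [0, 1] travel range.
        payload["position"] = max(0.0, min(1.0, position))
    return topic, json.dumps(payload)
```

A real deployment would publish the payload on the topic with an MQTT client; keeping payloads small and topics hierarchical is what makes MQTT suitable for battery-powered garment hardware.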
|
40 |
Designing for Co-Creation to Engage Multiple Perspectives on Ethics in Technology Practice
Sai Shruthi Chivukula (11172018) 22 July 2021 (links)
<div>As part of an increasing interest in a "Turn to Practice," HCI scholars have investigated the felt design complexities and ethical concerns in everyday technology practice, calling for practice-led research approaches. Given the ethical nature of technology design work, practitioners often have to negotiate and mediate their personal values, disciplinary notions of ethics, organizational policies and values, and the societal impact of their design work. To tease apart and describe practitioner accounts of the ethical aspects of their design work, I used three different approaches to investigate how practitioners from different professional roles communicate about, and participate in (potentially) strengthening, their ethical engagement in everyday design work within and across role boundaries: a survey, the design of co-creation activities, and the deployment/pilot of these co-creation activities.</div><div><br></div><div>In the survey study, I identify and describe differences in disciplinary values, responsibilities, commitments, and alignment in relation to ethics and social responsibility, drawing on data from 256 technology and design practitioners across a range of professional roles.</div><div><br></div><div>In the design phase, I design, iterate, and prototype three co-creation activities (A: Tracing the Complexity; B: Dilemma Postcards; and C: Method Heuristics), along with sequences of these activities, to engage a range of professional roles in communicating about their ethical action and (potentially) strengthening their ethical engagement in everyday design work. I define two design vocabularies/schemas: 1) the <i>A.E.I.O.YOU model</i> to investigate the landscape of ethics in practice, and 2) <i>Classifiers</i> to codify the activities and their potential variants.</div><div><br></div><div>In the deployment phase, I piloted four sequences of these activities with twelve practitioners, three different professional roles per sequence, engaging in approximately 23 hours of facilitation, artifact creation, and conversation. I present the results of the co-creation sessions, in which practitioners articulated that the activities helped them <i>expand</i> their ethical horizons through self-awareness, <i>learn</i> new approaches to ethics vocabulary, <i>become (re-)aware</i> of their current practice, and <i>imagine</i> trajectories of change in their practice. Practitioners also identified a preliminary set of ethics-related practices that could be better supported, such as tools for performance, leadership support, ethics education, and resources for ethical decision making.</div><div><br></div><div>Based on the results of these three approaches, I propose contributions for HCI and design audiences. For HCI researchers, practitioners, and educators, the survey results describe differences in professional notions and valences of ethics, framing the need for a translational and transdisciplinary approach to ethics in a practice context. For design researchers, the design of the co-creation activities is a methodological contribution in which I propose and illustrate opportunities for creating novel ways to engage practitioners in co-creation work as a means of communicating their felt ethical concerns and practices. For co-creation researchers and professional ethicists, the engagement of practitioners in the co-creation sessions reveals: 1) the complexities of facilitating different disciplinary roles and designing a space for "representing" a range of practitioners; and 2) gaps and potential synergies in supporting practitioners through practice-resonant, ethics-focused methods.</div>
|