  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Cinemacraft: Exploring Fidelity Cues in Collaborative Virtual World Interactions

Narayanan, Siddharth 15 February 2018 (has links)
The research presented in this thesis concerns the contribution of virtual human (or avatar) fidelity to social interaction in virtual environments (VEs) and how sensory fusion can improve these interactions. VEs present new possibilities for mediated communication by placing people in a shared 3D context. However, there are technical constraints in creating photorealistic and behaviorally realistic avatars capable of mimicking a person's actions or intentions in real time. At the same time, previous research findings indicate that virtual humans can elicit social responses even with minimal cues, suggesting that full realism may not be essential for effective social interaction. This research explores the impact of avatar behavioral realism on people's experience of interacting with virtual humans by varying the interaction fidelity. This is accomplished through the creation of Cinemacraft, a technology-mediated immersive platform for collaborative human-computer interaction in a virtual 3D world, and the incorporation of sensory fusion to improve the fidelity of interactions and real-time collaboration. It investigates interaction techniques within the context of a multiplayer sandbox voxel game engine and proposes how interaction qualities of the shared virtual 3D space can be used to further involve a user while simultaneously offering a stimulating experience. The primary hypothesis of the study is that embodied interactions result in a higher degree of presence and co-presence, and that sensory fusion can improve the quality of both. The argument is developed through research justification, followed by a user study that demonstrates the qualitative results and quantitative metrics. This research comprises an experiment involving 24 participants.
Experiment tasks focus on distinct but interrelated questions as higher levels of interaction fidelity are introduced. The outcome of this research is an interactive and accessible sensory fusion platform capable of delivering compelling live collaborative performances and empathetic musical storytelling that uses low-fidelity avatars to successfully sidestep the 'uncanny valley'. This research contributes to the field of immersive collaborative interaction by making the methodology, instruments, and code transparent. Further, it is presented in non-technical terminology, making it accessible to developers aspiring to use interactive 3D media to promote further experimentation and conceptual discussions, as well as to team members with less technological expertise. / Master of Science
412

Getting Lost in Email: How and Why Users Spend More Time in Email than Intended

Hanrahan, Benjamin Vincent 21 January 2015 (has links)
Email has become deeply embedded in many users' daily lives. To investigate how email features in users' lives, particularly how users attend to email and get lost within it, I ran five studies that probed how users determined the relevancy of messages, logged interactions with email, gathered diary entries related to individual sessions, and investigated the gratifications sought from email use. For the first study, I performed an exploratory experiment in the laboratory to determine how participants assessed the importance of individual emails (N=10). The next investigation involved three different studies, which I detail individually: a survey on email usage (N=54); a two-week study of email usage (N=20); and finally, the application of the Attentional Network Test (N=9). My final study validated my findings around the reasons for attending to email, through a survey in the Uses and Gratifications Theory tradition (N=52). In my studies I found that the majority of attentional effort goes toward reading email and participating in conversations, as opposed to email management. I also found that participants attended to email primarily based on notifications, rather than the number of unread messages in their inbox. I present my results by answering several research questions, and leverage Conversation Analysis (CA), particularly conversation openings, to explicate several problematic aspects of email use. My findings point to inefficiencies in email as a communication medium, mainly around how summonses are (or are not) issued. This results in an increased burden on email users to maintain engagement and determine (or construct) the appropriate moment for interruption.
My findings have several implications: email triage does not seem to be problematic for the participants in my studies, somewhat in contrast to previous research; much of the problem around email, particularly "getting lost in email", lies in managing the tension between promptly responding to messages while limiting engagement with email; and, due to the social nature of the problems with email, modifications to the email client are limited in their potential to prevent getting lost and to reduce email-related anxiety. / Ph. D.
413

The Physical-Social Context in Information Refinding

Sawyer, Blake Allen 05 May 2016 (has links)
Modern operating systems allow users to organize and refind information using many contextual keys such as timestamps, content, custom tags, origin, and even location. As humans naturally engage in activities with people and groups of people, we want to investigate how we can use the context of people's social interactions to support information archiving and refinding. Past research has tracked and used remote, social interactions through email communication; this work concentrates on using physical, social interactions (i.e., face-to-face) to support information archiving and refinding. Research questions include: (1) How do we effectively associate one's information with one's social world? (2) How do we design a user interface that supports refinding information based on social contexts? and (3) How does our approach (i.e., system) affect users' information archiving and refinding practices? This dissertation presents results from two user studies, exploring two refinding systems. The first, longitudinal study examines three participants using a custom refinding tool that tags information based on the people physically present with the user. Our second, diary-driven study examines a refinding tool that integrates information activity with a person's calendar. Our contributions are threefold: (1) an exploration of adding physical social interactions as contextual keys for information archiving and refinding, (2) an examination of two user interface designs that enable users to refind information through their physical-social interactions (i.e., people and groups), and (3) a diary-driven methodology for studying realistic refinding behaviors while reducing participant interruptions. / Ph. D.
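As an illustration only, the core idea of using co-present people as contextual keys might be sketched as a tiny in-memory index; the class and method names here are our own assumptions, not code from the dissertation's systems:

```python
from collections import defaultdict

class SocialRefindingIndex:
    """Toy sketch: tag each archived item with the set of people
    physically co-present when it was used, then refind items by
    person or by group (hypothetical API, for illustration)."""

    def __init__(self):
        self._by_person = defaultdict(set)  # person -> set of item ids
        self._items = {}                    # item id -> (name, people)

    def archive(self, item_id, name, copresent):
        """Record an item together with its physical-social context."""
        self._items[item_id] = (name, frozenset(copresent))
        for person in copresent:
            self._by_person[person].add(item_id)

    def refind_by_person(self, person):
        """All items used while this person was present, sorted by name."""
        return sorted(self._items[i][0] for i in self._by_person.get(person, ()))

    def refind_by_group(self, group):
        """Items used while *all* members of the group were present."""
        ids = set(self._items)
        for person in group:
            ids &= self._by_person.get(person, set())
        return sorted(self._items[i][0] for i in ids)
```

A query such as `refind_by_group({"alice", "bob"})` would then recover only the items touched during meetings where both people were in the room.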
414

Supporting User Interactions with Smart Built Environments

Handosa, Mohamed Hussein Hafez 04 February 2019 (has links)
Before the recent advances in sensing, actuation, computing and communication technologies, the integration between the digital and the physical environment was limited. Humans linked those two worlds by collecting data about the physical environment before feeding it into the digital environment, and by changing the state of the physical environment based on the state of the digital environment. The incorporation of computing, communication, sensing, and actuation technologies into everyday physical objects has empowered the vision of the Internet of Things (IoT). Things can autonomously collect data about the physical environment, exchange information with other things, and take actions on behalf of humans. Application domains that can benefit from IoT include smart buildings, smart cities, smart water, smart agriculture, smart animal farming, smart metering, security and emergencies, retail, logistics, industrial control, and health care. For decades, building automation, intelligent buildings, and more recently smart buildings have received considerable attention in both academia and industry. We use the term smart built environments (SBE) to describe smart, intelligent, physical, built, architectural spaces ranging from a single room to a whole city. Legacy SBEs were often closed systems operating on their own standards and custom protocols. SBEs have since evolved into Internet-connected systems that leverage Internet technologies and services (e.g., cloud services) to unleash new capabilities. IoT-enabled SBEs, as one of the various applications of the IoT, can significantly change the way we experience our homes and workplaces and make interacting with technology almost inevitable. This can provide several benefits to modern society and help to make our lives easier. Meanwhile, security, privacy, and safety concerns should be addressed appropriately. Unlike traditional computing devices, things usually have no or limited input/output (I/O) capabilities.
Leveraging the ubiquity of general-purpose computing devices (e.g., smartphones), thing vendors usually provide interfaces for their products in the form of mobile apps or web-based portals. Interacting with different things using different mobile apps or web-based portals does not scale well. Requiring the user to switch between tens or hundreds of mobile apps and web-based portals to interact with different things in different smart spaces may not be feasible. Moreover, it can be tricky for non-domestic users (e.g., visitors) of a given SBE to figure out, without guidance, which mobile apps or web-based portals they need to use to interact with their surroundings. While there has been a considerable research effort to address a variety of challenges associated with thing-to-thing interaction, research on human-to-thing interaction is limited. Many of the proposed approaches and industry-adopted techniques rely on more traditional, well-understood, and widely used Human-Computer Interaction (HCI) methods and techniques to support interaction between humans and things. Such techniques have mostly originated in a world of desktop computers that have a screen, mouse, and keyboard. However, SBEs introduce a radically different interaction context where there are no centralized, easily identifiable input and output devices. The desktop computer of the past is being replaced by the whole SBE. Depending on the task at hand and personal preferences, a user may prefer one interaction modality over another. For instance, turning lights on/off using an app may be more cumbersome or time-consuming than using a simple physical switch. This research focuses on leveraging the recent advances in IoT and related technologies to support user interactions with SBEs.
We explore how to support flexible and adaptive multimodal interfaces and interactions while providing a consistent user experience in an SBE based on the current context and the available user interface and interaction capabilities. / PHD / The recent advances in sensing, actuation, computing, and communication technologies have brought several rewards to modern society. The incorporation of those technologies into everyday physical objects (or things) has empowered the vision of the Internet of Things (IoT). Things can autonomously collect data about the physical environment, exchange information with other things, and take actions on behalf of humans. Several application domains can benefit from the IoT, such as smart buildings, smart cities, security and emergencies, retail, logistics, industrial control, and health care. For decades, building automation, intelligent buildings, and more recently smart buildings have received considerable attention in both academia and industry. We use the term smart built environments (SBE) to describe smart, intelligent, physical, built, architectural spaces ranging from a single room to a whole city. SBEs, as one of the various applications of the IoT, can significantly change the way we experience our homes and workplaces and make interacting with technology almost inevitable. While there has been a considerable research effort to address a variety of challenges associated with thing-to-thing interaction, research on human-to-thing interaction is limited. Many of the proposed approaches and industry-adopted techniques to support human-to-thing interaction rely on traditional methods. However, SBEs introduce a radically different interaction context. Therefore, adapting the current interaction techniques and/or adopting new ones is crucial for the success and wide adoption of SBEs. This research focuses on leveraging the recent advances in the IoT and related technologies to support user interactions with SBEs.
We explore how to support a flexible, adaptive, and multimodal interaction experience between users and SBEs using a variety of user interfaces and proposed interaction techniques.
415

Dairy To Be Great : Enhancing Dairy Farming Practices and Designing an Information Dashboard for Animal Health and Reproduction Data

Krznaric, Dora January 2023 (has links)
This thesis presents a comprehensive research study aimed at designing an information dashboard to address the specific information needs of dairy farmers in relation to animal health and reproduction data. The research focused on answering two key research questions: (1) How can we determine which factors are most relevant to a farm's productivity and the wellbeing of its animals? and (2) How can we visualize the data and the farm's history in a meaningful way so that the owner can make sense of it and therefore make better decisions for future planning? To answer these questions, extensive user research was conducted within the dairy farming community, involving interviews, a literature review, and surveys. The findings revealed that dairy farmers required quick access to critical data related to animal health and reproduction to make informed decisions. Applying a user-centered design approach, iterative prototyping and usability testing sessions were conducted to refine the dashboard design based on feedback from farmers. The goal was to create a user-friendly tool that addressed the specific needs of dairy farmers, including a clear differentiation between data pertaining to individual animals versus the entire herd. The outcome of this research was the development of an information dashboard that successfully met the information needs of dairy farmers. The dashboard provided easy access to essential data, empowering farmers to make informed decisions regarding animal health and reproduction. Further testing and refinement of the dashboard design are recommended to ensure its effectiveness and usability in real-world farming scenarios. Additionally, future investigations could explore the inclusion of breeding value information in the dashboard. This research contributes to the transformation of the dairy farming landscape, offering farmers enhanced information management capabilities and improved decision-making processes.
416

Framework for Embodied Telepresence: A Meeting Case Study

Park, Juwon 02 February 2023 (has links)
Current video conferencing tools lack a sense of presence. Telepresence can improve current video conferencing by providing a feeling of presence at a location different from the remote user's own. Most recent telepresence systems are built with devices that are neither accessible nor comfortable for daily meeting use. This work proposes a framework for an embodied telepresence system best suited to the daily meeting case. Based on our new telepresence framework, a new system architecture and design requirements are constructed. The system architecture shows how the telepresence system needs to be structured, and the design requirements help to understand the needs of the system. With this framework we were able to implement a user-friendly and accessible telepresence system. Our telepresence system enables users to control the telepresence robot with a smartphone controller. The controller has four features: (1) smartphone orientation control, (2) position save and playback, (3) local smart light bulb control, and (4) visual cue. Finally, our work evaluates the developed telepresence system by measuring participants' performance on given tasks. The evaluation shows that our system provides a sense of presence to both remote and local users. However, the proposed telepresence framework and system require further improvements to provide better usability. / Master of Science / During the pandemic, video conferencing tools like Zoom, Microsoft Teams, or Google Meet showed the advantages of having meetings and working remotely. However, these tools do not provide a sense of presence or the necessary level of control over what can be seen from a remote user's point of view. Therefore, researchers investigated and developed various tools that can give remote users a sense of presence at a location where a face-to-face meeting is taking place. We call this a telepresence tool.
Our systematic review of current telepresence tools shows that most use devices that are unfamiliar and hard to access for general users. Additionally, they do not consider local users' feelings about the remote user's presence at the face-to-face meeting (local site). Therefore, in this paper, we propose a general guideline, or framework, to help build a telepresence tool that overcomes the current telepresence tools' problems. Our telepresence tool, developed based on our proposed framework, uses a smartphone to control the telepresence robot that represents a remote user at the local site. A remote user can control the local site's light bulb, save the telepresence robot's position and return it there later, and show whether the user is away or present at the meeting. The evaluation of our telepresence system shows that our system provides a sense of presence to both remote and local users.
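The "position save and playback" feature described above might be sketched as follows. This is a minimal illustration under our own assumptions: the class and method names, and the pan/tilt pose representation, are hypothetical and not taken from the thesis's actual controller code.

```python
class PoseController:
    """Toy sketch of a smartphone-driven robot controller: orientation
    commands are forwarded to the robot via a transport callback, and
    named poses can be saved and played back later (hypothetical API)."""

    def __init__(self, send):
        self._send = send            # callback delivering (pan, tilt) to the robot
        self._current = (0.0, 0.0)   # last commanded pose, in degrees
        self._saved = {}             # name -> saved (pan, tilt) pose

    def orient(self, pan, tilt):
        """Smartphone orientation control: command a new pose."""
        self._current = (pan, tilt)
        self._send(pan, tilt)

    def save(self, name):
        """Remember the current pose under a name."""
        self._saved[name] = self._current

    def playback(self, name):
        """Return the robot to a previously saved pose."""
        pan, tilt = self._saved[name]
        self.orient(pan, tilt)
```

In use, a remote attendee could save a pose aimed at the whiteboard, look around freely, and later snap back to it with a single `playback("whiteboard")` call.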
417

Design and Evaluation of 3D Multiple Object Selection Techniques

Lucas, John Finley 27 April 2005 (has links)
Few researchers have addressed the important issue of three-dimensional multiple object selection (MOS) in immersive Virtual Environments (VEs). We have developed a taxonomy of the MOS task as a framework for exploring the design space of these techniques. In this thesis, we describe four techniques for selecting multiple objects in immersive VEs. Of the four techniques, two are serial (where only one object can be indicated per operation), and two are parallel (where one or more objects may be indicated per operation). Within each of the two categories we also investigated two metaphors of interaction: a 3D spatial metaphor and the pen-and-tablet metaphor. Two usability studies were used to evaluate the four techniques, iterate their designs, and gain a deeper understanding of the design space of MOS techniques. The results from our studies show that parallel MOS techniques can select objects faster than serial techniques as the number of target objects increases. We also show that effective techniques for MOS in immersive VEs can be created using both pen-and-tablet and 3D metaphors. / Master of Science
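The serial/parallel distinction above can be illustrated with a toy sketch. The function names and the set-based region model are our own illustrative assumptions, not the thesis's implementation; they only show why operation count diverges as targets grow:

```python
def serial_select(targets):
    """Serial MOS: each operation indicates exactly one object,
    so the operation count equals the number of targets."""
    selection, ops = set(), 0
    for obj in targets:
        selection.add(obj)
        ops += 1
    return selection, ops

def parallel_select(targets, regions):
    """Parallel MOS: each operation indicates every target inside an
    indicated region (e.g. a swept 3D volume or a lasso on the tablet),
    so the operation count equals the number of regions indicated."""
    selection, ops = set(), 0
    for region in regions:
        selection |= set(targets) & set(region)
        ops += 1
    return selection, ops
```

Selecting six objects serially costs six operations, while two well-placed regions cover the same six objects in two, matching the thesis's finding that parallel techniques scale better with target count.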
418

Implementation of Machine Learning Algorithm for Radar-Based Hand Gesture Recognition

Haidari, Ihsan, Shen, Jiantao January 2024 (has links)
Hand gesture recognition (HGR) is the process of identifying and interpreting hand gestures to control or interact with electronic devices. In this project, a Frequency Modulated Continuous Wave (FMCW) radar-based HGR system is developed utilising Range-Doppler maps (RDMs). For this purpose, a Convolutional Neural Network (CNN) is implemented to classify different hand gestures. Each gesture fed to the network contains a maximum of 12 frames, merged into a single image with a duration of 3 seconds. The dataset for training, validation, and offline testing contains five different hand gestures along with Out-of-Distribution (OOD) samples, totalling 3235 samples. The dataset was gathered in a confined environment with two participants, at distances ranging from 0.2 m to 0.5 m. The proposed system attained accuracies of 95.91% and 95.83% during training and validation, respectively. The system was also evaluated offline, achieving an accuracy of 96.99%. One objective of this project was to incorporate real-time functionality. In real-time testing, the system achieved 95% accuracy with a prediction time of 25 ms.
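The frame-merging step (up to 12 RDM frames combined into one image for the CNN) can be sketched as below. The abstract does not specify the exact merging layout, so horizontal tiling with zero-padding for short gestures is only one plausible interpretation, and the function name is our own:

```python
import numpy as np

def merge_frames(frames, max_frames=12):
    """Merge up to `max_frames` Range-Doppler maps into one image by
    tiling them side by side (one plausible scheme; the thesis does not
    state the exact layout). Gestures with fewer frames are zero-padded
    so every merged image has the same fixed width for the CNN."""
    h, w = frames[0].shape
    canvas = np.zeros((h, w * max_frames), dtype=frames[0].dtype)
    for i, frame in enumerate(frames[:max_frames]):
        canvas[:, i * w:(i + 1) * w] = frame
    return canvas
```

A fixed-size merged image like this lets a standard image-classification CNN consume a whole 3-second gesture in a single forward pass, regardless of how many frames the gesture actually produced.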
419

System-Assisted Pharmaceutical Validation

Berglin, Rebecka January 2022 (has links)
A large share of the injuries that occur in healthcare are medication-related. This mainly affects individual patients, but it also constitutes a major cost to society. At the Uppsala University Hospital, there is currently a warning system in connection with the prescription of medicines. However, this system does not work optimally. The Uppsala Region therefore wants to develop and implement a more advanced warning system, with the hope of reducing risks related to the use of medicines. In this thesis, a prototype for the new warning system is presented. It is based on the users' needs, which have been identified by conducting interviews and observations in line with user-centered design. The prototype has been developed as part of an iterative process where different stages have been continuously evaluated for optimal user experience. The project has also identified many challenges, primarily related to the current IT systems that are used at the hospital, but also to laws and regulations that need to be considered when continuing to develop the system in the future.
420

Development of a peer-to-peer web application for sales of used course literature with focus on usability

Kujanpää, Jesper, Neij, Sofia, Hedrén, Lovisa, Åstrand, Benjamin, Dahlquist, Hugo, Simander, Olof, Baker, Oscar, Molla, Sherwan January 2024 (has links)
This bachelor thesis investigates the development of a peer-to-peer web application that focuses on improving the secondhand trade of course literature between students at Linköping University. The focus is on usability aspects such as navigability and user interface design. Through a combination of surveys, prototypes, and user tests, this study investigates the preferences and behaviors of university students concerning buying and selling used course literature. To discover insights and which strategies enhance a peer-to-peer web application for secondhand books, readers are invited to explore the extensive analysis and results detailed in this report. A prototype of the web application was developed and iteratively improved through user feedback, focusing on improving the user experience by optimizing the navigability and interface design to meet the users' expectations. Key findings include the critical role of the navigation bar and drop-down menus in user navigability, and the importance of clear, engaging introductory text on the landing page for user engagement. Design consistency was notably improved through a simplified color scheme and well-integrated search functionalities, although reducing the search feature's sensitivity to spelling errors was highlighted as a potential improvement.
