  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
551

Gaze-supported Interaction with Smart Objects through an Augmented Reality User Interface

Kaaman, Albert, Bornemark, Kalle January 2017 (has links)
Smart devices are becoming increasingly common and technologically advanced. As a result, convenient and effective approaches to everyday problems are emerging. However, a large number of interconnected devices results in systems that are difficult to understand and use. This necessitates solutions that help users interact with their devices in an intuitive and effective way. One such possible solution is augmented reality, which has become a viable commercial technology in recent years. Furthermore, tracking the position of users' eyes to navigate virtual menus is a promising approach that allows for interesting interaction techniques. In this thesis, we explore how eye and head movements can be combined in an effort to provide intuitive input to an augmented reality user interface. Two novel interaction techniques that combine these modalities slightly differently are developed. To evaluate their performance and usability, an experiment is conducted in which the participants navigate a set of menus using both our proposed techniques and a baseline technique. The results of the experiment show that out of the three evaluated techniques, the baseline is both the fastest and the least error-prone. However, participants prefer one of the proposed techniques over the baseline, and both of these techniques perform adequately. Furthermore, the quantitative data shows no measurable differences between the proposed techniques, although one of them receives a higher score in the subjective evaluations.
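The abstract does not detail the two proposed techniques, but dwell-time selection is a widely used baseline for gaze-driven menus of this kind: an item fires once the gaze has rested on it for a fixed interval. A minimal sketch for illustration only; the `DwellSelector` class, its per-frame `update` API, and the timings are assumptions, not the thesis's implementation:

```python
from typing import Optional

class DwellSelector:
    """Dwell-time selection: fire when gaze rests on one item long enough.
    A common baseline for gaze-driven menus; all values are illustrative."""

    def __init__(self, dwell_time: float = 0.8) -> None:
        self.dwell_time = dwell_time
        self._target: Optional[str] = None
        self._elapsed = 0.0

    def update(self, gazed_item: Optional[str], dt: float) -> Optional[str]:
        """Call once per frame with the currently gazed item (or None)."""
        if gazed_item != self._target:        # gaze moved: restart the timer
            self._target = gazed_item
            self._elapsed = 0.0
            return None
        if gazed_item is None:
            return None
        self._elapsed += dt
        if self._elapsed >= self.dwell_time:  # dwell threshold reached
            self._elapsed = 0.0
            return gazed_item                 # fire the selection
        return None
```

A head-plus-eye variant would differ mainly in how `gazed_item` is resolved (e.g. head ray picks the menu, eye ray picks the item), while the dwell logic stays the same.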
552

Development of a User Interface for Remote Driving of Vehicles

Kronberg, Gabriel, Andersson, Marcus January 2020 (has links)
Remote driving is essential for the transition from manually steered vehicles to autonomous vehicles. Autonomous vehicles may not be able to handle difficult traffic situations at first, and in such situations a human operator needs to intervene and drive remotely. To avoid further complications, it is crucial that the user interface for remote driving is intuitive in order to maintain safety. When taking the step from humans driving in the vehicle to the vehicle driving itself, there are a few bumps in the road. One major problem is what happens if the vehicle finds itself in a situation that it has not yet been programmed to understand. The purpose of this project is therefore to develop the missing link. A user interface with torque feedback, based on vehicle dynamics and steering servo design, is developed in Python in order to obtain an intuitive driving experience for the user. The performance of the developed user interface is evaluated by means of an experiment in which a test course is set up and then driven by test subjects. The conclusions that can be drawn from this project are that a user can maneuver a vehicle with a separate set of steering wheel and pedals, and that doing so is more intuitive when torque feedback is turned on. / Bachelor's thesis in electrical engineering 2020, KTH, Stockholm
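The abstract does not give the project's feedback law, but a common starting point for steering-wheel force feedback is a speed-dependent spring-damper (self-aligning) torque: a centering term that stiffens with vehicle speed plus a damping term on the steering rate. A sketch under that assumption; the function and all gains are illustrative placeholders, not the project's tuned values:

```python
def steering_torque(angle: float, rate: float, speed: float,
                    k: float = 1.2, c: float = 0.15,
                    v_gain: float = 0.05) -> float:
    """Self-aligning torque for a force-feedback steering wheel.

    angle : steering angle [rad], positive = left
    rate  : steering angular velocity [rad/s]
    speed : vehicle speed [m/s]; centering stiffness grows with speed,
            mimicking the self-aligning moment of the tires.
    Returns the torque [N*m] to command on the wheel motor.
    """
    stiffness = k * (1.0 + v_gain * speed)  # stiffer centering at high speed
    return -(stiffness * angle + c * rate)  # spring + damper, opposing motion
```

In a real remote-driving loop this would run at the control rate, with `speed` streamed back from the vehicle over the network link.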
553

Testing Privacy and Security of Voice Interface Applications in the Internet of Things Era

Shafei, Hassan, 0000-0001-6844-5100 04 1900 (has links)
Voice User Interfaces (VUI) are rapidly gaining popularity, revolutionizing user interaction with technology through widespread adoption in devices such as desktop computers, smartphones, and smart home assistants, thanks to significant advancements in voice recognition and processing technologies. Over a hundred million users now utilize these devices daily, and smart home assistants have sold in massive numbers, owing to their ease and convenience in controlling a diverse range of smart devices within the home IoT environment through the power of voice, such as controlling lights and heating systems and setting timers and alarms. VUI enables users to interact with IoT technology and issue a wide range of commands across various services using their voice, bypassing traditional input methods like keyboards or touchscreens. With ease, users can inquire in natural language about the weather, the stock market, and online shopping, and access various other types of general information. However, as VUI becomes more integrated into our daily lives, it brings to the forefront issues related to security, privacy, and usability. Concerns such as the unauthorized collection of user data, the potential for recording private conversations, and challenges in accurately recognizing and executing commands across diverse accents, leading to misinterpretations and unintended actions, underscore the need for more robust methods to test and evaluate VUI services. In this dissertation, we delve into voice interface testing: evaluating the privacy and security of VUI applications, assessing the proficiency of VUI in handling diverse accents, and investigating access control in multi-user environments. We first study privacy violations in the VUI ecosystem. We introduce the definition of the VUI ecosystem, in which users must connect voice apps to corresponding services and mobile apps for them to function properly.
The ecosystem can also involve multiple voice apps developed by the same third-party developer. We explore the prevalence of voice apps with corresponding services in the VUI ecosystem, assessing the landscape of privacy compliance among Alexa voice apps and their companion services, and we develop a testing framework for this ecosystem. We present the first study conducted on the Alexa ecosystem that specifically focuses on voice apps with account linking. Our framework analyzes both the privacy policies of these voice apps and their companion services, as well as the privacy policies of multiple voice apps published by the same developers. Using machine learning techniques, the framework automatically extracts data types related to data collection and sharing from these privacy policies, allowing for a comprehensive comparison. Next, we turn to how voice apps' behavior is studied to conduct privacy violation assessments. Extracting this behavior requires interacting with the voice apps: pre-defined utterances are input into a simulator to simulate user interaction, with the set of utterances extracted from the skill's web page on the skill store. The accuracy of the analysis therefore depends on the quality of the extracted utterances: an utterance or interaction not captured by the extraction process will not be detected, leading to an inaccurate privacy assessment. We therefore revisit the utterance extraction techniques used by prior work to study skills' behavior for privacy violations, analyzing the effectiveness and limitations of existing extraction techniques. We propose a new technique that improves on prior extraction techniques by taking their union and adding human interaction: a small set of human interactions records the missing utterances, which is then expanded to test a more extensive set of voice apps.
We also test VUI across various accents by designing a testing framework that evaluates how well the VUI implemented in smart speakers caters to a diverse population. Recruiting individuals with different accents and instructing them to interact with a smart speaker while adhering to specific scripts is difficult, so we propose AudioAcc, a framework that facilitates evaluating VUI performance across diverse accents using YouTube videos. The framework uses a filtering algorithm to ensure that the extracted spoken words used in constructing composite commands closely resemble natural speech patterns. The framework is scalable; we conducted an extensive examination of VUI performance across a wide range of accents, encompassing both professional and amateur speakers. Additionally, we introduce a new metric called Consistency of Results (COR) to complement the standard Word Error Rate (WER) metric employed for assessing ASR systems; this metric enables developers to investigate and rewrite skill code based on the consistency of results, enhancing overall WER performance. Moreover, we look into a special case related to the access control of VUI in multi-user environments. We propose a framework for automated testing that explores access control weaknesses to determine whether the accessible data is of consequence, and we use it to assess the effectiveness of voice access control mechanisms within multi-user environments. We show that the convenience of using voice systems poses privacy risks, as users' sensitive data becomes accessible. We identify two significant flaws within the access control mechanisms proposed by the voice system that can expose users' private data. These findings underscore the need for enhanced privacy safeguards and improved access control systems within online shopping.
We also offer recommendations to mitigate risks associated with unauthorized access, shedding light on securing the user's private data within the voice systems. / Computer and Information Science
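The WER metric the dissertation builds on is standard and easy to state: the word-level Levenshtein (edit) distance between the reference and hypothesis transcripts, normalized by reference length. A minimal sketch of that computation (not the dissertation's code; the COR metric is its own contribution and is not reproduced here):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,           # substitution (or match)
                          d[i - 1][j] + 1,  # deletion
                          d[i][j - 1] + 1)  # insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Because WER normalizes by the reference length, it can exceed 1.0 when the hypothesis contains many spurious insertions.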
554

NeuroGaze in Virtual Reality: Assessing an EEG and Eye Tracking Interface against Traditional Virtual Reality Input Devices

Barbel, Wanyea 01 January 2024 (has links) (PDF)
NeuroGaze is a novel Virtual Reality (VR) interface that integrates electroencephalogram (EEG) and eye tracking technologies to enhance user interaction within virtual environments (VEs). Diverging from traditional VR input devices, NeuroGaze allows users to select objects in a VE through gaze direction and cognitive intent captured via EEG signals. The research assesses the performance of the NeuroGaze system against conventional input devices such as VR controllers and eye gaze combined with hand gestures. The experiment, conducted with 20 participants, evaluates task completion time, accuracy, cognitive load through the NASA-TLX surveys, and user preference through a post-evaluation survey. Results indicate that while NeuroGaze presents a learning curve, evidenced by longer average task durations, it potentially offers a more accurate selection method with lower cognitive load, as suggested by its lower error rate and significant differences in the physical demand and temporal demand NASA-TLX subscale scores. This study highlights the viability of incorporating biometric inputs for more accessible and less demanding VR interactions. Future work aims to explore a multimodal EEG-functional near-infrared spectroscopy (fNIRS) input device, further develop machine learning models for EEG signal classification, and extend system capabilities to dynamic object selection, highlighting the progressive direction for the use of Brain-Computer Interfaces (BCI) in virtual environments.
555

Near-eye GUI in VR

Nauclér Löfving, Johan January 2023 (has links)
VR systems are rapidly becoming more advanced, soon making the Vergence-Accommodation Conflict (VAC) a non-issue. The prospect of having commercially available VR systems without VAC issues invites designers and academics to consider what impact this may have on GUI design in VR. In an effort to provide a foundation for future studies on GUI in VR, this paper explores how a near-eye GUI affects the player experience and suggests what areas should be further studied based on the results.
556

Cleared for Takeoff

Berglin, Rebecka January 2024 (has links)
This thesis project, conducted in collaboration with Scandinavian Airlines (SAS), investigates how safety-critical internal systems can be designed to enhance usability and user experience through an examination of the Aerodrome Approval system at SAS. Employing a research-through-design approach and utilizing heuristic evaluations, semi-structured interviews, contextual inquiries, and a redesign process, several guidelines for improving usability and user experience have been identified. Key insights reveal that optimizing login functionalities can enhance security and role-specific access, thereby reducing errors and improving the user experience. Consistency in design elements and adherence to standards play a critical role in usability, aiding in error prevention and improving system navigation efficiency. Additionally, effective strategies for error prevention, such as contextual warnings tailored to specific conflicts, help maintain workflow efficiency and prevent user fatigue, whereas ensuring a balanced and timely presentation of information is essential to prevent information overload while still ensuring access to critical data. The project illustrates how multiple usability principles are interconnected yet sometimes conflicting and emphasizes the need to further investigate safety-critical internal systems to a broader extent to be able to identify more generalizable design guidelines in the future.
557

Exploration of Context-Aware Application Authoring Leveraging Artificial Intelligence

Fengming He (18848125) 24 June 2024 (has links)
Recognition of human behavior and object status plays a significant role in context-aware applications. Although researchers have explored methods to detect users' activities, there still exists a research gap where end-users are not able to build personalized applications that accurately recognize their activities in the physical environment. Therefore, in this thesis, to fill the gap, we explore different in-situ context-aware Augmented Reality (AR) applications that leverage hand-object recognition and support users to rapidly author context-aware applications by referring to their activities. We first explore the possibility that end-users develop AR applications with customized freehand interactions and introduce GesturAR, an end-to-end authoring tool that supports users to create freehand AR applications through embodied demonstration and visual programming. Next, we explore hand interactions with physical objects and propose ARnnotate, an AR system enabling end-users to create custom data. To further study context-aware applications with haptic feedback, we present Ubi Edge, an AR authoring tool that allows end-users to customize edges on daily objects as tangible user interface (TUI) inputs to control varied digital functions. We develop an integrated AR device and a vision-based detection workflow to track 3D edges and detect the touch interaction between fingers and edges. Finally, to enable end-users to apply AR applications in different physical environments, we propose AdapTUI, a system that can automatically adapt the geometric-based TUIs in various environments. In summary, our contribution lies in the development of several context-aware AR applications that effectively sense users' activities and facilitate users' authoring process intuitively and conveniently.
558

Creation of a Time-Series Data Cleaning Toolbox

Kovács, Márton January 2024 (has links)
A significant drawback of currently used data cleaning methods is their reliance on domain knowledge or a background in data science, and with the vast number of possible solutions to this problem, the data cleaning step may be entirely foregone when developing a machine learning (ML) model. Since skipping this stage altogether results in lower performance for ML models, a general-purpose time-series data cleaning user interface (UI) was developed in Python [1], with a target user base of people unfamiliar with data cleaning. Following development, the UI was tested on time-series datasets available in online repositories, and a comparison was carried out of the estimation performance of ML models trained on the original datasets versus models trained on datasets cleaned through the UI. This comparison showed that use of the UI can result in significant improvements to the performance of ML models; however, the degree of said improvement is highly dataset dependent.
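The abstract does not enumerate the toolbox's cleaning methods, but a typical building block behind such a UI is outlier masking followed by interpolation of the resulting gaps. A minimal sketch with pandas; the function name and thresholds are illustrative assumptions, not the thesis's implementation:

```python
import pandas as pd

def clean_series(s: pd.Series, z_thresh: float = 3.0) -> pd.Series:
    """Mask points beyond z_thresh standard deviations from the mean,
    then fill all gaps (masked points and pre-existing NaNs) by
    linear interpolation."""
    mu, sigma = s.mean(), s.std()
    outliers = (s - mu).abs() > z_thresh * sigma
    masked = s.mask(outliers)                       # outliers -> NaN
    return masked.interpolate(limit_direction="both")
```

A real toolbox would let the user choose the detection rule (z-score, IQR, rolling median, ...) and the fill strategy per column, which is exactly the kind of decision the UI is meant to take off the user's hands.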
559

Communicating a Child's Learning Progress in a Digital Interface

Poucette, Petter January 2020 (has links)
The educational digital games are both used at school and at home by children. However, if used at home, how should parent's be able to tell a child learning progress in the game when they have limited knowledge in children's education? The purpose of this thesis is enhance the knowledge on how to design a digital interface that communicates the learning progress of a child to its parents. This by establishing a suggested framework on this subject. To be able to create the framework the Design thinking method were used. First suggestions, based on literature and interviews with teachers, for the design and framework were created. Then a Hight fidelity prototype of the digital interface based in the suggestions were designed. And lastly the prototype where tested in a Usability test. The usability test showed what in the suggestions that worked. The parts that worked were summarized in the suggested framework that the study where successful in creating. The framework contained both suggestion in how to visually represent a learning progress and how to communicate it. / Pedagogiska digitala spel används både i skolan och hemma av barn. Men om de används hemma, hur ska föräldrar kunna förstå vad ett barns lärandeutveckling i spelet när de har begränsad kunskap om barns utbildning? Syftet med denna uppsats är att öka kunskapen om hur man utformar ett digitalt gränssnitt som kommunicerar ett barns kunskapsutveckling till sina föräldrar. Detta genom att fastställa en föreslagen ram för detta ämne. För att kunna skapa ramverket användes Design thinking metoden. Först skapades förslag till design och ramverk, baserade på litteratur och intervjuer med lärare. Sedan designades en High fidelity prototyp av det digitala gränssnittet baserat på förslagen. Och slutligen testades prototypen i ett användbarhetstest. Användbarhetstestet visade vad i förslagen som fungerade. Delarna som fungerade sammanfattades i det föreslagna ramverket som studien lyckades skapa. 
Ramverket innehöll både förslag på hur man visuellt ska representera en inlärningsframsteg och hur man kommunicerar den.
560

Uma abordagem baseada em modelos para construção automática de interfaces de usuário para Sistemas de Informação / A model-based approach to user interfaces automatic building for information systems

COSTA, Sofia Larissa da 15 June 2011 (has links)
Building user interfaces for Information Systems (IS) involves modeling and coding appearance (presentation) and behavioral (interaction) aspects. This work presents a model-based approach to building these interfaces using tools for automatic model transformation and interface code generation. The proposed approach applies the concept of Interface Stereotype, introduced in this work, which identifies, at a high level of abstraction, features of user interface (UI) appearance and behavior, independently of the underlying IS application. A taxonomy of interface elements is proposed as the basis for stereotype definition, along with an interface behavior specification mechanism, which allows expressing actions and restrictions on the stereotypes in a precise, objective way, independently of the interface implementation platform. An architecture is also proposed for a software component that manages model-based user interface building; the architecture defines how this component can be integrated into the IS development process. The model-based approach proposed in this work brings benefits in terms of construction effort and cost, facilitating the maintenance and evolution of IS user interfaces. Furthermore, the use of stereotypes promotes consistency and standardization of both the presentation and the behavior of interfaces, improving the usability of IS.
