131

Interactive Machine Learning for Refinement and Analysis of Segmented CT/MRI Images

Sarigul, Erol 07 January 2005
This dissertation concerns the development of an interactive machine learning method for refinement and analysis of segmented computed tomography (CT) images. The method uses higher-level, domain-dependent knowledge to improve initial image segmentation results. A knowledge-based refinement and analysis system requires the formulation of domain knowledge, and a serious problem faced by knowledge-based system designers is the knowledge acquisition bottleneck. Knowledge acquisition is challenging and remains an active research topic in machine learning and artificial intelligence. Commonly, a knowledge engineer must work with a domain expert to formulate acquired knowledge for use in an expert system. That process is tedious and error-prone: the domain expert's verbal description can be inaccurate or incomplete, and the knowledge engineer may not correctly interpret the expert's intent. In many cases, domain experts prefer to demonstrate their expertise through actions rather than explain it verbally. These problems motivated us to find another way to make knowledge acquisition less challenging. Instead of trying to acquire expertise from a domain expert verbally, we can ask the expert to show that expertise through actions that can be observed by the system. When the system can learn from those actions, the approach is called learning by demonstration. We have developed a system that learns region refinement rules automatically. The system observes the steps taken as a human user interactively edits a processed image, and then infers rules from those actions. During the system's learn mode, the user views labeled images and makes refinements with a keyboard and mouse. As the user manipulates the images, the system stores information related to those manual operations and develops internal rules that can be used later for automatic postprocessing of other images. After one or more training sessions, the user places the system into its run mode. The system then accepts new images and uses its rule set to apply postprocessing operations automatically, in a manner modeled after those learned from the human user. At any time, the user can return to learn mode to introduce new training information, which the system uses to update its internal rule set. The system does not simply memorize a particular sequence of postprocessing steps during a training session; instead, it generalizes from the image data and from the actions of the human user so that new CT images can be refined appropriately. Experimental results have shown that IntelliPost improves the segmentation accuracy of the overall system by applying its learned postprocessing rules. In tests on two different CT datasets of hardwood logs, IntelliPost yielded accuracy improvements of 1.92% and 9.45%, respectively. For two different medical datasets, it yielded improvements of 4.22% and 0.33%, respectively. / Ph. D.
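As a rough illustration of the learning-by-demonstration idea described above (the abstract does not publish IntelliPost's internals, so every name, feature, and tolerance below is hypothetical), a rule learner might generalize each observed relabeling action into a rule that matches a tolerance band around the demonstrated region's features, rather than memorizing exact values:

```python
# Hypothetical sketch of learning-by-demonstration for region refinement.
# None of these names come from the dissertation.
from dataclasses import dataclass

@dataclass
class RegionFeatures:
    label: str             # class assigned by the initial segmentation
    area: float            # region size in pixels
    mean_intensity: float  # average gray level of the region

@dataclass
class EditAction:
    features: RegionFeatures  # features of the region the user edited
    new_label: str            # label the user assigned during refinement

def learn_rules(actions, tol=0.2):
    """Learn mode: generalize each observed edit into a tolerance-band rule."""
    rules = []
    for act in actions:
        f = act.features
        rules.append({
            "old_label": f.label,
            "area": (f.area * (1 - tol), f.area * (1 + tol)),
            "intensity": (f.mean_intensity * (1 - tol),
                          f.mean_intensity * (1 + tol)),
            "new_label": act.new_label,
        })
    return rules

def apply_rules(region: RegionFeatures, rules):
    """Run mode: relabel a region if any learned rule matches its features."""
    for r in rules:
        if (region.label == r["old_label"]
                and r["area"][0] <= region.area <= r["area"][1]
                and r["intensity"][0] <= region.mean_intensity <= r["intensity"][1]):
            return r["new_label"]
    return region.label  # no rule fired; keep the original label
```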
132

A Taxonomy of Usability Characteristics in Virtual Environments

Gabbard, Joseph L. 18 December 1997
Despite intense and widespread research in both virtual environments (VEs) and usability, the exciting new technology of VEs has not yet been closely coupled with the important characteristic of usability --- a necessary coupling if VEs are to reach their full potential. Although numerous methods exist for usability evaluation of interactive computer applications, these methods have well-known limitations, especially for evaluating VEs. Thus, there is a great need to develop usability evaluation methods and criteria specifically for VEs. Our goal is to increase awareness of the need for usability engineering of VEs and to lay a scientific foundation for developing high-impact methods for usability engineering of VEs. The first step in our multi-year research plan has been accomplished, yielding a comprehensive multi-dimensional taxonomy of usability characteristics specifically for VEs. This taxonomy was developed by collecting and synthesizing information from the literature, conferences, World Wide Web (WWW) searches, investigative research visits to top VE facilities, and interviews with VE researchers and developers. The taxonomy covers four main areas of usability issues: Users and User Tasks in VEs, The Virtual Model, VE User Interface Input Mechanisms, and VE User Interface Presentation Components. Each of these areas is progressively disclosed and presented at various levels of detail, including specific usability suggestions and context-driven discussion that includes a number of references. The taxonomy is a thorough classification, enumeration, and discussion of usability issues in VEs that can be used by VE researchers and developers for usability assessment or for design. The author can be reached through http://csgrad.cs.vt.edu/~jgabbard/ / Master of Science
133

Feed Me: an in-situ Augmented Reality Annotation Tool for Computer Vision

Ilo, Cedrick K. 02 July 2019
The power of today's technology has enabled the combination of Computer Vision (CV) and Augmented Reality (AR), allowing users to interact with digital artifacts during both indoor and outdoor activities. For example, AR systems can feed images of the local environment to a trained neural network for object detection. However, these algorithms sometimes misclassify an object. In such cases, users want to correct the model's misclassification by adding labels to unrecognized objects or by re-classifying recognized objects. Depending on the number of corrections, in-situ annotation may be a tedious activity for the user. This research focuses on how in-situ AR annotation can aid CV classification and on which combinations of voice and gesture techniques are efficient and usable for this task. / Master of Science / The power of today's technology has enabled new fields such as computer vision and augmented reality to work together seamlessly. Computer vision excites computer scientists because it can enable a computer to see the world as humans do. With the rising popularity of Niantic's Pokemon Go, augmented reality has become a research area that researchers around the globe are working to make more stable and as useful as its close relative, virtual reality. For example, augmented reality can help users gain a better understanding of their environment by overlaying digital content into their field of view. Combining computer vision with augmented reality can aid the user further by detecting, registering, and tracking objects in the environment. However, a computer vision algorithm can sometimes falsely detect an object in a scene. In such cases, we wish to use augmented reality as a medium to update the computer vision object detection algorithm in-situ, meaning in place. With this idea, a user can annotate all the objects within the camera's view that were not detected by the object detection model and correct any inaccurate classifications. This research primarily focuses on visual feedback for in-situ annotation and on the user experience of the Feed Me voice and gesture interface.
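As a sketch of what in-situ correction data might look like (the thesis does not specify Feed Me's data structures, so the classes and fields below are assumptions), an annotation session could record both relabelings of detected objects and labels for objects the model missed entirely:

```python
# Assumed data structures for recording in-situ corrections; the thesis does
# not specify Feed Me's internals.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    box: tuple        # (x, y, w, h) in image coordinates
    label: str        # class predicted by the object detector
    confidence: float

@dataclass
class Correction:
    box: tuple
    old_label: Optional[str]  # None when the object was missed entirely
    new_label: str            # label supplied by the user via voice or gesture

class AnnotationSession:
    """Collects user corrections so the detector can later be retrained."""
    def __init__(self):
        self.corrections: List[Correction] = []

    def relabel(self, det: Detection, new_label: str):
        # The model detected an object, but the user disagrees with the class.
        self.corrections.append(Correction(det.box, det.label, new_label))

    def add_missed(self, box: tuple, label: str):
        # The model failed to detect this object at all.
        self.corrections.append(Correction(box, None, label))
```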
134

GUCCI: Ground station Uplink Command and Control Interpreter

Kedia, Namrata Rajiv 01 August 2016
For a successful CubeSat mission, it is imperative to schedule events in a fashion that generates the maximum amount of useful science data. Intuitive uplink commanding software is required for the Lower Atmosphere/Ionosphere Coupling Experiment (LAICE) CubeSat to ensure the best results, and the ground station uplink software was created with this aim in mind. It makes the operation center for the LAICE project more efficient, and it also helps evaluate the effect of a particular schedule on the LAICE instrument interface board (LIIB) before the commands are sent to it. An interactive user interface (UI) makes the entire process intuitive and guides the user in creating an uplink schedule without human error. The control software creates the command sequence taking into account all the limitations and specifications of the systems and instruments on LAICE. These data are backed up in an efficient format in Virginia Tech's database for future processing. This web-based application ensures a smooth scheduling process without errors. Assistive flight-ready software on the LAICE CubeSat's flight computer uploads the correct uplink sequence to the LIIB. / Master of Science
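To illustrate the kind of constraint checking such control software performs before uplink (the actual LAICE limits and command format are not given in the abstract, so the power budget and fields below are invented), a scheduler might reject command sequences whose time-overlapping commands exceed a power budget:

```python
# Invented example of pre-uplink constraint checking; the real LAICE limits
# and command format are not given in the abstract.
from dataclasses import dataclass

@dataclass
class Command:
    instrument: str
    start: int      # seconds from the schedule epoch
    duration: int   # seconds
    power_w: float  # instrument power draw while active

MAX_BUS_POWER_W = 10.0  # hypothetical spacecraft power budget

def validate_schedule(commands):
    """Reject schedules whose time-overlapping commands exceed the budget."""
    events = sorted(commands, key=lambda c: c.start)
    for i, a in enumerate(events):
        load = a.power_w
        for b in events[i + 1:]:
            if b.start >= a.start + a.duration:
                break  # b starts after a has finished; no overlap
            load += b.power_w
        if load > MAX_BUS_POWER_W:
            return False, f"power budget exceeded at t={a.start}s"
    return True, "schedule ok"
```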
135

The Effects of Multimedia Interface Design on Original Learning and Retention

Ramsey, Theresa D. 11 December 1996
The goal of this research was to compare the learning outcomes of three methods of instruction: a text-based instructional system and two multimedia systems. The two multimedia systems used different interface designs. The first used a topic-oriented interface, which is fairly standard in multimedia design. The second presented a problem-solving context and simulated an industrial setting in which the user played the role of an industrial engineer. All three methods presented analogous information about Time Study Analysis, a work measurement technique used by industrial engineers. A between-subjects experimental design with two independent measures examined two domains of learning: verbal information and intellectual skills. This design was used across two sessions to examine the original-learning and retention components of learning. Original learning was measured immediately following the instructional treatment; retention was measured two weeks after treatment. Thirty subjects with similar backgrounds (undergraduates in Industrial and Systems Engineering) participated in the experiment's two sessions. Post-tests measured the verbal information and intellectual skills domains of learning during each session, and a combined score for both domains was calculated. The scores were analyzed using analysis of variance (ANOVA). No significant differences were found among the three instructional methods for the two domains or the combined score during either the original-learning session or the retention session. / Master of Science
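The analysis described above corresponds to a one-way ANOVA across the three instructional methods; a minimal sketch using SciPy, with fabricated placeholder scores rather than the thesis data, looks like this:

```python
# One-way ANOVA over the three instructional methods; scores are fabricated
# placeholders, not the thesis data.
from scipy.stats import f_oneway

text_based = [72, 68, 75, 70, 66, 71, 74, 69, 73, 70]
topic_ui   = [74, 70, 72, 69, 75, 71, 68, 73, 70, 72]
problem_ui = [73, 71, 69, 74, 70, 72, 75, 68, 71, 73]

f_stat, p_value = f_oneway(text_based, topic_ui, problem_ui)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 would mirror the thesis finding of no significant difference.
```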
136

The Rising Pitch Metaphor: An Empirical Study.

Rigas, Dimitrios I., Alty, James L. January 2005
This paper describes a set of experiments that investigated the use of rising-pitch note sequences to communicate graphical information to visually impaired users. The information communicated in the experiments included coordinate locations within a 40×40 graphical grid, the navigation of an auditory cursor within the grid, and the communication of simple graphical shapes and their sizes. The five simple shapes were rectangles, squares, circles, horizontal lines, and vertical lines. Stereophony, timbre, rhythms, and short tunes were used in addition to the rising pitch metaphor to aid disambiguation. Results suggested that the rising pitch approach enabled visually impaired users to understand the communicated graphical information in the absence of any visual aid. The paper concludes with a discussion of the use of the rising pitch metaphor to communicate graphical information.
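As a minimal sketch of the rising-pitch mapping (the paper's actual base pitch and note spacing are not stated here, so the values below are assumptions), a grid coordinate can be rendered as that many ascending notes, converting MIDI note numbers to frequencies:

```python
# Hypothetical rising-pitch encoding of a grid coordinate.
BASE_MIDI = 48  # C3; the paper's actual base pitch is an assumption here

def midi_to_hz(note: int) -> float:
    """Standard equal-temperament conversion from MIDI note number to Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def coordinate_to_pitches(value: int, step: int = 1):
    """Encode a coordinate (1-40) as `value` ascending note frequencies."""
    return [midi_to_hz(BASE_MIDI + i * step) for i in range(value)]

# A point at column 5 would be played as five rising notes:
print([round(f, 1) for f in coordinate_to_pitches(5)])
```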
137

Conversational Generative AI Interface Design: Exploration of a hybrid Graphical User Interface and Conversational User Interface for interaction with ChatGPT

Ribeiro, Renato January 2024 (has links)
This study explores the motivations, challenges, and design opportunities associated with using ChatGPT. The research employs a user-centred design approach to understand user interactions with ChatGPT and to propose design concepts. Key motivations for using ChatGPT include its practical utility, its ability to provide personalized answers, its assistive capabilities, and its role as an idea-sparring partner. However, users face challenges such as navigating large amounts of text, understanding how to prompt effectively, and dealing with ChatGPT's lack of nuanced understanding. Consequently, this project proposes a redesign incorporating interactive features and graphical user interface changes to address these challenges. The findings suggest that the proposed concepts could significantly improve navigation and glanceability and facilitate the overviewing of past interactions. This research contributes to the field of interaction design by providing insights into the use of conversational generative AI and by suggesting improvements for future applications.
138

A Serious Game for Children with Autism Spectrum Disorder

Ornelas Barajas, Alejandra January 2017 (has links)
In this thesis, we propose a Serious Game (SG) for children with Autism Spectrum Disorder (ASD) that builds on the concept of LEGO®-Based Therapy, which aims to improve social and cognitive skills. The proposed SG is composed of building blocks augmented with electronic modules that connect to a computing device providing visual feedback. We investigate the effects of the proposed computer SG by comparing it to a non-computer block game in two empirical studies, one following an unstructured play approach and a second with structured play in which roles were assigned to the players. In the first study, the proposed system showed an improvement in social interaction, collaborative play, and exercise performance, as well as a decrease in solitary play. In the second study, the proposed system showed an improvement in social interaction, positive vocalizations, and exploratory behavior, and there was a marked preference for the proposed game. Furthermore, we observed a decrease in the assistance needed when using the proposed system during both studies. Our results suggest that the proposed system can be a useful play therapy tool for young children with ASD.
139

Player-Driven UI Design for FPS-Games

Flensburg, Allan, Nilsson, Simon January 2020 (has links)
This paper explores the appeal of customizable user interfaces (UIs) in video games and the choices players make when this option is available to them. In the video game industry at present, players aren't given much choice with regard to the UI, even though it is usually a vital element that supports them throughout their whole experience. To determine the value of customizable UIs, players were given a testing environment with tools that allowed them to modify their UI, and quantitative data were collected during this test. A qualitative study was also conducted, focusing on players' attitudes toward the subject. The results show strong support for UI customization among players. They also show, however, that players are split on several aspects of the topic, and further research is required. These findings can hopefully lead to developers adopting more user experience (UX) practices and implementing UI customization in their games.
140

Rearticulating the Zoomable User Interface

Simoneaux, Brent A. 16 August 2011
No description available.
