71 |
Hybrid and Coordinated 3D Interaction in Immersive Virtual Environments
Wang, Jia, 29 April 2015
Through immersive stereoscopic displays and natural user interfaces, virtual reality (VR) can offer the user a sense of presence in the virtual space, and has long been expected to revolutionize how people interact with virtual content in various application scenarios. However, although many technical challenges have been solved over the last three decades to bring low cost and high fidelity to VR experiences, we still do not see VR technology used frequently in many seemingly suitable applications. Part of the reason is the lack of expressiveness and efficiency of traditional “simple and reality-based” 3D user interfaces (3DUIs). The challenge is especially obvious when complex interaction tasks with diverse requirements are involved, such as editing virtual objects from multiple scales, angles, perspectives, reference frames, and dimensions. A common approach to overcoming such problems is a hybrid user interface (HUI) system that combines complementary interface elements to leverage their strengths.

Based on this method, the first contribution of this dissertation is Force Extension, an interaction technique that seamlessly integrates position-controlled touch and rate-controlled force input for efficient multi-touch interaction in virtual environments. Using carefully designed mapping functions, it offers fluid transitions between the two contexts and realistically simulates shear-force input for multi-touch gestures.

The second contribution extends the HUI concept into immersive VR by introducing a Hybrid Virtual Environment (HVE) level-editing system that combines a tablet and a Head-Mounted Display (HMD). The HVE system improves user performance and experience in complex high-level world-editing tasks by using a “World-In-Miniature” and a 2D GUI rendered on a multi-touch tablet to compensate for the interaction limitations of a traditional HMD- and wand-based VR system. The concept of Interaction Context (IC) is introduced to explain the relationship between tablet interaction and immersive interaction, and four coordination mechanisms are proposed to keep the perceptual, functional, and cognitive flow continuous during IC transitions.

To offer intuitive and realistic interaction experiences, most immersive 3DUIs are centered on the user’s virtual avatar and obey the same physics rules as the real world. However, this design paradigm also imposes unnecessary limitations that hinder certain tasks, such as selecting objects in cluttered space, manipulating objects in six degrees of freedom, and inspecting remote spaces. The third contribution of this dissertation is the Object Impersonation technique, which breaks the common assumption that the user can be immersed in the VE only through a single avatar, and instead allows the user to impersonate objects in the VE and interact from their perspectives and reference frames. This hybrid of avatar- and object-based interaction blurs the line between travel and object selection, creating a unique cross-task interaction experience in the immersive environment. Many traditional 3DUIs in immersive VR use simple and intuitive interaction paradigms derived from real-world metaphors, but these can be just as limiting and ineffective as in the real world.
Using the coordinated HUI or HVE systems presented in this dissertation, one can benefit from the complementary advantages of multiple heterogeneous interfaces (Force Extension), VE representations (HVE Level Editor), and interaction techniques (Object Impersonation). This advances traditional 3D interaction into the more powerful hybrid space, and allows future VR systems to be applied in more application scenarios to provide not only presence, but also improved productivity in people’s everyday tasks.
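As a rough illustration of the hybrid input mapping that Force Extension describes, the sketch below blends a position-controlled touch displacement with a rate-controlled term driven by finger force. It is a minimal sketch: the force threshold, gain, and frame time are assumed constants for illustration, not values from the dissertation.

```python
# Sketch of a hybrid position/rate input mapping in the spirit of Force
# Extension. All constants are illustrative assumptions, not values from
# the dissertation.

FORCE_THRESHOLD = 2.0   # normal force (N) at which rate control kicks in (assumed)
RATE_GAIN = 0.05        # metres per second per newton of excess force (assumed)


def hybrid_displacement(touch_delta_m: float, normal_force_n: float, dt: float) -> float:
    """Map one frame of touch input to a virtual displacement.

    Below the force threshold, the finger's own displacement is used
    directly (position control); above it, the excess force drives a
    continuous velocity (rate control), so the two regimes blend
    smoothly instead of switching abruptly.
    """
    displacement = touch_delta_m                        # position-controlled part
    excess = max(0.0, normal_force_n - FORCE_THRESHOLD)
    displacement += RATE_GAIN * excess * dt             # rate-controlled part
    return displacement


# Light press: pure position control; hard press adds rate-driven motion.
print(hybrid_displacement(0.01, 1.0, 1 / 60))   # 0.01
print(hybrid_displacement(0.01, 6.0, 1 / 60))   # 0.01 + 0.05 * 4 / 60
```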
|
72 |
Leveraging User Testing to Address Learnability Issues for Teachers Using ASSISTments
Bodah, Joshua, 19 April 2013
The goal of this thesis is to demonstrate how user testing can be used to identify and remediate learnability issues of a web application. Experimentation revolved around ASSISTments (www.assistments.org), an intelligent tutoring web application in which teachers create virtual classrooms where they can assign problem sets to their students and gain valuable data that can be used to make informed decisions. Recent log analysis uncovered very low task completion rates for new users on tasks that were intended to be trivial. Suspecting that this could be due to poor user interface design, we conducted user tests to help identify usability problems. Sessions were analyzed, and changes were made between user tests to address the issues found. Feedback from user testing led to the implementation of an embedded support system, consisting of a splash page that gave an overview of how the system should be used and a collection of context-sensitive tooltips that instructed the user on what to do and explained various parts of the interface. A randomized controlled trial was performed to measure the effectiveness of the embedded support: sixty-nine participants were shown one of two interfaces, one with embedded support and one without, and task completion rates were analyzed for each group. We found that the support system was able to influence which links a user clicked. However, although the support system was intended to address poor task completion rates, users had similar task completion rates regardless of whether the support system was enabled.
|
73 |
Tilt and Multitouch Input for Tablet Play of Real-Time Strategy Games
Flanagan, Nevin, 09 April 2014
We are studying the use of tilt-enabled handheld touchscreen devices as an interface for top-down strategy games, exploring how the two input modes (tilt and touch) compare for certain tasks in terms of efficiency and comfort. Real-time and turn-based strategy games are a popular form of electronic gaming, yet currently have only minor representation on tablets. This genre requires both a wide variety of input and the display of a wealth of information. We are exploring whether, with suitable interface developments, the genre can become as accessible on tablet devices as on traditional computers. These interface approaches may also prove useful for expanding the presence of other game genres in the mobile space.
|
74 |
Design and test of a multi-camera based orthorectified airborne imaging system
Becklinger, Nicole Lynn, 01 May 2010
Airborne imaging platforms have been applied to such diverse areas as surveillance, natural disaster monitoring, cartography, and environmental research. However, airborne imaging data can be expensive, out of date, or difficult to interpret. This work introduces an Orthorectified Airborne Imaging (OAI) system designed to provide near-real-time images in Google Earth. The OAI system consists of a six-camera airborne image collection system and a ground-based image processing system. Images and position data are transmitted from the air to the ground station over a point-to-point (PTP) data-link antenna connection. Upon reaching the ground station, image processing software combines the six individual images into a larger stitched image. Stitched images are processed to remove distortions and then rotated so that north points up (orthorectified). Because the OAI images are very large, they must be broken down into a series of progressively higher-resolution tiles, called an image pyramid, before being loaded into Google Earth. A KML programming technique called a super overlay is used to load the image pyramid into Google Earth; a C# program with a graphical user interface creates the KML super overlay files according to user specifications. Image resolution and the location of the imaged area relative to the aircraft are functions of altitude and the position of the imaging cameras. Placing OAI images in Google Earth allows the user to take advantage of the place markers, street names, and navigation features native to the Google Earth environment.
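As a sketch of the super overlay idea described above, the snippet below emits the KML for a single pyramid tile: a Region with level-of-detail limits plus a GroundOverlay pointing at the tile image (a full super overlay would add NetworkLinks to the four child tiles). It is written in Python rather than the thesis's C#, and the tile name, bounding box, and LOD values are illustrative assumptions.

```python
# One node of a KML super overlay: a Region that tells Google Earth when
# this tile is worth drawing, and a GroundOverlay that places the tile
# image on the ground. All values are illustrative assumptions.

KML_NODE = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
  <Region>
    <LatLonAltBox>
      <north>{n}</north><south>{s}</south><east>{e}</east><west>{w}</west>
    </LatLonAltBox>
    <Lod><minLodPixels>128</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod>
  </Region>
  <GroundOverlay>
    <Icon><href>{tile}.png</href></Icon>
    <LatLonBox>
      <north>{n}</north><south>{s}</south><east>{e}</east><west>{w}</west>
    </LatLonBox>
  </GroundOverlay>
</Document>
</kml>"""


def tile_kml(north, south, east, west, tile_name):
    """Render the KML for one pyramid tile covering the given bounding box."""
    return KML_NODE.format(n=north, s=south, e=east, w=west, tile=tile_name)


# Hypothetical tile from pyramid level 2, row 3, column 5.
print(tile_kml(41.65, 41.60, -91.50, -91.55, "level2_r3_c5"))
```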
|
75 |
Adaptive User Interfaces for Mobile Computing Devices
Bridle, Robert Angus (robert.bridle@gmail.com), January 2008
This thesis examines the use of adaptive user interface elements on a mobile phone and presents two adaptive user interface approaches. Both approaches attempt to increase the efficiency with which a user interacts with a mobile phone while ensuring that the interface remains predictable to the user.
¶
An adaptive user interface approach is presented that predicts the menu item a user will select. When a menu is opened, the predicted menu item is highlighted instead of the top-most menu item. The aim is to maintain the layout of the menu and to save the user from performing scrolling key presses. A machine learning approach is used to accomplish the prediction task. However, learning in the mobile phone environment produces several difficulties. These are limited availability of training examples, concept drift and limited computational resources. A novel learning approach is presented that addresses these difficulties. This learning approach addresses limited training examples and limited computational resources by employing a highly restricted hypothesis space. Furthermore, the approach addresses concept drift by determining the hypothesis that has been consistent for the longest run of training examples into the past. Under certain concept drift restrictions, an analysis of this approach shows it to be superior to approaches that use a fixed window of training examples. An experimental evaluation on data collected from several users interacting with a mobile phone was used to assess this learning approach in practice. The results of this evaluation are reported in terms of the average number of key presses saved. The benefit of menu-item prediction can clearly be seen, with savings of up to three key presses on every menu interaction.
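The "longest consistent run" selection rule described above can be sketched as follows; the two toy hypotheses standing in for menu-item predictors are hypothetical, not the thesis's actual hypothesis space.

```python
# Sketch of the "longest consistent run" rule: from a small fixed
# hypothesis space, choose the hypothesis that has agreed with the
# longest unbroken run of the most recent training examples.

def longest_run_hypothesis(hypotheses, history):
    """history is a list of (context, selected_item) pairs, newest last."""
    def run_length(h):
        run = 0
        for context, selected in reversed(history):   # scan backwards in time
            if h(context) != selected:
                break                                 # run ends at first mistake
            run += 1
        return run

    return max(hypotheses, key=run_length)


# Hypothetical hypothesis space: "repeat the last item" vs "pick the
# most frequently used item".
def last_item(ctx):
    return ctx["last"]

def frequent_item(ctx):
    return ctx["most_frequent"]

history = [
    ({"last": "Messages", "most_frequent": "Contacts"}, "Messages"),
    ({"last": "Messages", "most_frequent": "Contacts"}, "Messages"),
    ({"last": "Contacts", "most_frequent": "Contacts"}, "Contacts"),
]
print(longest_run_hypothesis([last_item, frequent_item], history).__name__)
# -> last_item (consistent on all three examples; frequent_item on only one)
```

Because the hypothesis space is tiny and each hypothesis is cheap to evaluate, the backward scan stays well within a phone's computational budget, which is the point of restricting the space.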
¶
An extension of the menu-item prediction approach is presented that removes the need to manually specify a restricted hypothesis space. The approach uses a decision-tree learner to generate hypotheses online and uses the minimum description length principle to identify the occurrence of concept shifts. The identification of concept shifts is used to guide the hypothesis generation process. The approach is compared with the original menu-item prediction approach in which hypotheses are manually specified. Experimental results using the same datasets are reported.
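As a generic illustration of MDL-based concept-shift detection, the sketch below compares the cost of coding a stream of prediction errors as a single Bernoulli source against coding it as two sources split at a candidate change point; a shift is flagged where the split coding is shorter. The two-part code and model-cost term are a standard textbook form, not necessarily the thesis's exact formulation.

```python
import math

def description_length(errors):
    """Two-part code length (bits) for a 0/1 error sequence under a
    Bernoulli model: log-loss at the MLE error rate plus a small model
    cost (0.5 * log2 n, a common but here assumed choice)."""
    n = len(errors)
    if n == 0:
        return 0.0
    k = sum(errors)
    cost = 0.5 * math.log2(n) if n > 1 else 0.0
    if k in (0, n):                      # perfectly consistent segment
        return cost
    p = k / n
    return cost - k * math.log2(p) - (n - k) * math.log2(1 - p)

def mdl_shift_point(errors):
    """Return the split index that minimises total description length,
    or None if the unsplit coding is already shortest (no shift)."""
    best_split, best_len = None, description_length(errors)
    for i in range(1, len(errors)):
        total = description_length(errors[:i]) + description_length(errors[i:])
        if total < best_len:
            best_split, best_len = i, total
    return best_split

# The error rate jumps halfway through, so MDL prefers two segments.
print(mdl_shift_point([0] * 20 + [1] * 20))   # -> 20
```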
¶
Another adaptive user interface approach is presented that induces shortcuts on a mobile phone interface. The approach is based on identifying shortcuts in the form of macros, which can automate a sequence of actions. A means of specifying relevant action sequences is presented, together with several learning approaches for predicting which shortcut to present to a user. A small subset of the possible shortcuts on a mobile phone was considered, consisting of shortcuts that automate the actions of making a phone call or sending a text message. The results of an experimental evaluation of the shortcut prediction approaches are presented. The shortcut prediction process was evaluated in terms of predictive accuracy and stability, where stability was defined as the rate at which predicted shortcuts changed over time. The importance of stability is discussed and used to question the advantages of sophisticated learning approaches for achieving adaptive user interfaces on mobile phones. Finally, several methods for combining accuracy and stability measures are presented, and the learning approaches are compared using these combined measures.
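A minimal sketch of the stability measure as defined above, i.e. the rate at which the predicted shortcut changes between consecutive interactions, alongside a plain accuracy measure for comparison (the example shortcut names are hypothetical):

```python
# Stability here follows the definition in the abstract: how often the
# predicted shortcut changes from one interaction to the next. A learner
# can be accurate yet unstable, which is why the two are combined.

def change_rate(predictions):
    """Rate at which consecutive predictions differ; lower means a more
    stable, more predictable interface."""
    if len(predictions) < 2:
        return 0.0
    changes = sum(a != b for a, b in zip(predictions, predictions[1:]))
    return changes / (len(predictions) - 1)

def accuracy(predictions, actual):
    """Fraction of predictions that matched the user's actual action."""
    return sum(p == a for p, a in zip(predictions, actual)) / len(actual)

preds  = ["call Bob", "call Bob", "text Ann", "text Ann", "text Ann"]
actual = ["call Bob", "text Ann", "text Ann", "text Ann", "call Bob"]
print(change_rate(preds))        # 0.25: one change across four transitions
print(accuracy(preds, actual))   # 0.6
```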
|
76 |
Usability Modelling For Requirements Engineering
Adikari, Sisira, January 2008
For over two decades, user-centric methods and techniques have been proposed to assist the production of usable, useful, and desirable software products. Despite these approaches, usability problems are still identified in finished software products, creating difficulties at system acceptance, forcing rework, and degrading the end-user experience. Part of the reason for these continuing problems is that user-centric approaches are not part of the traditional software engineering process; the literature review shows that software engineering and human-computer interaction remain largely separate communities.
The aim of this thesis is to investigate whether incorporating user modelling and usability modelling into software requirements specifications would improve the design quality and usability of software products. The study used a Design Science-dominant mixed research methodology, combining case study and action research, to create, analyse, and evaluate artefacts for improving the effectiveness of user-centred design and the usability of software artefacts. Using the functional specification of an existing system in a government agency, ten designers created screen and interaction designs. The specification was then enhanced with usability specifications, and the designers redeveloped their designs in light of the enhanced specification. Both sets of designs were subjected to pre-defined usability tests, and the designers described their design experience as they worked.
The results demonstrated that enhancing traditional software requirements specifications with additional user modelling and usability modelling specifications made a positive difference to both designer perception and the design quality of user interface artefacts. The theoretical and practical values of these findings are explored.
|
77 |
Searching by browsing
Cox, Kevin Ross, January 1994
Information retrieval (IR) is an important part of many tasks performed by people when they use computers. However, most IR research and theory isolates the IR component from the tasks performed by users, by expressing user needs as a query performed on a database. In contrast, this dissertation investigates the design and evaluation of information retrieval systems where the information retrieval mechanisms remain embedded in the user tasks.
While there are many different types of user tasks performed with computers, we can specify common requirements for the IR needed in most tasks. There are both user interface and machine processing requirements. For user interfaces, it is desirable that users interact directly with information databases, keep control of the interaction, and are able to perform IR in a timely manner. Machine processing has to be within the capabilities of machines, yet must fit with human perceptions and be efficient in both storage and computation.
Given these overall requirements, the dissertation presents a particular implementation for embedding IR in tasks. The implementation uses a vector representation for objects and organises the objects in a near-neighbour data structure, where near neighbours are defined within the context of the tasks the users wish to achieve. While the implementation could use many different finding mechanisms, it emphasises a constructive solution-building approach with localised browsing in the database. It is shown how the IR implementation fits with the overall task activities of the user.
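A toy sketch of this kind of embedded, localised browsing over a vector representation is given below: from the current object, the user (or system) repeatedly hops to whichever near neighbour best matches an evolving interest vector, stopping when no neighbour improves the match. The document vectors, cosine measure, and brute-force neighbour search are illustrative assumptions, not the dissertation's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def browse(objects, start, interest, steps=5, k=3):
    """Greedy localised browse: at each step, look only at the k nearest
    neighbours of the current object and move to the one closest to the
    interest vector; stop when no neighbour improves the match."""
    current = start
    for _ in range(steps):
        ranked = sorted(objects, key=lambda o: -cosine(objects[current], objects[o]))
        neighbours = [o for o in ranked if o != current][:k]
        best = max(neighbours, key=lambda o: cosine(objects[o], interest))
        if cosine(objects[best], interest) <= cosine(objects[current], interest):
            break
        current = best
    return current

docs = {                       # hypothetical document vectors
    "cooking":   [1.0, 0.1, 0.0],
    "recipes":   [0.9, 0.2, 0.1],
    "gardening": [0.2, 1.0, 0.1],
    "botany":    [0.1, 0.9, 0.3],
}
print(browse(docs, "cooking", interest=[0.0, 1.0, 0.2]))   # -> botany
```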
Much of the dissertation examines how to evaluate embedded IR. Embedded IR requires testing users' task performance in both real experiments and thought experiments. The implementation is tested by finding known objects, by validating the machine representations and their correspondence with human perceptions, and by testing the machine performance of the implementation.
Finally, implications and extensions of the work are explored by looking at the practicality of the approach, other methods of investigation, and the possibility of building dynamic learning systems that improve with use.
|
78 |
A user-interface for whole-body MRI data for oncological evaluations
Olsson, Sandra, January 2010
Hospitals have limited budgets, making the cost of an examination important. A whole-body MRI scan is much less expensive than a PET-CT scan, making MRI desirable in cases where its results will be sufficient. Also, unlike CT, MRI does not rely on ionizing radiation, which is known to increase the risk of developing cancer.

To make the most of the MRI results, efficient visualization of the data is important. The goal of this project was to develop an application that would facilitate radiologists' evaluation of whole-body MRI data of lymphoma patients. This was achieved by introducing an image fused from two types of MRI images, offering simplified loading of all the study's MRI data, and creating a rotatable maximum intensity projection from which points can be selected and zoomed to in other types of images.

Unfortunately, loading the data and some parts of the interaction are somewhat slow, which needs to be addressed before this application can become a genuinely useful tool for radiologists.
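A minimal sketch of the rotatable maximum intensity projection mentioned above: rotate the volume about the vertical axis, then take the maximum along the viewing direction to produce one frame per angle. The NumPy/SciPy calls are standard; the synthetic volume standing in for whole-body MRI data is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import rotate

def mip_frame(volume, angle_deg):
    """Rotate a (z, y, x) volume about the z axis and project the maximum
    intensity along x, giving one frame of a rotatable MIP."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.max(axis=2)

# Synthetic volume with one bright blob standing in for a lesion.
vol = np.zeros((64, 64, 64))
vol[30:34, 20:24, 40:44] = 1.0

frames = [mip_frame(vol, a) for a in range(0, 360, 30)]   # 12 viewing angles
print(frames[0].shape, frames[0].max())                   # (64, 64) 1.0
```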
|
79 |
Proceedings of the Fourth PHANTOM Users Group Workshop
Salisbury, J. Kenneth; Srinivasan, Mandayam A., 04 November 1999
This report contains the proceedings of the Fourth PHANTOM Users Group Workshop: 17 papers presented October 9-12, 1999, at MIT Endicott House in Dedham, Massachusetts. The workshop included sessions on Tools for Programmers, Dynamic Environments, Perception and Cognition, Haptic Connections, Collision Detection / Collision Response, Medical and Seismic Applications, and Haptics Going Mainstream. The proceedings include papers covering a variety of subjects in computer haptics, including rendering, contact determination, development libraries, and applications in medicine, path planning, data interaction, and training.
|
80 |
Scroll Placement and Handedness
Berahzer, Damien M., April 2005
This study explored how individuals categorized by handedness (left- or right-hand dominant) reacted to having the vertical scroll bar of a web browser relocated to the left side of the screen. Relocating the vertical scroll bar served as an alternative to relocating the prominent left-aligned main navigation menu found on most websites. Fifteen participants were recruited for the study. Each participant interacted with two versions of a web site in a modified browser to complete a set of ten short tasks, working with both the traditional and the non-traditional vertical scroll bar alignment. Left- and right-handed participants were found to operate the interface in strikingly different ways, and the scroll bar relocation produced notable differences in results and responses.
|