61
A User Centered Design and Prototype of a Mobile Reading Device for the Visually Impaired
Keefer, Robert B. 10 June 2011 (has links)
No description available.
62
Imagining a NeoFreudian Mind Interface: A Normative Model of Medical Humanities Research
Tiller, Samuel Perry 29 July 2019 (has links)
This thesis argues for a new theory of medical humanities practice and research, known as Mind Interface Theory. It begins with the claim that Sigmund Freud expanded medical metaphysics considerably in "A General Introduction to Psychoanalysis," and that this expansion affords the possibility of thinking of the mind as a user interface. Capitalizing on this affordance, the work then introduces mind interface theory as one possible imagining of Freud's metaphysical system, separate from his well-known theory, psychoanalysis. More specifically, it uses his discussion of dreamwork to reveal reprocessing as mind interface's mechanism of healing, before utilizing this reprocessing principle to orient medical humanities research, providing a theoretical framework for increased collaboration between humanists and physicians and a foundation for two distinct modes of activist scholarship: product-based and process-based. / Master of Arts / This thesis participates in medical humanities scholarship by advocating for a specific theory of the field that stems from a reading of Sigmund Freud's Introduction to Psychoanalysis, a brief series of lectures written down for public consumption. Instead of using psychoanalysis itself to form a theory of the medical humanities, my work abstracts the broader suppositions on which psychoanalytic interpretation rests. This broader framework I call Freud's medical metaphysics and define as the assumptions about causation and disease which form the basis for his philosophy of medical treatment. In making this distinction, I can more ably build my own theory of a mind interface on the fact that the basic structure of the metaphysics advocated in the lectures implies a vision of the mind that can be likened to a modern user interface. Conceiving of the mind in terms of a user interface, I use mind interface theory to frame treatment in such a way as to promote a humanist theory of healing. The point of the method is that humanists can assist patients by helping them use signs, language, and symbols to reprocess their experience. This method is then applied to current threads in medical humanities scholarship to suggest that efforts in the field would be best served if they were directed towards studying the artifacts of patient populations for narrative and rhetorical strategies that were effective in coping with a specific illness, and towards fostering an environment where patients are encouraged to produce such artifacts.
63
A Voice-based Multimodal User Interface for VTQuest
Schneider, Thomas W. 14 June 2005 (has links)
The original VTQuest web-based software system requires users to interact using a mouse or a keyboard, forcing the users' hands and eyes to be constantly in use while communicating with the system. This prevents the user from performing other tasks which require the hands or eyes at the same time. This restriction on the user's ability to multitask while using VTQuest is unnecessary and has been eliminated with the creation of the VTQuest Voice web-based software system. VTQuest Voice extends the original VTQuest functionality by providing the user with a voice interface to interact with the system using Speech Application Language Tags (SALT) technology. The voice interface provides the user with the ability to navigate through the site, submit queries, browse query results, and receive helpful hints to better utilize the voice system. Individuals with a handicap that prevents them from using their arms or hands, users who are not familiar with the mouse and keyboard style of communication, and those who have their hands preoccupied all need alternative communication interfaces which do not require the use of their hands; all of these users benefit from a voice interface being added to VTQuest. Through the voice interface, all of the system's features can be accessed exclusively with voice and without the use of the hands. A voice interface also frees the user's eyes from the process of selecting an option or link on a page, which allows the user to look at the system less frequently. VTQuest Voice is implemented and tested for operation on computers running Microsoft Windows using Microsoft Internet Explorer with the correct SALT and Adobe Scalable Vector Graphics (SVG) Viewer plug-ins installed. VTQuest Voice offers a variety of features, including an extensive grammar and out-of-turn interaction, which are flexible enough to allow future growth. The grammar offers several ways in which users may begin or end a query, to better accommodate the variety of ways users may phrase their queries. To accommodate abbreviations and alternate pronunciations of building names, the grammar also includes nicknames for the buildings. The out-of-turn interaction combines multiple steps into one spoken sentence, thereby shortening the interaction and making the process more natural for the user. The addition of a voice interface is recommended for web applications in which a user may need his or her eyes and hands to multitask. Additional functionality which can be added later to VTQuest Voice includes touch screen support and accessibility from cell phones, Personal Digital Assistants (PDAs), and other mobile devices. / Master of Science
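As a rough illustration of the grammar idea described above, the following minimal Python sketch (an illustrative stand-in, not the thesis's SALT implementation; the building names, nicknames, and accepted phrasings are hypothetical) shows how spoken variants of a building name might be normalized to a canonical query target:

    # Illustrative sketch only (not from the thesis): how a voice front end might
    # normalize spoken building names, including nicknames, before submitting a
    # campus query. All names and phrasings here are hypothetical examples.

    BUILDING_NICKNAMES = {
        "torgersen hall": "Torgersen Hall",
        "torg": "Torgersen Hall",                  # nickname
        "mcbryde hall": "McBryde Hall",
        "mcbryde": "McBryde Hall",
        "war memorial hall": "War Memorial Hall",
        "war memorial gym": "War Memorial Hall",   # alternate name
    }

    # Optional lead-ins and tails the grammar tolerates, so users can phrase a
    # query in several natural ways ("show me ...", "... please").
    PREFIXES = ("show me", "where is", "find", "take me to")
    SUFFIXES = ("please", "thanks")

    def parse_utterance(utterance):
        """Map a spoken phrase to a canonical building name, or None if unknown."""
        text = utterance.lower().strip().rstrip("?.!").strip()
        for p in PREFIXES:
            if text.startswith(p):
                text = text[len(p):].strip()
        for s in SUFFIXES:
            if text.endswith(s):
                text = text[: -len(s)].strip()
        return BUILDING_NICKNAMES.get(text)

    if __name__ == "__main__":
        for phrase in ("Show me Torg please", "Where is McBryde Hall?"):
            print(phrase, "->", parse_utterance(phrase))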
64
Tangible User Interface for CAVE based on Augmented Reality Technique
Kim, Ji-Sun 20 January 2006 (has links)
This thesis presents a new 3-dimensional (3D) user interface system for a Cave Automatic Virtual Environment (CAVE) application, based on Virtual Reality (VR), Augmented Reality (AR), and Tangible User Interface (TUI) techniques. We explore fundamental 3D interaction tasks with our user interface for the CAVE system. A user interface (UI) comprises a specific set of components, including input/output devices and interaction techniques. Our approach is based on TUIs using ARToolKit, currently the most popular toolkit for use in AR projects. Physical objects (props) are used as input devices instead of tethered electromagnetic trackers. An off-the-shelf webcam is used to capture tracking input data. A unique pattern marker is attached to each prop, which is easily and simply tracked by ARToolKit. Our interface system is developed on CAVE infrastructure, which is a semi-immersive environment. All virtual objects are directly manipulated with props, each of which corresponds to a certain virtual object. To navigate, the user moves the background itself, while virtual objects remain in place. The user can actually feel the prop's movement through the virtual space. Thus, fundamental 3D interaction tasks such as object selection, object manipulation, and navigation are performed with our interface. For immersion, the user wears stereoscopic glasses with a head tracker; this is the only tethered device in our work. Since our interface is based on tangible input tools, seamless transition between one- and two-handed operation is provided. We went through three design phases to achieve better task performance. In the first phase, we conducted a pilot study focusing on whether this approach is applicable to 3D immersive environments. After the pilot study, we redesigned the props and developed ARBox. ARBox is used as the interaction space, while the CAVE system is used only as the display space. In this phase, we also developed interaction techniques for fundamental 3D interaction tasks. Our summative user evaluation was conducted with ARDesk, which was redesigned after our formative user evaluation. The two user studies aimed to gather user feedback and to improve the interaction techniques as well as the design of the interface tools. The results from our user studies show that our interface can be applied intuitively and naturally to 3D immersive environments, even though some issues remain with our system design. This thesis shows that effective interactions in a CAVE system can be created using AR techniques and tangible objects. / Master of Science
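The following Python sketch illustrates only the interaction logic described above; it is not the thesis's ARToolKit/CAVE code, and the marker poses are stubbed rather than read from a real tracker. It shows the core routing idea: a prop's marker pose either manipulates its bound virtual object or, for a dedicated navigation prop, moves the background while objects stay in place:

    # Illustrative sketch (not the thesis code): marker-tracked props. Each
    # physical prop carries a unique marker; the marker pose drives its virtual
    # object, while a reserved "navigation" marker moves the whole background
    # instead. A real system would obtain poses from ARToolKit or similar.

    from dataclasses import dataclass, field

    @dataclass
    class Pose:
        x: float = 0.0
        y: float = 0.0
        z: float = 0.0

    @dataclass
    class Scene:
        objects: dict = field(default_factory=dict)            # marker id -> object pose
        background_offset: Pose = field(default_factory=Pose)  # navigation state

    NAVIGATION_MARKER = 0   # hypothetical marker id reserved for the navigation prop

    def apply_marker_update(scene, marker_id, pose):
        """Route a tracked marker pose to object manipulation or navigation."""
        if marker_id == NAVIGATION_MARKER:
            # Navigation: move the background; virtual objects remain in place.
            scene.background_offset = pose
        else:
            # Direct manipulation: the prop's pose becomes its object's pose.
            scene.objects[marker_id] = pose

    if __name__ == "__main__":
        scene = Scene()
        apply_marker_update(scene, 7, Pose(0.2, 0.0, -1.0))                 # move object of marker 7
        apply_marker_update(scene, NAVIGATION_MARKER, Pose(0.0, 0.0, 0.5))  # step the scene forward
        print(scene)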
65
A Semantics-based User Interface Model for Content Annotation, Authoring and Exploration
Khalili, Ali 26 January 2015 (has links)
The Semantic Web and Linked Data movements, which aim to create, publish and interconnect machine-readable information, have gained traction in recent years.
However, the majority of information is still contained in and exchanged using unstructured documents, such as Web pages, text documents, images and videos.
This cannot be expected to change, since text, images and videos are the natural way in which humans interact with information.
Semantic structuring of content on the other hand provides a wide range of advantages compared to unstructured information.
Semantically-enriched documents facilitate information search and retrieval, presentation, integration, reusability, interoperability and personalization.
Looking at the life-cycle of semantic content on the Web of Data, we see considerable progress on the backend side in storing structured content and in linking data and schemata.
Nevertheless, the least developed aspect of the semantic content life-cycle is, from our point of view, the user-friendly manual and semi-automatic creation of rich semantic content.
In this thesis, we propose a semantics-based user interface model, which aims to reduce the complexity of underlying technologies for semantic enrichment of content by Web users.
By surveying existing tools and approaches for semantic content authoring, we extracted a set of guidelines for designing efficient and effective semantic authoring user interfaces.
We applied these guidelines to devise a semantics-based user interface model called WYSIWYM (What You See Is What You Mean) which enables integrated authoring, visualization and exploration of unstructured and (semi-)structured content.
To assess the applicability of our proposed WYSIWYM model, we incorporated the model into four real-world use cases comprising two general and two domain-specific applications.
These use cases address four aspects of the WYSIWYM implementation:
1) Its integration into existing user interfaces,
2) Utilizing it for lightweight text analytics to incentivize users,
3) Dealing with crowdsourcing of semi-structured e-learning content,
4) Incorporating it for authoring of semantic medical prescriptions.
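As a minimal, hypothetical illustration of what such semantic enrichment looks like at the data level (this is not code from the thesis, and the URIs and the choice of the schema.org vocabulary are assumptions), a WYSIWYM-style annotation of a text fragment can be expressed as RDF triples with rdflib:

    # Minimal illustration: a text span ("Berlin") in a document is annotated as
    # a city, and the annotation is attached to the document as RDF triples.
    # The example URIs and vocabulary choices are assumptions.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    SCHEMA = Namespace("http://schema.org/")
    EX = Namespace("http://example.org/")      # hypothetical namespace

    g = Graph()
    g.bind("schema", SCHEMA)
    g.bind("ex", EX)

    doc = URIRef(EX["document/42"])
    annotation = URIRef(EX["annotation/1"])

    g.add((annotation, RDF.type, SCHEMA.City))
    g.add((annotation, SCHEMA.name, Literal("Berlin")))
    g.add((doc, SCHEMA.mentions, annotation))

    print(g.serialize(format="turtle"))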
66
A GRAPHICAL USER INTERFACE MIMO CHANNEL SIMULATOR
Panagos, Adam G., Kosbar, Kurt 10 1900 (has links)
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California / Multiple-input multiple-output (MIMO) communication systems are attracting attention because their channel capacity can exceed that of single-input single-output systems, with no increase in bandwidth. While MIMO systems offer substantial capacity improvements, it can be challenging to characterize and verify their channel models. This paper describes a software MIMO channel simulator with a graphical user interface that allows the user to easily investigate a number of MIMO channel characteristics for a channel model recently proposed by the 3rd Generation Partnership Project (3GPP).
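For orientation, the sketch below illustrates one of the basic quantities such a simulator lets a user explore, MIMO channel capacity; it uses a simple i.i.d. Rayleigh-fading channel rather than the 3GPP spatial channel model the paper's simulator implements, so it is an illustrative stand-in, not the described tool:

    # Average capacity of an i.i.d. Rayleigh-fading MIMO channel with equal
    # power allocation: C = log2 det(I + (SNR/Nt) * H H^H), in bit/s/Hz.

    import numpy as np

    def mimo_capacity(h, snr_linear):
        """Capacity in bit/s/Hz of channel matrix H (n_rx x n_tx)."""
        n_rx, n_tx = h.shape
        m = np.eye(n_rx) + (snr_linear / n_tx) * (h @ h.conj().T)
        # m is Hermitian positive definite, so its determinant is real and positive.
        return float(np.log2(np.real(np.linalg.det(m))))

    def average_capacity(n_tx, n_rx, snr_db, trials=1000):
        rng = np.random.default_rng(0)
        snr = 10 ** (snr_db / 10)
        caps = []
        for _ in range(trials):
            # i.i.d. complex Gaussian entries with unit average power per entry
            h = (rng.standard_normal((n_rx, n_tx)) +
                 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
            caps.append(mimo_capacity(h, snr))
        return float(np.mean(caps))

    if __name__ == "__main__":
        for nt, nr in [(1, 1), (2, 2), (4, 4)]:
            print(f"{nt}x{nr} MIMO at 10 dB SNR: {average_capacity(nt, nr, 10.0):.2f} bit/s/Hz")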
67
A Programmable PCM Data Simulator for Microcomputer Hosts
Cunningham, Larry E. 11 1900 (has links)
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Modern microcomputers are proving to be viable hosts for telemetry functions, including data simulators. A specialized high-performance hardware architecture for generating and processing simulator data can be implemented on an add-in card for the microcomputer. Support software implemented on the host provides a simple, high-quality human interface with a high degree of user programmability.
Based on this strategy, the Physical Science Laboratory at New Mexico State University (PSL) is developing a Programmable PCM Data Simulator for microcomputer hosts.
Specifications and hardware/software architectures for PSL’s Programmable PCM Data Simulator are discussed, as well as its interactive user interface.
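As a loose, software-only illustration of what generating PCM simulator data involves (this is not PSL's design; the sync pattern, word size, and frame length below are example values), a stream of fixed-length frames carrying a sync word, a frame counter, and filler data words can be modeled as follows:

    # Illustration only: minimal PCM frame generation. Each minor frame begins
    # with a frame-sync pattern, followed by a frame counter and data words.
    # The specific format values are arbitrary examples.

    FRAME_SYNC = 0xFE6B2840   # a commonly used 32-bit sync pattern (example choice)
    WORDS_PER_FRAME = 16      # example minor-frame length, in 16-bit words
    WORD_MASK = 0xFFFF

    def build_frame(frame_number):
        """Return one minor frame as a list of 16-bit words."""
        words = [
            (FRAME_SYNC >> 16) & WORD_MASK,   # sync word, high half
            FRAME_SYNC & WORD_MASK,           # sync word, low half
            frame_number & WORD_MASK,         # frame counter
        ]
        # Fill the remainder of the frame with a simple ramp pattern as data.
        while len(words) < WORDS_PER_FRAME:
            words.append((frame_number + len(words)) & WORD_MASK)
        return words

    def frames(count):
        """Yield a stream of frames, as a simulator would do continuously."""
        for n in range(count):
            yield build_frame(n)

    if __name__ == "__main__":
        for frame in frames(3):
            print(" ".join(f"{w:04X}" for w in frame))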
68
Programų vartotojo sąsajos automatinis testavimas pagrįstas UML modeliais / Program user interface automated testing based on UML models
Jasaitis, Robertas 13 August 2010 (has links)
The main goal of this work is to implement software capable of automatically testing the user interface and of generating test cases from given UML models. The software should be implemented using Java technology. The research area of this work therefore covers the analysis of approaches to automating user interface testing.
UML models have become an especially popular means of modeling software architecture. Nowadays UML models are used not only for the usual class, activity and sequence diagrams, but are an increasingly popular tool applied to many other design problems. Representing the user interface with UML diagrams is still not a widespread practice, although proposals to use these techniques appear ever more often in the literature. The articles being published, the conferences being held and similar events suggest that in the future this technology will not bypass the user interface modeling process either.
The market lacks a user interface test automation system that could generate test cases from UML models and execute the tests automatically. Such a system could be used to perform testing more efficiently, testing the user interface faster and more accurately. The speed comes from the fact that no human takes part in the automated testing process; all testing is carried out by the machine's processor. When testing the system on different platforms, testing can be performed... [see full text] / In many cases, testing is an essential but time- and resource-consuming activity in the software development process. In the case of model-based development, test construction and test execution can be partially automated. As application size is constantly growing, the need for automated testing frameworks arises, particularly frameworks for automated testing of user interaction and the graphical user interface.
This document describes an implementation of a GUI test generator framework based on UML models, in which specific UML activity diagrams are used for test case generation. Using UML activity diagrams for UI modeling is unusual, and the existing stereotypes of activity diagram elements are not well suited to it: with a standard activity diagram it is complicated to define buttons, containers, pages and other UI elements in the diagram and to distinguish between them, and it is even more complicated to model the navigation of the application under test. With the approach taken here, the UI can be defined as a set of UI elements along with a set of UI navigation elements, which is an optimal and suitable approach in most cases.
This document also describes an implementation of the automated GUI test runner framework. This framework runs the given application in test mode using the previously generated test cases, collects information about each test case's results, and provides it to the tester.
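The sketch below illustrates only the underlying idea, not the Java-based framework described above: test cases can be derived by enumerating paths through a UI navigation graph of the kind an annotated activity diagram defines. The page and action names are hypothetical:

    # Illustration only: derive test cases as action sequences that reach a
    # target node of a UI navigation graph, up to a bounded number of steps.

    # Navigation graph: node -> list of (action, next node)
    NAVIGATION = {
        "LoginPage": [("enter valid credentials", "HomePage"),
                      ("enter invalid credentials", "LoginPage.error")],
        "LoginPage.error": [("dismiss error", "LoginPage")],
        "HomePage": [("open settings", "SettingsPage"),
                     ("log out", "LoginPage")],
        "SettingsPage": [("save and return", "HomePage")],
    }
    START, FINAL = "LoginPage", "HomePage"

    def generate_test_cases(max_depth=4):
        """Return action sequences (test cases) that reach FINAL within max_depth steps."""
        cases = []

        def walk(node, path, visited):
            if node == FINAL and path:
                cases.append(list(path))
            if len(path) >= max_depth:
                return
            for action, nxt in NAVIGATION.get(node, []):
                if (node, action) in visited:      # do not repeat the same edge
                    continue
                walk(nxt, path + [action], visited | {(node, action)})

        walk(START, [], frozenset())
        return cases

    if __name__ == "__main__":
        for i, case in enumerate(generate_test_cases(), 1):
            print(f"Test case {i}: " + " -> ".join(case))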
Future improvements:
Find the... [to full text]
69
Usability Studies with Virtual and Traditional Computer Aided Design Environments
Ahmed, Syed Adeel 15 December 2006 (has links)
For both the CAVE™ and the adaptable technology possessed by the University of New Orleans, CrystalEyes glasses are used to produce a stereoscopic view, and an Ascension Flock of Birds tracking system is employed to track the user's head position and the position of a wand in 3D space. It is argued that these immersive technologies, along with the use of gestures and hand movements, should provide a more natural interface with the immersive virtual environment. This allows a more rapid and efficient set of actions for recognizing geometry, interacting with a spatial environment, finding errors, and navigating through an environment. The wand interface is used to provide an improved means of interaction. This study quantitatively measures the differences in interaction when compared with traditional human-computer interfaces. This work uses competitive usability testing with four different benchmarks: 1) navigation, 2) error detection/correction, 3) spatial awareness, and 4) a "shopping list" of error identifications. It expands on the work of [Butler & Satter, 2005] by conducting tests in the CAVE™ system, rather than principally employing workbench technology. During testing, the testers are given some time to "play around" with the CAVE™ environment for familiarity before undertaking a specific exercise. The testers are then instructed regarding the tasks to be completed, and are asked to work quickly without sacrificing accuracy. The research team timed each task, counted errors, and recorded activity on evaluation sheets for each benchmark test. At the completion of the testing scenarios involving Benchmarks 1, 2, 3, or 4, the subjects were given a survey document and asked to respond by checking boxes to communicate their subjective opinions.
70
Dynamic vs Static user-interface: Which one is easier to learn? And will it make you more efficient?
Augustsson, Christopher January 2019 (has links)
Excel offers great flexibility and allows non-programmers to create complex functionality, but at the same time it can become very nested, with cells pointing to other cells, especially after many changes over an extended period. This has happened at ICS, a small company focused on calibration, among an array of services relating to material testing. The system they use for field calibrations today has become overly complicated and hard to maintain, and consists of multiple Excel spreadsheets. The conclusion has been that a new system needs to be developed, but the question of how remains. By creating a prototype using modern web technologies, this study has evaluated whether a web application can meet the specific functional requirements ICS has, and whether it is a suitable solution for a new system. The prototype was put through manual user tests, and the results show that it meets all the requirements, meaning that a web application could work as a replacement. During the user tests, this study has also evaluated the differences in learnability and efficiency between the static user interface of the current Excel-based system and the dynamic user interface of the web-based prototype. The users performed a calibration with both systems, and parameters such as time to completion and number of errors made were recorded. By comparing the test results from both systems, this study concludes that a dynamic user interface is more likely to improve learnability for novice users, but has a low impact on efficiency for expert users.
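As a small illustration of the kind of comparison these recorded measures support (the numbers below are invented, not the study's data), mean completion time and error counts per interface can be summarized as follows:

    # Illustration only: summarizing hypothetical per-participant results
    # (seconds to complete a calibration, errors made) for the two interfaces.

    from statistics import mean, stdev

    results = {
        "static (Excel-based)":    [(410, 3), (372, 2), (455, 4), (390, 2)],
        "dynamic (web prototype)": [(305, 1), (340, 2), (298, 1), (322, 0)],
    }

    for interface, trials in results.items():
        times = [t for t, _ in trials]
        errors = [e for _, e in trials]
        print(f"{interface}: "
              f"mean time {mean(times):.0f} s (sd {stdev(times):.0f}), "
              f"mean errors {mean(errors):.1f} per task")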