  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
501

An efficient haptic interface for a variable displacement pump controlled excavator

Elton, Mark David 05 1900 (has links)
Human-machine interfaces (HMIs) influence both operator effectiveness and machine efficiency. Further immersion of the operator into the machine’s working environment gives the operator a better feel for the status of the machine and its working conditions. With this knowledge, operators can control machines more efficiently. Multi-modal HMIs involving haptics, sound, and visual feedback can immerse the operator in the machine’s environment and provide assistive cues about the state of the machine. This thesis develops a realistic excavator model that mimics a mini-excavator’s dynamics and soil interaction during digging tasks. A realistic graphical interface is written that exceeds the quality of current academic simulators. The graphical interface and new HMI are combined with a model of the excavator’s mechanical and hydraulic dynamics into an operator workstation. Two coordinated control schemes are developed on a haptic display for a mini-excavator, and preliminary tests are run to measure increases in operator effectiveness and machine efficiency.
502

Haptic cinema: an art practice on the interactive digital media tabletop

Chenzira, Ayoka 31 January 2011 (has links)
Common thought about cinema calls to mind an audience seated in a darkened theatre watching projected moving images that unfold a narrative onto a single screen. Cinema is much more than this. There is a significant history of artists experimenting with the moving image outside of its familiar setting in a movie theatre. These investigations are often referred to as "expanded cinema". This dissertation proposes a genre of expanded cinema called haptic cinema, an approach to interactive narrative that emphasizes material object sensing, identification, and management; viewers' interaction with material objects; multisequential narrative; and the presentation of visual and audio information through multiple displays to create a sensorially rich experience for viewers. The interactive digital media tabletop is identified as one platform on which to develop haptic cinema. This platform supports a subgenre of haptic cinema called tabletop cinema. Expanded cinema practices are analyzed for their contributions to haptic cinema. Based on this theoretical and artistic research, the thesis claims that haptic cinema contributes to the historical development of expanded cinema and interactive cinema practices. I have identified the core properties of a haptic cinema practice during the process of designing, developing, and testing a series of haptic cinema projects. These projects build on and make use of methods and conventions from tangible interfaces, tangible narratives, and tabletop computing.
503

Design and Implementation of a Tool for Defining Navigation Flows by Means of Service Annotations (Konzeption und Umsetzung eines Werkzeugs zur Definition von Navigationsflüssen mittels Dienstannotationen)

Martens, Felix 25 October 2010 (has links) (PDF)
This diploma thesis presents an innovative, lightweight modeling approach for describing interactive, service-based applications on the basis of service annotations.
504

Overcoming Limitations of Serial Audio Search

Hidalgo, Isabela Cordeiro Ribeiro Moura 01 January 2012 (has links)
The typical approach for finding audio recordings, such as music and sound effects, in a database is to enter some textual information into a search field. The results appear summarized in a list of textual descriptions of the audio files along with a function for playing back the recordings. Exploring such a list sequentially is a time-consuming and tedious way to search for sounds. This research evaluates whether searching for audio information can become more effective with a user interface capable of presenting multiple audio streams simultaneously. A prototype audio player was developed with a user interface suitable for both search and browsing of a hierarchically organized audio collection. The audio recordings are presented either serially (serial output mode) or simultaneously (parallel output mode), spatially distributed in both vertical and horizontal planes. Users select individual recordings by simply pointing at their source locations with a remote control. Two within-subjects experiments were conducted to compare the performance of the audio player's output modes in audio search tasks. The experiments differ in the maximum number of audio recordings played simultaneously: either four or six. In both experiments, search tasks were performed about 25% faster using parallel audio output than using serial output. Over 80% of participants preferred searching with parallel output. The results indicate that using parallel output can be a valuable improvement to the current methods of audio search, which typically use only serial output.
505

Concepts of Application Development for and with Multi-Touch (Konzepte der Anwendungsentwicklung für und mit Multi-Touch)

Freitag, Georg 16 March 2015 (has links) (PDF)
With the advent of natural user interfaces, which aim at the most intuitive possible interaction with computers, the importance of the design aspects LOOK and FEEL of user interfaces is being renegotiated. For the design and development of new applications, this means rethinking existing process models, tools, and interactions, and re-examining them in light of the new challenges. The guiding principle of this work is the approach "like is developed with like", which engages with the research space of natural user interfaces using multi-touch technology as its concrete example. Along three aspects that build on one another (model, tool, and interaction), the special role of the FEEL is emphasized and discussed. The work concentrates in particular on the prototyping phase, in which new ideas are drafted and later developed further. It approaches the topic step by step, from the abstract to the concrete. First, a newly developed process model is presented in order to address the particularities of the FEEL in the development process of natural user interfaces. The model combines approaches from agile and classical models, with iteration and the development of prototypes taking a special position. Starting from this model, two areas of application are derived, which, in keeping with the guiding principle of the work, are filled with multi-touch tools to be designed. Particular emphasis is placed on putting the developer in the role of the user: the two activities of implementation and evaluation take place on the same device and flow seamlessly into one another.
While the concept TIQUID, created for the design stage, enables behavior and dependencies to be imitated by means of gesture-controlled animation, the concept LIQUID provides the developer with a visual programming language for implementing the FEEL. The two tools were evaluated in three independent application tests, which examined their place in the development process, their comparison with alternative tools, and the preferred kind of interaction. The results of the evaluations show that the goals set in advance (ease of use, fast and immediate presentation of the FEEL, and good operability via multi-touch input) were met and exceeded. The work concludes with a concrete examination of multi-touch interaction, which for developers and users is the interface to the FEEL of the application. Interaction with the multi-touch device, previously limited to touch, is extended in the last section of the work by a spatial aspect with the help of a novel approach. From this position, further perspectives emerge that contribute a new aspect to the understanding of user-oriented activities. This vision of new interaction concepts, tested by means of a technical implementation, serves as an incentive and starting point for future extensions of the previously developed process model and the designed tools. The state reached with this work offers a solid starting point for further investigation of the field of natural user interfaces. Besides numerous approaches that motivate deeper research, the work offers, with the very concrete implementations TIQUID and LIQUID and the extension of the interaction space, interfaces for transferring the research results into practice.
A continued investigation of the research space with the help of alternative approaches is just as conceivable as the use of an input technology alternative to multi-touch.
506

Are icons pictures or logographical words? Statistical, behavioral, and neuroimaging measures of semantic interpretations of four types of visual information

Huang, Sheng-Cheng 12 July 2012 (has links)
This dissertation is composed of three studies that use statistical, behavioral, and neuroimaging methods to investigate Chinese and English speakers’ semantic interpretations of four types of visual information including icons, single Chinese characters, single English words, and pictures. The goal is to examine whether people cognitively process icons as logographical words. By collecting survey data from 211 participants, the first study investigated how differently these four types of visual information can express specific meanings without ambiguity on a quantitative scale. In the second study, 78 subjects participated in a behavioral experiment that measured how fast people could correctly interpret the meaning of these four types of visual information in order to estimate the differences in reaction times needed to process these stimuli. The third study employed functional magnetic resonance imaging (fMRI) with 20 participants selected from the second study to identify brain regions that were needed to process these four types of visual information in order to determine if the same or different neural networks were required to process these stimuli. Findings suggest that 1) similar to pictures, icons are statistically more ambiguous than English words and Chinese characters to convey the immediate semantics of objects and concepts; 2) English words and Chinese characters are more effective and efficient than icons and pictures to convey the immediate semantics of objects and concepts in terms of people’s behavioral responses, and 3) according to the neuroimaging data, icons and pictures require more resources of the brain than texts, and the pattern of neural correlates under the condition of reading icons is different from the condition of reading Chinese characters. 
In conclusion, icons are not cognitively processed as logographical words like Chinese characters, although both stimulate the semantic system in the brain that is needed for language processing. Chinese characters and English words are more evolved and advanced symbols that are less ambiguous, more efficient, and easier for a literate brain to understand, whereas graphical representations of objects and concepts such as icons and pictures do not always provide immediate and unambiguous access to meanings and are prone to various interpretations.
507

Touchscreen interfaces for machine control and education

Kivila, Arto 20 September 2013 (has links)
The touchscreen user interface is an inherently dynamic device that is becoming ubiquitous. The touchscreen’s ability to adapt to the user’s needs makes it superior to more traditional haptic devices in many ways. Most touchscreen devices come with a very large array of sensors already included in the package. This gives engineers the means to develop human-machine interfaces that are very intuitive to use. This thesis presents research that was done to develop the best touchscreen interface for driving an industrial crane for novice users. To generalize the research, testing also determined how touchscreen interfaces compare to the traditional joystick in highly dynamic tracking situations using a manual tracking experiment. Three separate operator studies were conducted to investigate touchscreen control of cranes. The data indicate that touchscreen interfaces are superior to the traditional push-button control pendant and that the layout and function of the graphical user interface on the touchscreen play a role in the performance of the human operators. The touchscreen interface also holds great promise for allowing users to navigate through interactive textbooks. Therefore, this thesis also presents developments directed at creating the next generation of engineering textbooks. Nine widgets were developed for an interactive mechanical design textbook that is meant to be delivered via tablet computers. These widgets help students improve their technical writing abilities, introduce them to tools they can use in product development, and give them knowledge of how some dynamical systems behave. In addition, two touchscreen applications were developed to aid the judging of a mechanical design competition.
508

Task-Centric User Interfaces

Lafreniere, Benjamin J. January 2014 (has links)
Software applications for design and creation typically contain hundreds or thousands of commands, which collectively give users enormous expressive power. Unfortunately, rich feature sets also take a toll on usability. Current interfaces to feature-rich software address this dilemma by adopting menus, toolbars, and other hierarchical schemes to organize functionality—approaches that enable efficient navigation to specific commands and features, but do little to reveal how to perform unfamiliar tasks. We present an alternative task-centric user interface design that explicitly supports users in performing unfamiliar tasks. A task-centric interface is able to quickly adapt itself to the user’s intended goal, presenting relevant functionality and required procedures in task-specific customized interfaces. To achieve this, task-centric interfaces (1) represent tasks as first-class objects in the interface; (2) allow the user to declare their intended goal (or infer it from the user’s actions); (3) restructure the interface to provide step-by-step scaffolding for the current goal; and (4) provide additional knowledge and guidance within the application’s interface. Our inspiration for task-centric interfaces comes from a study we conducted, which revealed that a valid use case for feature-rich software is to perform short, targeted tasks that use a small fraction of the application’s full functionality. Task-centric interfaces provide explicit support for this use. We developed and tested our task-centric interface approach by creating AdaptableGIMP, a modified version of the GIMP image editor, and Workflows, an iteration on AdaptableGIMP’s design based on insights from a semi-structured interview study and a think-aloud study. 
Based on a two-session study of Workflows, we show that task-centric interfaces can successfully support a guided-and-constrained problem solving strategy for performing unfamiliar tasks, which enables faster task completion and reduced cognitive load as compared to current practices. We also provide evidence that task-centric interfaces can enable a higher-level form of application learning, in which the user associates tasks with relevant keywords, as opposed to low-level commands and procedures. This keyword learning has potential benefits for memorability, because the keywords themselves are descriptive of the task being learned, and scalability, because a few keywords can map to an arbitrarily complex set of commands and procedures. Finally, our findings suggest a range of different ways that the idea of task-centric interfaces could be further developed.
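The keyword learning described in this abstract, where a few task keywords map to an arbitrarily complex set of commands and procedures, can be sketched minimally. This is an illustrative assumption, not the actual data model of AdaptableGIMP or Workflows; the keywords and command names below are hypothetical.

```python
# Illustrative sketch: a user-facing task keyword resolving to an ordered
# command sequence, as in the task-centric idea above. Keywords and
# commands are hypothetical, not taken from AdaptableGIMP or Workflows.

TASK_INDEX = {
    "red-eye removal": ["select-eyes", "desaturate-reds", "merge-layers"],
    "vintage look": ["duplicate-layer", "sepia-filter", "add-vignette"],
}

def commands_for(keyword):
    """Resolve a task keyword to its command sequence (empty if unknown)."""
    return TASK_INDEX.get(keyword.lower(), [])

# One short, memorable keyword stands in for several low-level steps.
print(commands_for("Vintage Look"))
```

The design point the sketch illustrates is scalability: the user remembers the descriptive keyword, while the interface carries the mapping to however many low-level commands the task requires.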
509

Exploring user interface challenges in supporting activity-based knowledge work practices

Voida, Stephen 19 May 2008 (has links)
The venerable desktop metaphor is beginning to show signs of strain in supporting modern knowledge work. Traditional desktop systems were not designed to support the sheer number of simultaneous windows, information resources, and collaborative contexts that have become commonplace in contemporary knowledge work. Even though the desktop has been slow to evolve, knowledge workers still consistently manage multiple tasks, collaborate effectively among colleagues or clients, and manipulate information most relevant to their current task by leveraging the spatial organization of their work area. The potential exists for desktop workspaces to better support these knowledge work practices by leveraging the unifying construct of activity. Semantically-meaningful activities, conceptualized as a collection of tools (applications, documents, and other resources) within a social and organizational context, offer an alternative orientation for the desktop experience that more closely corresponds to knowledge workers' objectives and goals. In this research, I unpack some of the foundational assumptions of desktop interface design and propose an activity-centered model for organizing the desktop interface based on empirical observations of real-world knowledge work practice, theoretical understandings of cognition and activity, and my own experiences in developing two prototype systems for extending the desktop to support knowledge work. I formalize this analysis in a series of key challenges for the research and development of activity-based systems. In response to these challenges, I present the design and implementation of a third research prototype, the Giornata system, that emphasizes activity as a primary organizing principle in GUI-based interaction, information organization, and collaboration. I conclude with two evaluations of the system. 
First, I present findings from a longitudinal deployment of the system among a small group of representative knowledge workers; this deployment constitutes one of the first studies of how activity-based systems are adopted and appropriated in a real-world context. Second, I provide an assessment of the technologies that enable and those that pose barriers to the development of activity-based computing systems.
510

Assigning related categories to user queries

He, Miao. January 2006 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Department of Computer Science, Thomas J. Watson School of Engineering and Applied Science, 2006. / Includes bibliographical references.
