21

OA-Graphs: Orientation Agnostic Graphs for improving the legibility of simple visualizations on horizontal displays

Alallah, Fouad Shoie 07 April 2011
Horizontal displays, such as tabletop systems, are emerging as the de facto platform for engaging participants in collaborative tasks. Despite significant efforts to improve the interactivity of information on such systems, very little research has examined how groups of people view data visualizations in these environments. Numerous studies have introduced techniques to support group viewing of visualizations, such as duplicating or reorienting the visual displays. However, when visualizations compete for pixels on the display, prior solutions do not work effectively. In this thesis, I explore whether orientation on horizontal displays affects the legibility of simple visualizations such as graphs. I found that users read a graph best when it is right side up, taking 20% less time than when reading it upside down. The main objective of this thesis was to investigate whether the readability and understandability of simple graphs can be improved. I introduce the Orientation Agnostic Graph (OA-Graph), which is legible regardless of orientation. The OA-Graph uses a radial layout with several useful properties: implicit orientation, points equidistant from the center, and flexible rearrangement. OA-Graphs perform better than graphs presented upside down. I have converted several popular types of graphs into their OA counterparts for improved legibility on tabletop systems. Guidelines are presented that describe how other visualizations can be made orientation agnostic.
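To make the radial idea concrete, here is a minimal sketch (my illustration, not the thesis's implementation; the exact mapping and all names are assumptions): a conventional x-y graph is wrapped around a circle, so the x axis becomes an angle and the y value a distance from the center, making every reading direction around a tabletop equivalent.

    import math

    def to_radial(points, r_min=1.0, r_max=2.0):
        """Map (x, y) samples of a line graph onto a radial layout.

        x spans the full circle (implicit orientation: any rotation is an
        equally valid reading); y controls distance from the center, so
        points with equal y stay equidistant from the center.
        """
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        x_lo, x_hi = min(xs), max(xs)
        y_lo, y_hi = min(ys), max(ys)
        radial = []
        for x, y in points:
            theta = 2 * math.pi * (x - x_lo) / (x_hi - x_lo)          # x -> angle
            r = r_min + (r_max - r_min) * (y - y_lo) / (y_hi - y_lo)  # y -> radius
            radial.append((r * math.cos(theta), r * math.sin(theta)))
        return radial

    # Example: one period of a sine wave becomes a closed ring readable
    # from any seat around a tabletop display.
    wave = [(x / 10, math.sin(x / 10)) for x in range(63)]
    print(to_radial(wave)[:3])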
23

Wo ist das Gefühl?: Auf das Aussehen fokussierte Gestaltung interaktiver Anwendungen im frühen Entwicklungsprozess / Where is the Feel?: Appearance-focused design of interactive applications in the early development process

Freitag, Georg, Wacker, Markus 27 May 2014
"Das Programm sieht ja nicht nur gut aus, es macht auch genau das was ich will!" - solche oder ähnliche Aussagen liest man oft, wenn Software-Programme von Anwendern beurteilt werden. Was Nutzer damit beschreiben ist weitestgehend als Look & Feel einer Anwendung bekannt. Der Begriff Look bezieht sich dabei auf die visuellen Bestandteile der Anwendung, wie die genutzten Medienelemente und deren Layout. Das Themenfeld Feel umfasst das interaktive Verhalten einer Anwendung, die auf Eingaben des Nutzers reagiert (Feedback) oder bereits vorher Hinweise auf die eigene Verwendbarkeit gibt (Feed-Forward). Allgemein gilt, je interaktiver eine Anwendung, desto wichtiger ist das "Gefühl" im Look & Feel. Als Beispiel dienen die sogenannten natürlichen Benutzerschnittstellen (NUI), wie die sich in den letzten Jahren enorm verbreitende Form des Multi-Touches. Bei dieser interagiert der Nutzer direkt mit der Anwendung ohne separate Eingabegeräte als Vermittler seiner Aktionen. Eine weitere Charakteristik dieser Benutzerschnittstellen ist deren intuitive Verwendbarkeit. Dies bedeutet, dass sich während der Interaktion mit den Programmen deren Strukturen und Funktionen von selbst erschließen. Um dies zu gewährleisten ist die sorgsame Gestaltung des Feels von Beginn der Entwicklung an bedeutsam. Umso überraschender ist das Ergebnis unseres Vergleichs aktueller Prototyping-Werkzeuge für Benutzeroberflächen, die den Aspekt Feel oftmals nicht oder nur unzureichend berücksichtigen und stattdessen das Aussehen (Look) einer Anwendung fokussieren. In unserer kürzlich erschienenen Arbeit "Look without Feel - A Basal Gap in the Multi-Touch Prototyping Process", die wir auf der Konferenz "Mensch und Computer 2013" in Bremen präsentierten und die mit dem Honorable Mention Paper Award ausgezeichnet wurde, untersuchten wir diesen Sachverhalt für den Prototyping-Prozess von Multi-Touch Anwendungen genauer.
24

Toward semantic model generation from sketch and multi-touch interactions

Hsiao, Chih-Pin 07 January 2016
Designers usually start their design process by exploring and evolving their ideas rapidly through sketching, since this helps them make numerous attempts at creating, practicing, simulating, and representing ideas. The creativity inherent in solving ill-defined problems (Eastman, 1969) often emerges when designers explore potential solutions while sketching in the design process (Schön, 1992). When using computer programs such as CAD or Building Information Modeling (BIM) tools, designers often preplan tasks before executing commands instead of engaging in the process of designing. Researchers argue that these programs force designers to focus on how to use a tool (i.e., how to execute a series of commands) rather than how to explore a design, and thus hinder creativity in the early stages of the design process (Goel, 1995; Dorta, 2007). Since design and documentation work is now largely computer-generated with BIM software, transitions between ideas in sketches and those in digital CAD systems have become necessary. By employing sketch interactions, we argue that a computer system can provide a rapid, flexible, and iterative method to create 3D models with sufficient data to facilitate smooth transitions between designers' early sketches and BIM programs. This dissertation begins by describing modern design workflows and discussing the data that must be exchanged in the early stage of design. It then briefly introduces modern cognitive theories, including embodiment (Varela, Rosch, & Thompson, 1992), situated action (Suchman, 1986), and distributed cognition (Hutchins, 1995). Using these theories as lenses, it identifies problems in current CAD programs used in the early stage of the design process. After reviewing modern attempts, including sketch tools and design automation tools, we describe the design and implementation of a sketch and multi-touch program, SolidSketch, to facilitate and augment our ability to work on ill-defined problems in the early stage of design. SolidSketch is a parametric modeling program that enables users to construct 3D parametric models rapidly through sketch and multi-touch interactions. It combines the benefits of traditional design tools, such as physical models and pencil sketches (rapid, low-cost, and flexible methods), with the computational power offered by digital modeling tools such as CAD. To close the gap between modern BIM and traditional sketch tools, the models created with SolidSketch can be read by other BIM programs. We then evaluate the program through comparisons with commercial CAD programs and other sketch programs. We also report a case study in which participants used the system for their design explorations. Finally, we conclude with the potential impacts of this new technology and the next steps for ultimately bringing greater computational power to the early stages of design.
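As a rough illustration of what "parametric" buys over frozen geometry (a hypothetical sketch, not SolidSketch's actual code; every name is invented): a sketched outline is interpreted as a solid whose dimensions remain live parameters that downstream tools could still edit.

    from dataclasses import dataclass

    @dataclass
    class ParametricBox:
        """A solid kept as editable parameters rather than frozen geometry."""
        width: float
        depth: float
        height: float

        def vertices(self):
            # Geometry is derived on demand, so editing a parameter
            # regenerates the model instead of invalidating it.
            w, d, h = self.width, self.depth, self.height
            return [(x, y, z) for x in (0, w) for y in (0, d) for z in (0, h)]

    def box_from_stroke(stroke_points, height=3.0):
        """Interpret a sketched outline's bounding box as a parametric box."""
        xs = [p[0] for p in stroke_points]
        ys = [p[1] for p in stroke_points]
        return ParametricBox(width=max(xs) - min(xs),
                             depth=max(ys) - min(ys),
                             height=height)

    room = box_from_stroke([(0.1, 0.2), (5.9, 0.3), (6.0, 4.1), (0.0, 4.0)])
    room.height = 3.5          # still editable after creation
    print(room.vertices())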
25

Graphical User Interfaces for Multi-Touch Displays supporting Public Exploration and Guided Storytelling of Astronomical Visualizations / Grafiska användargränssnitt för multifunktionsdisplayer som stöder publik utforskning av astronomiska visualiseringar

Johansson, Hanna, Khullar, Sofie January 2018
This report presents the development and implementation of a graphical user interface (GUI) for multi-touch displays, as well as an application programming interface (API) for guided storytelling of astronomical visualizations. The GUI and the API are built using web technologies, and the GUI is rendered in an OpenGL environment. The API is meant to provide the infrastructure needed to create different stories for the public based on astronomical data. Both the resulting GUI and the API are developed so that they can be further extended and customized for different purposes.
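The abstract gives no API details; as a guess at what a "guided storytelling" layer over an astronomical renderer might look like, here is a hypothetical story definition and player. All names, fields, and functions are invented for illustration.

    # Hypothetical data model for a guided story: an ordered list of steps,
    # each pairing a camera target with narration.
    story = {
        "title": "A Tour of the Solar System",
        "steps": [
            {"focus": "Earth", "narration": "Our starting point.", "duration_s": 20},
            {"focus": "Mars",  "narration": "The next stop outward.", "duration_s": 30},
        ],
    }

    def play(story, fly_to):
        """Walk the story in order, delegating camera moves to the renderer."""
        for step in story["steps"]:
            fly_to(step["focus"], step["duration_s"])
            print(step["narration"])

    # Stand-in for the renderer's camera control:
    play(story, fly_to=lambda target, secs: print(f"flying to {target} over {secs}s"))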
26

A musculoskeletal model of the human hand to improve human-device interaction

January 2014
Multi-touch tablets and smart phones are now widely used in both workplace and consumer settings. Interacting with these devices requires hand and arm movements that are potentially complex and poorly understood. Experimental studies have revealed differences in performance that could potentially be associated with injury risk. However, the underlying causes of performance differences are often difficult to identify. For example, many patterns of muscle activity can produce similar behavioral output. Muscle activity is one factor contributing to tissue forces that could lead to injury, but experimental measurement of muscle activity and force in humans is extremely challenging. Models of the musculoskeletal system can be used to make specific estimates of neuromuscular coordination and musculoskeletal forces. However, existing models cannot easily describe complex, multi-finger gestures such as those used in multi-touch human-computer interaction (HCI) tasks. We therefore seek to develop a dynamic musculoskeletal simulation capable of estimating internal musculoskeletal loading during multi-touch tasks involving multiple digits of the hand, and to use the simulation to better understand complex multi-touch and gestural movements and potentially guide the design of technologies that reduce injury risk. To accomplish this, we focused on three specific tasks. First, we determined the optimal index-finger muscle attachment points within the established, validated OpenSim arm model, using measured moment arm data from the literature. Second, we derived moment arm values from experimentally measured muscle attachments and used these values to determine muscle-tendon paths for both extrinsic and intrinsic muscles of the middle, ring, and little fingers. Third, we explored differences in hand muscle activation patterns during zooming and rotating tasks on a tablet computer in twelve subjects. This musculoskeletal hand model will help address neuromuscular coordination, safe gesture performance, and internal loading for multi-touch applications. / Doctoral Dissertation, Mechanical Engineering, 2014
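The moment-arm derivations the abstract mentions are commonly done with the tendon-excursion method, in which the moment arm is the negative derivative of musculotendon length with respect to joint angle, r(theta) = -dL/dtheta. A minimal numerical sketch of that relationship (illustrative only; in the dissertation's setting L(theta) would come from the OpenSim muscle path, and the toy length function below is invented):

    import math

    def moment_arm(musculotendon_length, theta, h=1e-5):
        """Tendon-excursion method: r(theta) = -dL/dtheta, estimated with
        a central finite difference."""
        dL = musculotendon_length(theta + h) - musculotendon_length(theta - h)
        return -dL / (2 * h)

    # Toy length function for a tendon crossing a joint; a stand-in for a
    # modeled muscle path, not real anatomy.
    def example_length(theta):
        return 0.30 - 0.02 * math.sin(theta)

    print(moment_arm(example_length, math.radians(45)))  # ~0.0141 m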
27

Braille-based Text Input for Multi-touch Screen Mobile Phones

Fard, Hossein Ghodosi, Chuangjun, Bie January 2011
"The real problem of blindness is not the loss of eyesight. The real problem is the misunderstanding and lack of information that exist. If a blind person has proper training and opportunity, blindness can be reduced to a physical nuisance." - National Federation of the Blind (NFB). The multi-touch screen is a relatively new and revolutionary technology in the mobile phone industry. Being largely software driven makes these phones highly customizable for all sorts of users, including blind and visually impaired people. In this research, we present new interface layouts for multi-touch screen mobile phones that enable blind people to enter text in the form of Braille cells. Braille is the only way for these users to read and write directly, without help from extra assistive instruments, so it is more convenient and engaging for them to interact with new technologies in their own writing system, Braille. We started with a literature review of existing eyes-free text entry methods and text input devices to find their strengths and weaknesses, aiming to identify the difficulties blind people face with current text entry methods. We then conducted questionnaire surveys as the quantitative part and interviews as the qualitative part of our user study to learn users' needs and expectations. In parallel, we studied Braille in detail and examined the feedback available on current multi-touch mobile phones. At the design stage, we first investigated different possible ways of entering a Braille cell on a multi-touch screen, considering the available input techniques and the structure of Braille. We then developed six alternatives for entering Braille cells on the device, laid out a mockup for each, and documented them using the Gestural Modules Document and Swim Lanes techniques. Next, we prototyped our designs and evaluated them with real users using the Pluralistic Walkthrough method. We then refined our models and selected the two best as the main results of this project, based on good gestural-interface principles and user feedback. Finally, we discussed the usability of the selected methods in comparison with the method visually impaired users currently use to enter text on the most popular multi-touch screen mobile phone, the iPhone. Our selected designs show potential to improve the efficiency and accuracy of existing text entry methods on multi-touch screen mobile phones for Braille-literate people, and they can serve as guidelines for creating other multi-touch input devices for entering Braille on devices such as computers.
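For context on what a "Braille cell" is as input: a cell is a 2x3 grid of dots, numbered 1-3 down the left column and 4-6 down the right, and each character is a subset of raised dots. A minimal sketch of decoding simultaneous touches into a letter (the touch-region-to-dot assignment is my assumption, not one of the thesis's six designs):

    # Letters a-j as Braille dot sets (dots 1-3 left column, 4-6 right).
    BRAILLE = {
        frozenset({1}): "a",          frozenset({1, 2}): "b",
        frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
        frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
        frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
        frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
    }

    def decode(touched_dots):
        """Map the set of simultaneously touched dot positions to a letter."""
        return BRAILLE.get(frozenset(touched_dots), "?")

    print(decode({1, 2, 5}))  # -> "h"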
28

Multi-touch in control systems : Two case studies

Nord, Malin, Vestgöte, Henrik January 2010
During the last thirty years, the progress of multi-touch technology has been a hot topic of discussion. Despite this, it has not been deployed in anything more advanced than commercials, games, and illustrations. We believe the time has come for the technology to move into broader and more advanced fields; it should even be feasible to introduce multi-touch technology into critical environments such as control rooms. Two project-based case studies, each involving multi-touch in a different way, are described and discussed. The first case study covers the introduction of a Microsoft Surface as a collaboration tool in a control room environment. A prototype was built and evaluated to see how well it could work in a stressful and complex setting where collaboration between colleagues is vital. The second case study describes the development and possible deployment of a smaller multi-touch screen that would serve as an extra input to the control system. Its purpose is to ease navigation in the control system for operators, reducing their cognitive load and making the control room a more comfortable workplace. The research for the case studies was based on interviews with operators and developers. From the results, application methods and designs were developed, and prototypes were constructed from the best of them. The prototypes were then analyzed and tested for later evaluation and discussion, to see whether the new multi-touch prototypes would function well in a control system. The objective of this thesis is to introduce multi-touch technology into control systems.
30

Setpad: A Sketch-based Tool For Exploring Discrete Math Set Problems

Cossairt, Travis 01 January 2012
We present SetPad, a new application prototype that lets computer science students explore discrete math problems by sketching set expressions using pen-based input. Students can manipulate the expressions interactively via a pen or multi-touch interface. Likewise, discrete mathematics instructors can use SetPad to display and work through set problems via a projector, better demonstrating solutions to students. We discuss the implementation and feature set of the application, as well as results from an informal perceived-usefulness evaluation with students taking a computer science foundation exam and a formal user study measuring the effectiveness of the tool in solving set proof problems. The results indicate that SetPad was well received, allows for efficient solutions to proof problems, and has the potential for a positive impact when used either as an individual student application or as an instructional tool.
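The set expressions SetPad manipulates map directly onto executable semantics. As a plain illustration of the kind of identity students explore with such a tool (ordinary Python, not SetPad code), here is one of the De Morgan laws checked on a concrete universe:

    # complement(A | B) == complement(A) & complement(B)
    U = set(range(10))          # universe
    A = {1, 2, 3, 4}
    B = {3, 4, 5, 6}

    lhs = U - (A | B)           # complement of the union
    rhs = (U - A) & (U - B)     # intersection of the complements
    print(lhs == rhs)           # True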
