1

Group reaching over digital tabletops with digital arm embodiments

August 2014 (has links)
In almost all collaborative tabletop tasks, groups require coordinated access to the shared objects on the table’s surface. The social norms of close-proximity interaction, built up over years of working around other physical bodies, lead people to avoid interfering with one another (e.g., by not grabbing the same object simultaneously). However, some digital tabletop situations require indirect input (e.g., when using mice, or when supporting remote users). With indirect input, people are no longer physically embodied during their reaching gestures, so most systems provide digital embodiments – visual representations of each person – to give feedback both to the person who is reaching and to the other group members. Tabletop arm embodiments have been shown to support group interaction better than simpler visual designs, providing awareness of actions to the group. However, researchers and digital tabletop designers know little about how the design of digital arm embodiments affects the fundamental group tabletop interaction of reaching for objects. Therefore, in this thesis, we evaluate how people coordinate their interactions over digital tabletops when using different types of embodiments. Specifically, in a series of studies, we investigate how the visual design (what they look like) and the interaction design (how they work) of digital arm embodiments affect a group’s coordinative behaviours in an open-ended parallel tabletop task. We evaluated visual factors of size, transparency, and realism (through pictures and videos of physical arms), as well as interaction factors of input and augmentations (feedback of interactions), in both co-located and distributed settings. We found that visual design had little effect on a group’s ability to coordinate access to shared tabletop items, that embodiment augmentations are useful for supporting group coordination, and that there are large differences when a person is not physically co-present. Our results represent an initial exploration of digital arm embodiment design, providing guidelines for researchers and designers of the next generation of shared digital spaces.
2

Sketch Recognition on Mobile Devices

Lucchese, George 1987- 14 March 2013 (has links)
Sketch recognition allows computers to understand and model hand-drawn sketches and diagrams. Traditionally, sketch recognition systems have required a pen-based PC interface, but powerful mobile devices such as tablets and smartphones now provide a new platform for sketch recognition systems. We describe a new sketch recognition library, Strontium (SrL), that combines several existing sketch recognition libraries modified to run both on personal computers and on the Android platform. We analyzed the recognition speed and accuracy implications of performing low-level shape recognition on smartphones with touch screens. We found a large gap in recognition speed on mobile devices between recognizing simple shapes and more complex ones, suggesting that mobile sketch interface designers should limit the complexity of their sketch domains. We also found that a low sampling rate on mobile devices can reduce the recognition accuracy of complex and curved shapes. Despite this, we found no evidence that using a finger as an input implement decreases simple shape recognition accuracy. These results show that the same geometric shape recognizers developed for pen applications can be used in mobile applications, provided that developers keep shape domains simple and keep the input sampling rate as high as possible.
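To make the notion of low-level geometric shape recognition concrete, the sketch below shows the kind of test such recognizers run on a sampled stroke. It is a minimal, hypothetical illustration in Java; the class and method names are not part of SrL or the other libraries mentioned above, and the thresholds are arbitrary.

```java
import java.util.List;

/**
 * Minimal illustrative shape classifier: labels a sampled touch stroke as a
 * LINE or a CIRCLE by comparing simple geometric fit measures. Low sampling
 * rates (few points per stroke) directly weaken these tests, which is one
 * reason curved shapes suffer most on mobile hardware.
 */
public class SimpleShapeClassifier {

    public enum Shape { LINE, CIRCLE, UNKNOWN }

    /** One sampled touch point, as delivered by the touch screen. */
    public static class Point {
        public final float x, y;
        public Point(float x, float y) { this.x = x; this.y = y; }
    }

    public static Shape classify(List<Point> stroke) {
        if (stroke.size() < 8) return Shape.UNKNOWN; // too few samples to judge

        Point start = stroke.get(0);
        Point end = stroke.get(stroke.size() - 1);

        // Line test: path length stays close to the straight-line (chord) distance.
        double pathLength = 0;
        for (int i = 1; i < stroke.size(); i++) {
            pathLength += dist(stroke.get(i - 1), stroke.get(i));
        }
        double chord = dist(start, end);
        if (chord > 0 && pathLength / chord < 1.05) return Shape.LINE;

        // Circle test: stroke is closed and distances from the centroid vary little.
        double cx = 0, cy = 0;
        for (Point p : stroke) { cx += p.x; cy += p.y; }
        cx /= stroke.size();
        cy /= stroke.size();
        double meanR = 0;
        for (Point p : stroke) meanR += Math.hypot(p.x - cx, p.y - cy);
        meanR /= stroke.size();
        double var = 0;
        for (Point p : stroke) {
            double d = Math.hypot(p.x - cx, p.y - cy) - meanR;
            var += d * d;
        }
        var /= stroke.size();
        boolean closed = chord < 0.2 * pathLength;
        if (closed && Math.sqrt(var) < 0.15 * meanR) return Shape.CIRCLE;

        return Shape.UNKNOWN;
    }

    private static double dist(Point a, Point b) {
        return Math.hypot(a.x - b.x, a.y - b.y);
    }
}
```

Recognizers of this kind do roughly constant work per sample for straight lines, while curved and composite shapes need more points and more fitting passes, which is consistent with the speed gap reported above.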
3

3D from 2D touch

Holz, Christian January 2013 (has links)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices. 
/ Interaction with computers has been shaped for the last forty years largely by keyboard and mouse. New types of sensors now allow computers to recognize input through touch, speech, or 3D gesture sensing. Such novel forms of interaction are often grouped under the term "natural user interfaces" (NUIs). 2D NUIs have had their breakthrough above all on mobile devices; more than a billion such devices can be operated through touch input. 3D NUIs, however, have not yet established themselves on mobile platforms, because they primarily capture user input with cameras. Since cameras can only resolve images beyond a certain distance, they are unsuitable as sensors for a mobile platform. In this work we solve this problem with the help of 2D sensors, from whose input we reconstruct 3D information. Our prototypes record, at high resolution, the 2D imprints of the objects touching the sensor. From these imprints they derive the objects' texture. Based on the part of the object surface that touches the sensor, our prototypes then reconstruct the 3D orientation of the object. Besides our main contribution of 3D reconstruction, we present three applications of our method. (1) We present devices that measure touch input three times more accurately than existing devices, allowing users to reliably select targets three times smaller with their finger. We show that both the posture of the finger and the user themselves exert a systematic influence on the position measured by the sensor. Since existing devices recognize neither the finger posture nor the user, they perceive such variations as input inaccuracy. We present a model of touch input that integrates both factors to make the measured input positions more precise. We then investigate which mental model users apply when touching small targets with their finger. Our results point to a visual model, according to which users align features on the surface of their finger with the target. When touch input is analyzed with this model, nearly all of the systematic effects we observed before disappear. (2) Our prototypes identify users from the biometric features of their fingerprints. Our prototype Fiberio integrates a fingerprint scanner and a display into the same surface, thereby solving the long-standing problem of secure authentication on touchscreens. Together with the 3D reconstruction of input, this capability allows Fiberio to implement a range of applications that previously required the bulky setups of current 3D NUI systems. (3) To show the flexibility of our method, we implement it on a large, touch-sensitive floor that likewise records object textures at high resolution during input. Much as before, our system GravitySpace analyzes these imprints to identify users by their shoe soles, to recognize furniture on the floor, and to enable precise input through users' shoes. By classifying all imprints, the system detects which of the users' body parts are in contact with the floor. From the arrangement of these contacts, GravitySpace then infers the 3D body postures of all users. GravitySpace therefore has the potential to implement applications for future 3D NUI systems on a flat surface, such as smart rooms in future homes. We conclude this work with an outlook on future interactive devices, focusing on the mobility aspect of current developments and examining how future mobile NUI devices could support users directly in the form of implanted devices.
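To illustrate what compensating for the systematic, pose- and user-dependent touch offset can look like in code, here is a hypothetical sketch in Java. The class, fields, and linear form are illustrative assumptions made for this summary, not the actual model from the dissertation.

```java
/**
 * Hypothetical pose- and user-aware touch correction: the raw contact centroid
 * reported by the sensor is shifted by an offset that depends on the
 * reconstructed finger pitch/roll and on a per-user calibration. Coefficients
 * and structure are illustrative only.
 */
public class TouchOffsetModel {

    /** Per-user calibration, e.g. learned from a few touches on known targets. */
    public static class UserCalibration {
        public final float baseOffsetX, baseOffsetY; // px, user-specific bias
        public final float pitchGain, rollGain;      // px per degree of finger tilt
        public UserCalibration(float ox, float oy, float pg, float rg) {
            baseOffsetX = ox; baseOffsetY = oy; pitchGain = pg; rollGain = rg;
        }
    }

    /** Corrects a raw 2D contact centroid using the reconstructed 3D finger pose. */
    public static float[] correct(float rawX, float rawY,
                                  float pitchDeg, float rollDeg,
                                  UserCalibration user) {
        // Flatter fingers (low pitch) shift the sensed centroid further from the
        // intended target; the linear terms below stand in for that systematic shift.
        float dx = user.baseOffsetX + user.rollGain * rollDeg;
        float dy = user.baseOffsetY + user.pitchGain * (90f - pitchDeg);
        return new float[] { rawX - dx, rawY - dy };
    }
}
```

The point of the sketch is the structure of the correction: once the 2D imprint yields the finger's 3D pose and the user's identity, what existing devices treat as random noise becomes a predictable offset that can be subtracted.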
4

Descriptive Study on the Use of Bimanual and Same-hand Multifinger Interaction on a Multitouch Display

Zerega Bravo, Rafael January 2013 (has links)
Multitouch technology allows users to use both hands and multiple fingers to manipulate digital content directly on the screen. This paper analyzes the practical benefit of bimanual and multifinger manipulation on a multitouch display by conducting three observational experiments and studying how a group of volunteers use their hands and fingers when interacting with digital content on a touchscreen surface. In addition, the participants filled in a questionnaire giving additional insight into how they experienced the multitouch interface during the experiments. The results suggest that when participants performed tasks in which they were instructed to manipulate the digital content as fast as they could, a high percentage of them used at least some bimanual manipulation. However, when participants were told to perform the tasks calmly, the large majority moved the objects using only one hand (unimanually). Same-hand multifinger manipulation was also used by a high percentage of participants when moving several objects simultaneously. Nevertheless, in all three experiments the most common way of moving objects across the screen was dragging them one at a time (sequentially). Finally, regarding the participants' personal assessments, 70% felt that the possibility of bimanual interaction offered by a multitouch interface is a clear advantage over a traditional keyboard and mouse. However, 40% of the respondents felt that the mouse is still a more effective and natural form of interaction than multitouch technology.
5

Letecká hra pro Android / Flight Game for Android

Šabata, David January 2013 (has links)
This work deals with flight game development on the Android platform. First, the possibilities of native development and of development using the Libgdx library are discussed. Then the flight mechanics of a real aircraft and the simplified mechanics used in flight games are explained. The work also summarizes current trends in mobile flight game controls and proposes a new control method based on touch input. Using this method, a flight game is designed and implemented. Finally, the testing and publishing process is discussed, together with possibilities for further development.
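As a point of reference for what simplified flight mechanics driven by touch input can look like, the following is a minimal, hypothetical sketch in plain Java (in a Libgdx-based game the drag deltas would come from the framework's input handling). It illustrates the general idea only and is not the control method proposed in this work.

```java
/**
 * Minimal "arcade" flight model: a touch drag commands pitch and roll, banking
 * turns the aircraft, and forward speed is constant. Real aircraft mechanics
 * (lift, drag, thrust, stall) are deliberately omitted.
 */
public class SimpleFlightModel {

    // Aircraft state
    private float x, y, z;          // world position (metres)
    private float pitch, roll, yaw; // orientation (radians)
    private float speed = 50f;      // constant forward speed

    /**
     * @param dragDx horizontal touch-drag delta for this frame, normalized to -1..1
     * @param dragDy vertical touch-drag delta for this frame, normalized to -1..1
     * @param dt     frame time in seconds
     */
    public void update(float dragDx, float dragDy, float dt) {
        // Direct mapping: vertical drag commands pitch, horizontal drag commands roll.
        pitch += dragDy * 1.5f * dt;
        roll  += dragDx * 2.0f * dt;

        // Arcade simplification: bank angle steers the heading instead of a full
        // force model.
        yaw += (float) Math.sin(roll) * dt;

        // Integrate position from heading, pitch, and constant speed.
        float cosPitch = (float) Math.cos(pitch);
        x += speed * dt * cosPitch * (float) Math.sin(yaw);
        z += speed * dt * cosPitch * (float) Math.cos(yaw);
        y += speed * dt * (float) Math.sin(pitch);
    }
}
```

A mapping of this kind keeps the entire control surface on the touch screen, which is the design space a touch-based control method for mobile flight games has to work within.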
