1

Space to Think: Sensemaking and Large, High-Resolution Displays

Andrews, Christopher 14 September 2011 (has links)
Display technology has developed significantly over the last decade, and it is becoming increasingly feasible to construct large, high-resolution displays. Prior work has shown a number of key performance advantages for these displays that can largely be attributed to the replacement of virtual navigation (e.g., panning and zooming) with physical navigation (e.g., moving, turning, glancing). This research moves beyond the question of performance or efficiency and examines ways in which the large, high-resolution display can support the cognitively demanding task of sensemaking. The core contribution of this work is to show that the physical properties of large, high-resolution displays create a fundamentally different environment from conventional displays, one that is inherently spatial, resulting in support for a greater range of embodied resources. To support this, we describe a series of studies that examined the process of sensemaking on one of these displays. These studies illustrate how the display becomes a cognitive partner of the analyst, encouraging the use of the space for the externalization of the analyst's thought process or findings. We particularly highlight how the flexibility of the space supports the use of incremental formalism, a process of gradually structuring information as understanding grows. Building on these observations, we have developed a new sensemaking environment called Analyst's Workspace (AW), which makes use of a large, high-resolution display as a core component of its design. The primary goal of AW is to provide an environment that unifies the activities of foraging and synthesis into a single investigative thread. AW addresses this goal through the use of an integrated spatial environment in which full text documents serve as primary sources of information, investigative tools for pursuing leads, and sensemaking artifacts that can be arranged in the space to encode information about relationships between events and entities. This work also provides a collection of design principles that emerged from the development of AW, and that we hope can guide future development of analytic tools on large, high-resolution displays. / Ph. D.
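As a rough illustration of the kind of spatial environment this abstract describes, documents on the display can be modeled as cards with free-form positions plus explicit links derived from shared entities. The following Python sketch is hypothetical; the class and field names are assumptions, not code from Analyst's Workspace:

```python
from dataclasses import dataclass, field


@dataclass
class DocumentCard:
    """A full-text document placed somewhere on the large display."""
    doc_id: str
    text: str
    x: float = 0.0          # position on the display wall, in pixels
    y: float = 0.0
    entities: set = field(default_factory=set)   # people, places, dates mentioned in the text


@dataclass
class Workspace:
    """A flat spatial canvas: position and proximity carry meaning, not folders."""
    cards: dict = field(default_factory=dict)     # doc_id -> DocumentCard
    links: list = field(default_factory=list)     # (doc_id_a, doc_id_b, shared_entity)

    def add(self, card: DocumentCard) -> None:
        self.cards[card.doc_id] = card

    def move(self, doc_id: str, x: float, y: float) -> None:
        # Incremental formalism: the analyst restructures the space as
        # understanding grows, without committing to a schema up front.
        self.cards[doc_id].x, self.cards[doc_id].y = x, y

    def link_shared_entities(self) -> None:
        # Derive explicit links between documents that mention the same entity.
        cards = list(self.cards.values())
        for i, a in enumerate(cards):
            for b in cards[i + 1:]:
                for entity in a.entities & b.entities:
                    self.links.append((a.doc_id, b.doc_id, entity))
```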
2

A Multiscale Interaction Technique for Large, High-Resolution Displays

Peck, Sarah M. 08 July 2008 (has links)
The decreasing price of displays has enabled exploration of ever-larger high-resolution displays. Previous research has shown that as the display grows larger, users prefer to physically navigate, which has proven benefits. However, increasing the display size so radically creates a new difficulty in interaction. The paradigm has changed from sitting at a desktop computer to taking users' physical navigation into account and designing more mobile interactions. Currently, when users move, they change the scale at which they are viewing information without changing the interaction scale. This is a problem because tasks change at different levels of visual scale. Multiscale interaction aims to exploit users' movement by linking it to interaction, changing the interaction scale depending on users' distance from the display. This work accomplishes three things: first, we define the design space of multiscale interaction; second, through a case study, we explore the design issues for a specific area of the design space; and third, we evaluate one application through a user study that compares it to two other interaction types. We wanted to know whether users in fact benefit from the linkage of physical navigation with interaction. Results show a trend of a natural link between user distance and interaction scale, even with the other techniques that did not enforce this link. In addition, multiscale interaction benefits from the link by having more consistent performance. The results also show that while participants using multiscale interaction tend to move more, they benefit from this additional movement, unlike with the other interaction types. / Master of Science
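To make the multiscale idea concrete, the sketch below links a user's tracked distance from the display to an interaction scale factor, for example a selection cursor that is precise up close and coarse from across the room. It is a minimal illustration under assumed thresholds, not the technique implemented in the thesis:

```python
def interaction_scale(distance_m: float) -> float:
    """Map the user's tracked distance from the display (meters) to an
    interaction scale factor: fine-grained control up close, coarse
    control from afar. The thresholds are illustrative assumptions."""
    near, far = 0.5, 4.0                     # assumed working range in front of the wall
    d = min(max(distance_m, near), far)      # clamp to the working range
    return d / near                          # 1.0 up close, up to 8.0 at the back


def cursor_radius(distance_m: float, base_px: float = 8.0) -> float:
    """Example use: a selection cursor that grows with distance, so selections
    made from across the room are correspondingly coarse."""
    return base_px * interaction_scale(distance_m)


if __name__ == "__main__":
    for d in (0.5, 1.0, 2.0, 4.0):
        print(f"{d:.1f} m -> cursor radius {cursor_radius(d):.1f} px")
```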
3

Walk-Centric User Interfaces for Mixed Reality

Santos Lages, Wallace 31 July 2018 (has links)
Walking is a natural part of our lives and is also becoming increasingly common in mixed reality. Wireless headsets and improved tracking systems allow us to easily navigate real and virtual environments by walking. In spite of the benefits, walking brings challenges to the design of new systems. In particular, designers must be aware of cognitive and motor requirements so that walking does not negatively impact the main task. Unfortunately, those demands are not yet fully understood. In this dissertation, we present new scientific evidence, interaction designs, and analysis of the role of walking in different mixed reality applications. We evaluated the difference in performance between users walking and users manipulating a dataset during visual analysis. This is an important task, since virtual reality is increasingly being used as a way to make sense of increasingly complex datasets. Our findings indicate that neither option is absolutely better: the optimal design choice should consider both the user's experience with controllers and their inherent spatial ability. Participants with reasonable game experience and low spatial ability performed better using the manipulation technique. However, we found that walking can still enable higher performance for participants with low spatial ability and without significant game experience. In augmented reality, specifying points in space is an essential step in creating content that is registered with the world. However, this task can be challenging when information about the depth or geometry of the target is not available. We evaluated different augmented reality techniques for point marking that do not rely on any model of the environment. We found that triangulation by physically walking between points provides higher accuracy than purely perceptual methods. However, precision may be affected by head-pointing tremor. To increase the precision, we designed a new technique that uses multiple samples to obtain a better estimate of the target position. This technique can also be used to mark points while walking. The effectiveness of this approach was demonstrated with a controlled augmented reality simulation and actual outdoor tests. Moving into the future, augmented reality will eventually replace our mobile devices as the main method of accessing information. Nonetheless, to achieve its full potential, augmented reality interfaces must support the fluid way we move in the world. We investigated the potential of adaptation in achieving this goal. We conceived and implemented an adaptive workspace system, based on a study of the design space and on contextual user studies. Our final design consists of a minimal set of techniques to support mobility and integration with the real world. We also identified a set of key interaction patterns and desirable properties of adaptation-based techniques, which can be used to guide the design of next-generation walking-centered workspaces. / Ph. D. / Until recently, walking with virtual and augmented reality headsets was restricted by issues such as excessive weight, cables, and tracking limitations. As those limits go away, walking is becoming more common, making the user experience closer to the real world. If well explored, walking can also make some tasks easier and more efficient. Unfortunately, walking reduces our mental and motor performance, and its consequences for interface design are not fully understood.
In this dissertation, we present studies of the role of walking in three areas: scientific visualization in virtual reality, marking points in augmented reality, and accessing information in augmented reality. We show that although walking reduces our ability to perform those tasks, careful design can reduce its impact in a meaningful way.
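The triangulation idea described in the abstract above can be sketched as follows: two head-pointing rays are cast at the same target from different standing positions, the target is estimated as the midpoint of the rays' closest approach, and repeated samples are averaged to damp head-pointing tremor. This Python sketch illustrates the geometry only; the function names and the averaging scheme are assumptions, not the dissertation's implementation:

```python
import numpy as np


def triangulate(p1, d1, p2, d2):
    """Estimate a 3D point from two pointing rays (origin, direction) cast
    from different standing positions: midpoint of the rays' closest approach."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:                 # nearly parallel rays: baseline too short
        return (p1 + p2) / 2
    t = (b * e - c * d) / denom           # parameter along the first ray
    s = (a * e - b * d) / denom           # parameter along the second ray
    return ((p1 + t * d1) + (p2 + s * d2)) / 2


def triangulate_averaged(samples_a, samples_b):
    """Damp head-pointing tremor by averaging repeated (origin, direction)
    samples from each standing position before triangulating."""
    o_a = np.mean([o for o, _ in samples_a], axis=0)
    d_a = np.mean([d for _, d in samples_a], axis=0)
    o_b = np.mean([o for o, _ in samples_b], axis=0)
    d_b = np.mean([d for _, d in samples_b], axis=0)
    return triangulate(o_a, d_a, o_b, d_b)
```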
4

Personalized Interaction with High-Resolution Wall Displays

von Zadow, Ulrich 05 June 2018 (has links) (PDF)
Falling hardware prices and an increasing openness toward novel interaction modalities have made wall-sized interactive displays feasible in recent years, and their use has since been demonstrated successfully in areas such as visualization, education, and meeting support. Because of their size, wall displays are predestined for interaction with multiple users. At the same time, it can be assumed that access to personal data and settings, and thus personalized interaction, will remain an essential part of most use cases. Current desktop and mobile user interfaces regulate access via an initial login. The assumption of a single user per screen runs through the entire system and enables, among other things, access to personal data, communication, and personal settings. When several users share one large screen, alternatives must be found. The resulting research question of this dissertation is: How can we provide personalized interfaces in the context of multi-user interaction with wall-sized displays? The dissertation addresses personalized interaction both up close (with touch as the input modality) and at a somewhat greater distance (using additional mobile devices). The foundation for personalized multi-user interaction is technical solutions that associate users with individual interactions. Two alternatives are examined: in the first, users are tracked via camera, and in the second, mobile devices are located by means of ultrasound signals. Building on this, interaction techniques that support personalized interaction are presented. These use additional mobile devices that enable access to personal data as well as interaction at some distance from the display wall. A further part of the work investigates the practical implications of the output and interaction modalities for personalized interaction. For this, a qualitative study is presented that analyzes user behavior in the cooperative multi-user game Miners. The final contribution addresses the analysis process itself: the wall-interaction analysis toolkit GIAnT is presented, which visualizes user movements, interactions, and gaze directions and thereby greatly simplifies the investigation of the interactions. / An increasing openness toward more diverse interaction modalities as well as falling hardware prices have made very large interactive vertical displays more feasible, and consequently, applications in settings such as visualization, education, and meeting support have been demonstrated successfully. Their size makes wall displays inherently usable for multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will still be essential in most use cases. In most current desktop and mobile user interfaces, access is regulated via an initial login and the complete user interface is then personalized to this user: access to personal data, configurations, and communications all assumes a single user per screen. In the case of multiple people using one screen, this is not a feasible solution and we must find alternatives.
Therefore, this thesis addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as the input modality) and further away (using mobile devices). Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. This thesis explores two alternative means of user identification: tracking users with RGB+depth cameras and leveraging ultrasound positioning of the users' mobile devices. Building on this, techniques that support personalized interaction using personal mobile devices are proposed. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users, and in the second, SleeD, we propose using an arm-worn device to facilitate access to private data and personalized interface elements. Additionally, the work contributes insights on the practical implications of personalized interaction at wall displays: we present a qualitative study that analyses interaction using a multi-user cooperative game as the application case, finding awareness and occlusion issues. The final contribution is a corresponding analysis toolkit that visualizes users' movements, touch interactions, and gaze points when interacting with wall displays and thus allows fine-grained investigation of the interactions.
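As a minimal sketch of the user-identification idea above, each touch on the wall can be attributed to whichever tracked person stands closest to it, so personal data can be shown without a login. The positions might come from depth cameras or from ultrasound-located mobile devices; the function, names, and distance threshold below are assumptions, not the thesis's code:

```python
import math


def nearest_user(touch_x_m, tracked_users, max_dist_m=0.8):
    """Attribute a touch at horizontal wall position touch_x_m (meters) to the
    tracked person standing closest to it, if anyone is within max_dist_m.
    tracked_users maps user_id -> horizontal position along the wall (meters),
    as reported by a depth-camera or ultrasound tracking system."""
    best_id, best_dist = None, math.inf
    for user_id, user_x in tracked_users.items():
        dist = abs(user_x - touch_x_m)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist <= max_dist_m else None


if __name__ == "__main__":
    users = {"alice": 1.2, "bob": 3.9}          # positions along a 5 m wall
    print(nearest_user(1.0, users))              # -> 'alice'
    print(nearest_user(2.6, users))              # -> None (nobody close enough)
```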
