91

A reconfigurable tactile display based on polymer MEMS technology

Wu, Xiaosong 25 March 2008 (has links)
This research focuses on the development of polymer microfabrication technologies for the realization of two major components of a pneumatic tactile display: a microactuator array and a complementary microvalve (control) array. The concept, fabrication, and characterization of a kinematically-stabilized polymeric microbubble actuator (“endoskeletal microbubble actuator”) were presented. A systematic design and modeling procedure was carried out to generate an optimized geometry of the corrugated diaphragm to satisfy membrane deflection, force, and stability requirements set forth by the tactile display goals. A refreshable Braille cell as a tactile display prototype has been developed based on a 2x3 endoskeletal microbubble array and an array of commercial valves. The prototype can provide both a static display (which meets the displacement and force requirement of a Braille display) and vibratory tactile sensations. Along with the above capabilities, the device was designed to meet the criteria of lightness and compactness to permit portable operation. The design is scalable with respect to the number of tactile actuators while still being simple to fabricate. In order to further reduce the size and cost of the tactile display, a microvalve array can be integrated into the tactile display system to control the pneumatic fluid that actuates the microbubble actuator. A piezoelectrically-driven and hydraulically-amplified polymer microvalve has been designed, fabricated, and tested. An incompressible elastomer was used as a solid hydraulic medium to convert the small axial displacement of a piezoelectric actuator into a large valve head stroke while maintaining a large blocking force. The function of the microvalve as an on-off switch for a pneumatic microbubble tactile actuator was demonstrated. To further reduce the cost of the microvalve, a laterally-stacked multilayer PZT actuator has been fabricated using diced PZT multilayer, high aspect ratio SU-8 photolithography, and molding of electrically conductive polymer composite electrodes.
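For background only: the thesis's corrugated (endoskeletal) diaphragm is modeled in ways not reproduced here, but the starting point for any such design is the textbook small-deflection relation for a flat, clamped circular diaphragm under uniform pressure, sketched below. The symbols and the flat-plate assumption are standard plate theory, not the thesis's model.

```latex
w_0 = \frac{P a^4}{64 D}, \qquad D = \frac{E t^3}{12\,(1 - \nu^2)}
```

Here w_0 is the center deflection, P the applied pressure, a the diaphragm radius, t its thickness, E Young's modulus, and ν Poisson's ratio; the corrugated geometry in the thesis departs from this flat-plate baseline to meet the deflection, force, and stability targets stated above.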
92

Visual based finger interactions for mobile phones

Kerr, Simon 15 March 2010 (has links)
Vision-based technology such as motion detection has long been limited to the domain of powerful, processor-intensive systems such as desktop PCs and specialist hardware solutions. With the advent of much faster mobile phone processors and memory, a plethora of feature-rich software and hardware is being deployed onto the mobile platform, most notably onto high-powered devices called smart phones. Interaction interfaces such as touchscreens allow for improved usability but obscure the phone’s screen. Since the majority of smart phones are equipped with cameras, it has become feasible to combine their powerful processors, large memory capacity and the camera to support new ways of interacting with the phone which do not obscure the screen. However, it is not clear whether these processor-intensive visual interactions can in fact be run at an acceptable speed on current mobile handsets or whether they will offer the user a better experience than the number pad and direction keys present on the majority of mobile phones. A vision-based finger interaction technique is proposed which uses the back-of-device camera to track the user’s finger. This allows the user to interact with the mobile phone through mouse-based movements, gestures and steering-based interactions. A simple colour thresholding algorithm was implemented in Java, Python and C++. Various benchmarks and tests conducted on a Nokia N95 smart phone revealed that on current hardware and with current programming environments only native C++ yields results plausible for real-time interactions (a key requirement for vision-based interactions). It is also shown that different lighting levels and background environments affect the accuracy of the system, with background and finger contrast playing a large role. Finally, a user study was conducted to ascertain overall user satisfaction with keypad interactions versus the finger interaction techniques, concluding that the new finger interaction technique is well suited to steering-based interactions and, in time, mouse-style movements. Simple navigation is better suited to the directional keypad.
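As an illustration of the kind of colour-thresholding tracker the abstract describes, the sketch below segments a skin-coloured blob in one camera frame and returns its centroid. This is an assumption-laden stand-in, not the thesis's Java/Python/C++ implementation: the use of OpenCV, the HSV colour space, and the threshold values are all illustrative choices.

```python
# Minimal sketch (not the thesis's code): HSV colour thresholding to locate a finger.
import cv2
import numpy as np

def track_finger(frame_bgr):
    """Return the (x, y) centroid of the largest skin-coloured region, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60])          # assumed skin-tone bounds in HSV
    upper = np.array([25, 180, 255])
    mask = cv2.inRange(hsv, lower, upper)  # 255 where the pixel falls in range
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                      # no skin-coloured pixels found
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```

Feeding successive centroids into a smoothing filter would then yield the mouse-style cursor movement discussed above.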
93

Evaluation of Text-Based and Image-Based Representations for Moving Image Documents

Goodrum, Abby A. (Abby Ann) 08 1900 (has links)
Document representation is a fundamental concept in information retrieval (IR), and has been relied upon in textual IR systems since the advent of library catalogs. The reliance upon text-based representations of stored information has been perpetuated in conventional systems for the retrieval of moving images as well. Although newer systems have added image-based representations of moving image documents as aids to retrieval, there has been little research examining how humans interpret these different types of representations. Such basic research has the potential to inform IR system designers about how best to aid users of their systems in retrieving moving images. One key requirement for the effective use of document representations in either textual or image form is the degree to which these representations are congruent with the original documents. A measure of congruence is the degree to which human responses to representations are similar to responses produced by the document being represented. The aim of this study was to develop a model for the representation of moving images based upon human judgements of representativeness. The study measured the degree of congruence between moving image documents and their representations, both text and image based, in a non-retrieval environment with and without task constraints. Multidimensional scaling (MDS) was used to examine the dimensional dispersions of human judgements for the full moving images and their representations.
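To make the MDS step concrete, the sketch below recovers a two-dimensional configuration from a small dissimilarity matrix of the sort that human judgements of representativeness could produce. The matrix values and the use of scikit-learn are illustrative assumptions; the study's own stimuli and analysis are not reproduced here.

```python
# Hedged sketch: MDS on precomputed human dissimilarity judgements.
import numpy as np
from sklearn.manifold import MDS

# Invented pairwise dissimilarities among four stimuli (symmetric, zero diagonal).
dissimilarities = np.array([
    [0.0, 2.0, 3.5, 4.0],
    [2.0, 0.0, 1.5, 3.0],
    [3.5, 1.5, 0.0, 2.5],
    [4.0, 3.0, 2.5, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)  # one 2-D point per stimulus
print(coords)  # inspect how the stimuli disperse along the recovered dimensions
```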
94

Role-dependent information displaying design and implementation using augmented reality

Sánchez Montoya, Trinidad January 2021 (has links)
This thesis project aims to study the design of an Augmented Reality solution for industry that benefits the user. Three industry-related roles are studied individually: the shop floor worker, the line manager, and the maintenance worker. To do this, two different information display approaches are implemented, analyzed, and compared from a cognitive ergonomics perspective: having access to a personal source of information, and having distributed sources of information linked to the working space. Insight into how the different design decisions can affect the outcome of the Augmented Reality system in each case is extracted from performance, usability, perceived workload, and user experience evaluations of each of the approaches for each of the roles. The design and creation methodology for information systems and computing research is followed for this project. A literature review is performed in order to define and understand the addressed problem, and solutions are proposed in an iterative process, culminating in the implementation of the final idea, which is then evaluated by a group of test subjects. These evaluations target cognitive ergonomics assessments of the different design approaches, and their results are collected and analyzed in order to draw conclusions and present the project’s findings. The obtained results point to a distinct set of strengths and weaknesses for each of the Augmented Reality approaches implemented for each of the industry roles considered. For the shop floor and maintenance workers, distributed approaches to information display can be more exciting and engaging, but they can also increase task completion time in comparison to information displayed in personal panels. The results suggest, however, that line managers may benefit more from the use of personal panels. / There is additional digital material (e.g. film, image, or audio files) or models/artifacts belonging to this thesis that is to be sent to the archive.
95

A Cross-Culture Study of Color Preferences on a Computer Screen Between Thai and American Students

Whattananarong, Krisana 05 1900 (has links)
The purpose of this investigation was to determine the color preferences of Thai and American students for text and background computer color combinations, and in particular whether those preferences differed between the two groups.
96

Universal graph literacy: understanding how blind and low vision students can satisfy the common core standards with accessible auditory graphs

Davison, Benjamin Kenneth 08 April 2013 (has links)
Auditory graphs and active point estimation provide an inexpensive, accessible alternative for low vision and blind K-12 students using number lines and coordinate graphs. In the first phase of this research program, a series of four psychophysics studies demonstrated an interactive auditory number line that enables blind, low vision, and sighted people to find small targets with a laptop, headphones, and a mouse or a keyboard. The Fitts' Law studies showed that, given appropriate auditory feedback, blind people can use a mouse. In addition, auditory feedback can generate target response patterns similar to those produced with visual feedback. Phase two introduced SQUARE, a novel method for building accessible alternatives to existing education technologies. The standards-driven and teacher-directed approach generated 17 graphing standards for sixth grade mathematics, all of which emphasized point estimation. It also showed that only a few basic behavioral components are necessary for these graphing problems. The third phase evaluated active point estimation tools in terms of training, classroom situations, and a testing situation. This work shows that students can learn to graph in K-12 environments, regardless of their visual impairment. It also provides several technologies for graphing and methods to further develop education accessibility research.
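The sketch below illustrates the basic sonification idea behind an interactive auditory number line: the cursor position is mapped to pitch so the listener can home in on a target by ear. The frequency range, logarithmic mapping, and tone length are illustrative assumptions, not the parameters used in the studies above.

```python
# Hedged sketch: map a number-line position to a short sine tone.
import numpy as np

def position_to_tone(position, lo=0.0, hi=100.0,
                     f_min=220.0, f_max=880.0,
                     duration=0.2, sample_rate=44100):
    """Return a mono sample buffer whose pitch rises with the cursor position."""
    frac = (position - lo) / (hi - lo)        # 0.0 at the left edge, 1.0 at the right
    freq = f_min * (f_max / f_min) ** frac    # logarithmic pitch mapping
    t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
    return np.sin(2.0 * np.pi * freq * t)     # hand this buffer to any audio output API

tone = position_to_tone(37.0)  # e.g. cursor at 37 on a 0-100 number line
```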
97

The buzz: supporting extensively customizable information awareness applications

Eagan, James R. 25 August 2008 (has links)
Increasingly abundant access to data and falling display technology costs are creating an exciting opportunity to create new information awareness tools that can present information calmly through peripheral and ambient interfaces. These tools offer the potential to help people better manage their attention and to avoid information overload. Different people, however, have distinct information needs, and customizing these systems is often difficult. Existing interfaces typically provide too coarse or too fine a granularity of customization, resulting in tools that are too rigid or too difficult to configure. We present an extensively customizable information awareness system, The Buzz, that supports end users, tinkerers, and developers in using, modifying, creating, and sharing powerful and flexible customizations. These customizations are powerful in the sense that the user can control abstract behaviors of the system, and flexible in the sense that the complexity of the customization can vary with the power needed to express it. We further chart the broader information awareness customization space through the lens of existing customizable information tools. Through this analysis, we show that this system provides more extensive customization capabilities than other customizable awareness applications, without requiring significant programming.
98

The screen as boundary object in the realm of imagination

Lee, Hyun Jean 09 January 2009 (has links)
As an object at the boundary between virtual and physical reality, the screen exists both as a displayer and as a thing displayed, thus functioning as a mediator. The screen's virtual imagery produces a sense of immersion in its viewer, yet at the same time the materiality of the screen produces a sense of rejection from the viewer's complete involvement in the virtual world. The experience of the screen is thus an oscillation between these two states of immersion and rejection. Nowadays, as interactivity becomes a central component of the relationship between viewers and many artworks, the viewer experience of the screen is changing. Unlike the screen experience in non-interactive artworks, such as the traditional static screen of painting or the moving screen of video art in the 1970s, interactive media screen experiences can provide viewers with a more immersive, immediate, and therefore, more intense experience. For example, many digital media artworks provide an interactive experience for viewers by capturing their face or body though real-time computer vision techniques. In this situation, as the camera and the monitor in the artwork encapsulate the interactor's body in an instant feedback loop, the interactor becomes a part of the interface mechanism and responds to the artwork as the system leads or even provokes them. This thesis claims that this kind of direct mirroring in interactive screen-based media artworks does not allow the viewer the critical distance or time needed for self-reflection. The thesis examines the previous aesthetics of spatial and temporal perception, such as presentness and instantaneousness, and the notions of passage and of psychological perception such as reflection, reflexiveness and auratic experience, looking at how these aesthetics can be integrated into new media screen experiences. Based on this theoretical research, the thesis claims that interactive screen spaces can act as a site for expression and representation, both through a doubling effect between the physical and virtual worlds, and through manifold spatial and temporal mappings with the screen experience. These claims are further supported through exploration of screen-based media installations created by the author since 2003.
99

Video anatomy : spatial-temporal video profile

Cai, Hongyuan 31 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A massive number of videos are uploaded to video websites, so smooth video browsing, editing, retrieval, and summarization are in demand. Most videos employ several types of camera operations to expand the field of view, emphasize events, and express cinematic effects. To digest the heterogeneous videos in video websites and databases, video clips are profiled into a 2D image scroll containing both spatial and temporal information for video preview. The video profile is visually continuous, compact, scalable, and provides an index to each frame. This work analyzes camera kinematics including zoom, translation, and rotation, and categorizes camera actions as their combinations. An automatic video summarization framework is proposed and developed. After conventional video clip segmentation and video segmentation for smooth camera operations, the global flow field under all camera actions is investigated for profiling various types of video. A new algorithm has been designed to extract the major flow direction and convergence factor using condensed images. This work then proposes a uniform scheme to segment video clips and sections, sample the video volume across the major flow, and compute the flow convergence factor in order to obtain an intrinsic scene space less influenced by camera ego-motion. A motion blur technique has also been used to render dynamic targets in the profile. The resulting video profile can be displayed in a video track to guide access to video frames, help video editing, and facilitate applications such as surveillance, visual archiving of environments, video retrieval, and online video preview.
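As a rough stand-in for the flow analysis described above, the sketch below estimates a global motion vector between two consecutive grayscale frames with dense optical flow; averaging the per-pixel vectors gives a crude major flow direction. The use of OpenCV's Farneback flow and these parameter values are assumptions for illustration; the thesis's condensed-image algorithm and convergence factor are not reproduced here.

```python
# Hedged sketch: dominant flow direction between two frames via dense optical flow.
import cv2
import numpy as np

def dominant_flow(prev_gray, next_gray):
    """Return (mean_dx, mean_dy), the average per-pixel motion over the frame."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    return float(flow[..., 0].mean()), float(flow[..., 1].mean())
```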
100

Advancing profiling sensors with a wireless approach

Galvis, Alejandro 20 November 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In general, profiling sensors are low-cost crude imagers that typically utilize a sparse detector array, whereas traditional cameras employ a dense focal-plane array. Profiling sensors are of particular interest in applications that require classification of a sensed object into broad categories, such as human, animal, or vehicle. However, profiling sensors have many other applications in which reliable classification of a crude silhouette or profile produced by the sensor is of value. The notion of a profiling sensor was first realized by a Near-Infrared (N-IR), retro-reflective prototype consisting of a vertical column of sparse detectors. Alternative arrangements of detectors have been implemented in which a subset of the detectors has been offset from the vertical column and placed at arbitrary locations along the anticipated path of the objects of interest. All prior work with the N-IR, retro-reflective profiling sensors has used wired detectors. This thesis surveys that prior work and advances it with a wireless profiling sensor prototype in which each detector is a wireless sensor node and the aggregation of these nodes comprises the profiling sensor’s field of view. In this novel approach, a base station pre-processes the data collected from the sensor nodes, including data realignment, prior to classification by a back-propagation neural network. Such a wireless detector configuration advances deployment options for N-IR, retro-reflective profiling sensors.
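To ground the classification step, the sketch below trains a small back-propagation network (scikit-learn's MLPClassifier) on binary profile vectors of the kind a sparse detector array might aggregate at the base station. The 16-element profiles, random labels, and network size are invented for illustration only; they are not the thesis's data or architecture.

```python
# Hedged sketch: classify crude sensor profiles with a back-propagation MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
profiles = rng.integers(0, 2, size=(200, 16))                  # toy binary detector readings
labels = rng.choice(["human", "animal", "vehicle"], size=200)  # toy ground truth

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
clf.fit(profiles, labels)         # back-propagation training
print(clf.predict(profiles[:3]))  # classify profiles into broad categories
```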
