1 |
Comparison of Touchscreen and Physical Keyboard with Nomadic Text Entry. Ross, Michael Tyler. 07 May 2016.
Many research projects have compared standing text entry with nomadic text entry, and others have compared touchscreen and physical keyboard input while texting, but little literature compares the two input types across both standing and nomadic conditions. This research investigated differences in error rate and characters per minute for both input types under both text entry conditions. Two devices, the iPhone 4 and the BlackBerry Curve 9350, were used to type a phrase while standing and while walking, and both characters per minute and error rate were analyzed. The investigation showed no significant difference in error rate, but a significant difference in characters per minute: the touchscreen keyboard performed better in characters per minute and arguably performed better in accuracy.
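The two measures used here are standard text-entry metrics. As a rough illustration (not the study's own analysis code), characters per minute follows directly from the transcribed length and elapsed time, and error rate is commonly derived from the minimum string distance between the presented and transcribed phrases:

```python
# Sketch of standard text-entry metrics (not the study's analysis code).

def levenshtein(a: str, b: str) -> int:
    """Minimum string distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def characters_per_minute(transcribed: str, seconds: float) -> float:
    return len(transcribed) / (seconds / 60.0)

def error_rate(presented: str, transcribed: str) -> float:
    """MSD error rate: edit distance over the longer phrase length."""
    return levenshtein(presented, transcribed) / max(len(presented), len(transcribed))

# Example: one transposed pair ("brwon") counts as two substitutions.
presented, transcribed = "the quick brown fox", "the quick brwon fox"
print(round(characters_per_minute(transcribed, 22.0), 1))  # 51.8
print(round(error_rate(presented, transcribed), 3))        # 0.105
```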
|
2 |
Design and Evaluation of Three Alternative Keyboard Layouts for a Five-Key Text Entry Technique. Millet, Barbara. 17 December 2009.
Despite the increasing popularity of handheld devices, text entry on them is becoming more difficult as reduced form factors limit display size, input modes, and interaction techniques. To work within these constraints, research has found that five-key methods are effective for text entry on devices such as in-car navigation systems, television and gaming controllers, wristwatches, and other small devices. Five-key text entry methods use four directional keys to move a selector over an on-screen keyboard and an Enter key for selection. Although other researchers have described five-key character layouts using alphabetical order and predictive layouts based on digraph frequencies, there is considerable latitude in designing the rest of a comprehensive on-screen keyboard. Furthermore, it might be possible to capitalize on the relative strengths of the alphabetic and predictive layouts by combining them in a hybrid layout. Thus, this research examines the design of alternative keyboard layouts for five-key text entry techniques. Three keyboard layouts (Alphabetical, Predictive, and Hybrid) were selected to represent standard and less familiar arrangements. The analysis centered on a series of controlled experiments conducted on a research platform designed by the author. When the immediate usability of the three alternative layouts was investigated, results indicated no statistically significant differences in performance across the tested keyboards. Furthermore, after the immediate-usability phase but still at the onset of learning, there was no overall difference in performance among the three layouts across four text types. However, the Alphabetical keyboard surpassed both the Predictive and Hybrid keyboards in text entry speed when typing Web addresses, while the nonstandard keyboards outperformed the Alphabetical keyboard when typing Words/Spaces and Sentences and performed no better when typing Address strings. Mixed-effects modeling suggested that the longitudinal data were best fit by a quadratic model. Text entry performance on all three layouts improved as a function of practice, demonstrating that participants could learn the unfamiliar layouts to complete text entry tasks. Overall, there was no indication that use of nonstandard layouts impedes performance; in fact, the trend in the timing data suggests that learning rates were greater for the nonstandard keyboards than for the standard layout. Participants preferred the Hybrid layout overall. In summary, this dissertation focused on creating and validating novel and effective five-key text entry techniques for constrained devices.
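The five-key interaction model the dissertation builds on is simple to state in code. The sketch below is a minimal illustration, not the author's research platform, and the alphabetical grid is only one of the layouts studied:

```python
# Minimal sketch of five-key text entry: four directional keys move a
# selector over an on-screen grid, and Enter appends the highlighted
# character. The layout grid here is a hypothetical alphabetical one.

LAYOUT = [
    list("abcdefg"),
    list("hijklmn"),
    list("opqrstu"),
    list("vwxyz_."),
]

class FiveKeyEntry:
    def __init__(self, layout):
        self.layout = layout
        self.row, self.col = 0, 0
        self.text = []

    def press(self, key: str) -> None:
        rows, cols = len(self.layout), len(self.layout[0])
        if key == "up":
            self.row = (self.row - 1) % rows
        elif key == "down":
            self.row = (self.row + 1) % rows
        elif key == "left":
            self.col = (self.col - 1) % cols
        elif key == "right":
            self.col = (self.col + 1) % cols
        elif key == "enter":
            self.text.append(self.layout[self.row][self.col])

entry = FiveKeyEntry(LAYOUT)
for k in ["down", "right", "enter", "up", "enter"]:  # types "i" then "b"
    entry.press(k)
print("".join(entry.text))  # -> "ib"
```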
|
3 |
Comparison of Text Input and Interaction in a Mobile Learning Environment. Burrell, James. 01 January 2013.
Mobile computing devices are increasingly used to support learning activities outside the traditional classroom environment. The text input capabilities of these devices, however, limit how effectively they can support user interaction: as devices are miniaturized, traditional input and output methods become less efficient, making the continuous character selection and input needed to complete course exercises increasingly difficult.
This study investigated the design and performance of a prototype mobile text entry keyboard (MobileType) that orders keys by the linguistic frequency of character occurrence and increases key size to minimize visual search time and distance during character selection. The study compared the efficiency, effectiveness, and learning effects of the MobileType and QWERTY keyboard layouts on fixed-phrase and course-exercise text entry tasks in two separate evaluation sessions. A custom software application was developed for a tablet device to display the two keyboard interfaces and capture text entry interactions and timing information.
The results indicated that the QWERTY interface was faster (more efficient), while the MobileType interface was more accurate (more effective). In addition, the MobileType interface showed an observable increase in efficiency between the two task sessions, indicating that it was readily learnable. Future research is recommended to establish whether MobileType performance would continue to improve with further familiarization over multiple sessions, which would support its design as a possible alternative to the QWERTY text entry interface for mobile devices.
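As a rough sketch of the design idea behind MobileType (the actual layout and key geometry are not reproduced here), frequent characters can be assigned to the cheapest key slots, for example those nearest the keyboard's center, using assumed English letter frequencies:

```python
# Illustrative sketch only: letters sorted by assumed corpus frequency are
# assigned to grid slots ordered by their distance from the keyboard
# center, so the most frequent characters get the cheapest positions.

from math import hypot

# Assumed relative English letter frequencies (approximate, illustrative).
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

ROWS, COLS = 4, 7  # hypothetical grid; 26 letters plus two spare slots

# Rank each slot by distance from the grid center (cheapest slots first).
center = ((ROWS - 1) / 2, (COLS - 1) / 2)
slots = sorted(
    ((r, c) for r in range(ROWS) for c in range(COLS)),
    key=lambda rc: hypot(rc[0] - center[0], rc[1] - center[1]),
)

layout = [["" for _ in range(COLS)] for _ in range(ROWS)]
for letter, (r, c) in zip(FREQ_ORDER, slots):
    layout[r][c] = letter

for row in layout:
    print(" ".join(ch or "." for ch in row))
```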
|
4 |
EyeSwipe: text entry using gaze paths. Kurauchi, Andrew Toshiaki Nakayama. 30 January 2018.
People with severe motor disabilities may communicate using their eye movements, aided by a virtual keyboard and an eye tracker. Text entry by gaze may also benefit users immersed in virtual or augmented reality who have no access to a physical keyboard or touchscreen; thus, users both with and without disabilities may take advantage of the ability to enter text by gaze. However, methods for text entry by gaze are typically slow and uncomfortable. In this thesis we propose EyeSwipe as a step towards fast and comfortable text entry by gaze. EyeSwipe maps gaze paths into words, similarly to how finger traces are used in swipe-based methods for touchscreen devices. A gaze path differs from a finger trace in that it has no clear start and end positions; to segment the gaze path from the user's continuous gaze data stream, EyeSwipe requires the user to explicitly indicate its beginning and end. The user can quickly glance at the vicinity of the other characters that compose the word, and candidate words are ranked according to the gaze path and presented to the user. We discuss two versions of EyeSwipe. EyeSwipe 1 uses a deterministic gaze gesture called Reverse Crossing to select both the first and last letters of the word. Drawing on lessons learned from developing and testing EyeSwipe 1, we propose EyeSwipe 2, in which the user issues commands to the interface by switching focus between regions. In a text entry experiment comparing the two versions, 11 participants achieved an average text entry rate of 12.58 words per minute (wpm) with EyeSwipe 1 and 14.59 wpm with EyeSwipe 2 after using each method for 75 minutes; the maximum entry rates were 21.27 wpm and 32.96 wpm, respectively. Participants considered EyeSwipe 2 more comfortable and faster, though less accurate, than EyeSwipe 1. Additionally, with EyeSwipe 2 we propose using gaze path data to dynamically adjust gaze estimation; using data from the experiment, we show that gaze paths can be used to improve gaze estimation dynamically during the interaction.
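The core matching step, ranking candidate words by how well their ideal key-to-key paths fit the recorded gaze path, can be illustrated with a simplified sketch. This is a generic shape-matching approach, not EyeSwipe's actual ranking algorithm, and the key coordinates are hypothetical:

```python
# Simplified gaze-path word matching: score each candidate word by the
# mean distance between the resampled gaze path and the "ideal" path
# through the centers of the word's keys. Illustration only.

import numpy as np

# Hypothetical key centers in screen coordinates (three QWERTY rows).
KEY_CENTERS = {ch: np.array([i % 10 * 40.0, i // 10 * 60.0])
               for i, ch in enumerate("qwertyuiopasdfghjklzxcvbnm")}

def resample(path: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a polyline to n evenly spaced points along its length."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))]
    t = np.linspace(0, d[-1], n)
    return np.c_[np.interp(t, d, path[:, 0]), np.interp(t, d, path[:, 1])]

def ideal_path(word: str) -> np.ndarray:
    return np.array([KEY_CENTERS[ch] for ch in word])

def score(gaze_path: np.ndarray, word: str) -> float:
    a, b = resample(gaze_path), resample(ideal_path(word))
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

# Rank a small candidate lexicon for a noisy gaze path over "the";
# candidates are listed best-first.
rng = np.random.default_rng(0)
path = ideal_path("the") + rng.normal(scale=8.0, size=(3, 2))
lexicon = ["the", "thy", "tie", "toe", "she"]
print(sorted(lexicon, key=lambda w: score(path, w)))
```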
|
5 |
Improving Support of Conversations by Enhancing Mobile Computer Input. Lyons, Kenton Michael. 13 July 2005.
Mobile computing is becoming one of the most widely adopted technologies. There are 1.3 billion mobile phone subscribers worldwide, and the current generation of phones offers substantial computing ability. Furthermore, mobile devices are increasingly becoming integrated into everyday life. With the huge popularity of mobile computing, it is critical that we examine the human-computer interaction issues for these devices and explicitly explore supporting everyday activities. In particular, one very common and important activity of daily life I am interested in supporting is conversation. Depending on job type, office workers can spend up to 85% of their time in interpersonal communication.

In this work, I present two methods that improve a user's ability to enter information into a mobile computer in conversational situations. First, I examine the Twiddler, a keyboard that has been adopted by the wearable computing community. The Twiddler is a mobile one-handed chording keyboard with a keypad similar to a mobile phone's. The second input method is dual-purpose speech, a technique designed to leverage a user's conversational speech. A dual-purpose speech interaction is one where speech serves two roles: it is socially appropriate and meaningful in the context of a human-to-human conversation, and it provides useful input to a computer. A dual-purpose speech application listens to one side of a conversation and provides beneficial services to the user. Together, these input methods give a user the ability to enter information while engaged in conversation in a mobile setting.
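Chording, the Twiddler's input principle, maps each set of simultaneously pressed keys to one character. The chord table below is hypothetical, for illustration only, and is not the Twiddler's actual mapping:

```python
# Minimal illustration of chording-keyboard input: a chord is the set of
# keys held down at once, and each chord produces one character. The
# chord table is hypothetical, not the Twiddler's real layout.

CHORDS = {
    frozenset({"R1"}): "a",
    frozenset({"R2"}): "e",
    frozenset({"R1", "R2"}): "t",
    frozenset({"R1", "R3"}): "h",
    frozenset({"R2", "R3"}): "s",
}

def decode_chords(chord_sequence):
    """Translate a sequence of simultaneously pressed key sets into text."""
    return "".join(CHORDS.get(frozenset(chord), "?") for chord in chord_sequence)

# "the": a two-key chord, another two-key chord, then a single key.
print(decode_chords([{"R1", "R2"}, {"R1", "R3"}, {"R2"}]))  # -> "the"
```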
|
6 |
Braille-based Text Input for Multi-touch Screen Mobile Phones. Fard, Hossein Ghodosi; Chuangjun, Bie. January 2011.
ABSTRACT: “The real problem of blindness is not the loss of eyesight. The real problem is the misunderstanding and lack of information that exist. If a blind person has proper training and opportunity, blindness can be reduced to a physical nuisance.” - National Federation of the Blind (NFB). The multi-touch screen is a relatively new and revolutionary technology in the mobile phone industry. Because these phones are largely software-driven, they are highly customizable for all sorts of users, including blind and visually impaired people. In this research, we present new interface layouts for multi-touch screen mobile phones that enable blind people to enter text in the form of Braille cells. Braille is the only way for these users to read and write directly, without help from extra assistive instruments, so it is more convenient and engaging for them to interact with new technologies through their own writing system. We started with a literature review of existing eyes-free text entry methods and text input devices to find their strengths and weaknesses, aiming to identify the difficulties blind people face with current text entry methods. We then conducted questionnaire surveys as the quantitative part and interviews as the qualitative part of our user study to learn users' needs and expectations, while also studying Braille in detail and examining the feedback mechanisms available on current multi-touch mobile phones. At the design stage, we first investigated different possible ways of entering a Braille cell on a multi-touch screen, considering both the available input techniques and the structure of the Braille cell. We then developed six alternatives for entering Braille cells on the device, laid out a mockup for each, and documented them using the Gestural Modules Document and Swim Lanes techniques. Next, we prototyped our designs and evaluated them with real users using the Pluralistic Walkthrough method, refined our models, and selected the two best designs as the main results of this project, based on good gestural-interface principles and user feedback. Finally, we discussed the usability of the selected methods in comparison with the current text entry method that visually impaired users rely on with the most popular multi-touch screen mobile phone, the iPhone. Our selected designs reveal possibilities for improving the efficiency and accuracy of existing text entry methods on multi-touch screen mobile phones for Braille-literate people, and they can also serve as guidelines for creating other multi-touch input methods for entering Braille on devices such as computers.
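The underlying encoding the designs target is the six-dot Braille cell, with dots numbered 1-3 down the left column and 4-6 down the right. A minimal decoding sketch (independent of any of the six interface alternatives) shows how a set of touched dots maps to a letter:

```python
# Decoding sketch: a Braille cell arrives as the set of dots the user
# touched. Letters a-j use only the top four dots; k-t add dot 3; u-z
# (except w) add dots 3 and 6; w is j plus dot 6.

BRAILLE_AJ = {
    frozenset({1}): "a",          frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",       frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",       frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g", frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",       frozenset({2, 4, 5}): "j",
}

def decode_cell(dots: set) -> str:
    base = BRAILLE_AJ.get(frozenset(dots) - {3, 6})
    if base is None:
        return "?"                      # not a basic letter cell
    idx = ord(base) - ord("a")
    if 3 in dots and 6 in dots:
        return "uvxyz"[idx] if idx < 5 else "?"
    if 3 in dots:
        return "klmnopqrst"[idx]
    if 6 in dots:
        return "w" if base == "j" else "?"
    return base

# {1,3,4,5} is d + dot 3 -> "n"; {2,4,5,6} is j + dot 6 -> "w".
print(decode_cell({1, 3, 4, 5}), decode_cell({2, 4, 5, 6}))  # n w
```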
|
7 |
New input methods for blind users on wide touch devices. Krot, Andrii. January 2016.
Blind people cannot enter text on touch devices using common input methods; they rely on special input methods with lower performance (lower entry rate and higher error rate). Most blind people have muscle memory from using classic physical keyboards, but existing input methods do not exploit this memory. The goal of the project is to take advantage of that muscle memory to improve the typing performance of blind people on wide touch panels. To this end, four input methods were designed and a prototype of each was developed; these input methods were compared with each other and with a standard input method. The results of the comparison show that the input methods designed in this project improve typing performance, and the most and least promising approaches are identified.
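One natural way to exploit QWERTY muscle memory, sketched below as an illustration rather than as one of the thesis's four prototypes, is to calibrate key centers to where the user's fingers rest and classify each touch by its nearest key center:

```python
# Illustrative sketch: classify each touch on a blank wide panel by its
# nearest key center on a remembered QWERTY layout. The geometry below
# (key pitch, row stagger) is a hypothetical calibration.

import math

def make_key_centers(origin=(0.0, 0.0), pitch=40.0):
    """Hypothetical QWERTY geometry: three staggered rows of keys."""
    rows = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
    centers = {}
    for r, row in enumerate(rows):
        for c, ch in enumerate(row):
            centers[ch] = (origin[0] + (c + 0.3 * r) * pitch,
                           origin[1] + r * pitch)
    return centers

def classify_touch(x, y, centers):
    """Return the key whose center is nearest to the touch point."""
    return min(centers, key=lambda ch: math.hypot(centers[ch][0] - x,
                                                  centers[ch][1] - y))

centers = make_key_centers()
# A touch landing near where "f" would sit on the remembered layout.
print(classify_touch(125.0, 43.0, centers))  # -> "f"
```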
|
8 |
Gaze-typing for Everyday Use: Keyboard Usability Observations and a “Tolerant” Keyboard Prototype. Yu, Jiayao. January 2018.
Gaze-typing opens up a new input channel, but current keyboard designs are not ready for everyday use. To investigate gaze-typing keyboards for such use that are easy to learn, fast to type with, and robust to differences in use, I analyzed the usability of three widely used gaze-typing keyboards in a user study with typing performance measurements, and synthesized the design space of everyday gaze-typing keyboards under the topics of typing schemes and keyboard letter layouts, feedback, ease of text editing, and system design. In particular, I found that gaze-typing keyboards need “tolerant” designs that allow implicit gaze control and balance input ambiguity against typing efficiency. I therefore prototyped a gaze-typing keyboard that uses a shape-writing scheme intended for everyday typing with gaze gestures, adapted to segment the gaze locus of each written word from the continuous gaze data stream. The system affords real-time shape-writing at 11.70 WPM with an error rate of 0.14, evaluated with an experienced user, and supports typing more than 20,000 words from its lexicon.
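Segmenting word gestures out of a continuous gaze stream typically starts from fixation detection. The sketch below uses the standard I-DT dispersion-threshold algorithm as a stand-in; it is not necessarily the segmentation the thesis implements:

```python
# Fixation segmentation sketch using I-DT dispersion thresholding
# (a stand-in; not necessarily the thesis's segmentation method).

def dispersion(window):
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_disp=30.0, min_samples=6):
    """samples: list of (x, y) gaze points. Returns (start, end) index pairs."""
    fixations, start = [], 0
    while start + min_samples <= len(samples):
        end = start + min_samples
        if dispersion(samples[start:end]) <= max_disp:
            # Grow the window while the points stay tightly clustered.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_disp:
                end += 1
            fixations.append((start, end - 1))
            start = end
        else:
            start += 1
    return fixations

# A dwell near (100, 100), a fast saccade, then a dwell near (300, 120).
stream = ([(100 + i % 3, 100 + i % 2) for i in range(12)]
          + [(150, 105), (220, 110)]
          + [(300 + i % 4, 120 + i % 3) for i in range(12)])
print(detect_fixations(stream))  # [(0, 11), (14, 25)]
```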
|
9 |
DeepType: A Deep Neural Network Approach to Keyboard-Free Typing. Broekhuijsen, Joshua V. 23 February 2023.
Textual data entry is an increasingly important part of Human-Computer Interaction (HCI), but there is room for improvement in this domain. First, the keyboard, a foundational text-entry device, presents ergonomic challenges in comfort and accuracy even for well-trained typists. Second, touch-screen smartphones, among the most ubiquitous mobile devices, lack the physical space required for a full-size physical keyboard and settle for a reduced input that can be slow and inaccurate. This thesis proposes and examines "DeepType" to begin addressing both problems in the form of a fully virtual keyboard, realized through a deep recurrent neural network (DRNN) trained to recognize skeletal movement during typing. This network enables typing data to be extracted without a physical keyboard: a user types on a flat surface as though on a keyboard, and the movement of their fingers (recorded via a monocular camera and estimated using a pre-trained model) is fed into the DeepType network, which produces output compatible with that of a physical keyboard with 91.2% accuracy without any autocorrection. We show that this architecture is computationally feasible and sufficiently accurate for use when tailored to a specific subject, and we suggest optimizations that may enable generalization. We also present a novel data capture system used to generate the training dataset for DeepType, including effective hand-pose data normalization techniques.
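A minimal sketch of the kind of model described, a recurrent network reading per-frame hand-landmark coordinates and emitting per-frame key logits, is shown below; the shapes, layer sizes, and label set are illustrative assumptions, not DeepType's actual architecture:

```python
# Illustrative recurrent keystroke model (assumed shapes and sizes, not
# DeepType's real architecture): a GRU reads a sequence of per-frame
# 3D hand-landmark coordinates and emits per-frame key logits.

import torch
import torch.nn as nn

NUM_LANDMARKS = 21 * 2      # e.g. 21 joints per hand, two hands (assumed)
NUM_KEYS = 30               # e.g. 26 letters + space, shift, backspace, "no key"

class KeystrokeRNN(nn.Module):
    def __init__(self, hidden=128, layers=2):
        super().__init__()
        self.rnn = nn.GRU(NUM_LANDMARKS * 3, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, NUM_KEYS)

    def forward(self, x):            # x: (batch, frames, landmarks * 3)
        out, _ = self.rnn(x)
        return self.head(out)        # (batch, frames, NUM_KEYS) logits

model = KeystrokeRNN()
frames = torch.randn(1, 90, NUM_LANDMARKS * 3)   # ~3 s of 30 fps pose data
logits = model(frames)
print(logits.shape)                   # torch.Size([1, 90, 30])
print(logits[0, -1].argmax().item())  # predicted key index for the last frame
```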
|