11

Using Workflows to Automate Activities in MDE Tools

Gamboa, Miguel 09 1900 (has links)
Software engineering aims to create software tools that allow people to solve particular problems in an easy and efficient way. In this regard, Model-Driven Engineering (MDE) enables the generation of software tools by systematically modeling and transforming models. To do this, MDE relies on language workbenches: Integrated Development Environments (IDEs) for engineering modeling languages, designing models, and executing and verifying them. However, the usability of these tools is far from efficient. Common MDE activities, such as creating a domain-specific language or developing a model transformation, are nontrivial and often require repetitive tasks, which needlessly increases development time. The goal of this thesis is to increase the productivity of modelers in their daily activities by automating the tasks performed in current MDE tools. I propose an MDE-based solution in which the user defines a reusable workflow that can be parameterized at run time and executed. This solution is implemented in an IDE for graphical modeling. I also performed two empirical evaluations showing that users' productivity is improved.
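To make the workflow idea concrete, here is a minimal, hypothetical sketch (not the thesis's actual tool) of a reusable workflow whose steps are bound to parameters at run time; the Workflow and Step names are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Any, List

# Hypothetical sketch of a reusable, runtime-parameterized workflow
# (illustrative only; not the tool described in the thesis).

@dataclass
class Step:
    name: str
    action: Callable[[Dict[str, Any]], None]  # receives the run-time parameters

@dataclass
class Workflow:
    name: str
    steps: List[Step] = field(default_factory=list)

    def run(self, **params: Any) -> None:
        # Each step receives the same parameter dictionary, bound at run time.
        for step in self.steps:
            print(f"[{self.name}] running step: {step.name}")
            step.action(params)

# Example: automate the repetitive setup of a new domain-specific language project.
create_dsl = Workflow("create-dsl", steps=[
    Step("create metamodel",  lambda p: print(f"  metamodel for {p['language']}")),
    Step("generate editor",   lambda p: print(f"  editor targeting {p['platform']}")),
    Step("register language", lambda p: print(f"  registered {p['language']}")),
])

create_dsl.run(language="StateMachineDSL", platform="Eclipse")
```

The point of the sketch is that the sequence of steps is defined once and reused, while the parameters that vary between MDE activities are supplied only when the workflow is executed.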
12

An investigation into alternative human-computer interaction in relation to ergonomics for gesture interface design

Chen, Tin Kai January 2009 (has links)
Recent innovative developments in the field of gesture interfaces as input techniques have the potential to provide a basic, lower-cost, point-and-click function for graphical user interfaces (GUIs). Since these gesture interfaces are not yet widely used, and indeed no tilt-based gesture interface is currently on the market, there is neither an international standard for the testing procedure nor a guideline for their ergonomic design and development. Hence, the research area demands more design case studies on a practical basis. The purpose of the research is to investigate the design factors of gesture interfaces for the point-and-click task in the desktop computer environment. The key function of gesture interfaces is to translate a specific body movement, in particular arm movement, into cursor movement on the two-dimensional graphical user interface (2D GUI) in real time. The initial literature review identified limitations related to cursor movement behaviour with gesture interfaces. Since the cursor movement is the machine output of the gesture interfaces that need to be designed, a new accuracy measure based on the calculation of the cursor movement distance, and an associated model, was proposed in order to validate the continuous cursor movement. Furthermore, a design guideline with detailed design requirements and specifications for tilt-based gesture interfaces was suggested. In order to collect the human performance data and the cursor movement distance, a graphical measurement platform was designed and validated with an ordinary mouse. Since there are typically two types of gesture interface, i.e. the sweep-based and the tilt-based, and no commercial tilt-based gesture interface has yet been developed, a commercial sweep-based gesture interface, namely the P5 Glove, was studied and the causes and effects of its discrete cursor movement on usability were investigated. According to the proposed design guideline, two versions of the tilt-based gesture interface were designed and validated through an iterative design process. Most of the phenomena and results from the trials undertaken, which are inter-related, were analyzed and discussed. The research has contributed new knowledge through design improvement of tilt-based gesture interfaces and improvement of the discrete cursor movement by eliminating manual error compensation. The research reveals a relation between cursor movement behaviour and the adjusted R² for the prediction of movement time across models expanded from Fitts' Law. In such situations, the actual working area and the joint ranges are lengthy and appreciably different from those that had been planned. Further studies are suggested. The research was associated with the University Alliance Scheme, technically supported by Freescale Semiconductor Co., U.S.
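The abstract does not define the accuracy measure itself; the sketch below shows one plausible form of a distance-based measure, assuming it compares the actual cursor path length against the straight-line distance to the target. The function names and example trajectory are illustrative assumptions, not the author's definition.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def path_length(samples: List[Point]) -> float:
    """Total distance travelled by the cursor over the sampled trajectory."""
    return sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))

def path_efficiency(samples: List[Point]) -> float:
    """Straight-line distance divided by actual cursor travel (1.0 = perfectly direct).

    Illustrative distance-based accuracy measure; the thesis's own metric
    may be defined differently.
    """
    straight = math.dist(samples[0], samples[-1])
    travelled = path_length(samples)
    return straight / travelled if travelled > 0 else 0.0

# Example: a slightly indirect cursor trajectory toward a target at (100, 0).
trajectory = [(0.0, 0.0), (30.0, 10.0), (60.0, -5.0), (100.0, 0.0)]
print(f"efficiency = {path_efficiency(trajectory):.2f}")
```

A measure of this kind discriminates between smooth, continuous cursor movement (efficiency close to 1) and the jumpy, discrete movement described for the sweep-based device.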
13

Transformación escalar de la interfaz de operador en teleoperación asistida

Muñoz Morgado, Luis Miguel 03 February 2012 (has links)
Human-machine interaction in teleoperation, through an adequate user interface, makes it possible to achieve the level of intelligence necessary to execute complex tasks that cannot be carried out by machines or robots alone, nor directly by humans. Human-robot interaction techniques facilitate the execution of such tasks, making them more efficient and effective through the improvement of the user interface. Humans have inherent motor limitations (such as physiological tremor) and perceptive limitations (mainly the perception of distance and time) which can prevent them from operating smoothly and precisely enough for certain applications. Some studies have already tackled this problem and its effect on human-machine interaction and teleoperated systems. There are psychomotor models showing that human manipulation efficiency, in actions such as pointing at an object, depends on several factors. Among these models, the most representative is Fitts' Law, in which the execution time is a logarithmic function of the size of, and distance to, the object. In teleoperation, and based on these models, a modification of the visual scale in the user interface has a direct effect on the task execution time and on the precision that can be achieved. The same occurs with a change in the amplitude of the movement executed by the human operator with respect to that performed by the system. Therefore, scaling the movement between master and slave has a significant effect on the efficiency and effectiveness with which a task is executed. This research work is oriented to the design and development of a method conceived to improve effectiveness through greater visual and motor efficiency of the human-machine interface. The method is based on modifying the information flow between human, machine and interface by scaling both the human movements and the image of the visualized task. Operation time, hand movements and the need for visual attention can thus be reduced with this computerized assistance. The changes of scale adapt to the task, which positively affects its performance in terms of precision and speed. The proposed methodology therefore aims to link the human operator's working space to the machine or robot working space through an interface that introduces two scaling processes. A first change of scale is applied between the movement produced by the human operator and the movement produced in the visual interface (for instance, the movement of the robot end-effector visualized on the computer screen); a second change scales the real space of the task onto the visual space of the interface. These changes of scale should be adjusted to the objects of interest, which results in a modification of the spatial resolution according to the task to be performed and to the size, shape, distance and speed of the objects. Such changes modify the information flow between human and machine according to the characteristics and limitations of both.
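For reference, the logarithmic relation referred to above is commonly written in the Shannon formulation of Fitts' Law; the note on scaling is the standard consequence of the formula, not a result specific to this thesis.

```latex
% Shannon formulation of Fitts' Law: movement time MT for a target of
% width W at distance (amplitude) A, with empirical constants a and b.
\begin{equation}
  MT = a + b \log_2\!\left(\frac{A}{W} + 1\right)
\end{equation}

% The index of difficulty ID = \log_2(A/W + 1) depends only on the ratio A/W,
% so a pure zoom that scales A and W together leaves ID unchanged, whereas a
% motor-space gain that changes the effective A or W alters the difficulty.
```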
14

Assistive force feedback for path following in 3D space for upper limb rehabilitation applications

Swaminathan, Ramya 01 June 2007 (has links)
The primary objective of this research was the design of an easy-to-use C++ graphical user interface (GUI) that helps the user choose the task he/she wants to perform. This C++ application provides a platform intended for upper arm rehabilitation applications. The user can choose from different tasks such as: Assistive Function in 3D Space to Traverse a Linear Trajectory, User Controlled Velocity Based Scaling, and Fitts' Task in X, Y, Z Directions. According to a study conducted by the scientific journal of the American Academy of Neurology, stroke patients aided by robotic rehabilitation devices gain significant improvement in movement. They also indicate that both initial and long-term recovery are greater for patients assisted by robots during rehabilitation. This research aims to provide a haptic-interface C++ platform for clinicians and therapists to study human arm motion and also to provide assistance to the user. The user can choose and perform repetitive tasks aimed at improving his/her muscle memory. Eight healthy volunteers were chosen to perform a set of preliminary experiments on this haptic-integrated C++ platform. These experiments were performed to get an indication of the effectiveness of the assistance functions provided in this C++ application. The eight volunteers performed the Fitts' Task in the X, Y and Z directions. The subjects were divided into two groups, where one group was given training without assistance and the other was given training with assistance. The execution time for both groups was compared and analyzed. The experiments were preliminary; however, some trends were observed: the people who received training with assistive force feedback took less execution time than those who were given training without any assistance. The path-following error was also analyzed. These preliminary tests were performed to demonstrate the haptic platform's use as a therapeutic assessment application, a rehabilitation tool and a data collection system for clinicians and researchers.
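The abstract does not specify how the assistive force is computed; a common approach for path-following assistance is a spring-like pull toward the nearest point on the desired linear trajectory. The sketch below illustrates that idea under this assumption — the gain value and function names are hypothetical, not the thesis's implementation.

```python
import numpy as np

def assistive_force(pos, line_start, line_end, k=50.0):
    """Spring-like assistive force pulling the haptic cursor toward the
    nearest point on a straight-line trajectory (illustrative sketch only;
    the thesis's actual assistance function may differ).

    pos, line_start, line_end: 3D points; k: assumed stiffness gain (N/m).
    """
    p = np.asarray(pos, dtype=float)
    a = np.asarray(line_start, dtype=float)
    b = np.asarray(line_end, dtype=float)
    ab = b - a
    # Project the cursor position onto the segment and clamp to its ends.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return k * (closest - p)  # force directed back onto the path

# Example: cursor slightly off a path running along the X axis.
print(assistive_force([0.5, 0.02, -0.01], [0, 0, 0], [1, 0, 0]))
```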
15

Evaluating Swiftpoint as a Mobile Device for Direct Manipulation Input

Amer, Taher January 2006 (has links)
Swiftpoint is a promising new computer pointing device designed primarily for mobile computer users in constrained spaces. Swiftpoint has many advantages over current pointing devices: it is small, ergonomic, has a digital ink mode, and can be used over a flat keyboard. This thesis aids the development of Swiftpoint by formally evaluating it against two of the most common pointing devices used with today's mobile computers: the touchpad and the mouse. Two laws commonly used in pointing device evaluations, Fitts' Law and the Steering Law, were used to evaluate Swiftpoint. Results showed that Swiftpoint was faster and more accurate than the touchpad. The performance of the mouse was, however, superior to both the touchpad and Swiftpoint. The experimental results were reflected in participants' choice of the mouse as their preferred pointing device, although some participants indicated that their choice was based on their familiarity with the mouse. None of the participants chose the touchpad as their preferred device.
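For reference, the Steering Law mentioned above models the time to steer through a tunnel of varying width and reduces to a linear form for a straight tunnel of length A and constant width W; the corresponding Fitts' Law expression is shown after entry 13 above.

```latex
% Steering Law: time T to steer along a path C whose width W(s) may vary,
% with empirical constants a and b.
\begin{equation}
  T = a + b \int_{C} \frac{ds}{W(s)}
\end{equation}

% Straight tunnel of length A and constant width W:
\begin{equation}
  T = a + b\,\frac{A}{W}
\end{equation}
```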
16

Taking Fitts' Slow: The Effects of Delayed Visual Feedback on Human Motor Performance and User Experience

January 2015 (has links)
The present studies investigated the separate effects of two types of visual feedback delay – increased latency and decreased updating rate – on performance in 2-dimensional pointing tasks using a mouse as an input device, considering both actual performance (e.g. response time) and subjective performance (i.e. ratings of perceived input device performance). The first sub-study examined the effects of increased latency on performance using two separate experiments. In the first experiment the effects of constant latency were tested: participants completed blocks of trials with a constant level of latency and, after each block, rated their subjective experience of the input device performance at that level of latency. The second experiment examined the effects of variable latency on performance, where latency was randomized within blocks of trials. The second sub-study investigated the effects of decreased updating rate on performance in the same manner: experiment one tested the effect of a constant updating rate on performance and subjective ratings, and experiment two tested the effect of a variable updating rate on performance. The findings suggest that latency is negatively correlated with actual performance as well as subjective ratings of performance, and updating rate is positively correlated with actual performance as well as subjective ratings of performance. / Dissertation/Thesis / Masters Thesis Applied Psychology 2015
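To make the distinction between the two manipulations concrete, the sketch below shows one way they could be simulated in software — a fixed-delay buffer for added latency versus subsampling for a reduced updating rate. The frame counts and functions are illustrative assumptions, not the parameters or apparatus used in the studies.

```python
from collections import deque
from typing import List, Tuple

Sample = Tuple[float, float]  # cursor position (x, y)

def add_latency(stream: List[Sample], delay_frames: int) -> List[Sample]:
    """Delay every sample by a fixed number of frames (increased latency):
    the cursor still updates every frame, but shows old positions."""
    buffer: deque = deque([stream[0]] * delay_frames)
    out = []
    for s in stream:
        buffer.append(s)
        out.append(buffer.popleft())
    return out

def reduce_update_rate(stream: List[Sample], keep_every: int) -> List[Sample]:
    """Redraw the cursor only every `keep_every` frames (decreased updating
    rate): positions are current when shown, but shown less often."""
    out = []
    for i, s in enumerate(stream):
        out.append(s if i % keep_every == 0 else out[-1])
    return out

# Example: a short horizontal cursor trajectory sampled once per frame.
frames = [(float(x), 0.0) for x in range(8)]
print(add_latency(frames, delay_frames=2))       # lagged but smooth
print(reduce_update_rate(frames, keep_every=3))  # current but choppy
```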
17

DESIGNING A 4-DOF ARM MODEL AND CONTROLLER TO SIMULATE COMPLETION OF A FITTS TASK

Hepner, Gabriel A. 27 April 2018 (has links)
No description available.
18

Understanding and Improving Distal Pointing Interaction

Kopper, Regis Augusto Poli 04 August 2011 (has links)
Distal pointing is the interaction style defined by directly pointing at targets from a distance. It follows a laser-pointer metaphor: the position of the cursor is determined by the intersection of a vector extending from the pointing device with the display surface. Distal pointing as a basic interaction style poses several challenges for the user, mainly because of the lack of precision humans have when using it. The focus of this thesis is to understand and improve distal pointing, making it a viable interaction metaphor for a wide variety of applications. We achieve this by proposing and validating a predictive model of distal pointing that is inspired by Fitts' law but contains some unique features. The difficulty of a distal pointing task is best described by the angular size of the target and the angular distance the cursor must cross to reach the target, both measured from the perspective of the input device. The practical implication is that the user's position relative to the target should be taken into account. Based on the model we derived, we proposed a set of design guidelines for high-precision distal pointing techniques. The main guideline from the model is that increasing the target size is much more important than reducing the distance to the target. In order to improve distal pointing, we followed the model guidelines and designed interaction techniques that aim at improving the precision of distal pointing tasks. Absolute and Relative Mapping (ARM) distal pointing increases precision by offering the user a toggle that changes the control/display (CD) ratio so that a large movement of the input device is mapped to a small movement of the cursor. Dynamic Control Display Ratio (DyCoDiR) automatically increases distal pointing precision as the user needs it. DyCoDiR takes into account the user's distance to the interaction area and the speed at which the user moves the input device to dynamically calculate an increased CD ratio, making the action more precise the steadier the user tries to be. We performed an evaluation of ARM and DyCoDiR, comparing them to basic distal pointing in a realistic context. In this experiment, we also provided variations of the techniques that increased the visual perception of targets by zooming in on the area around the cursor when precision was needed. Results from the study show that ARM and DyCoDiR are significantly faster and more accurate than basic distal pointing for tasks that require very high precision. We analyzed user navigation strategies and found that the high-precision techniques allow users to remain stationary while performing interactions. However, we also found that individual differences have a strong impact on the decision to walk or not, and that this is sometimes more important than what the technique affords. We provided a validation of the distal pointing model through an analysis of the expected difficulty of distal pointing tasks for each technique tested. We propose selection by progressive refinement, a new design concept for distal pointing selection techniques, whose goal is to offer near-perfect selection accuracy in very cluttered environments. The idea of selection by progressive refinement is to gradually eliminate possible targets from the set of selectable objects until only one object is available for selection.
We implemented SQUAD, a selection by progressive refinement distal pointing technique, and performed a controlled experiment comparing it to basic distal pointing. We found that there is a clear tradeoff between immediate selections that require high precision and selections by progressive refinement which always require low precision. We validated the model by fitting the distal pointing data and proposed a new model, which has a linear growth in time, for SQUAD selection. / Ph. D.
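The abstract describes DyCoDiR only at a high level; the sketch below illustrates the general idea of a CD ratio that grows as the user stands farther away and moves the device more slowly. The formula, constants and thresholds are illustrative assumptions, not the published DyCoDiR definition.

```python
def dynamic_cd_ratio(user_distance_m: float,
                     device_speed_deg_s: float,
                     base_ratio: float = 1.0,
                     k_distance: float = 0.5,
                     k_slowness: float = 2.0,
                     slow_threshold: float = 10.0) -> float:
    """Illustrative dynamic control/display ratio: precision increases
    (ratio grows, so the cursor moves less per unit of hand motion) when the
    user is far from the display and when the device is moved slowly.
    All constants are assumptions for the sketch, not DyCoDiR's actual values.
    """
    distance_term = k_distance * user_distance_m
    # Below the speed threshold, treat the movement as a precision attempt.
    slowness = max(0.0, slow_threshold - device_speed_deg_s) / slow_threshold
    return base_ratio + distance_term + k_slowness * slowness

def apply_cd_ratio(raw_angular_delta_deg: float, cd_ratio: float) -> float:
    """Map a raw device rotation to cursor movement, scaled down by the ratio."""
    return raw_angular_delta_deg / cd_ratio

# Example: a steady, slow movement from 3 m away is strongly damped.
ratio = dynamic_cd_ratio(user_distance_m=3.0, device_speed_deg_s=2.0)
print(ratio, apply_cd_ratio(1.0, ratio))
```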
19

Analysis of Performance Resulting from the Design of Selected Hand-Held Input Control Devices and Visual Displays

Spencer, Ronald Allen 02 October 2000 (has links)
Since the introduction of graphical user interfaces (GUIs), input control devices have become an integral part of desktop computing. When interfacing with GUIs, these input control devices are the human's primary means of communicating with the computer. Although a number of experiments have been conducted on pointing devices for desktop machines, there is little research on pointing devices for wearable computer technology. This is surprising because pointing devices are a major component of a wearable computer system, allowing the wearer to select and manipulate objects on the screen. The design of these pointing devices has a major impact on the ease with which the operator can interact with the information being displayed (Card, English, and Burr, 1978). As a result, this research is the first in a series to investigate design considerations for pointing devices and visual displays that will support wearable computer users. Twenty soldiers participated in an experiment using target acquisition software with five pointing devices and two visual displays. The findings strongly support the use of a relative-mode pointing device with rotational characteristics (i.e. trackball or thumbwheel) over other designs. Furthermore, the results suggest that there is little difference between pointing devices operated with the thumb and those operated with the index finger for target acquisition tasks. The study also showed little difference in pointing and homing times across the two visual displays. Finally, the study demonstrated that the Fitts' law model can be applied to hand-operated pointing devices for wearable computers. This is important because it allows future pointing devices to be compared with the devices tested in this research using Fitts' Law Index of Performance calculations. / Master of Science
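The Index of Performance calculation mentioned above is commonly computed as the index of difficulty divided by movement time; the sketch below follows that standard formulation (Shannon form of the index of difficulty), with sample values that are purely illustrative rather than taken from this study.

```python
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1.0)

def index_of_performance(amplitude: float, width: float,
                         movement_time_s: float) -> float:
    """Throughput-style index of performance in bits per second:
    ID divided by the observed movement time."""
    return index_of_difficulty(amplitude, width) / movement_time_s

# Illustrative example: a 200 px movement to a 20 px target completed in 0.9 s.
print(f"ID = {index_of_difficulty(200, 20):.2f} bits")
print(f"IP = {index_of_performance(200, 20, 0.9):.2f} bits/s")
```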
20

Controle de movimentos rápidos e precisos direcionados a alvos espaciais / Control of rapid and accurate movements aimed to spatial targets

Okazaki, Victor Hugo Alves 09 March 2009 (has links)
This study analyzed the effects of distance, velocity, disc and target size, and disc and manipulandum mass on motor performance in movements requiring speed and accuracy. To this end, the kinematic characteristics of the task of projecting a disc toward a target with a ballistic contact movement, performed while gripping a manipulandum, were analyzed. Movements were performed on a flat surface and filmed with a high-frequency optoelectronic camera. The study was conducted through six experiments with a single group of participants. The results indicated that the motor control models that have been used to analyze the speed-accuracy trade-off in simpler tasks were not appropriate to explain the behavior observed in this task. Motor control in the task proved to be dynamic and flexible in the face of the different movement constraints. The following control strategies were suggested to explain the results: synchronization of peak velocity with the instant of manipulandum-disc contact, maintenance of the proportion between the acceleration and deceleration phases, greater inertia and lower impact to increase movement stability, and control of velocity and accuracy in independent dimensions. The analysis of joint actions showed the particular strategies used by the system as a function of the manipulated variables. Together, this sequence of experiments allowed a broader understanding of the motor control strategies employed in movements with high demands for speed and accuracy.
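One of the strategies mentioned above is the maintenance of the proportion between the acceleration and deceleration phases; the sketch below shows a straightforward way to estimate that proportion from a sampled speed profile. The function and the synthetic profile are illustrative, not the thesis's analysis code.

```python
from typing import List, Tuple

def accel_decel_proportion(speeds: List[float], dt: float) -> Tuple[float, float]:
    """Split a movement's duration at the time of peak speed and return the
    fraction spent accelerating vs. decelerating (illustrative estimate)."""
    peak_index = max(range(len(speeds)), key=lambda i: speeds[i])
    total_time = (len(speeds) - 1) * dt
    accel_time = peak_index * dt
    return accel_time / total_time, 1.0 - accel_time / total_time

# Synthetic bell-shaped speed profile sampled at 100 Hz (values in m/s).
profile = [0.0, 0.2, 0.5, 0.9, 1.2, 1.0, 0.7, 0.4, 0.2, 0.05, 0.0]
print(accel_decel_proportion(profile, dt=0.01))  # ≈ (0.4, 0.6)
```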
