61 |
Developing and evaluating the feasibility of an active training game for smart-phones as a tool for promoting executive function in children
Gray, Stuart Iain January 2017 (has links)
Executive function (EF) comprises a series of interrelated cognitive and self-regulatory skills which are required in nearly every facet of everyday life, particularly in novel circumstances. EF skills begin developing from birth and continue to grow well into adulthood, but they are most crucial for children, as they are associated with academic and life success as well as mental and physical health. There is now strong evidence that these skills can be trained through targeted intervention using a diverse range of approaches, such as computer games, physical activity, and social play settings. This thesis presents the design and evaluation of an active EF-training game (BrainQuest) for smart-phones, developed in participation with end-users: a group of 11-12-year-old children from a local primary school. The design process placed emphasis on creating an engaging user experience, a quality which has eluded many serious games, by building upon motivational game design theory and satisfying end-user requirements. In pursuit of promoting particular executive functions (working memory, inhibitory control, and planning and strategizing), the design integrated aspects of a cognitive assessment while also drawing on a range of alternative approaches for training EF, including physical activity and social play. Following an iterative design process which included many single-session prototype evaluations, a mixed methods evaluation was undertaken during a 5-week study with twenty-eight 11-12-year-old school children. The study gathered exploratory qualitative and quantitative evidence regarding the game's potential benefits, which was evaluated by triangulating a range of data sources: multi-observer observation notes, interviews with children and teachers, game performance data and logs, and cognitive assessment outcomes.
The analysis describes the statistical relationships between game performance and executive function ability, before exploring user experiences and evidence of cognitive challenge during gameplay through a series of triangulated case studies and general whole-class observations. The analysis shows the game to have been engaging and enjoyable throughout the study and, for most children, able to generate a sustained challenge. Though there were initial difficulties in understanding the complex game rules and technology, the game became increasingly usable and learnable for the target user group and created opportunities for goal setting. It also encouraged feelings of pride and self-confidence, facilitated positive social interactions, and required regulation of emotion, all of which are considered pathways to developing executive functions (Diamond, 2012). There was also promising initial evidence that the game's variable difficulty level system was able to challenge the targeted executive functions: planning and strategizing, working memory, and inhibitory control. Most notably, the game appeared to support improvements in strategizing ability by demanding increasing strategic complexity in response to evolving and increasingly difficult task demands. Supporting BrainQuest's cognitive challenge, several statistical relationships emerged between executive function ability and game performance measures. However, the game's ability to significantly improve cognitive outcomes could not yet be established. Nevertheless, these findings have implications for both the future design and evaluation practices undertaken by cognitive training researchers. From a design perspective, less credence should be paid to simply gamifying cognitive assessments, while greater emphasis should be placed on the integration of formal game design and motivational theories.
With regard to evaluation, researchers should recognise the importance of first establishing whether cognitive training games (CTGs) can remain engaging over time, as well as the feasibility of their challenge to cognitive functions.
|
62 |
Um ambiente para análise de resultados de avaliações de acessibilidade e usabilidade na Web / An environment for analyzing results of assessments of accessibility and usability on the web
Amaral, Leandro Agostini do 06 May 2014 (has links)
The Web offers an extensive body of information to a diverse population of users, who may have widely different abilities and requirements. Ensuring accessibility for every user is therefore a difficult task, even with the extensive set of recommendations provided by the World Wide Web Consortium (W3C). Different tools have thus been proposed for accessibility evaluation; they check artifacts against the guidelines in order to obtain automated results, running tests and generating various data, such as the location of the problem in the code and the specific failures found. To facilitate the processing of such data through a common language, the W3C developed the Evaluation and Report Language (EARL). Given the difficulty of ensuring accessibility for different user profiles and the need to interpret EARL reports as input to the manual review that is indispensable in accessibility testing, this work proposes support through an Environment for Analyzing results of Assessments of Accessibility and Usability on the Web (A4U). A case study validated the A4U environment, which incorporates reports from semi-automatic evaluations so that developers can interpret them and proceed with the manual evaluation of accessibility and usability. The work also considered advances in accessibility and usability, the correlation between the two concepts, and collaboration in the development of a tool called AccessibilityUtil, intended as a source of accessibility practices drawn from developers' experiences, gathered collaboratively in a Web environment and related to the W3C accessibility guidelines. This research contributed to consolidating accessibility and usability concerns through the development of A4U, which enables human evaluation of accessibility and usability and the integration of evaluation results generated by semi-automatic tools, leading the reviewer to produce improvements on both fronts.
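For illustration, an EARL report boils down to a set of assertions, each linking a test subject and a test criterion to an outcome; a reviewing environment like A4U can then group assertions by outcome to direct the manual review. The sketch below uses plain dicts as a stand-in for the full RDF vocabulary, so the field names are simplifications of the EARL schema, not the schema itself:

```python
# Illustrative sketch: grouping simplified EARL-style assertions by outcome
# so a human reviewer can prioritise manual checks. The dict fields mirror
# EARL's Assertion/TestResult vocabulary in spirit only.
from collections import defaultdict

def group_by_outcome(assertions):
    """Group assertion dicts by their outcome value."""
    groups = defaultdict(list)
    for a in assertions:
        groups[a["outcome"]].append(a)
    return dict(groups)

report = [
    {"subject": "index.html", "test": "WCAG2:1.1.1", "outcome": "failed"},
    {"subject": "index.html", "test": "WCAG2:2.4.2", "outcome": "passed"},
    {"subject": "about.html", "test": "WCAG2:1.1.1", "outcome": "cantTell"},
]

groups = group_by_outcome(report)
# "cantTell" outcomes are exactly the cases that need manual review
manual_review = groups.get("cantTell", [])
```

In real EARL the outcome is a resource such as `earl:failed` or `earl:cantTell`; the grouping idea carries over unchanged.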
|
63 |
Sensing and interactive intelligence in mobile context aware systems
Lovett, Tom January 2013 (has links)
The ever increasing capabilities of mobile devices such as smartphones and their ubiquity in daily life have resulted in a large and interesting body of research into context awareness (the 'awareness of a situation') and how it could make people's lives easier. There are, however, difficulties involved in realising and implementing context aware systems in the real world; particularly in a mobile environment. To address these difficulties, this dissertation tackles the broad problem of designing and implementing mobile context aware systems in the field. Spanning the fields of Artificial Intelligence (AI) and Human Computer Interaction (HCI), the problem is broken down and scoped into two key areas: context sensing and interactive intelligence. Using a simple design model, the dissertation makes a series of contributions within each area in order to improve the knowledge of mobile context aware systems engineering. At the sensing level, we review mobile sensing capabilities and use a case study to show that the everyday calendar is a noisy 'sensor' of context. We also show that its 'signal', i.e. useful context, can be extracted using logical data fusion with context supplied by mobile devices. For interactive intelligence, there are two fundamental components: the intelligence, which is concerned with context inference and machine learning; and the interaction, which is concerned with user interaction. For the intelligence component, we use the case of semantic place awareness to address the problems of real time context inference and learning on mobile devices. We show that raw device motion (a common metric used in activity recognition research) is a poor indicator of transition between semantically meaningful places, but real time transition detection performance can be improved with the application of basic machine learning and time series processing techniques.
We also develop a context inference and learning algorithm that incorporates user feedback into the inference process, a form of active machine learning. We compare various implementations of the algorithm for the semantic place awareness use case, and observe its performance using a simulation study of user feedback. For the interaction component, we study various approaches for eliciting user feedback in the field. We deploy the mobile semantic place awareness system in the field and show how different elicitation approaches affect user feedback behaviour. Moreover, we report on the user experience of interacting with the intelligent system and show how performance in the field compares with the earlier simulation. We also analyse the resource usage of the system and report on the use of a simple SMS place awareness application that uses our system. The dissertation presents original research on key components for designing and implementing mobile context aware systems, and contributes new knowledge to the field of mobile context awareness.
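The transition-detection improvement described above, smoothing a noisy raw-motion signal before thresholding it, can be sketched as follows. The moving-average window and threshold here are illustrative assumptions, not the parameters used in the thesis:

```python
def detect_transitions(motion, window=5, threshold=0.5):
    """Flag likely place transitions: smooth a raw motion signal with a
    trailing moving average, then mark the samples where the smoothed
    value first crosses the threshold upwards. Window and threshold are
    illustrative, not values from the thesis."""
    transitions = []
    smoothed_prev = None
    for i in range(len(motion)):
        lo = max(0, i - window + 1)
        smoothed = sum(motion[lo:i + 1]) / (i + 1 - lo)
        if smoothed_prev is not None and smoothed_prev < threshold <= smoothed:
            transitions.append(i)
        smoothed_prev = smoothed
    return transitions
```

On a signal that jumps from rest to sustained movement, the smoothing delays the detection by a few samples but suppresses the spurious transitions that single noisy spikes would otherwise trigger.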
|
64 |
Users' metaphoric interaction with the Internet
Hogan, Amy Louise January 2008 (has links)
No description available.
|
65 |
Eye tracking som interaktionsmetod för interaktiva informationsskärmar / Eye tracking as an interaction method for interactive public displays
Klasson, Jonny, Lignell, Johan, Lundqvist, Patrik January 2014 (has links)
Public information displays are found today in a variety of places: in shops, libraries, bus and train terminals, and similar settings. Such displays are on the rise and are expected to become an even more common sight. Most often they are entirely static, meaning they cannot be interacted with, and in the cases where they do allow interaction it is mainly through touch screens. Touch-screen technology has certain shortcomings, however, and there is a need to explore new ways of interacting with computers and information displays. One technology that has existed for a long time in various forms, but until recently has been relatively inaccessible to ordinary consumers, is eye tracking, a technique for tracking people's eye movements. The purpose of this thesis is to investigate how eye tracking can be combined with an information display. The thesis focuses primarily on commercial applications, specifically eye tracking and information displays in an advertising context. This involved building a prototype application that presents an advertisement for a product, in which users can bring up information about the product simply by looking at its different parts. Iterative tests were then carried out in which participants interacted with two separate applications through eye tracking. The test data, together with theory primarily from the HCI field, were used for the analysis, and finally a conclusion is presented. The study shows that eye tracking has great potential to be useful together with information displays, but that certain challenges remain with the technology. The study also describes these challenges, as well as the opportunities that eye tracking offers. Finally, suggestions are presented for future research in the area that may be of interest for its continued development as an HCI technique. / Program: Systemvetarutbildningen
|
66 |
Evaluating Head Gestures for Panning 2-D Spatial Information
Derry, Matthew O 01 December 2009 (has links)
New, often free, spatial information applications such as mapping tools, topological imaging, and geographic information systems are becoming increasingly available to the average computer user. These systems, which were once available only to government, scholastic, and corporate institutions with highly skilled operators, are driving a need for new and innovative ways for the average user to navigate and control spatial information intuitively, accurately, and efficiently. Gestures provide a method of control that is well suited to navigating the large datasets often associated with spatial information applications. Several different types of gestures and different applications that navigate spatial data are examined. This leads to the introduction of a system that uses a visual head tracking scheme for controlling the most common navigation action in the most common type of spatial information application: panning a 2-D map. The proposed head tracking scheme uses head pointing to control the direction of panning. The head tracking control is evaluated against the traditional control methods of the mouse and touchpad, showing a significant performance increase over the touchpad and comparable performance to the mouse, despite limited practice with head tracking.
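Head-pointing control of panning, as evaluated above, reduces to mapping a head-pose angle to a pan velocity, usually with a dead zone so that small involuntary head movements do not scroll the map. A minimal sketch under assumed constants (the abstract does not specify the dead-zone size or gain):

```python
def head_to_pan(yaw_deg, pitch_deg, dead_zone=5.0, gain=2.0):
    """Map head yaw/pitch (degrees away from straight ahead) to a 2-D
    pan velocity in pixels per frame. Angles inside the dead zone give
    no movement, so small involuntary head motion does not pan the map;
    outside it, speed grows linearly with the angle beyond the dead zone.
    dead_zone and gain are illustrative values, not from the thesis."""
    def axis(angle):
        if abs(angle) <= dead_zone:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        return sign * (abs(angle) - dead_zone) * gain
    return (axis(yaw_deg), axis(pitch_deg))
```

Called once per camera frame with the tracker's pose estimate, this yields a pan vector the map view can apply directly; a nonlinear (for example quadratic) ramp outside the dead zone is a common alternative for finer control near the centre.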
|
67 |
Realtime computer interaction via eye tracking
Dubey, Premnath January 2004 (has links)
Through eye tracking technology, scientists have explored the eye's diverse aspects and capabilities. There are many potential applications that benefit from eye tracking, and each benefits from advances in computer technology, as these result in improved quality and decreased costs for eye-tracking systems. This thesis presents a computer vision-based eye tracking system for human computer interaction. The eye tracking system allows the user to indicate a region of interest in a large data space and to magnify that area, without using traditional pointer devices. Presented is an iris tracking algorithm adapted from Camshift, an algorithm originally designed for face or hand tracking. Although the iris is much smaller and highly dynamic, the modified Camshift algorithm efficiently tracks the iris in real-time. Also presented is a method to map the iris centroid from video coordinates to screen coordinates, along with two novel calibration techniques: four-point and one-point calibration. Results presented show that the accuracy of the proposed one-point calibration technique exceeds the accuracy obtained from calibrating with four points. The innovation behind the one-point calibration comes from using observed eye scanning behaviour to constrain the calibration process. Lastly, the thesis proposes a non-linear visualisation as an eye-tracking application, along with an implementation.
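The thesis's actual one-point calibration exploits observed eye-scanning behaviour; the sketch below shows only the simpler underlying idea of a per-axis affine mapping from iris video coordinates to screen coordinates, with an assumed fixed gain that the single calibration sample cannot determine on its own (all constants are illustrative):

```python
def one_point_calibrate(iris_at_center, screen_center, gain=(4.0, 4.0)):
    """Build a mapping from iris video coordinates to screen coordinates.
    The gain (screen pixels per pixel of iris movement) is assumed fixed
    in advance; the single calibration sample then fixes the offset so
    that the calibration point maps exactly onto the screen centre.
    These constants are illustrative, not the thesis's method."""
    gx, gy = gain
    ix, iy = iris_at_center
    sx, sy = screen_center

    def to_screen(iris_x, iris_y):
        return (sx + gx * (iris_x - ix), sy + gy * (iris_y - iy))

    return to_screen

# Calibrate while the user fixates the screen centre (illustrative values)
to_screen = one_point_calibrate(iris_at_center=(320, 240), screen_center=(960, 540))
```

A four-point calibration would instead solve for both gain and offset from the corner samples; the one-point variant trades that extra measurement for a behavioural constraint, which is the part the thesis contributes.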
|
68 |
The Design & User Experiences of a Mobile Location-awareness Application: Meet App
Westerlund, Markus January 2010 (has links)
This paper describes the work and results of the design project Meet App. Meet App lets users interact around their current locations in a direct manner. The user experience is evaluated to gain an understanding of the usefulness of, and interaction with, this type of design. The project relates to the context-awareness research field, whose findings place the project in a greater whole. The results indicate that users found the application useful and enjoyable to interact with, but because of the low number of participants the findings cannot be validated.
|
69 |
Användarvänlighet i GNOME : en utvärdering av studenters inställning / Usability in GNOME: an evaluation of students' attitudes
Ringnér, Henrik, Elfström, Pål January 2004 (has links)
No description available.
|
70 |
IT-upplevelser i skolledningens arbetsvardag / IT experiences in the everyday work of school management
Andersson, Katrin, Aram, Pantea January 2002 (has links)
No description available.
|