11

An empirical investigation in using multi-modal metaphors to browse internet search results. An investigation based upon experimental browsing platforms to examine usability issues of multi-modal metaphors to communicate internet-based search engine results.

Ciuffreda, Antonio January 2008
This thesis explores the use of multimodality to communicate the retrieved results of Internet search engines. The investigation aimed to identify suitable multimodal metaphors that would increase the usability of Internet search engine interfaces and enhance users' experience of the search activity. The study consisted of three experiments based on questionnaires and Internet search activities with a set of text-based and multimodal interfaces. These interfaces were implemented in two browsing platforms, named AVBRO and AVBRO II. The first experiment investigated the efficiency of specific multimodal metaphors in communicating additional information about retrieved results; it also sought users' views of these metaphors through a questionnaire. An experimental multimodal interface of the AVBRO platform, which communicated additional information with a combination of three 2D graphs and musical stimuli, served as the basis for the experiment, together with the Google search engine. The results obtained led to the planning of a second experiment, whose aim was to measure and compare the usability of four experimental multimodal interfaces and one traditional text-based interface, all implemented in the AVBRO II platform. Effectiveness, efficiency, and user satisfaction were the criteria used to evaluate the usability of these interfaces. In the third and final experiment, the usability of a traditional text-based interface and the two most suitable experimental multimodal interfaces of the AVBRO II platform was investigated further, using learnability, error rate, efficiency, memorability, and user satisfaction as criteria. The analysis of the results obtained from these experiments provided the basis for a set of design guidelines for the development of usable interfaces based on a multimodal approach.
12

Brazilian Portuguese Speech Recognition for Navigation on Mobile Device Applications

Triana Gomez, Edwin Miguel 17 June 2011
This research addresses the goal of reducing the attention level required to operate the Borboleta system by providing speech recognition capabilities for navigating the software's functions, allowing the professional to pay more attention to the patient. The project methodology comprises a literature review to establish the state of the art of the field, a survey of available speech recognition software, the collection of Brazilian Portuguese utterances of the system's commands to train and test the system, a design and development stage that defined the architecture for integration with Borboleta, and a testing phase to measure the system's accuracy and its usability and acceptance levels.
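A minimal sketch of the voice-navigation idea described in this abstract: recognized utterances are mapped to navigation targets. The command vocabulary, screen names, and the `navigateTo` handler are all hypothetical illustrations, not Borboleta's actual API.

```typescript
// Hypothetical sketch of command-based voice navigation, in the spirit of
// the approach described above. Command names and screens are invented.

type Screen = "agenda" | "pacientes" | "encontros" | "menu";

// Map recognized Brazilian Portuguese utterances to navigation targets.
const commandMap: Record<string, Screen> = {
  "agenda": "agenda",
  "pacientes": "pacientes",
  "encontros": "encontros",
  "menu principal": "menu",
};

function navigateTo(screen: Screen): void {
  // In a real system this would drive the UI; here we just log.
  console.log(`Navigating to screen: ${screen}`);
}

// Called with the recognizer's best hypothesis for an utterance.
function handleUtterance(hypothesis: string): void {
  const target = commandMap[hypothesis.trim().toLowerCase()];
  if (target !== undefined) {
    navigateTo(target);
  } else {
    console.log(`Unrecognized command: "${hypothesis}"`);
  }
}

handleUtterance("Pacientes"); // -> Navigating to screen: pacientes
```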
13

A framework for the development of multimodal interfaces in ubiquitous computing applications

Inacio Junior, Valter dos Reis 26 April 2007
Multimodal interfaces process several types of user input, such as voice, gestures, and pen interaction, in a manner combined and coordinated with the system's multimedia output. Applications that support multimodality provide a more natural and flexible way to execute tasks on computers, since they allow users with different levels of ability to choose the mode of interaction that best fits their needs. The use of interfaces that depart from the conventional keyboard-and-mouse style of interaction aligns with the concept of ubiquitous computing, which has established itself as a research area studying the technological and social aspects arising from the integration of computational systems and devices into environments. In this context, the work reported here investigated the implementation of multimodal interfaces in ubiquitous computing applications, by building a software framework for integrating handwriting and speech modalities.
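One way a framework could coordinate handwriting and speech input is time-windowed fusion of complementary events. The event types, the fusion window, and the fusion rule below are assumptions for illustration; the thesis does not detail its framework's internals here.

```typescript
// Illustrative sketch of coordinating handwriting and speech input events.
// All types and the fusion window are assumptions, not the thesis design.

interface ModalityEvent {
  modality: "speech" | "handwriting";
  content: string;   // recognized text
  timestamp: number; // ms since epoch
}

const FUSION_WINDOW_MS = 1500; // events closer than this may be fused
let pending: ModalityEvent | null = null;

function onModalityEvent(event: ModalityEvent): void {
  if (pending && event.timestamp - pending.timestamp <= FUSION_WINDOW_MS
      && pending.modality !== event.modality) {
    // Complementary inputs arriving close in time form one command,
    // e.g. saying "delete" while circling a word with the pen.
    console.log(`Fused command: ${pending.content} + ${event.content}`);
    pending = null;
  } else {
    pending = event;
  }
}

onModalityEvent({ modality: "speech", content: "delete", timestamp: 1000 });
onModalityEvent({ modality: "handwriting", content: "word#3", timestamp: 1900 });
// -> Fused command: delete + word#3
```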
14

GuiBuilder Multimodal: a framework for generating multimodal interfaces supported by interaction design patterns

Fragoso, Ygara Lúcia Souza Melo 01 November 2012
Interaction between humans and computers has improved substantially over time through the evolution of interfaces. Allowing users to interact with machines through several modalities of communication, in a natural way, can increase user interest and help ensure the success of an application. However, the multimodality literature shows that developing such interfaces is not a simple task, especially for inexperienced or recently graduated professionals, since each modality of interaction carries its own technical complexity, such as learning and adapting to new tools, languages, and possible actions. Moreover, it is necessary to determine which modalities (voice, touch, and gestures) suit the application, how to combine them so that the strong points of one compensate for the weak points of another, and in what context the end user will be operating. GuiBuilder Multimodal was developed to meet the basic needs of implementing an interface that uses voice, touch, and gestures. The framework supports interface development through the WYSIWYG (What You See Is What You Get) model, in which the designer only sets a few parameters to make a component multimodal. During the interface creation phase, agents monitor what the designer does and offer support and hints based on design patterns, grouped into categories such as multimodality, user interaction, and components.
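To make the "the designer just sets a few parameters" idea concrete, here is a small sketch of a parameterized multimodal component. The property names and the `UIButton` class are hypothetical; the abstract does not show GuiBuilder Multimodal's actual configuration surface.

```typescript
// Hypothetical sketch of per-component multimodal configuration in a
// WYSIWYG builder. Names are invented for illustration.

interface MultimodalConfig {
  voiceCommand?: string;  // utterance that activates the component
  touchEnabled?: boolean; // respond to taps
  gesture?: string;       // named gesture, e.g. "swipe-right"
}

class UIButton {
  constructor(
    public label: string,
    public multimodal: MultimodalConfig = {},
  ) {}

  describe(): string {
    const modes = ["click"];
    if (this.multimodal.voiceCommand) modes.push(`voice:"${this.multimodal.voiceCommand}"`);
    if (this.multimodal.touchEnabled) modes.push("touch");
    if (this.multimodal.gesture) modes.push(`gesture:${this.multimodal.gesture}`);
    return `${this.label} [${modes.join(", ")}]`;
  }
}

const saveButton = new UIButton("Save", {
  voiceCommand: "salvar",
  touchEnabled: true,
  gesture: "swipe-right",
});
console.log(saveButton.describe());
// -> Save [click, voice:"salvar", touch, gesture:swipe-right]
```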
15

Comparison and combination of visual and audio renderings for designing human-computer interfaces: from human factors to distortion-based presentation strategies

Bouchara, Tifanie 29 October 2012
Although more and more sound and audiovisual data are available, most access interfaces rely solely on visual presentation. Many visualization techniques have been proposed that present multiple documents simultaneously and use distortion to highlight the most relevant information. We propose equivalent audio techniques for presenting several competing sound files, and optimal combinations of such audio and visual presentation strategies for multimedia documents. To better adapt these strategies to the user, we studied the attentional and perceptual processes involved in listening to and watching simultaneous audiovisual objects, focusing on the interactions between the two modalities. Combining visual size and sound level, we extended the visual concept of the magnifying lens, used in visual focus+context methods, to the auditory and audiovisual modalities. Based on this concept, a navigation application for a video collection was developed. We compared our tool with another rendering mode, Pan&Zoom, through a usability study. The results, especially the subjective ones, encourage further research into multimodal presentation strategies that add an audio rendering to the visual renderings already available. A second study concerned the identification of environmental sounds in a noisy environment in the presence of a visual context. The noise simulated the presence of multiple competing sound sources, as would occur in an interface where audio and audiovisual documents are presented together. The results confirmed the advantage of multimodality under degraded audio conditions. Moreover, beyond the primary goals of the thesis, the study confirmed the importance of semantic congruency between visual and auditory components for object recognition and deepened knowledge of the auditory perception of environmental sounds. Finally, we investigated the attentional processes involved in searching for one object among many, in particular the "pop-out" phenomenon whereby a salient object automatically attracts attention. In vision, a sharp object attracts attention among blurred objects, and some visual presentation strategies already exploit this parameter. We therefore extended the notion of blur to the auditory and audiovisual modalities by analogy. A series of perceptual experiments confirmed that a sharp object among blurred objects attracts attention, whatever the modality. Search and identification are accelerated when the sharpness cue matches the target but slowed when it marks a distractor, revealing an involuntary guidance effect. Concerning crossmodal interaction, the redundant combination of audio and visual blur proved even more effective than a unimodal presentation. The results also indicate that an optimal combination does not necessarily require distortion of both modalities.
16

HyMobWeb: an approach for the hybrid adaptation of context-sensitive mobile Web interfaces with multimodality support

Bueno, Danilo Camargo 30 June 2017
The use of mobile devices to browse the Web has become increasingly popular as a consequence of easy access to the Internet. However, the transition from desktop development to mobile platforms demands that developers pay close attention to interaction elements that fit the platform's interaction demands. Several approaches to adapting Web applications have therefore emerged; one of the solutions most widely adopted by Web developers is the use of front-end frameworks. Nevertheless, this technique has shortcomings that directly affect interaction elements and user satisfaction. In this scenario, this work proposes HyMobWeb, a hybrid adaptation approach for context-sensitive mobile Web interfaces with multimodality support, which aims to help developers create solutions closer to the device's characteristics, the contexts of use, and the needs of end users. The approach comprises static and dynamic adaptation stages. Static adaptation supports developers in marking the elements to be adapted, through a grammar that can reduce the coding effort of solutions addressing multimodality and context sensitivity. Dynamic adaptation is responsible for analyzing changes in the user's context and performing the adaptations marked during static adaptation. The approach was derived from a review of the literature on mobile Web interface adaptation and three exploratory studies: the first addressed end users' difficulties with Web applications not adapted to mobile devices; the second, the impact of adding multimodality to such environments; and the third, the gaps left by traditional adaptations, made through front-end frameworks, relative to end users' needs. To evaluate the approach, two evaluations were carried out, one from the developer's perspective and one from the end user's. The first verified software developers' acceptance of the proposal in using the grammar and the resources built from it. The second sought to identify whether the adaptation previously implemented by the developers satisfied end users during use. The findings suggest that HyMobWeb brought significant contributions to the developers' work and that the resources explored by the approach produced positive reactions in end-user satisfaction.
