A comparison of interaction models in Virtual Reality using the HTC Vive

Essinger, Karl January 2018
Virtual Reality (VR) is a field within the gaming industry that has gained much popularity during the last few years, driven mainly by the release of the VR headsets Oculus Rift [1] and HTC Vive [2] two years ago. Because the field has grown from almost nothing in a short time, many VR-related areas have not yet received much research attention. One such area is the comparison of the performance of different interaction models independently of the VR hardware. This study compares the effectiveness of four software-based interaction models for a simple pick-and-place task. Two of the interaction models require the user to move a motion controller to touch a virtual object: one picks the object up automatically on touch, the other requires a button press. The other two have the user point a laser pointer at an object to pick it up: the first emits the laser pointer from a motion controller, the second from the user's head. All four interaction models use the same hardware, the default HTC Vive equipment. Effectiveness is measured with three metrics: time to complete the task, number of errors made during the task, and participant enjoyment rated on a scale from one to five. The first two metrics are gathered in an observational experiment in which the application running the virtual environment logs all relevant information; enjoyment is gathered through a questionnaire answered during the experiment. The research questions are:
• How do the interaction models compare in terms of accuracy and time efficiency when completing basic pick-and-place tasks in this experiment?
• Which interaction models are subjectively more enjoyable to use according to participants?
The results of the experiment are presented as charts in the results chapter and further analysed in the analysis and discussion chapter, together with possible sources of error and theories about why the results turned out as they did. The study concludes that the laser-pointer-based interaction models, 3 and 4, were much less accurate than the handheld interaction models, 1 and 2. All interaction models except model 4 achieved roughly the same completion time, while model 4 lagged several seconds behind. The participants liked interaction model 1 the most, followed closely by 3; they disliked 4 the most and rated 2 in between.
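The interaction models above are described only at the level of user actions. As a rough illustration of how the two touch-based models differ, the sketch below (a hypothetical Python fragment, not code from the thesis; names such as ControllerState and the release behaviour are assumptions) expresses the two grab policies as per-frame update rules:

from dataclasses import dataclass

@dataclass
class ControllerState:
    touching_object: bool    # the motion controller currently overlaps a virtual object
    trigger_pressed: bool    # state of the controller's button/trigger
    holding: bool = False    # whether an object is currently picked up

def update_auto_grab(state: ControllerState) -> ControllerState:
    # Model 1 (assumed): the object is held exactly while it is being touched.
    state.holding = state.touching_object
    return state

def update_button_grab(state: ControllerState) -> ControllerState:
    # Model 2 (assumed): picking up requires touch plus a button press;
    # releasing the button drops the object.
    if state.touching_object and state.trigger_pressed:
        state.holding = True
    elif not state.trigger_pressed:
        state.holding = False
    return state

The laser-pointer models 3 and 4 would follow the same pattern, with touching_object replaced by a ray-cast hit test from the controller or the headset.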

Interação gestual sem dispositivos para displays públicos. / Deviceless gestural interaction aimed to public displays

Motta, Thiago Stein January 2013
With constant technological growth, it is quite common to come across a public display in places with a high concentration of people, such as airports and movie theaters. Although these displays provide useful information, they could be put to better use if they were interactive. Building on research about interaction with large displays and on the specific characteristics of a display placed in a public space, this work looks for a way of interacting that suits this kind of situation. It introduces a method of gestural interaction that does not require the user to hold or wear any device when interacting with a public display. To accomplish a task, the user only needs to stand in front of the display and interact with the on-screen information using his or her hands. Supported gestures cover navigation, selection and manipulation of objects, as well as panning and zooming the view. The proposed system is built so that it can serve different applications without a large installation cost. To achieve this, it follows a client-server model that integrates the application holding the information of interest with the component that interprets the user's gestures. A Microsoft Kinect reads the user's movements, and image post-processing detects whether the user's hands are open or closed. This information is then fed to a state machine that identifies what the user is trying to do in the client application. To assess how robust the system would be in a real public environment, factors that could interfere with the interactive task are evaluated, such as differences in ambient lighting and the presence of other people in the interaction area. Three applications were developed as case studies and each was evaluated in a different way, one of them through a formal user evaluation. The results show that the system, although it does not behave correctly in every situation, has potential for use provided its shortcomings are worked around; most of them stem from the Kinect's own limitations. The proposed system works well enough for selecting and manipulating large objects and for pan & zoom applications such as map navigation, and it is not affected by lighting differences or by the presence of other people in the environment.
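The abstract does not detail the states used in the gesture interpreter, but the general idea of driving interaction from the open/closed state of each hand can be sketched as a small state machine; the states and transitions below are illustrative assumptions, not the ones implemented in the dissertation:

from enum import Enum, auto

class Mode(Enum):
    POINTING = auto()   # both hands open: a cursor follows the tracked hand
    DRAGGING = auto()   # one hand closed: the selected object (or the view) follows that hand
    ZOOMING = auto()    # both hands closed: the distance between the hands controls zoom

def next_mode(left_hand_closed: bool, right_hand_closed: bool) -> Mode:
    # Transition function evaluated on every Kinect frame, using only the
    # open/closed hand states produced by the image post-processing step.
    if left_hand_closed and right_hand_closed:
        return Mode.ZOOMING
    if left_hand_closed or right_hand_closed:
        return Mode.DRAGGING
    return Mode.POINTING

In a map-navigation client, for example, DRAGGING would pan the view while ZOOMING would scale it, matching the pan & zoom interaction described above.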

Designförslag för utveckling av drönarflygledningssystem i städer / Design suggestions for development of unmanned aircraft system traffic management in cities

Halvorsen, Ludwig January 2018
This work explored design suggestions for the development of an unmanned aircraft system traffic management (UTM) system for drone traffic control in cities, drawing on the fields of information visualization, semiotics and sonification. The work proceeded iteratively in sprints; in each sprint, concepts were sketched and prototypes built and then user-tested, so that the design suggestions produced during the sprints could guide the design work. The lessons learned show that it is important for the UTM system to use several different information-visualization and sign representations that support and complement each other. Combining different representations makes it easier for the drone traffic controller to build an understanding of the drone traffic situation in the airspace, which in turn gives better possibilities for control. The auditory icons used for sonification should consist of simple but information-rich sound signatures that can be combined into a dynamic soundscape without becoming irritating. Since UTM systems are at an early stage of development, the design suggestions presented in this work should be seen as a basis for inspiration and discussion rather than as finished design solutions, especially as the shape of a UTM system will depend on the tasks and responsibilities drone traffic controllers will have in the future, which is still uncertain.

Visualization of Vehicle Usage Based on Position Data for Root-Cause Analysis : A Case Study in Scania CV AB

Sagala, Ramadhan Kurniawan January 2018
Root cause analysis (RCA) is a process at Scania carried out to understand the root cause of vehicle breakdowns. It is commonly done by studying vehicle warranty claims and failure reports, identifying patterns that correlate with the breakdowns, and then analyzing the root cause based on those findings. Vehicle usage is believed to be one of the factors that may contribute to the breakdowns, but data on vehicle usage is not commonly used in RCA. This thesis investigates a way to support the RCA process by introducing a dataset of vehicle usage based on position data gathered in project FUMA (Fleet telematics big data analytics for vehicle Usage Modeling and Analysis). A user-centered design process was carried out for a visualization tool that presents FUMA data to people working in the RCA process. Interviews were conducted to gain insights about the RCA process and to generate design ideas. The PACT framework was used to organize the ideas, and use cases were developed to project a conceptual scenario. A low-fidelity prototype was developed as the design artifact for the visualization, and a formative test was done to validate the design and gather feedback for future prototyping iterations. Each design phase yielded further insights into how visualization of vehicle usage could be used in RCA. Based on this study, the prototype design is a promising start toward visualizing vehicle usage for RCA purposes; the presentation of the data, however, still needs improvement to reach the level of practicality required in RCA.

A Maker's Mechanological Paradigm: Seeing Experiential Media Systems as Structurally Determined

January 2015
Wittgenstein's claim that anytime something is seen, it is necessarily seen as something forms the philosophical foundation of this research. I synthesize theories and philosophies from Simondon, Maturana, Varela, Wittgenstein, Pye, Sennett, and Reddy in a research process I identify as a paradigm construction project. My personal studio practice of inventing experiential media systems is a key part of this research and illustrates, with practical examples, my philosophical arguments from a range of points of observation. I see media systems as technical objects, and technical objects as structurally determined systems, in which the structure of the system determines its organization. I identify making, the process of determining structure, as a form of structural coupling, and see structural coupling as a means of knowing material. I introduce my theory of conceptual plurifunctionality as an extension of Simondon's theory. Aspects of materiality are presented as a means of seeing material and immaterial systems, including cultural systems. I seek to answer the questions: How is structure seen as determining the organization of systems, and making seen as a process in which the resulting structures of technical objects and the maker are co-determined? How might an understanding of structure and organization be applied to the invention of contemporary experiential media systems? (Doctoral dissertation, Media Arts and Sciences, 2015.)

Flexibilité des processus de développement à la conception et à l'exécution : application à la plasticité des interfaces homme-machine / Development processes flexibility at design- and enactment-times : application to Human-Computer Interfaces plasticity

Ceret, Eric 04 July 2014
The increasing diversity of devices and services makes the engineering of user interfaces (UIs) more complex: in particular, UIs need to be capable of dynamic adaptation to the user's context of use. This property is called plasticity and has so far been addressed by model-based approaches. However, these approaches suffer from a high threshold of use, so designers and developers need to be supported with flexible guidance, that is, guidance capable of adapting to an evolving variety of skills and practices. Software development method engineering has long been concerned with the flexibility of process models at design time, but very little work has addressed enactment time, although several studies show that designers and developers, the primary users of methods, call for such flexibility. For instance, they expect process models to be expressed in languages they master, to leave design and implementation choices in their hands, and to help them learn the approach. Our proposal of process-model flexibility at both design time and enactment time meets these expectations and thus opens the possibility of providing adequate guidance for the development of plastic UIs. We first focused on the conceptualization of flexibility. This study led us to propose Promote, a taxonomy of process models that defines and grades flexibility along six dimensions. We then transcribed this definition of flexibility into M2Flex, a flexible process metamodel, and implemented it in two tools: D2Flex (D for Design time), a collaborative tool for designing process models, and R2Flex (R for Runtime), an environment for enacting the process models defined in D2Flex. We applied our approach to the development of plastic UIs by making the UsiXML methodology flexible. FlexiLab, our software environment, is currently undergoing technological maturation for transfer to industry. These contributions have been validated, in particular with novice designers, in the fields of plastic UI engineering and information systems.

Brain-computer interface games based on consumer-grade electroencephalography devices: systematic review and controlled experiments / Jogos de interface cérebro-computador baseados em dispositivos comerciais de eletroencefalograma: revisão sistemática e experimentos controlados

Mendes, Gabriel Alves Vasiljevic 31 July 2017
Brain-computer interfaces (BCIs) are specialized systems that allow users to control a computer or a machine using their brain waves. BCI systems allow patients with severe physical impairments, such as those suffering from amyotrophic lateral sclerosis, cerebral palsy and locked-in syndrome, to communicate and regain physical movements with the help of specialized equipment. With the development of BCI technology in the second half of the 20th century and the advent of consumer-grade BCI devices in the late 2000s, brain-controlled systems started to find applications not only in the medical field, but in areas such as entertainment. One particular area that is gaining more evidence due to the arrival of consumer-grade devices is the field of computer games, which has become increasingly popular in BCI research as it allows for more user-friendly applications of BCI technology in both healthy and unhealthy users. However, numerous challenges are yet to be overcome in order to advance in this field, as the origins and mechanics of the brain waves and how they are affected by external stimuli are not yet fully understood. In this sense, a systematic literature review of BCI games based on consumer-grade technology was performed. Based on its results, two BCI games, one using attention and the other using meditation as control signals, were developed in order to investigate key aspects of player interaction: the influence of graphical elements on attention and control; the influence of auditory stimuli on meditation and work load; and the differences both in performance and multiplayer game experience, all in the context of neurofeedback-based BCI games.
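The abstract describes attention and meditation values from a consumer-grade headset being used as control signals in neurofeedback games. The dissertation's own code is not given here; the fragment below is only an illustrative Python sketch, assuming a device that reports attention on a 0-100 scale (as the eSense meters of some consumer headsets do) through some SDK-provided reading that is passed into update():

from collections import deque

class AttentionControl:
    # Toy neurofeedback mapping: smooth the raw 0-100 attention readings and
    # expose them both as a continuous game parameter and as a discrete trigger.
    def __init__(self, window: int = 10, threshold: float = 60.0):
        self.samples = deque(maxlen=window)   # sliding window of recent readings
        self.threshold = threshold            # level above which an action fires

    def update(self, attention: float) -> float:
        # Add one reading (e.g. one sample per second) and return a value in [0, 1],
        # usable for instance as the speed of the player's avatar.
        self.samples.append(max(0.0, min(100.0, attention)))
        return sum(self.samples) / (len(self.samples) * 100.0)

    def triggered(self) -> bool:
        # Discrete variant: True while the smoothed attention exceeds the threshold.
        return bool(self.samples) and sum(self.samples) / len(self.samples) >= self.threshold

A meditation-controlled game would use the same structure with the meditation value in place of attention.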

Ontology view : a new sub-ontology extraction method / Vista de ontologia : um novo metodo para extrair uma sub-ontologia

Aparicio, Jose Martin Lozano January 2015
Nowadays many petroleum companies are adopting knowledge-based systems to improve reservoir quality prediction. However, obstacles prevent geologists from different backgrounds from retrieving information without the help of an information technology expert. The main problem is the semantic heterogeneity of end users when posing queries in a visual query system (VQS), which gets worse when new terminology appears in the knowledge base and affects user interaction, particularly for novice users. In this context, we present theoretical and practical contributions that exploit the synergy between ontology and human-computer interaction (HCI). On the theoretical side, we introduce the concept of an ontology view over a well-founded ontology and provide a formal definition and a characterization of its expressive power. We focus on extracting an ontology view from a well-founded and complete ontology based on ontological meta-properties, and propose a language-independent algorithm for sub-ontology extraction guided by these meta-properties. On the practical side, based on principles of HCI and interaction design, we propose a new visual query system that uses the ontology-view approach to guide the query process; the design also includes data visualizations that help geologists make sense of the retrieved data. We evaluated the interaction design in a usability test with five geologists working in petroleum geology, using a questionnaire in a controlled experiment. The proposed approach is evaluated in the petrography domain on the Diagenesis and MicroStructural communities, using the standard criteria of precision and recall. Experimental results show that the relevant terms obtained from the documents of a community yield between 30 and 66% precision and between 4.6 and 36% recall, depending on the approach selected and the parameter combination. Furthermore, for almost all parameter combinations, the recall and F-measure obtained from diagenesis articles using the sub-ontology generated for the diagenesis community are greater than those obtained using the sub-ontology generated for the microstructural community. Conversely, for all parameter combinations, the recall and F-measure obtained from microstructural articles using the sub-ontology generated for the microstructural community are greater than those obtained using the sub-ontology generated for the diagenesis community.
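The evaluation criteria mentioned above are the standard precision, recall and F-measure. Since the abstract does not show the evaluation code, the following Python sketch merely illustrates how these measures compare the terms covered by a generated sub-ontology with the relevant terms found in a community's documents (the example term sets are invented):

def precision_recall_f1(extracted: set, relevant: set):
    # Precision: fraction of extracted terms that are relevant.
    # Recall: fraction of relevant terms that were extracted.
    true_positives = len(extracted & relevant)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical usage: terms covered by the diagenesis sub-ontology vs. terms
# occurring in diagenesis articles.
p, r, f = precision_recall_f1({"cementation", "porosity", "dolomite"},
                              {"cementation", "porosity", "compaction", "dissolution"})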

Agent-based Interface Approach with Activity Theory : Human-Computer interaction in diabetic health care system

Bai, Wei January 2006
IMIS (Integrated Mobile Information System for Diabetic Healthcare) aims to provide healthcare on both stationary and mobile platforms and is based on Engeström's triangle model from Activity Theory. It focuses on the need for communication and information accessibility between care providers and their shared patients. Based on the needs identified in the target area, IMIS constructs a network-based communication system to support communication and access to patients' journals. Since the system integrates various roles from the healthcare organization, it is a challenge to provide a useful software program to all group members. To facilitate the application and enhance the human-computer interaction of the system, agent technology is applied to increase flexibility, so that the system can adapt itself to a wider group of users. This thesis also introduces the approach of using Activity Theory, a social-psychological theory, in HCI, and discusses the integration of these different disciplines. The multi-agent system is designed with the Gaia methodology from the micro perspective, while from the macro perspective Activity Theory provides the coordination mechanism for the different agents. A prototype was implemented based on the models developed in this research.
