  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

A recuperação da informação sob a ótica dos usuários: um estudo de caso da base de dados Accessus / Information retrieval from the users' perspective: a case study of the Accessus database

Castro, Renan Marinho de 29 September 2011 (has links)
This research addresses issues of organization and information retrieval in the collection of the Center for Research and Documentation in Contemporary History of Brazil (CPDOC). The analysis is based on a case study of the reference service provided by the institution's Reference Room and of the use of the Accessus database. It draws a profile of the collection's users and of their research behavior by mapping how they interact with Accessus. It reviews the context in which the database was developed and investigates the creation of the controlled vocabulary in history and related sciences on which Accessus was based. It discusses the accessibility of this language to a public unfamiliar with the field and pairs that discussion with an analysis of the different user profiles. Finally, it examines how the CPDOC collection is indexed and proposes reflections on an indexing process that takes the users' profiles directly into account.
92

Informação e Inclusão acadêmica: um estudo sobre as necessidades socioinformacionais dos universitários cegos do Campus I da UFPB / Information and academic inclusion: a study on the social and informational needs of blind university students at Campus I of UFPB

Silva, Aparecida Maria da 30 March 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This study examines the informational actions used to meet the social and informational needs of blind undergraduate students at Campus I of the Federal University of Paraíba (UFPB), identifying their information needs and the barriers most relevant to their search for and use of information. The methodology applied Bardin's content analysis, with categories grounded in Brenda Dervin's Sense-Making model, which seeks to apprehend, from a subjective perspective, how users construct meaning. The results revealed informational, attitudinal, and technical barriers that affect not only the blind students but also the professors, with respect to the inclusive process, and the University itself which, as a manager of possibilities, does not make assistive technologies usable as a factor of digital inclusion throughout its educational context. The study concludes that professors lack the pedagogical (in)formation needed to change their social and educational praxis, and it suggests further research on this problem, which is reflected in the training of educators and in the silent stance of universities toward people with special needs.
93

Biblioteca virtual temática em saúde: interatividade com usuário leigo / Thematic virtual health library: interactivity with lay users

Lourenço, Regina Goulart 20 March 2013 (has links)
This study presents and analyzes the search behavior of lay users from socioeconomic class C when looking for health information in a virtual library. It combines a literature review with an exploratory study of the potential for communicative interactivity between a virtual library and its users, using the health domain as the setting. Concepts of information, knowledge, virtual libraries, user behavior, marketing, and health promotion are presented. In the field research, data were gathered with a set of instruments that included a semi-structured interview, questions characterizing the respondent, and the think-aloud technique. The results pointed to difficulties, barriers, and expectations of users with respect to their searches in the virtual library, and suggestions are made to broaden the interaction between user and virtual library.
94

A produção do usuário e seu uso sumário: discursos da clientela de um NAPS / The user’s production and its concise application: discourses from a NAPS clientele

Sergio Bacchi Machado 11 August 2006 (has links)
Approaching the controversial figure of madness as a strategic domain of numerous power relations that are not primarily restrictive, this dissertation focuses on the production of the "user" subject in a NAPS (Núcleo de Atenção Psicossocial), a public mental health institution linked to the Brazilian anti-asylum movement (Luta Antimanicomial). Transcriptions of interviews with users were analyzed using the institutional discourse analysis method, in order to study the constitution of the subject in discourse, which involves both the subject's bond with the institution and the interlocution established with the interviewer in the interview itself. By analyzing the users' discourse, rather than imposing psychiatric or psychopathological classifications on the subjects, the study outlines singularities through a positive reading of these discourses. Finally, by comparing the analyses of the interviews, it maps discursive regularities and the various correlations of force. Topics such as violence, doctors, sexuality, medication, and institutional daily life are addressed throughout, always grounded in the users' discourse.
95

Assessment of spectrum-based fault localization for practical use / Avaliação de localização de defeitos baseada em espectro para uso prático

Higor Amario de Souza 17 April 2018 (has links)
Debugging is one of the most time-consuming activities in software development. Several fault localization techniques have been proposed in recent years, aiming to reduce development costs. A promising approach, called Spectrum-based Fault Localization (SFL), comprises techniques that provide a list of suspicious program elements (e.g., statements, basic blocks, methods) that are more likely to be faulty, which developers then inspect to search for faults. However, these techniques are not yet used in practice, and they rest on assumptions about developer behavior that may not hold: a developer is supposed to inspect an SFL list from the most to the least suspicious element until reaching the faulty one. This assumption has two implications: techniques are assessed only by the position of a bug in the list, and a bug is deemed found as soon as the faulty element is reached. To be useful in practice, SFL techniques should place the faulty program elements among the first picks. Most techniques use ranking metrics to assign suspiciousness values to the program elements executed by the tests, and these metrics have produced similarly modest results, which indicates the need for different strategies to improve the effectiveness of SFL. Moreover, most techniques use only control-flow spectra, because of the high execution costs associated with other spectra, such as data-flow. Also, little research has investigated the use of SFL techniques by practitioners. Understanding how developers use SFL may help to clarify the theoretical assumptions about their behavior, which in turn can inform the proposal of techniques more feasible for practical use; user studies are therefore a valuable tool for the development of the area. The goal of this thesis research was to propose strategies to improve spectrum-based fault localization, focusing on its practical use. It presents the following contributions. First, we investigate strategies to provide contextual information for SFL; these strategies helped to reduce the amount of code to be inspected before reaching the faults. Second, we carried out a user study to understand how developers use SFL in practice; the results show that developers can benefit from SFL to locate bugs. Third, we explore the use of data-flow spectra for SFL; data-flow spectra single out faults significantly better than control-flow spectra, improving fault localization effectiveness.
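The suspiciousness ranking described in this abstract can be illustrated with the widely used Ochiai metric, which scores each program element from the coverage of passing and failing tests. The following is a generic sketch with toy data, not code from the thesis:

```python
import math

def ochiai_suspiciousness(coverage, outcomes):
    """Rank program elements by Ochiai suspiciousness.

    coverage[t]: set of elements executed by test t.
    outcomes[t]: True if test t failed.
    Returns (element, score) pairs, most suspicious first.
    """
    total_failed = sum(outcomes)
    statements = set().union(*coverage)
    scores = {}
    for s in sorted(statements):
        # ef / ep: failing / passing tests that executed element s
        ef = sum(1 for cov, failed in zip(coverage, outcomes) if s in cov and failed)
        ep = sum(1 for cov, failed in zip(coverage, outcomes) if s in cov and not failed)
        denom = math.sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

An element covered by every failing test and no passing test scores 1.0 and lands at the top of the list, which is exactly the "first picks" position the abstract argues SFL techniques should achieve.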
96

Ausarbeitungsleitfaden für Nutzerstudien zur Evaluation von XR-Interfaces in der Produktentwicklung / A design guide for user studies evaluating XR interfaces in product development

Harlan, Jakob, Schleich, Benjamin, Wartzack, Sandro 06 September 2021 (has links)
Extended reality (XR) technologies are used throughout the product development process. Systems for actively creating and modifying digital product data, in particular, still offer considerable potential. Research and development of these immersive interfaces relies heavily on evaluation through user studies, since only in this way can the practical suitability of such human-machine interfaces be rigorously assessed and compared. There is much to consider when planning and conducting these user studies. This paper presents a guide for designing evaluations of XR interfaces for product development. First, the questions the study is meant to answer must be defined. From these, the experimental conditions to be tested, the tasks posed, the metrics recorded, the participants selected, and the planned procedure must be specified. In addition to this general outline, the approach is applied to a case study: the design of a user study evaluating the usability and suitability of a novel virtual reality interface for the natural creation of preliminary designs is presented.
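The planning steps the guide enumerates (questions, conditions, tasks, metrics, participants, procedure) can be sketched as a simple data structure; the field names here are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class XRUserStudyPlan:
    """One record per planned XR-interface evaluation."""
    research_questions: list[str]   # what the study should answer
    conditions: list[str]           # interface variants under test
    tasks: list[str]                # tasks posed to participants
    metrics: list[str]              # e.g. task time, usability score
    participants: str               # sampling / recruitment criteria
    procedure: list[str] = field(default_factory=list)  # session steps
```

Filling in such a plan before recruiting forces the dependency the guide stresses: every later field should be derivable from the research questions.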
97

Evaluating Generated Co-Speech Gestures of Embodied Conversational Agent(ECA) through Real-Time Interaction / Utvärdering av genererade samspråkliga gester hos Embodied Conversational Agent (ECA) genom interaktion i realtid

He, Yuan January 2022 (has links)
Embodied Conversational Agents' (ECAs') gestures can enhance human perception along many dimensions during interactions. In recent years, data-driven gesture generation for ECAs has attracted considerable research attention and effort, and methods have been continuously optimized. Researchers have typically used human-agent interaction in user studies when evaluating ECAs that generate rule-based gestures. When evaluating ECAs that generate gestures with data-driven methods, however, participants are often required to watch prerecorded videos, which cannot adequately assess human perception during interaction. To address this limitation, we set two main research objectives: first, to explore a workflow for assessing data-driven gesturing ECAs through real-time interaction; second, to investigate whether gestures affect ECAs' perceived human-likeness, animacy, and intelligence, and humans' focused attention on ECAs. Our user study had participants interact with two ECAs under two experimental conditions, with and without hand gestures. Both subjective data from a self-report questionnaire and objective data from a gaze tracker were collected. To our knowledge, this study represents the first attempt to evaluate data-driven gesturing ECAs through real-time interaction and the first experiment using gaze tracking to examine the effect of ECA gestures. The eye-gaze data indicated that when an ECA can generate gestures, it attracts more attention to its body.
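The gaze finding above reduces to the share of gaze samples that land on an area of interest (AOI) such as the agent's body. A minimal sketch with hypothetical AOI labels, not the study's actual analysis code:

```python
def attention_share(gaze_labels, aoi="body"):
    """Fraction of gaze-tracker samples falling on the given AOI.

    gaze_labels: one AOI label per sample, e.g. 'body', 'face', 'other'.
    A real pipeline would first map raw (x, y) fixations to AOIs.
    """
    if not gaze_labels:
        return 0.0
    return sum(1 for g in gaze_labels if g == aoi) / len(gaze_labels)
```

Comparing this share between the gesture and no-gesture conditions is one straightforward way to quantify "more attention to its body".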
98

A visual analytics approach for multi-resolution and multi-model analysis of text corpora : application to investigative journalism / Une approche de visualisation analytique pour une analyse multi-résolution de corpus textuels : application au journalisme d’investigation

Médoc, Nicolas 16 October 2017 (has links)
As the production of digital texts grows exponentially, a greater need to analyze text corpora arises in many application domains, insofar as these corpora constitute inexhaustible sources of shared information and knowledge. This thesis therefore proposes a novel visual analytics approach for the analysis of text corpora, implemented for the real and concrete needs of investigative journalism. Motivated by the problems and tasks identified with a professional investigative journalist, the visualizations and interactions were designed through a user-centered methodology involving the user during the whole development process. Specifically, investigative journalists formulate hypotheses and exhaustively explore the field under investigation in order to multiply the sources showing pieces of evidence related to their working hypotheses. Carrying out such tasks in a large corpus is a daunting endeavor and requires visual analytics software addressing several challenging research issues covered in this thesis. First, the difficulty of making sense of a large text corpus lies in its unstructured nature. We resort to the Vector Space Model (VSM) and its strong relationship with the distributional hypothesis, leveraged by multiple text mining algorithms, to discover the latent semantic structure of the corpus. Topic models and biclustering methods are recognized to be well suited to the extraction of coarse-grained topics, i.e., groups of documents concerning similar topics, each represented by a set of terms extracted from the textual contents. We provide a new Weighted Topic Map visualization that conveys a broad overview of the coarse-grained topics, allowing quick interpretation of contents through multiple tag clouds while depicting topical structure such as the relative importance of topics and their semantic similarity. Although exploring the coarse-grained topics helps locate a topic of interest and its neighborhood, identifying specific facts, viewpoints, or angles related to events or stories requires a finer level of structure to represent topic variants. This nested structure, revealed by Bimax, a pattern-based overlapping biclustering algorithm, captures in biclusters the co-occurrences of terms shared by multiple documents and can disclose facts, viewpoints, or angles related to events or stories. The thesis tackles the visualization of a large number of overlapping biclusters by organizing term-document biclusters into a hierarchy that limits term redundancy and conveys their commonalities and specificities. We evaluated the utility of our software through a usage scenario and a qualitative evaluation with an investigative journalist. In addition, the co-occurrence patterns of the topic variants revealed by Bimax are determined by the enclosing topical structure supplied by the coarse-grained topic extraction method that is run beforehand; nonetheless, little guidance exists regarding the choice of that method and its impact on the exploration and comprehension of topics and topic variants. We therefore conducted both a numerical experiment and a controlled user experiment to compare two topic extraction methods: Coclus, a disjoint biclustering method, and hierarchical Latent Dirichlet Allocation (hLDA), an overlapping probabilistic topic model. The theoretical foundation of both methods is systematically analyzed by relating them to the distributional hypothesis. The numerical experiment provides statistical evidence of the difference between the resulting topical structures of the two methods, and the controlled experiment shows their impact on the comprehension of topics and topic variants from the analyst's perspective. (...)
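The Vector Space Model the abstract relies on can be sketched in a few lines: documents become term-frequency vectors, and semantic proximity is the cosine of the angle between them. A toy illustration, not the thesis's actual pipeline (which would add tf-idf weighting and tokenization):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two whitespace-tokenized documents."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    # Dot product over the terms the two documents share
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

Grouping documents whose pairwise similarities are high is the intuition behind the coarse-grained topics the thesis extracts with topic models and biclustering.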
99

AI as a tool and its influence on the User Experience design process : A study on the usability of human-made vs more-than-human-made prototypes

Pop, Mira, Schricker, Max January 2023 (has links)
This research paper delves into the integration of artificial intelligence (AI) into the user experience (UX) design process, resulting in more-than-human-made designs. Specifically, the study focuses on the text-to-image AI tool Midjourney. The paper addresses two primary research questions: 1) How do AI tools influence the current UX design process of a high-fidelity prototype? and 2) How do more-than-human-made high-fidelity prototypes compare with human-made high-fidelity prototypes in terms of UX? To answer these questions, a two-method study design was employed. First, two focus groups with a total of eight designers were formed, one of which used Midjourney, in order to investigate its influence on the design process and to compare the two groups' ways of working; the aim was to create two comparable prototypes within a specific e-commerce setting. Second, a between-subjects user study with 32 participants was conducted to test the high-fidelity prototypes and to assess any disparities in UX quality between them. Regarding the first research question, the findings indicate that Midjourney primarily serves as an inspirational tool: designers were able to harness it to generate dark-mode images, with the final chosen dark mode exemplifying Midjourney's impact, and they also attempted to use the tool for creating icons. Regarding the second research question, the user study revealed only minor significant differences in UX quality despite similar and comparable use cases; the overall System Usability Scale (SUS) and User Experience Questionnaire Plus (UEQ+) scores did not exhibit any significant disparity. The study suggests that while Midjourney proves to be a useful tool within the design process, its current influence on designers' UX design process and on the performance of the final prototype remains relatively modest. Further research and development may be required to enhance its impact in the field of UX design, and the study design could be used to test other AI tools in comparable settings.
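The SUS scores the study compares follow a fixed, standard scoring rule: on the 1-5 scale, each odd-numbered item contributes (response - 1) and each even-numbered item (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score. A sketch of that standard computation, not the authors' analysis code:

```python
def sus_score(responses):
    """Standard SUS score for one respondent.

    responses: list of 10 Likert answers (1-5), item 1 first.
    Odd items are positively worded, even items negatively worded.
    """
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5
```

Averaging this per-respondent score across each condition's participants yields the group scores whose difference the study tested for significance.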
100

A Cyclist Warning System to enhance Traffic Safety - Development, Implementation & Evaluation in a Bicycle Simulator

Kreißig, Isabel, Springer, Sabine, Willner, Robert, Keil, Wolfram 02 January 2023 (has links)
The aim of the research project RADimFOKUS was to develop and evaluate a cyclist warning system (CWS) prototype in order to prevent safety-critical events (SCEs), such as accidents, for the especially vulnerable group of cyclists and, in turn, to contribute to enhanced traffic safety for this sustainable and healthy mode of transport. The basic idea of the system is to warn cyclists when an SCE is detected. Although research on CWSs is rather scarce, first evaluations of such systems are promising [1]. In line with current developments and trends, the CWS detects SCEs based on connected traffic information and is, in a first step, intended for implementation in electrified bicycles (i.e., pedelecs), where power is supplied by the integrated battery. Within the scope of the project we carried out the following three stages, described in this contribution: (1) development of the warning model and user interface for the CWS prototype, (2) development of a bicycle simulator and implementation of the CWS interface for user studies, and (3) a first evaluation of the CWS prototype in a bicycle-simulator user study.
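The abstract does not specify the warning model's trigger logic; a common basis for such warnings is time-to-collision (TTC), the remaining gap divided by the closing speed. A hypothetical sketch of a TTC-threshold trigger, not the RADimFOKUS model itself:

```python
def should_warn(distance_m, closing_speed_mps, ttc_threshold_s=3.0):
    """Warn when estimated time-to-collision drops below a threshold.

    distance_m: gap to the conflicting road user, in meters.
    closing_speed_mps: rate at which the gap shrinks, in m/s.
    The 3-second default threshold is an illustrative assumption.
    """
    if closing_speed_mps <= 0:
        return False  # the objects are not getting closer
    return distance_m / closing_speed_mps < ttc_threshold_s
```

In a connected-traffic setting, the distance and closing speed would come from position broadcasts of nearby vehicles rather than onboard sensing.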
