  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Comparing Web Accessibility between Major Retailers and Novelties for E-Commerce

Xu, Jinmeng January 2020 (has links)
Purpose – To compare the accessibility of General Merchandise e-commerce websites between major retailers and novelty sellers, in terms of conformance with web accessibility guidelines and implementation of the Accessible Rich Internet Applications (ARIA) specifications; to determine whether differences exist between the two groups, what those differences are, and thereby to analyse upcoming versus established e-commerce.

Method – Descriptive and quantitative case studies were used to evaluate 45 websites each from major retailers and from novelties in the General Merchandise branch of e-commerce, with sample websites selected from Alexa's rank lists. The WAVE tool, the Web Accessibility Quantitative Metric (WAQM), and the Fona assessment were then applied to analyse the cases, quantify their accessibility, and answer the research questions.

Findings – ARIA specifications, as a technological solution, had a positive effect on web accessibility when focusing only on accessibility guidelines: novelty websites with fewer ARIA attributes generally showed lower accessibility levels, even though many other elements can also affect accessibility.

Implications – Within the General Merchandise branch, the degree of web accessibility on major retailer websites was better than on novelty websites; as far as e-commerce is concerned, the accessibility of mature, long-established websites was currently stronger than that of new websites with creative products.

Limitations – On the sampling side, there were limitations in sample sources, sample size, and website languages; on the technical side, the evaluation of dynamic content covered only keyboard navigation, and the Fona assessment tool also had restrictions.
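The page-level scoring idea behind metrics like WAQM can be sketched as a ratio of failed to tested checkpoints. The sketch below is a deliberately simplified toy, not the published WAQM formula (which weights failures by checkpoint priority); the `PageReport` fields are hypothetical stand-ins for what a WAVE-style checker reports.

```python
from dataclasses import dataclass

@dataclass
class PageReport:
    """Hypothetical per-page counts, as a WAVE-style checker might report them."""
    errors: int        # detected guideline violations
    checkpoints: int   # checkpoints actually tested on the page
    aria_attrs: int    # ARIA attributes found in the markup

def accessibility_score(report: PageReport) -> float:
    """Toy WAQM-like score in [0, 100]: the share of tested checkpoints
    that passed. The real metric additionally weights by priority."""
    if report.checkpoints == 0:
        return 100.0  # nothing tested, nothing failed
    failed = min(report.errors, report.checkpoints)
    return 100.0 * (1 - failed / report.checkpoints)

retailer = PageReport(errors=3, checkpoints=60, aria_attrs=42)
novelty = PageReport(errors=12, checkpoints=60, aria_attrs=5)
print(accessibility_score(retailer))  # 95.0
print(accessibility_score(novelty))   # 80.0
```

Aggregating such scores over each group of 45 sites would give the kind of group-level comparison the study reports.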
12

Prédiction de performances des systèmes de Reconnaissance Automatique de la Parole / Performance prediction of Automatic Speech Recognition systems

Elloumi, Zied 18 March 2019 (has links)
In this thesis, we focus on performance prediction of automatic speech recognition (ASR) systems. This is a useful task for measuring the reliability of transcription hypotheses on a new data collection, when the reference transcription is unavailable and the ASR system used is unknown (black box). Our contribution covers several areas. First, we propose a heterogeneous French corpus for training and evaluating both performance prediction systems and ASR systems. We then compare two prediction approaches: a state-of-the-art (SOTA) approach based on explicitly engineered features, and a new approach based on features learnt implicitly with convolutional neural networks (CNNs). While the joint use of textual and acoustic features brings no gain for the SOTA approach, combining the two inputs yields the best WER prediction performance with the CNNs. We also show that the CNNs accurately predict the shape of the WER distribution over a collection of recordings, whereas the SOTA approach generates a distribution far from reality. Next, we analyse factors impacting both prediction approaches. We also assess the impact of the amount of training data for the prediction systems, as well as the robustness of systems trained on the outputs of one particular ASR system and used to predict performance on a new data collection. Our experimental results show that both prediction approaches are robust, and that the prediction task is harder on short speech turns and on turns with a spontaneous speaking style. Finally, we try to understand which information is captured by our neural model and how it relates to different factors. Our experiments show that intermediate representations in the network implicitly encode information on speaking style, speaker accent, and broadcast-program type. To take advantage of this analysis, we propose a multi-task system that proves slightly more effective on the performance prediction task.
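The quantity being predicted throughout this abstract is the word error rate (WER). As a point of reference, here is a minimal self-contained WER computation via word-level edit distance; this illustrates the target metric only, not the thesis's CNN prediction model.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance (substitutions,
    insertions, deletions) divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # 1 error / 6 words ≈ 0.167
```

A performance predictor in the black-box setting must estimate this value without access to the reference transcript.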
13

Construção de banco de questões para exames de proficiência em inglês para programas de pós-graduação

Monzón, Andrea Jessica Borges 03 December 2008 (has links)
As English is the lingua franca of science and of the academic community, proficiency in the language is necessary for anyone in this environment, especially regarding reading comprehension. Many proficiency exams that assess this ability in candidates or graduate students, however, only require examinees to translate a text. At ICMC (USP – São Carlos), an English Proficiency Exam (EPE) has been administered since 2001, evaluating graduate students' knowledge of the research article as a genre (Swales, 1990; Weissberg & Buker, 1990). The assessment is divided into four modules: 1) linguistic conventions, 2) structure of the research article, 3) reading comprehension, and 4) writing strategies peculiar to academic English (Aluísio et al., 2003). However, the modules are not fully implemented, and there is still room for linguistic reflection, since the exam was designed by a professor in the Computer Science area. The current study aims at the construction of a corpus of research articles and the elaboration of questions for module 1, based on the axioms needed in any evaluation (Perrenoud, 1999; Miller et al., 1998), the characteristics peculiar to a foreign-language test, Bloom's Taxonomy (1984), the Admissible Probability Method (APM) (Klinger, 1997), and the most recurrent mistakes made by Brazilian graduate students (Genoves Jr. et al., 2007). Computational tools will be used, such as a concordancer and a word-frequency list of the kind commonly available in corpus processors like WordSmith Tools, besides a chunker, a parser, and a parse tree. The questions will form part of an item bank to be used for: 1) the elaboration of future EPEs; 2) gathering relevant information about the examinees' reading comprehension; and 3) the examinees' study and preparation through CALEAP-Web (Gonçalves, 2004).
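The two corpus tools named above, a word-frequency list and a concordancer, are simple to sketch. The following is a minimal illustration of what such tools compute (a frequency count and keyword-in-context lines), not an implementation of WordSmith Tools itself; the tokenisation regex is an assumption.

```python
from collections import Counter
import re

def word_list(corpus: str) -> Counter:
    """Word-frequency list of the kind a corpus processor produces."""
    return Counter(re.findall(r"[a-zA-Z']+", corpus.lower()))

def concordance(corpus: str, node: str, span: int = 2) -> list[str]:
    """KWIC lines: each occurrence of the node word with `span`
    words of context on either side."""
    tokens = re.findall(r"[a-zA-Z']+", corpus.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - span):i])
            right = " ".join(tokens[i + 1:i + 1 + span])
            lines.append(f"{left} [{node}] {right}")
    return lines

text = "The results suggest that the results are robust."
print(word_list(text).most_common(2))  # [('the', 2), ('results', 2)]
print(concordance(text, "results"))
```

Run over a corpus of research articles, outputs like these are what feed the elaboration of module-1 items.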
14

Transforming Education into Chatbot Chats : The implementation of Chat-GPT to prepare educational content into a conversational format to be used for practicing skills / Omvandla utbildningsmaterial till chattbot-samtal : Implementeringen av Chat-GPT för att förbereda utbildningsmaterial till konversationsbaserat format för inlärnings syften

Wickman, Simon, Zandin, Philip January 2023 (has links)
In this study we explore the possibility of using ChatGPT to summarise large amounts of educational content and place it in a template that can later be used for dialogue purposes, and we examine the challenges and solutions that arise during the implementation. Today it is a problem for users to create well-made prompts for learning scenarios that fulfil all the requirements they set. This problem is significant because it addresses the challenge of information overload and shows how generating prompts for dialogue purposes can be made trivial for users. We addressed it by building an implementation for the company Fictive Reality in their application, conducting research, and performing tests. The implementation was made with OpenAI's application programming interface and the GPT-4 model, which is popular for its wide range of domain knowledge, and we connected it to a web page where users could upload text or audio files. The method for finding a suitable summarisation prompt was primarily experimentation, supported by previous research. We used automatic evaluation metrics such as ROUGE, BERTScore, and ChatGPT self-evaluation, and we also had users give feedback on the implementation and the quality of the results. This study shows that ChatGPT effectively summarises extensive educational content and transforms it into dialogue templates for ChatGPT to use. The research demonstrates streamlined and improved prompt creation, addressing the challenge of information overload. The efficiency and quality were equal to or surpassed those of user-generated prompts while preserving almost all relevant information, and the time consumed by this task was reduced by a substantial margin. The biggest struggle we had was getting ChatGPT to grasp our instructions; however, with research and an iterative approach, the process became much smoother. ChatGPT exhibits robust potential for enhancing educational prompt generation. Future work could be dedicated to improving the prompt further by making it more flexible.
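Of the automatic metrics listed, ROUGE-1 is the simplest to show concretely: unigram overlap between a candidate summary and a reference. This sketch computes the standard ROUGE-1 F1 score; the example texts are invented for illustration, and real evaluations use tooling with stemming and proper tokenisation.

```python
from collections import Counter

def rouge1_f(reference: str, summary: str) -> float:
    """ROUGE-1 F1: unigram overlap between a candidate summary and a
    reference, one of the automatic scores used alongside BERTScore."""
    ref = Counter(reference.lower().split())
    cand = Counter(summary.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

ref = "the course covers supervised and unsupervised learning"
cand = "the course covers supervised learning"
print(round(rouge1_f(ref, cand), 3))  # 0.833
```

Scores like this, computed between a generated dialogue template and the source material, give a cheap first check that the summary preserved the relevant content.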
