1

Solving University entrance assessment using information retrieval / Resolvendo Vestibular utilizando recuperação de informação

Silveira, Igor Cataneo 05 July 2018
Answering questions posed in natural language is a key task in Artificial Intelligence. However, producing a successful Question Answering (QA) system is challenging, since it requires text understanding, information retrieval, information extraction and text production. The task is made even harder by the difficulty of collecting reliable datasets and of evaluating techniques, two pivotal points for machine learning approaches. This has led many researchers to focus on Multiple-Choice Question Answering (MCQA), a special case of QA where systems must select the correct answer from a small set of alternatives. One particularly interesting type of MCQA is solving standardized tests, such as foreign-language proficiency exams, elementary school science exams and university entrance exams. These exams provide easy-to-evaluate, challenging multiple-choice questions of varying difficulty over large, but limited, domains. The Exame Nacional do Ensino Médio (ENEM) is a high-school-level exam taken every year by students all over Brazil. It is widely used by Brazilian universities as an entrance exam and is the world's second-largest university entrance examination in number of registered candidates. The exam consists of writing an essay and solving a multiple-choice test comprising questions on four major topics: Humanities, Language, Science and Mathematics. Questions within each major topic are not segmented by standard school disciplines (e.g. Geography, Biology) and often require interdisciplinary reasoning. Moreover, previous editions of the exam and their solutions are freely available online, making it a suitable benchmark for MCQA. In this work we automate solving the ENEM, focusing for simplicity on purely textual questions that do not require mathematical reasoning. We formulate the problem of answering a multiple-choice question as finding the candidate answer most similar to the statement, and we investigate two approaches for measuring the textual similarity between candidate answer and statement. The first approach treats this as a Text Information Retrieval (IR) problem, that is, the problem of finding the document in a database that is most relevant to a query. Our queries are built from the statement plus a candidate answer, and we use three different corpora as the database: the first comprises plain-text articles extracted from a dump of the Portuguese-language Wikipedia; the second contains only the text given in the question's header; and the third is composed of pairs of questions and correct answers extracted from past ENEM assessments. The second approach is based on Word Embedding (WE), a method for learning vector representations of words such that semantically similar words have nearby vectors. WE is used in two ways: to augment the IR queries by adding words that the WE model considers related to those already in the query, and to create vector representations of the statement and candidate answers. Using these vector representations we answer questions either directly, by selecting the candidate answer that maximizes cosine similarity to the statement, or indirectly, by extracting features from the representations and feeding them into a classifier that decides which alternative is the answer. Alongside these two approaches we investigate how to enhance them using WordNet, a structured lexical database in which words are connected according to relations such as synonymy and hypernymy. 
Finally, we combine different configurations of the two approaches and their WordNet variations by creating an ensemble of algorithms found by a greedy search. This ensemble chooses an answer by the majority voting of its components. The first approach achieved an average of 24% accuracy using the headers, 25% using the pairs database and 26.9% using Wikipedia. The second approach achieved 26.6% using WE indirectly and 28% directly. The ensemble achieved 29.3% accuracy. These results, slightly above random guessing (20%), suggest that these techniques can capture some of the necessary skills to solve standardized tests. However, more sophisticated techniques that perform text understanding and common sense reasoning might be required to achieve human-level performance. / Responder perguntas feitas em linguagem natural é uma capacidade há muito desejada pela Inteligência Artificial. Porém, produzir um sistema de Question Answering (QA) é uma tarefa desafiadora, uma vez que ela requer entendimento de texto, recuperação de informação, extração de informação e produção de texto. Além disso, a tarefa se torna ainda mais difícil dada a dificuldade em coletar datasets confiáveis e em avaliar as técnicas utilizadas, sendo estes pontos de suma importância para abordagens baseadas em aprendizado de máquina. Isto tem levado muitos pesquisadores a focar em Multiple-Choice Question Answering (MCQA), um caso especial de QA no qual os sistemas devem escolher a resposta correta dentro de um grupo de possíveis respostas. Um caso particularmente interessante de MCQA é o de resolver testes padronizados, tal como testes de proficiência linguística, teste de ciências para ensino fundamental e vestibulares. Estes exames fornecem perguntas de múltipla escolha de fácil avaliação sobre diferentes domínios e de diferentes dificuldades. O Exame Nacional do Ensino Médio (ENEM) é um exame realizado anualmente por estudantes de todo Brasil. Ele é utilizado amplamente por universidades brasileiras como vestibular e é o segundo maior vestibular do mundo em número de candidatos inscritos. Este exame consiste em escrever uma redação e resolver uma parte de múltipla escolha sobre questões de: Ciências Humanas, Linguagens, Matemática e Ciências Naturais. As questões nestes tópicos não são divididas por matérias escolares (Geografia, Biologia, etc.) e normalmente requerem raciocínio interdisciplinar. Ademais, edições passadas do exame e suas soluções estão disponíveis online, tornando-o um benchmark adequado para MCQA. Neste trabalho nós automatizamos a resolução do ENEM focando, por simplicidade, em questões puramente textuais que não requerem raciocínio matemático. Nós formulamos o problema de responder perguntas de múltipla escolha como um problema de identificar a alternativa mais similar à pergunta. Nós investigamos duas abordagens para medir a similaridade textual entre pergunta e alternativa. A primeira abordagem trata a tarefa como um problema de Recuperação de Informação Textual (IR), isto é, como um problema de identificar em uma base de dados qualquer qual é o documento mais relevante dado uma consulta. Nossas consultas são feitas utilizando a pergunta mais alternativa e utilizamos três diferentes conjuntos de texto como base de dados: o primeiro é um conjunto de artigos em texto simples extraídos da Wikipedia em português; o segundo contém apenas o texto dado no cabeçalho da pergunta e o terceiro é composto por pares de questão-alternativa correta extraídos de provas do ENEM. 
A segunda abordagem é baseada em Word Embedding (WE), um método para aprender representações vetoriais de palavras de tal modo que palavras semanticamente próximas possuam vetores próximos. WE é usado de dois modos: para aumentar o texto das consultas de IR e para criar representações vetoriais para a pergunta e alternativas. Usando essas representações vetoriais nós respondemos questões diretamente, selecionando a alternativa que maximiza a semelhança de cosseno em relação à pergunta, ou indiretamente, extraindo features das representações e dando como entrada para um classificador que decidirá qual alternativa é a correta. Junto com as duas abordagens nós investigamos como melhorá-las utilizando a WordNet, uma base estruturada de dados lexicais onde palavras são conectadas de acordo com algumas relações, tais como sinonímia e hiperonímia. Por fim, combinamos diferentes configurações das duas abordagens e suas variações usando WordNet através da criação de um comitê de resolvedores encontrado através de uma busca gulosa. O comitê escolhe uma alternativa através de voto majoritário de seus constituintes. A primeira abordagem teve 24% de acurácia utilizando o cabeçalho, 25% usando a base de dados de pares e 26.9% usando Wikipedia. A segunda abordagem conseguiu 26.6% de acurácia usando WE indiretamente e 28% diretamente. O comitê conseguiu 29.3%. Estes resultados, pouco acima do aleatório (20%), sugerem que essas técnicas conseguem captar algumas das habilidades necessárias para resolver testes padronizados. Entretanto, técnicas mais sofisticadas, capazes de entender texto e de executar raciocínio de senso comum talvez sejam necessárias para alcançar uma performance humana.
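As a rough illustration of the direct word-embedding strategy described in the abstract above, the sketch below averages word vectors for the statement and each candidate answer, picks the candidate with the highest cosine similarity, and combines several such solvers by majority vote. It is not the thesis's implementation: the `embeddings` lookup (a plain word-to-vector dictionary), the whitespace tokenizer, and the solver interface are stand-ins for whatever WE model and preprocessing were actually used.

```python
import numpy as np

def sentence_vector(text, embeddings, dim=300):
    """Average the vectors of all in-vocabulary tokens (naive whitespace tokenizer)."""
    vectors = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def answer_directly(statement, candidates, embeddings):
    """Pick the candidate answer whose averaged embedding is closest to the statement's."""
    s_vec = sentence_vector(statement, embeddings)
    scores = [cosine(s_vec, sentence_vector(c, embeddings)) for c in candidates]
    return int(np.argmax(scores))

def answer_by_majority_vote(solvers, statement, candidates):
    """Ensemble step: each solver maps (statement, candidates) to a candidate index;
    the most frequent index wins, with ties broken arbitrarily."""
    votes = [solver(statement, candidates) for solver in solvers]
    return max(set(votes), key=votes.count)

# Hypothetical usage: wrap two embedding models (emb_a, emb_b) as solvers.
# solvers = [lambda s, c: answer_directly(s, c, emb_a),
#            lambda s, c: answer_directly(s, c, emb_b)]
# answer_by_majority_vote(solvers, statement, candidates)
```

The IR variant would instead build a query from the statement plus each candidate answer and score candidates by the relevance of the best-matching document in the corpus.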
2

Leveraging Large Language Models Trained on Code for Symbol Binding

Robinson, Joshua 09 August 2022
While large language models like GPT-3 have achieved impressive results in the zero-, one-, and few-shot settings, they still significantly underperform on some tasks relative to the state of the art (SOTA). For many tasks it would be useful to have answer options explicitly listed out in a multiple choice format, decreasing computational cost and allowing the model to reason about the relative merits of possible answers. We argue that the reason this hasn't helped models like GPT-3 close the gap with the SOTA is that these models struggle with symbol binding - associating each answer option with a symbol that represents it. To ameliorate this situation we introduce index prompting, a way of leveraging language models trained on code to successfully answer multiple choice formatted questions. When used with the OpenAI Codex model, our method improves accuracy by about 18% on average in the few-shot setting relative to GPT-3 across 8 datasets representing 4 common NLP tasks. It also achieves a new single-model state of the art on ANLI R3, ARC (Easy), and StoryCloze, suggesting that GPT-3's latent "understanding" has been previously underestimated.
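As a minimal sketch of the index-prompting idea described above (the exact prompt template used in this work is not reproduced here and may differ), a multiple-choice question can be rendered as a short code snippet whose natural completion is the index of the chosen option:

```python
def build_index_prompt(question, choices):
    """Render an MCQ as Python source so a code model can answer by completing
    the integer index into the `choices` list (an assumed format, for illustration)."""
    lines = [f'question = "{question}"', "choices = ["]
    lines += [f'    "{c}",' for c in choices]
    lines += ["]", "# The correct choice is:", "answer = choices["]
    return "\n".join(lines)

prompt = build_index_prompt(
    "Which planet is known as the Red Planet?",
    ["Venus", "Mars", "Jupiter", "Saturn"],
)
print(prompt)  # a code model completing this prompt with "1" selects "Mars"
```

Because the options appear as an indexed list, the model only has to emit an index rather than reproduce the answer text, which is exactly the symbol-binding step the abstract argues plain GPT-3 struggles with.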
3

論開放題與選擇題測量政治知識的適用性 / The Applicability of the Open-Ended and Multiple-Choice Format for the Measurement of Political Knowledge

潘心儀, Pan, Sin Yi Unknown Date
政治知識之於民主社會有其重要性，在政治學界中與政治知識相關的研究產出相當豐富，研究者利用政治知識此一變數進行相關研究前，對於題目如何選定、選項如何提供、題型的差異都是研究者需要去關注的重點，而本文主要的研究目的即是聚焦於討論何種題型更適合用來測量民眾的政治知識。 目前國內測量政治知識的問卷題型較為常見的為開放題與選擇題題型，在這兩類題型的討論上，前者被認為會低估受訪者政治知識程度，後者的測量結果則被質疑提供猜題空間導致高估了受訪者的政治知識程度，然而目前國內外卻缺乏足夠的實證研究來證明這兩個題型的適用性。 本文採用具有實驗設計性質的二手資料，利用前後測的方式讓受測者填答相同題目不同題型的問卷，藉此檢視各種知識程度的受測者在面對不同題型時是否會產生回應模式上的差異。本研究發現，開放題會使得較高政治知識程度的受訪者被低估，選擇題反而能準確測量出此類受訪者的知識程度。為了進一步證實受訪者在偏難的題目上所增加的猜題比例並非是來自於盲猜，本文採用多項機率單元模型來檢視受訪者於選擇題選擇各個答項的機率。研究發現，儘管選擇題無法避免受訪者猜題，但受訪者並非是盲猜，反而會根據其具有的知識依據來答題，故政治知識程度高的受訪者能採用猜題方式答對題目，政治知識低的受訪者無法利用猜題方式猜中答案。整體而言，選擇題比起開放題更適合用來測量民眾的政治知識。 / Political knowledge plays an important role in democratic society, and there has therefore been much research on political knowledge in the discipline of political science. Before studying political knowledge, researchers have to pay attention to how questions are selected, how answer options are presented, and how question formats differ. This paper aims to analyze which question format is better suited to measuring the political knowledge of the public. The open-ended and multiple-choice formats are both common for measuring political knowledge in Taiwan. Open-ended questions are generally considered to underestimate respondents’ level of political knowledge, while the multiple-choice format is suspected of overestimating it by giving respondents the opportunity to guess. However, strong empirical evidence for deciding which format is more suitable for measuring political knowledge is still lacking. This paper uses secondary data collected with a pretest-posttest questionnaire, in which respondents answer the same questions in different formats, to examine whether guessing behavior emerges and whether response patterns differ across knowledge levels. The research finds that open-ended questions underestimate the knowledge of respondents who originally have higher levels of political knowledge, whereas multiple-choice questions measure these respondents’ knowledge more accurately. To further confirm that the higher proportion of guessing on the more difficult questions does not result from blind guessing, the study examines the probability of each option being chosen using a Multinomial Probit Model. It finds that although multiple-choice questions cannot prevent respondents from guessing, respondents do not guess blindly; they tend to answer according to the knowledge they have, so respondents with high political knowledge can guess correctly while those with low political knowledge cannot. Overall, multiple-choice questions are more suitable than open-ended questions for measuring the public's political knowledge.
4

CITE för elektroingenjörer : Diagnostiska prov som testar studenternas förståelse av viktiga begrepp / Concept Inventory Tests for Electrical Engineers : Assessments that test the students' understanding of central concepts

Wengle, Emil January 2018
Written exams, which are intended to examine students on the course goals, can sometimes be passed without any conceptual knowledge by memorizing procedures or facts. Because future courses depend on the students’ understanding of concepts in the required courses, not knowing the concepts could be a major issue for the student, the teacher and the program board. Here, we focus on developing conceptual multiple-choice questions and the algorithms for interpreting the answers to those questions. The goal is to be able to answer questions such as “For how long do the students remember the key concepts?” and “Which concepts do the courses have a positive (or negative) effect on?”. To do that, a course-concept matrix was created and the most central concepts were identified. Multiple-choice questions were written on those concepts and imported into the test creator Respondus. Feedback was added to the questions, which were then grouped by concept and exported to a quiz bank in the educational platform Blackboard. A set of answers from a survey on report writing was obtained, and statistics were compiled to answer the second question posed above. An issue with the probability function is that it only considers whether the student had a setback or an improvement, not how significant it was. The next step would be to use the slopes more effectively by taking the magnitude of the improvement or setback into account. / Developing Concept Inventory Tests for Electrical Engineering
5

Characterizing Multiple-Choice Assessment Practices in Undergraduate General Chemistry

Jared B Breakall (8080967) 04 December 2019
Assessment of student learning is ubiquitous in higher education chemistry courses because it is the mechanism by which instructors assign grades, adjust teaching practice, and help their students succeed. One type of assessment that is popular in general chemistry courses, yet difficult to create effectively, is the multiple-choice assessment. Despite its popularity, little is known about the extent to which multiple-choice general chemistry exams adhere to accepted design practices, or about the processes general chemistry instructors engage in while creating these assessments. A deeper understanding of multiple-choice assessment quality and of the design practices of general chemistry instructors could inform efforts to improve multiple-choice assessment practice in the future. This work attempted to characterize multiple-choice assessment practices in undergraduate general chemistry classrooms by (1) conducting a phenomenographic study of general chemistry instructors’ assessment practices and (2) designing an instrument that can detect violations of item-writing guidelines in multiple-choice chemistry exams.
The phenomenographic study of general chemistry instructors’ assessment practices included 13 instructors from the United States who participated in a three-phase interview. They were asked to describe how they create multiple-choice assessments, to evaluate six multiple-choice exam items, and to create two multiple-choice exam items using a think-aloud protocol. It was found that the participating instructors considered many appropriate assessment design practices, yet did not utilize, or were not familiar with, all of the appropriate design practices available to them.
Additionally, an instrument was developed that can detect violations of item-writing guidelines in multiple-choice exams. The instrument, known as the Item Writing Flaws Evaluation Instrument (IWFEI), was shown to be reliable across users. Once developed, the IWFEI was used to analyze 1,019 general chemistry exam items. The instrument gives researchers a tool for studying adherence to item-writing guidelines, as well as a tool instructors can use to evaluate their own multiple-choice exams. It is hoped that use of the IWFEI will improve multiple-choice item-writing practice and quality.
The results of this work provide insight into the multiple-choice assessment design practices of general chemistry instructors and an instrument that can be used to evaluate multiple-choice exams for adherence to item-writing guidelines. Conclusions, recommendations for professional development, and recommendations for future research are discussed.
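The IWFEI itself is a rater-applied instrument whose specific criteria are not given in this abstract. Purely as an illustration of what detecting item-writing flaws can look like when automated, the toy checker below flags a few guideline violations commonly cited in the item-writing literature; these heuristics are assumptions for the example, not the IWFEI's actual rules.

```python
import statistics

def check_item(stem, options, max_length_ratio=2.0):
    """Flag common multiple-choice item-writing guideline violations (toy heuristics)."""
    flags = []
    lowered = [o.strip().lower() for o in options]
    # Guideline: avoid "all of the above" / "none of the above" options.
    if any(o in ("all of the above", "none of the above") for o in lowered):
        flags.append("uses an 'all/none of the above' option")
    # Guideline: keep options similar in length so the key is not conspicuously longer.
    lengths = [len(o) for o in options]
    if max(lengths) > max_length_ratio * statistics.median(lengths):
        flags.append("one option is much longer than the others")
    # Guideline: emphasize negation in the stem (e.g., capitalize NOT).
    if "not" in stem.lower().split() and "NOT" not in stem:
        flags.append("negation in the stem is not emphasized")
    return flags

print(check_item(
    "Which of the following is not a noble gas?",
    ["Helium", "Neon", "Nitrogen", "All of the above"],
))  # prints the list of flagged guideline violations for this item
```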
6

Improved interface design for submitting student generated multiple choice questions : A comparison of three interfaces / Förbättrad gränssnittsutformning för flervalsfrågor genererade av studenter : En jämförelse av tre gränssnitt

Åkerlund, Elias January 2022
Active learning has been suggested to be more effective than traditional learning in terms of exam results and time spent on the course material, making it an attractive alternative to traditional lectures. One way of practicing active learning is through active learnersourcing, a method of learning that also generates material that contributes to further learning. Learnersourcing can be practiced by generating multiple-choice questions (MCQs) related to the course material. However, generating useful, high-quality MCQs is challenging for students, especially since the available digital systems developed for this purpose have significant shortcomings in terms of user-friendliness and outdated visual design. Two of these systems are RiPPLE and PeerWise. When developing a platform for generating MCQs for learning, there are general and specific guidelines that can be followed: Nielsen’s 10 heuristics for a good user interface/user experience, and 10 principles for a good MCQ. In this thesis, a new system was developed in which these guidelines were applied, in an attempt to investigate whether the user experience can be improved compared to the currently available interfaces RiPPLE and PeerWise. The project was named MyCleverQuestion. A user test was conducted in which the three systems’ interfaces for creating and submitting a question were compared and graded. The results show that the users had the best experience when using MyCleverQuestion; 83.3% of the users said they would use MyCleverQuestion again, underscoring the importance of both a good user interface and a good user experience. / Aktivt lärande har visat sig vara effektivare än traditionellt lärande i avseende till tentamensresultat och tid studenter spenderar på kursmaterialet, och är således ett attraktivt alternativ till de traditionella föreläsningarna. Ett sätt att utöva aktivt lärande är genom att skapa flervalsfrågor kopplade till kursmaterialet. Denna metod kallas aktiv learnersourcing och gör att man genom lärandet även bidrar med material som kan användas för vidare lärande. Det är dock svårt för studenter att skapa högkvalitativa och användbara flervalsfrågor med hjälp av de digitala system som utvecklats för detta syfte, då deras användarvänlighet är bristande och visuella utformning är föråldrade. RiPPLE och PeerWise är två system utvecklade för att skapa flervalsfrågor i utbildningssyfte, och har båda användarvänlighetsproblem. Det finns särskilda riktlinjer som kan följas för att utveckla ett system där studenter kan generera flervalsfrågor för att utöva aktivt lärande. I denna uppsats har både generella och specifika riktlinjer använts: 10 generella heuristiska principer för att skapa en bra användarupplevelse och användargränssnitt, samt 10 principer för att skapa en bra flervalsfråga, för att slutligen undersöka om användarupplevelsen kan förbättras jämfört med RiPPLE och PeerWise. Namnet som valdes för projektet var MyCleverQuestion. En användarundersökning genomfördes, där gränssnittet för att skapa en fråga för varje system utvärderades. Resultatet visar att gränssnittet med bäst användarupplevelse är MyCleverQuestion. 83,3% av användarna angav att de skulle använda MyCleverQuestion igen, vilket bevisar vikten av ett bra gränssnitt.
