  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Språkhantering på ett mindre bibliotek : en kvalitativ undersökning om hur ett mindre bibliotek arbetar med minoritetsspråk / Managing languages at a smaller library : a qualitative study about how a smaller library works with minority languages

Bäckström, Pontus, Hellkvist, Åsa January 2010 (has links)
This thesis examines how a smaller library prioritizes media acquisition between Swedish and minority languages. We are also interested in the challenges and problems that can occur in such work, as well as how the librarians view their role in the integration process. We studied this through several qualitative interviews with librarians, together with earlier related research. The results show that librarians greatly appreciate working with minority languages and consider it an important aspect of their profession; they fully embrace the importance of helping the immigrant population learn Swedish. However, our study also uncovers several organizational problems that hinder this work, mainly a lack of funds and other issues related to resource inefficiency. We conclude that although there is awareness of the importance of providing media in minority languages, the work is hindered by flaws in the systems surrounding the organization of public libraries.
2

Digitalt berättande : En drivkraft till att skriva / Digital Storytelling : A Driving Force to Write

Andersson, Malin, Bjerseth, Hilda January 2022 (has links)
This paper was written by two students at Malmö University who are studying to become primary school teachers. Our interest in the topic arose mostly from our experience of digital storytelling and pupils' motivation at the schools where we did our internships. The paper is based on ten studies about digital storytelling, and its focus throughout is to answer the following question: what kind of impact does digital storytelling have on pupils' motivation? Some keywords are important throughout the paper: digital storytelling, motivation, self-esteem, writing skills, and language management. There have been field studies on digital storytelling in education, mostly based on groups of 22–30 pupils aged five to eleven. The purpose of the studies was to see whether pupils' motivation had increased, and how long the pupils devoted themselves to writing a story. The research led to three main conclusions. The pupils seemed to become more motivated and could concentrate longer on the assignment when using digital storytelling. They also improved their language skills and writing abilities, owing to higher self-esteem and the novelty of using digital devices. If every teacher used digital storytelling, pupils might develop better writing and language skills in all subjects.
3

Stora språkmodeller för bedömning av applikationsrecensioner : Implementering och undersökning av stora språkmodeller för att sammanfatta, extrahera och analysera nyckelinformation från användarrecensioner / Large Language Models for application review data : Implementation and survey of Large Language Models (LLMs) to summarize, extract, and analyze key information from user reviews

von Reybekiel, Algot, Wennström, Emil January 2024 (has links)
Manually reviewing user reviews to extract relevant information can be a time-consuming process. This report investigates whether large language models can be used to summarize, extract, and analyze key information from reviews, and how such an application can be constructed.
It was discovered that different models exhibit varying degrees of performance depending on the metrics and the weighting between recall and precision. Furthermore, fine-tuning of language models such as Llama 3 was found to improve performance in classifying useful reviews and, according to some metrics, led to higher performance than larger language models like Chat-Bison. Specifically, for reviews translated into English, Llama 3:8b:Instruct, Chat-Bison, and the fine-tuned Llama 3:8b had F4 macro scores of 0.89, 0.90, and 0.91, respectively. A further finding is that the larger models Chat-Bison, Text-Bison, and Gemini performed better than the smaller models that were tested when summarizing multiple reviews at a time. In general, the language models performed better if reviews were first translated into English before processing, rather than processed in their original language, where the majority of reviews were written in Swedish. Another insight from the pre-processing phase is that the number of API calls to these language models can be minimized by filtering reviews based on word length and rating. Beyond the language models themselves, the results also demonstrated that the use of vector databases and embeddings can provide a greater overview of useful reviews by leveraging the databases' built-in ability to identify semantic similarities and cluster similar reviews together.
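The F4 macro score cited above is the macro-averaged F-beta measure with beta = 4, which weights recall considerably more heavily than precision. A minimal sketch of how such a score is computed (the function name and example labels are illustrative, not taken from the thesis):

```python
def fbeta_macro(y_true, y_pred, beta=4.0):
    """Macro-averaged F-beta score. beta > 1 weights recall over
    precision; the thesis reports F4 macro scores, i.e. beta = 4."""
    labels = sorted(set(y_true) | set(y_pred))
    scores = []
    for label in labels:
        # Per-label confusion counts.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall == 0:
            scores.append(0.0)
            continue
        b2 = beta * beta
        # F-beta = (1 + beta^2) * P * R / (beta^2 * P + R)
        scores.append((1 + b2) * precision * recall / (b2 * precision + recall))
    # Macro average: unweighted mean over labels.
    return sum(scores) / len(scores)

# Toy usage: 1 = useful review, 0 = not useful.
score = fbeta_macro([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

With beta = 1 this reduces to the ordinary macro F1; raising beta toward 4 rewards classifiers that miss fewer useful reviews, even at the cost of some false positives.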

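The third abstract notes that API calls to the language models could be minimized by filtering reviews on word length and rating before processing. A minimal sketch of such a pre-filter; the function name, field names, and thresholds are illustrative assumptions, not values from the thesis:

```python
def select_reviews_for_llm(reviews, min_words=5, max_rating=3):
    """Drop reviews unlikely to repay an LLM call: very short texts
    carry little signal, and high-rated reviews are assumed to be
    less actionable. Thresholds here are hypothetical."""
    return [
        r for r in reviews
        if len(r["text"].split()) >= min_words and r["rating"] <= max_rating
    ]

reviews = [
    {"text": "Crashes on startup every time since the last update", "rating": 1},
    {"text": "Great!", "rating": 5},
    {"text": "Love this app, works perfectly for my daily needs", "rating": 5},
]
kept = select_reviews_for_llm(reviews)
# Only the first review survives: long enough and low-rated.
```

Because the filter runs locally on plain fields, it costs nothing per review, whereas each review sent to the model incurs an API call; tightening the thresholds trades recall of useful reviews against cost.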