61

Statistické jazykové modely založené na neuronových sítích / STATISTICAL LANGUAGE MODELS BASED ON NEURAL NETWORKS

Mikolov, Tomáš January 2012 (has links)
Statistical language models are an important part of many successful applications, such as automatic speech recognition and machine translation (a well-known example being Google Translate). Traditional techniques for estimating these models are based on so-called N-grams. Despite the known shortcomings of these techniques and the enormous effort of research groups across many fields (speech recognition, machine translation, neuroscience, artificial intelligence, natural language processing, data compression, psychology, etc.), N-grams have essentially remained the most successful technique. The goal of this thesis is to present several architectures of language models based on neural networks. Although these models are computationally more demanding than N-gram models, the techniques developed in this work make their efficient use in real applications possible. The achieved reduction in speech recognition errors compared to the best N-gram models reaches 20%. A model based on a recurrent neural network achieves the best published results on a well-known data set (the Penn Treebank).
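As a rough illustration of the kind of model the thesis studies, the sketch below defines a minimal recurrent language model in PyTorch that predicts the next word. It is a toy stand-in, not the author's RNNLM toolkit, and all dimensions and the vocabulary size are arbitrary choices.

```python
# Illustrative sketch (not the author's RNNLM toolkit): a minimal
# recurrent language model that predicts the next word.
import torch
import torch.nn as nn

class TinyRNNLM(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids, hidden=None):
        emb = self.embed(token_ids)             # (batch, seq, embed_dim)
        states, hidden = self.rnn(emb, hidden)  # (batch, seq, hidden_dim)
        return self.out(states), hidden         # logits over the vocabulary

model = TinyRNNLM(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (2, 16))      # toy batch of token ids
logits, _ = model(tokens)
# Next-word loss: logits at position t predict the token at position t+1.
loss = nn.CrossEntropyLoss()(logits[:, :-1].reshape(-1, 10_000),
                             tokens[:, 1:].reshape(-1))
```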
62

Unveiling the Values of ChatGPT : An Explorative Study on Human Values in AI Systems / Avslöjandet av ChatGPT:s värderingar : En undersökande studie om mänskliga värderingar i AI-system

Lindahl, Caroline, Saeid, Helin January 2023 (has links)
Recent technological breakthroughs in natural language processing and artificial intelligence (AI), and the subsequent release of OpenAI's generative AI system ChatGPT, have attracted much attention from researchers and the general public alike: some offer praise and foresee a brighter future for all, while others predict the end of humanity. As AI agents become increasingly complex, gain the ability to deal with trade-offs, and become more autonomous, the problem of embedding human values into these agents becomes more pressing. Embedding human values is a crucial part of developing aligned AI systems that act in accordance with human intents and desires. The black-box nature of large language models (LLMs) offers little insight into the mechanics of an AI agent's decision-making process; for this reason, it is of great interest to explore what values an LLM might hold. This explorative study lets the most popular LLM chatbot today, ChatGPT, answer a set of questions focusing on human values. The questions were adopted from the World Values Survey (WVS) and relate to current global values on subjects such as same-sex marriage, corruption, and raising children. The results were compared to the latest WVS data set (from 2022) to show how close or far the values of ChatGPT are to the respondents' values across countries. The findings contribute to a broader understanding of the challenges and implications of developing AI systems that align with human values, which is crucial to ensuring such systems' trustworthiness and beneficial impact on society. The findings of this explorative study support the view that ChatGPT's values are influenced by the values prevalent in developed democracies, with a leaning towards progressive/liberal views. The results could also imply that ChatGPT takes a neutral attitude towards questioning established systems and institutions while emphasizing individual rights.
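To make the comparison step concrete, the hypothetical sketch below ranks countries by how close their mean WVS responses lie to a set of model answers. The items, scales, country means, and answer values are all invented for illustration; they are not the study's data.

```python
# Hypothetical sketch of the comparison step: all numbers below are made up.
import numpy as np

wvs_items = ["justifiability_same_sex_marriage", "justifiability_bribery"]
chatgpt_answers = np.array([8.0, 1.0])           # 1-10 scales, hypothetical

country_means = {                                 # hypothetical country means
    "Sweden":  np.array([8.6, 1.3]),
    "Germany": np.array([7.9, 1.5]),
    "Nigeria": np.array([2.1, 2.4]),
}

distances = {c: float(np.linalg.norm(m - chatgpt_answers))
             for c, m in country_means.items()}
for country, d in sorted(distances.items(), key=lambda kv: kv[1]):
    print(f"{country:8s} distance to the model's answers: {d:.2f}")
```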
63

Exploration of Knowledge Distillation Methods on Transformer Language Models for Sentiment Analysis / Utforskning av metoder för kunskapsdestillation på transformatoriska språkmodeller för analys av känslor

Liu, Haonan January 2022 (has links)
Despite the outstanding performance of large Transformer-based language models, compressing these models and deploying them in industrial environments remains a challenge. This degree project explores a model compression method called knowledge distillation, applied to Transformer models on a sentiment classification task. Transformers are neural models consisting of stacks of identical layers. In knowledge distillation for Transformers, a student model with fewer layers learns to mimic the intermediate layer vectors of a teacher model with more layers by designing and minimizing a loss function. We implement a framework to compare three knowledge distillation methods: MiniLM, TinyBERT, and Patient-KD. Student models produced by the three methods are evaluated by accuracy on the SST-2 and SemEval sentiment classification datasets. The student models' attention matrices are also compared with the teacher model's to find the student model that best captures dependencies in the input sentences. The comparison shows that distillation methods focusing on the attention mechanism produce student models with better performance and less variance. We also identify an over-fitting issue in knowledge distillation and propose a two-step knowledge distillation with Transformer-layer and prediction-layer distillation to alleviate the problem. The experimental results show that our method can produce robust, effective, and compact student models without introducing extra data. In the future, we would like to extend our framework to support more distillation methods on Transformer models and to compare performance on tasks other than sentiment classification.
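A minimal sketch of the kind of combined loss such methods minimize is given below. It assumes matching hidden dimensions between teacher and student (TinyBERT additionally learns a projection when they differ); the layer mapping and weighting are illustrative choices, not the thesis' exact setup.

```python
# Sketch of a Transformer-layer + prediction-layer distillation loss,
# in the spirit of the compared methods; weights and mapping are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_hidden, teacher_hidden, student_logits,
                      teacher_logits, layer_map, temperature=2.0, alpha=0.5):
    """student_hidden / teacher_hidden: lists of (batch, seq, dim) tensors."""
    # Transformer-layer distillation: student layer i mimics teacher layer j.
    layer_loss = sum(F.mse_loss(student_hidden[i], teacher_hidden[j])
                     for i, j in layer_map)
    # Prediction-layer distillation: KL between temperature-softened outputs.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean") * temperature ** 2
    return alpha * layer_loss + (1 - alpha) * soft_loss

# e.g. a 4-layer student mimicking every third layer of a 12-layer teacher:
layer_map = [(0, 2), (1, 5), (2, 8), (3, 11)]
```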
64

Arabic text recognition of printed manuscripts. Efficient recognition of off-line printed Arabic text using Hidden Markov Models, Bigram Statistical Language Model, and post-processing.

Al-Muhtaseb, Husni A. January 2010 (has links)
Arabic text recognition has not been researched as thoroughly as that of other natural languages, yet the need for automatic Arabic text recognition is clear. In addition to traditional applications such as postal address reading, check verification in banks, and office automation, there is large interest in searching scanned documents available on the internet and in searching handwritten manuscripts. Other possible applications include building digital libraries, recognizing text on digitized maps, recognizing vehicle license plates, serving as the first phase of text readers for visually impaired people, and understanding filled forms. This research aims to contribute to current research in the field of optical character recognition (OCR) of printed Arabic text by developing novel techniques and schemes that advance the performance of state-of-the-art Arabic OCR systems. A statistical and analytical analysis of Arabic text was carried out to estimate the occurrence probabilities of Arabic characters for use with Hidden Markov Models (HMMs) and other techniques. Since there is no publicly available dataset of printed Arabic text for recognition purposes, it was decided to create one. In addition, a minimal Arabic script is proposed; it contains all basic shapes of Arabic letters and provides an efficient representation of Arabic text in terms of effort and time. Based on the success of HMMs in speech and text recognition, their use for the automatic recognition of Arabic text was investigated. The HMM technique adapts to noise and font variations and does not require word or character segmentation of Arabic line images. In the feature extraction phase, experiments were conducted with a number of different features to investigate their suitability for HMMs, and a novel set of features, which resulted in high recognition rates for different fonts, was finally selected. The developed techniques need no word or character segmentation before the classification phase, as segmentation is a byproduct of recognition. This appears to be the most advantageous property of using HMMs for Arabic text, since segmentation tends to produce errors that are usually propagated to the classification phase. Eight different Arabic fonts were used in the classification phase, with recognition rates ranging from 98% to 99.9% depending on the font. As far as we know, these are new results in their context. Moreover, the proposed technique can be used for other languages: a proof-of-concept experiment on English characters achieved a recognition rate of 98.9% using the same HMM setup, and the same techniques applied to Bangla characters achieved a recognition rate above 95%. Recognition of printed Arabic text with multiple fonts was also conducted using the same technique, with fonts categorized into different groups, and new high recognition results were achieved. To enhance the recognition rate further, a post-processing module was developed to correct the OCR output through character-level and word-level post-processing; this module increased the recognition rate by more than 1%. / King Fahd University of Petroleum and Minerals (KFUPM)
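Because segmentation falls out of HMM decoding, a toy Viterbi decoder conveys the core mechanism the thesis relies on. The states, transition, and emission probabilities below are made up; the real system operates on sliding-window features extracted from Arabic line images.

```python
# Toy Viterbi decoder over a tiny discrete HMM; all probabilities invented.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    n_states, T = trans_p.shape[0], len(obs)
    delta = np.zeros((T, n_states))           # best log-score ending in state s
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(trans_p)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):             # follow backpointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], start, trans, emit))  # most likely state sequence
```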
65

An Automated Discharge Summary System Built for Multiple Clinical English Texts by Pre-trained DistilBART Model

Alaei, Sahel January 2023 (has links)
The discharge summary is an important document summarizing a patient's medical information during their hospital stay, and it is crucial for communication between clinicians and primary care physicians. Creating a discharge summary is a necessary task, but it is time-consuming for physicians. Using technology to automatically generate discharge summaries can help physicians concentrate more on patients than on writing clinical summarization notes and discharge summaries. This master's thesis aims to contribute to research on building a transformer-based model for automated discharge summaries with a pre-trained DistilBART language model. The study addresses the main research question: how effective is the pre-trained DistilBART language model in predicting an automated discharge summary for multiple clinical texts? The research strategy is experimental, and the dataset is MIMIC-III. ROUGE scores were selected to evaluate the effectiveness of the model, and its results are compared with those of a baseline BART model implemented on the same dataset in other recent research. This study treats multiple-document summarization as the process of combining multiple inputs into a single input, which is then summarized. The findings indicate an improvement in ROUGE-2 and ROUGE-Lsum for the DistilBART model compared with the baseline BART model; one important limitation, however, was computational resource constraints. The study also provides ethical considerations and some recommendations for future work.
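A hedged sketch of this evaluation loop is shown below, using a publicly available DistilBART checkpoint and the rouge-score package. The checkpoint name and the toy clinical note are stand-ins: the thesis fine-tunes on MIMIC-III, whose data cannot be reproduced here.

```python
# Sketch: summarize a made-up clinical-style note with a public DistilBART
# checkpoint and score the output with ROUGE-2 / ROUGE-Lsum.
from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
note = ("Patient admitted with chest pain. ECG showed no acute changes. "
        "Troponins negative. Treated with aspirin and discharged in "
        "stable condition with cardiology follow-up.")       # invented note
reference = "Chest pain ruled out for MI; discharged with follow-up."

generated = summarizer(note, max_length=40, min_length=5)[0]["summary_text"]
scorer = rouge_scorer.RougeScorer(["rouge2", "rougeLsum"], use_stemmer=True)
print(scorer.score(reference, generated))   # ROUGE F-measures per metric
```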
66

Large language models as an interface to interact with API tools in natural language

Tesfagiorgis, Yohannes Gebreyohannes, Monteiro Silva, Bruno Miguel January 2023 (has links)
In this research project, we explore the use of Large Language Models (LLMs) as an interface for interacting with API tools in natural language. Bubeck et al. [1] shed some light on how LLMs could be used to interact with API tools. Since then, new versions of LLMs have been launched, and the question of how reliable an LLM can be at this task remains unanswered. The main goal of our thesis is to investigate the designs of the available system prompts for LLMs, identify the best-performing prompts, and evaluate the reliability of different LLMs when using the best-identified prompts. We employ a multi-stage controlled experiment: a literature review in which we survey the system prompts used in the scientific community and in open-source projects; then, using the F1-score as a metric, an analysis of the precision and recall of the system prompts, aiming to select the best-performing prompts for interacting with API tools; and, in a later stage, a comparison of a selection of LLMs using the best-performing prompts identified earlier. From these experiments, we find that AI-generated system prompts perform better with GPT-4 than the prompts currently used in open-source projects and in the literature, that zero-shot prompts perform better on this specific task with GPT-4, and that a good system prompt for one model does not generalize well to other models.
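A minimal sketch of the F1 computation over tool selections follows; it treats each test case as a set of expected API calls versus the calls the model actually chose. The tool names are hypothetical.

```python
# Precision/recall/F1 over which API tools the model chose to call,
# compared against a gold set of expected calls. Tool names are invented.
def tool_selection_f1(predicted: set[str], expected: set[str]) -> float:
    if not predicted or not expected:
        return 0.0
    true_pos = len(predicted & expected)
    precision = true_pos / len(predicted)
    recall = true_pos / len(expected)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {"get_weather", "search_flights"}
model_calls = {"get_weather", "convert_currency"}
print(f"F1 = {tool_selection_f1(model_calls, gold):.2f}")   # F1 = 0.50
```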
67

Natural Language Processing for Improving Search Query Results : Applied on The Swedish Armed Force's Profession Guide

Harju Schnee, Andreas January 2023 (has links)
Text has been the historical way of preserving and acquiring knowledge, and text data today is an ever-growing part of the digital footprint, together with the need to query this data for information. Seeking information is a constantly ongoing process and a crucial part of many systems around us, and the ability to perform fast and effective searches is a must when dealing with vast amounts of data. This thesis implements an information retrieval system based on the Swedish Armed Forces' profession guide, with the aim of producing a system that retrieves relevant professions based on user-defined queries of varying length. A number of natural language processing techniques are investigated and implemented. To transform the gathered profession descriptions, a document embedding model, doc2vec, was applied, producing document vectors that are compared to find similarities between documents. The final system was evaluated by domain experts, represented by active military personnel who rated the relevance of the retrieved professions. The system retrieved relevant information for 46.6% of the long and 56.6% of the short text inputs, resulting in a much more general and capable system than the search function available in the profession guide today.
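The retrieval core can be sketched with gensim's Doc2Vec as below. The two profession descriptions are made-up stand-ins for the guide's texts, and the hyperparameters are arbitrary choices rather than the thesis' tuned values.

```python
# Embed profession descriptions with Doc2Vec and rank them against a query.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

professions = {                                  # invented stand-in texts
    "fältkock": "lagar mat i fält under alla förhållanden",
    "stridsbåtförare": "framför stridsbåt och ansvarar för besättningen",
}
corpus = [TaggedDocument(words=text.split(), tags=[name])
          for name, text in professions.items()]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

query_vec = model.infer_vector("laga mat".split())   # embed a user query
print(model.dv.most_similar([query_vec], topn=1))    # most similar profession
```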
68

Domain Knowledge and Representation Learning for Centroid Initialization in Text Clustering with k-Means : An exploratory study / Domänkunskap och Representationsinlärning för Centroidinitialisering vid Textklustering med k-Means : En utforskande studie

Yu, David January 2023 (has links)
Text clustering is a problem in which texts are partitioned into homogeneous clusters, for example based on their sentiment value. Two techniques for addressing the problem are representation learning, in particular language representation models, and clustering algorithms. State-of-the-art language models are based on neural networks, in particular the Transformer architecture, and these models are used to transform a text into a point in a high-dimensional vector space. The texts are then clustered using a clustering algorithm; a recognized partitional clustering algorithm is k-Means. Its goal is to find centroids that represent the clusters (partitions) by minimizing a distance measure. Two influential parameters of k-Means are the number of clusters and the initial centroids, and multiple heuristics exist for selecting them. Using domain knowledge is a common heuristic when it is available, e.g., setting the number of clusters to the number of dataset labels. This project further explores that idea. The main contribution of the thesis is an investigation of domain knowledge and representation learning as a heuristic for centroid initialization in k-Means: initial centroids were obtained by applying a representation learning technique to the dataset labels. The project analyzed a Swedish dataset with views towards different aspects of Swedish immigration and a Swedish-translated movie-review dataset, using six Swedish-compatible language models and two versions of k-Means. Clustering quality was measured using eight metrics related to cohesion, separation, external entropy, and accuracy. The results show that the proposed heuristic had a positive impact: six of the eight metrics improved compared to the baseline, and the improvements were observed across the six language models and both datasets with k-Means. The evaluation metrics also indicate that the proposed heuristic leaves room for future improvements.
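The proposed heuristic can be sketched as follows: embed the label names with a sentence-level encoder and hand those vectors to k-Means as initial centroids. The encoder name below is one plausible Swedish-compatible choice, not necessarily one of the six models used in the thesis, and the texts are invented.

```python
# Label-derived centroid initialization for k-Means; model name and data
# are illustrative assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

encoder = SentenceTransformer("KBLab/sentence-bert-swedish-cased")
labels = ["positiv", "negativ"]                  # dataset labels as text
texts = ["Filmen var fantastisk!", "Sällan har jag sett något så dåligt."]

init_centroids = encoder.encode(labels)          # one centroid per label
embeddings = encoder.encode(texts)

km = KMeans(n_clusters=len(labels), init=init_centroids, n_init=1)
assignments = km.fit_predict(embeddings)
print(dict(zip(texts, (labels[a] for a in assignments))))
```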
69

Efficient Sentiment Analysis and Topic Modeling in NLP using Knowledge Distillation and Transfer Learning / Effektiv sentimentanalys och ämnesmodellering inom NLP med användning av kunskapsdestillation och överföringsinlärning

Malki, George January 2023 (has links)
This abstract presents a study in which knowledge distillation techniques were applied to a Large Language Model (LLM) to create smaller, more efficient models without sacrificing performance. Three configurations of the RoBERTa model were selected as "student" models to gain knowledge from a pre-trained "teacher" model, and the models' performance was measured on two downstream tasks: sentiment analysis and topic modeling. Several steps were used to improve the knowledge distillation process, such as copying some weights from the teacher to the student model and defining a custom loss function. The task selected for the knowledge distillation process was sentiment analysis on the Amazon Reviews for Sentiment Analysis dataset. The resulting student models showed promising performance on the sentiment analysis task, capturing sentiment-related information from text: the smallest student model obtained 98% of the teacher model's performance while being 45% lighter and taking less than a third of the time to analyze the entire IMDB Dataset of 50K Movie Reviews. However, the student models struggled to produce meaningful results on the topic modeling task, consistent with the topic modeling results from the teacher model. In conclusion, the study showcases the efficacy of knowledge distillation techniques in enhancing the performance of LLMs on specific downstream tasks. While the models excelled at sentiment analysis, further improvements are needed to achieve desirable outcomes in topic modeling. These findings highlight the complexity of language understanding tasks and emphasize the importance of ongoing research and development to further advance the capabilities of NLP models.
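The weight-copying step mentioned above might look like the following sketch, which initializes a 6-layer RoBERTa student from every other encoder layer of a 12-layer teacher. The layer mapping is an illustrative assumption, not the thesis' documented configuration.

```python
# Initialize a smaller RoBERTa student by copying teacher weights;
# the every-other-layer mapping is an illustrative choice.
from transformers import RobertaConfig, RobertaModel

teacher = RobertaModel.from_pretrained("roberta-base")        # 12 layers
student_cfg = RobertaConfig.from_pretrained("roberta-base",
                                            num_hidden_layers=6)
student = RobertaModel(student_cfg)

student.embeddings.load_state_dict(teacher.embeddings.state_dict())
for s_idx, t_idx in enumerate(range(0, 12, 2)):               # 0,2,4,...,10
    student.encoder.layer[s_idx].load_state_dict(
        teacher.encoder.layer[t_idx].state_dict())
```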
70

Towards a Language Model for Stenography : A Proof of Concept

Langstraat, Naomi Johanna January 2022 (has links)
The availability of the stenographic manuscripts of Astrid Lindgren has sparked an interest in creating a language model for stenography. Stenography is by its very nature low-resource, and the scarcity of data calls for a tool that can exploit ordinary data. The tool presented in this thesis creates stenographic data by manipulating orthographic data. Stenographic data is distinct from orthographic data through three types of manipulation: firstly, stenography is based on a phonetic version of language; secondly, it uses its own alphabet, distinct from normal orthographic text; and thirdly, it uses several techniques to compress the data. The first type of manipulation is carried out with a grapheme-to-phoneme converter, the second by using an orthographic representation of a stenographic alphabet, and the third by manipulating at the subword, word, and phrase levels. With these manipulations, different datasets are created from different combinations of the manipulations. Results are measured both as the perplexity of a GPT-2 language model and as the compression rate on the different datasets. The results show a general decrease in perplexity scores and a slight compression across the board; the lower perplexity scores are possibly due to the growth of ambiguity.
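The three manipulations can be illustrated on a single word, as in the sketch below. The phonetic form, the symbol alphabet, and the compression rule are all invented for illustration; the actual system targets the stenography of Astrid Lindgren's manuscripts.

```python
# Toy versions of the three manipulations; every mapping here is invented.
phonetic = {"night": "nait"}                # 1) grapheme-to-phoneme stand-in
steno_alphabet = {"n": "ƞ", "a": "ɑ", "i": "ɪ", "t": "ʇ"}  # 2) own alphabet

def compress(s: str) -> str:                # 3) drop interior vowel symbols
    return s[0] + "".join(c for c in s[1:-1] if c not in "ɑɪ") + s[-1]

word = "night"
step1 = phonetic[word]                               # "nait"
step2 = "".join(steno_alphabet[c] for c in step1)    # phonetic -> steno symbols
step3 = compress(step2)                              # compressed steno form
print(word, "->", step1, "->", step2, "->", step3)
```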
