1

The application of constraint rules to data-driven parsing

Jaf, Sardar January 2015
The process of determining the structural relationships between words in both natural and machine languages is known as parsing. Parsers are used as core components in a number of Natural Language Processing (NLP) applications, such as online tutoring applications, dialogue-based systems and textual entailment systems, and they have also been used widely in the development of machine languages. In order to understand the way parsers work, we will investigate and describe a number of widely used parsing algorithms. These algorithms have been utilised in a range of different contexts, such as dependency frameworks and phrase structure frameworks. We will investigate and describe some of the fundamental aspects of each of these frameworks, which can function in various ways, including grammar-driven approaches and data-driven approaches. Grammar-driven approaches use a set of grammatical rules to determine the syntactic structure of sentences during parsing. Data-driven approaches use a set of parsed data to generate a parse model, which is then used to guide the parser when processing new sentences. A number of state-of-the-art parsers have been developed using such frameworks and approaches, and we will briefly highlight some of them in this thesis.

Three features are particularly important to integrate into the development of a parser: efficiency, accuracy, and robustness. Efficiency is concerned with using as little time and as few computing resources as possible when processing natural language text. Accuracy involves maximising the correctness of the analyses that a parser produces. Robustness is a measure of a parser’s ability to cope with grammatically complex sentences and to produce analyses for a large proportion of a set of sentences.

In this thesis, we present a parser that can efficiently, accurately, and robustly parse a set of natural language sentences, and whose implementation allows for trade-offs between these different aspects of parsing performance. For example, some NLP applications may emphasise efficiency or robustness over accuracy, while others may require a greater focus on accuracy. In a dialogue-based system, it may be preferable to produce a correct grammatical analysis of a question, even if slowly, rather than to analyse its structure incorrectly or to produce a grammatically incorrect answer quickly. Conversely, it may be desirable for a document translation system to translate a document quickly but less accurately, rather than slowly but highly accurately, because users can correct grammatically incorrect sentences manually if necessary. The parser presented here is based on data-driven approaches, but we allow constraint rules to be applied to it in order to improve its performance.
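To make the idea of combining a data-driven parser with constraint rules concrete, the Python sketch below shows a toy arc-standard (shift-reduce) dependency parser in which a stand-in scoring function plays the role of the learned parse model and hand-written constraint rules veto proposed transitions. The fixed scores, the single example rule (a determiner may not act as a head), and all names are illustrative assumptions; this is not the parser or rule set developed in the thesis.

```python
# Minimal sketch: constraint rules filtering transitions proposed by a
# data-driven scorer in an arc-standard dependency parser.
# All rules, scores, and names here are illustrative placeholders.

SHIFT, LEFT_ARC, RIGHT_ARC = "shift", "left_arc", "right_arc"

def violates_constraints(action, stack, pos_tags):
    """Return True if the proposed transition breaks a hand-written rule."""
    if action in (LEFT_ARC, RIGHT_ARC) and len(stack) >= 2:
        head = stack[-1] if action == LEFT_ARC else stack[-2]
        # Example rule: a determiner should never act as a head.
        if pos_tags[head] == "DET":
            return True
    return False

def score_actions(stack, buffer, pos_tags):
    """Stand-in for a model learned from parsed data (e.g. a classifier)."""
    # A real data-driven parser would score transitions from features of
    # the current configuration; here we use a fixed preference order.
    return {SHIFT: 0.2, LEFT_ARC: 0.5, RIGHT_ARC: 0.3}

def parse(words, pos_tags):
    stack, buffer, arcs = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        scores = score_actions(stack, buffer, pos_tags)
        # Rank the data-driven proposals, then let constraint rules veto them.
        for action, _ in sorted(scores.items(), key=lambda kv: -kv[1]):
            if action == SHIFT and not buffer:
                continue
            if action in (LEFT_ARC, RIGHT_ARC) and len(stack) < 2:
                continue
            if violates_constraints(action, stack, pos_tags):
                continue
            break
        else:
            # Fall back so parsing always terminates, even if rules veto all arcs.
            action = SHIFT if buffer else RIGHT_ARC

        if action == SHIFT:
            stack.append(buffer.pop(0))
        elif action == LEFT_ARC:
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))   # (head, dependent)
        else:  # RIGHT_ARC
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

print(parse(["the", "cat", "sleeps"], ["DET", "NOUN", "VERB"]))
```

Running the script on the toy sentence prints the head-dependent arcs produced by the filtered transitions; swapping in a trained scorer and a richer rule set is where a real data-driven parser with constraints would differ.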
2

Introducing Generative Artificial Intelligence in Tech Organizations: Developing and Evaluating a Proof of Concept for Data Management powered by a Retrieval Augmented Generation Model in a Large Language Model for Small and Medium-sized Enterprises in Tech

Lithman, Harald; Nilsson, Anders January 2024
In recent years, generative AI has made significant strides, likely leaving an irreversible mark on contemporary society. The launch of OpenAI's ChatGPT, built on GPT-3.5, in 2022 demonstrated the capabilities of the technology, highlighting its performance and accessibility. This has created a demand for implementation solutions across industries, with companies eager to leverage the new opportunities that generative AI brings. This thesis explores the common operational challenges faced by a small-scale Tech Enterprise and, with these challenges identified, examines the opportunities that contemporary generative AI solutions may offer. Furthermore, the thesis investigates what type of generative technology is suitable for adoption and how it can be implemented responsibly and sustainably.

The authors approach this topic through 14 interviews with AI researchers and with the employees and executives of a small-scale Tech Enterprise that served as the case company, combined with a literature review. The material was processed using multiple inductive thematic analyses to establish a solid foundation for the investigation, which led to the development of a Proof of Concept. The findings and conclusions emphasize the importance of having a clear purpose for the implementation of generative technology, and the authors predict that a sustainable and responsible implementation can create the conditions necessary for the case company to grow.

When the authors investigated potential operational challenges at the case company, it became clear that the most significant issue arose from unstructured and partially absent documentation. The authors conclude that a data management system powered by a Retrieval Augmented Generation (RAG) model combined with a Large Language Model (LLM) presents a potential path to significant value creation: such a solution enables retrieval of unstructured project data and also mitigates a major inherent issue with the technology, namely hallucinations. In terms of implementation, both empirical and theoretical findings suggest that responsible use of generative technology requires training; the authors have therefore developed an educational framework named "KLART".

The authors further describe that sustainable implementation necessitates transparent systems, as transparency increases understanding, which in turn affects trust and secure use. The findings also indicate that sustainability is strongly linked to the user-friendliness of the AI service, leading the authors to emphasize the importance of human-centered design (HCD) when developing and maintaining AI services. Finally, the authors argue for the value of automation, as it allows for continuous data and system updates that can reduce maintenance.

In summary, this thesis aims to contribute to an understanding of how small-scale Tech Enterprises can implement generative AI technology sustainably to enhance their competitive edge through innovation and data-driven decision-making.
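As a rough illustration of the kind of retrieval-augmented data-management flow the abstract describes, the Python sketch below retrieves the project documents most similar to a question and grounds a language-model prompt in them. The bag-of-words "embedding", the call_llm stub, and the sample documents are placeholder assumptions standing in for the embedding model and hosted LLM a real system would use; this is not the Proof of Concept built in the thesis.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) flow over
# unstructured project documents. Embedding and LLM calls are placeholders.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Placeholder "embedding": a bag-of-words vector. A real system would
    # use a sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call (hosted or local model).
    return "[stubbed LLM answer based on the retrieved context]"

def answer(query: str, documents: list[str]) -> str:
    """Ground the LLM prompt in retrieved documents to curb hallucination."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using only the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

# Hypothetical unstructured project documentation used only for illustration.
docs = [
    "Project Alpha kickoff notes: delivery deadline is 2024-06-01.",
    "Meeting minutes: the staging server credentials were rotated in May.",
    "Onboarding guide: new developers request access via the IT portal.",
]
print(answer("When is the Project Alpha deadline?", docs))
```

Restricting the model to the retrieved context is the design choice the abstract points to for mitigating hallucinations: the LLM is asked to answer only from documents the retriever found, rather than from its parametric memory.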
