141. Protocoles d'évaluation pour l'extraction d'information libre [Evaluation protocols for open information extraction]. Léchelle, William, 04 1900. (has links)
No description available.

142. Konsistenzerhaltende Techniken für generierbare Wissensbasen zum Entwurf eingebetteter Systeme [Consistency-preserving techniques for generatable knowledge bases for the design of embedded systems]. Sporer, Mathias, 16 July 2007. (has links)
The design process of data-processing systems is characterized by the description of storing, processing and transmitting components on different levels of abstraction. In the past, tools were developed both for specific application domains and for the respective abstraction levels; they support the system designer from the requirements specification down to implementation and functional testing. In the design of complex systems in general, and embedded systems in particular, problems occur in three additional areas: reusing components from earlier designs, transforming design knowledge across the boundaries of abstraction levels, and integrating a variable number of domain-specific tools into the design process. The precondition for a correct design is the preservation of the integrity of all design data involved, no matter which sources (databases, XML files or conventional host file systems) provide them. After a discussion of the notion of integrity for conventional information systems and of the extensions necessary for embedded systems, approaches for modelling the design process are presented. They help to generate a knowledge base that is optimally adjusted to a particular design task and can be continuously adapted to new requirements coming from external tools and design methods; the user needs no detailed knowledge of the knowledge base's underlying data model. The ability to generate the knowledge base and its tools rests on a metamodel that, first, builds on an extensible object algebra for describing the structure and behaviour of data-processing systems and, second, is transformable into domain-specific target systems.
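To make the central requirement concrete, the following Python sketch (not from the thesis; the names DesignObject, KnowledgeBase and the sample constraint are invented for illustration) shows one minimal way a generated knowledge base might enforce representation-independent integrity constraints on design components, wherever they were imported from.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DesignObject:
    """A design component, normalized from any source (DB row, XML node, file record)."""
    name: str
    kind: str                      # e.g. "storage", "processing", "transmission"
    attributes: Dict[str, object] = field(default_factory=dict)

class KnowledgeBase:
    """Holds design objects and representation-independent integrity constraints."""
    def __init__(self) -> None:
        self.objects: List[DesignObject] = []
        self.constraints: List[Callable[[DesignObject], bool]] = []

    def add_constraint(self, check: Callable[[DesignObject], bool]) -> None:
        self.constraints.append(check)

    def insert(self, obj: DesignObject) -> None:
        # Reject the insertion if any constraint is violated, preserving integrity.
        if any(not check(obj) for check in self.constraints):
            raise ValueError(f"integrity violation for {obj.name}")
        self.objects.append(obj)

kb = KnowledgeBase()
# Hypothetical constraint: every processing component must declare a clock frequency.
kb.add_constraint(lambda o: o.kind != "processing" or "clock_mhz" in o.attributes)

kb.insert(DesignObject("uart0", "transmission"))                   # accepted
kb.insert(DesignObject("cpu0", "processing", {"clock_mhz": 48}))   # accepted
# kb.insert(DesignObject("dsp0", "processing"))  # would raise: missing clock_mhz
```

The point of the sketch is that the constraint is stated once, against the normalized object, so it holds regardless of whether the component originated in a database, an XML file or a host file system.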

143. Verification of Data-aware Business Processes in the Presence of Ontologies. Santoso, Ario, 13 May 2016. (has links)
The interplay between data, processes and structural knowledge in modeling complex enterprise systems is a challenging task that has led to the study of combining formalisms from knowledge representation, database theory, and process management. Moreover, to ensure system correctness, formal verification comes into play as a promising approach that offers well-established techniques. In line with this, significant results have been obtained in research on data-aware business processes, which studies the marriage between the static and dynamic aspects of a system within a unified framework. However, several limitations remain. The various formalisms for data-aware processes that have been studied typically use a simple mechanism for specifying the system dynamics. The majority of works also assume a rather simple treatment of inconsistency (i.e., rejecting inconsistent system states). Much research in this area that considers structural domain knowledge also assumes that such knowledge remains fixed along the system evolution (context-independence), which might be too restrictive. Moreover, the information model of data-aware processes often relies on relatively simple structures. This situation can cause an abstraction gap between the high-level conceptual view that business stakeholders have and the low-level representation of information. When it comes to verification, taking all of the aspects above into account makes the problem more challenging.
In this thesis, we investigate the verification of data-aware processes in the presence of ontologies while addressing all of the limitations above. Specifically, we provide the following contributions: (1) We propose a formal framework called Golog-KABs (GKABs), leveraging state-of-the-art formalisms for data-aware processes equipped with ontologies. GKABs enable us to specify semantically rich data-aware business processes, where the system dynamics are specified using a high-level action language inspired by the Golog programming language. (2) We propose a parametric execution semantics for GKABs that elegantly accommodates a plethora of inconsistency-aware semantics based on the well-known notion of repair, leading us to consider several variants of inconsistency-aware GKABs. (3) We enhance GKABs towards context-sensitive GKABs, which take into account contextual information during the system evolution. (4) We marry these two settings and introduce inconsistency-aware context-sensitive GKABs. (5) We introduce so-called Alternating-GKABs, which allow for a more fine-grained analysis of the evolution of inconsistency-aware context-sensitive systems. (6) In addition to GKABs, we introduce a novel framework called Semantically-Enhanced Data-Aware Processes (SEDAPs) that, by utilizing ontologies, enables a high-level conceptual view over the evolution of the underlying system. We provide not only theoretical results but also an implementation of SEDAPs.
We also provide numerous reductions for the verification of sophisticated first-order temporal properties over all of the settings above, and show that verification can be addressed using existing techniques developed for Data-Centric Dynamic Systems (a well-established framework for data-aware processes), under suitable boundedness assumptions on the number of objects freshly introduced into the system as it evolves. Notably, all of the proposed GKAB extensions have no negative impact on computational complexity.
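The notion of repair underlying contribution (2) can be illustrated in miniature. The sketch below is not the GKAB semantics itself; it is a hypothetical, brute-force enumeration of ABox repairs, i.e., the maximal subsets of a small fact set that remain consistent with a set of denial constraints.

```python
from itertools import combinations

# Toy ABox and denial constraints: sets of facts that may not hold together.
facts = {("status", "order1", "approved"),
         ("status", "order1", "rejected"),
         ("owner", "order1", "alice")}
denials = [{("status", "order1", "approved"), ("status", "order1", "rejected")}]

def consistent(subset: frozenset) -> bool:
    return not any(d <= subset for d in denials)

def repairs(facts: set) -> list:
    """All maximal consistent subsets of `facts` (brute force; fine for toy sizes)."""
    candidates = [frozenset(c)
                  for r in range(len(facts), -1, -1)
                  for c in combinations(facts, r)
                  if consistent(frozenset(c))]
    return [c for c in candidates
            if not any(c < other for other in candidates)]

for r in repairs(facts):
    print(sorted(r))
# Two repairs: one keeps "approved", the other keeps "rejected";
# both keep the uncontested "owner" fact.
```

An inconsistency-aware semantics then evaluates queries over the repairs instead of rejecting the inconsistent state outright, which is the intuition behind the parametric execution semantics described above.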

144. How faculties of education respond to new knowledge requirements embedded in teacher education policies: stepping through the looking-glass. Papier, Joy C., 09 July 2008. (has links)
This study examines how university academics understand and enact knowledge requirements embedded in official teacher education policies. The research probes faculty understandings of what constitutes 'relevant and appropriate pedagogies' in teacher education curricula, and the basis of such knowledge selections in the absence of a stable 'knowledge base' of teacher education. In teacher education, new national norms and standards are intended to guide curriculum processes in new programmes. However, policies remain open to wide interpretation and assume common understandings among the teacher education community with regard to knowledge, practices and values. This study, conducted in three university-based Faculties of Education, analyses the curriculum motivations, processes and practices of education academics, in an attempt to understand and explain their responses to policy requirements. The conceptual framework of Paul Trowler is employed to examine the Teaching and Learning Regimes (TLRs) at work in academic contexts. By drawing out the discursive repertoires, identities in interaction, tacit assumptions, connotative codes, implicit theories of teaching and learning, power relations, rules of appropriateness and recurrent practices among faculty members, this research demonstrates how knowledge is mediated in and through institutional contexts. Three parallel faculty portraits elucidate stark differences in approaches to curricula and in curriculum processes, a consequence of the lack of a stable knowledge base and the perceived vagueness of policy directives. Significantly, institutional histories and traditions feature prominently as 'shapers' of academic responses to change, factors that, the study argues, government policies have not taken into account. Thesis (PhD (Education Policy Studies)), University of Pretoria, 2006. Education Management and Policy Studies.

145. Knowledge Discovery for Avionics Maintenance: An Unsupervised Concept Learning Approach / Découverte de connaissances pour la maintenance avionique : une approche d'apprentissage de concepts non supervisée. Palacios Medinacelli, Luis, 04 June 2019. (has links)
In this thesis we explore the problem of signature analysis in avionics maintenance: identifying failures in faulty equipment and suggesting corrective actions to resolve them. The thesis takes place in the context of a CIFRE convention between Thales Research & Technology and the Université Paris-Sud, so its motivation is both theoretical and industrial. The signature of a failure provides all the information necessary to understand, identify and ultimately repair the failure; when identifying a signature it is therefore important to make it explainable. We propose an ontology-based approach to model the domain, which provides a level of automatic interpretation of the highly technical tests performed on the equipment. Once the tests can be interpreted, corrective actions are associated with them. The approach is rooted in concept learning, used to approximate description logic concepts that represent the failure signatures. Since these signatures are not known in advance, we require an unsupervised learning algorithm to compute the approximations. In our approach the learned signatures are provided as description logic (DL) definitions, which in turn are associated with a minimal set of axioms in the ABox; these serve as explanations for the discovered signatures, providing a glass-box approach for tracing how and why a signature was obtained. Current concept-learning techniques are either designed for supervised learning problems or rely on frequent patterns and large amounts of data. We take a different perspective and rely on a bottom-up construction of the ontology. As in other approaches, the learning process is driven by a refinement operator that traverses the space of concept expressions, but an important difference is that in our algorithms this search is guided by the individuals in the ontology. To this end, the notions of justifications in ontologies, most specific concepts and concept refinements are revised and adapted to our needs. The approach is then tailored to the specific avionics maintenance setting at Thales Avionics, where a prototype has been implemented as a proof of concept.
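A toy illustration of the data-guided refinement idea (hypothetical: the class names and individuals are invented, and the real algorithms operate on DL concept expressions rather than plain sets): specialize a conjunctive concept step by step, keeping only refinements that still cover all positive individuals, until a most specific signature is reached.

```python
# Toy ontology: each individual is described by the atomic classes it belongs to.
individuals = {
    "unit1": {"Faulty", "PowerSupply", "Overheated"},
    "unit2": {"Faulty", "PowerSupply"},
    "unit3": {"Healthy", "PowerSupply"},
}
atomic_classes = {"Faulty", "Healthy", "PowerSupply", "Overheated"}

def covers(concept: frozenset, ind: str) -> bool:
    """A conjunctive concept covers an individual if all its conjuncts hold."""
    return concept <= individuals[ind]

def refine(concept: frozenset, positives: list) -> list:
    """Downward refinement: add one conjunct, guided by the data; keep only
    specializations that still cover every positive individual."""
    return [concept | {a}
            for a in atomic_classes - concept
            if all(covers(concept | {a}, p) for p in positives)]

# Learn a signature for the failures observed on unit1 and unit2.
concept = frozenset()        # start from the most general concept (Top)
while True:
    steps = refine(concept, ["unit1", "unit2"])
    if not steps:
        break
    concept = steps[0]       # naive search strategy, for illustration only
print(sorted(concept))       # ['Faulty', 'PowerSupply']: the learned signature
```

Because every refinement step is justified by the covered individuals, the resulting definition doubles as an explanation, which is the glass-box property the abstract emphasizes.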

146. Complexity Theory of Leadership and Management Information. Simpson, Mark Aloysius, 01 January 2018. (has links)
Implementing effective leadership strategies in the management of information systems (MIS) can positively influence overall organizational performance. This study was an exploration of the general problem of failure to lead effectively in the current knowledge-based economy and the resulting deleterious effects on organizational performance and threats to continuing organizational viability. The specific problem was the lack of understanding regarding the interaction of leadership processes with MIS functions and its impact on organizational success. Managers' and employees' lived experiences of leadership in small- to medium-sized enterprises were explored, as well as how those experiences influenced the organization's adaptive responses regarding technology and performance in the knowledge-based economy. The complexity theory of leadership was applied as the theoretical foundation for this study, and a phenomenological methodology was used. Data were collected through semi-structured interviews and analyzed through open coding to identify emergent themes. Three themes emerged: leaders motivate employees' positive work-related behaviors; effective communication skills ensure the accessibility and efficiency of the organizational information system; and leadership practices influence business productivity. This study contributes to social change by providing insights for managers and employees regarding effective strategies for working as teams and networks via the use of nontraditional leadership theory, which promotes company sustainability by demonstrating the benefits of responding to the changing economy.

147. Modeling and mining business process variants in cloud environments / Modélisation et fouille de variants de procédés d'entreprise dans les environnements cloud. Yongsiriwit, Karn, 23 January 2017. (has links)
More and more organizations are adopting cloud-based Process-Aware Information Systems (PAIS) to manage and execute processes in the cloud, an environment in which to optimally share and deploy their applications. This is especially true for large organizations with branches operating in different regions and a considerable number of similar processes. Such organizations need to support many variants of the same process owing to their branches' local cultures, regulations, and so on. However, developing a new process variant from scratch is error-prone and time-consuming. Motivated by the "Design by Reuse" paradigm, branches may collaborate to develop new process variants by learning from their similar processes. These processes are often heterogeneous, which prevents easy and dynamic interoperability between branches. A process variant is an adjustment of a process model that flexibly adapts it to specific needs. Much research, in both academia and industry, aims to facilitate the design of process variants. Several approaches assist process designers by searching for similar business process models or by using reference models; however, these approaches are cumbersome, time-consuming and error-prone. Likewise, they recommend entire process models, which is impractical for process designers who need to adjust only a specific part of a model. Process designers are better served by an approach that recommends a well-selected set of activities from a process model, referred to as a process fragment. Large organizations with multiple branches execute BP variants in the cloud to optimally deploy and share common resources. However, these cloud resources may be described using different cloud-resource description standards, which prevents interoperability between branches. In this thesis, we address these shortcomings by proposing an ontology-based approach that semantically populates a common knowledge base of processes and cloud resources, enabling interoperability between an organization's branches. We build our knowledge base by extending existing ontologies. We then propose an approach for mining this knowledge base to assist the development of BP variants. Furthermore, we adopt a genetic algorithm to optimally allocate cloud resources to BPs. To validate our approach, we develop two proofs of concept and perform experiments on real datasets. Experimental results show that our approach is feasible and accurate in real use cases.
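As a rough illustration of the genetic-algorithm step, the sketch below (invented VM types, costs and loads, not the thesis' model) evolves an assignment of one VM type per process activity, penalizing allocations whose capacity is exceeded.

```python
import random

random.seed(0)
vm_cost = {"small": 1.0, "medium": 2.5, "large": 5.0}   # invented price units
vm_capacity = {"small": 1, "medium": 3, "large": 6}
activity_load = [2, 1, 5, 3]          # resource demand of each BP activity

def fitness(assign):
    """Lower is better: total cost, with a penalty for each overloaded VM."""
    penalty = sum(10.0 for vm, load in zip(assign, activity_load)
                  if vm_capacity[vm] < load)
    return sum(vm_cost[vm] for vm in assign) + penalty

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assign):
    child = list(assign)
    child[random.randrange(len(child))] = random.choice(list(vm_cost))
    return child

population = [[random.choice(list(vm_cost)) for _ in activity_load]
              for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness)
    parents = population[:10]         # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=fitness)
print(best, fitness(best))   # should converge to cost 11.0: medium/small/large/medium
```

A production version would of course use a richer encoding (shared resources across branches, multiple resource dimensions), but the select-crossover-mutate loop is the same.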

148. Knowledge Base Augmentation from Spreadsheet Data: Combining layout inference with multimodal candidate classification. Heyder, Jakob Wendelin, January 2020. (has links)
Spreadsheets constitute a valuable and notably large class of documents within many enterprise organizations and on the Web. Although spreadsheets are intuitive to use and equipped with powerful functionality, extraction and transformation of their data remain a cumbersome and mostly manual task. The great flexibility they give the user results in data that is arbitrarily structured and hard to process for other applications. In this thesis, we propose a novel architecture that combines supervised layout inference with multimodal candidate classification to allow knowledge-base augmentation from arbitrary spreadsheets. In our design, we consider the need to repair misclassifications and allow for verification and ranking of ambiguous candidates. We evaluate the performance of our system on two datasets, one with single-table spreadsheets and another with spreadsheets of arbitrary format. The evaluation shows that the proposed system achieves performance on single-table spreadsheets similar to state-of-the-art rule-based solutions. Additionally, the flexibility of the system allows us to process arbitrary spreadsheet formats, including horizontally and vertically aligned tables, multiple worksheets, and contextualizing metadata, which was not possible with existing purely text-based or table-based solutions. The experiments demonstrate that the system can achieve high effectiveness, with an F1 score of 95.71, on arbitrary spreadsheets that require the interpretation of surrounding metadata. The precision of the system can be further increased by applying candidate schema matching based on the semantic similarity of column headers.
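The closing remark about schema matching can be sketched as follows. This is an invented illustration rather than the thesis implementation: embed() is a stand-in for a pre-trained sentence-embedding model, and the KB property names are hypothetical; headers are matched to properties by cosine similarity and accepted only above a threshold.

```python
import math

def embed(text: str) -> list:
    """Placeholder for a pre-trained sentence-embedding model (an assumption here).
    A real system would call a transformer encoder; this toy version just sums
    character codes into buckets so the example runs standalone."""
    vec = [0.0] * 16
    for i, ch in enumerate(text.lower()):
        vec[i % 16] += ord(ch)
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

kb_properties = ["birthDate", "birthPlace", "fullName"]   # hypothetical KB schema

def match_header(header: str, threshold: float = 0.8):
    """Return the best-matching KB property for a column header, or None."""
    score, prop = max((cosine(embed(header), embed(p)), p) for p in kb_properties)
    return prop if score >= threshold else None

print(match_header("Date of Birth"))   # a real embedder should map this to "birthDate"
```

Rejecting candidates below the threshold is what trades recall for the precision gain mentioned in the abstract.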

149. Exploring Intensive Reading Intervention Teachers' Formal and Practical Knowledge of Beginning Reading Instruction Provided to At-Risk First Grade Readers. Cortelyou, Kathryn, 01 January 2012. (has links)
This study was designed with two goals in mind. The first goal was to describe the formal and practical knowledge of intensive reading intervention teachers related to beginning reading instruction with at-risk first graders. A second goal was to understand any potential relationships between intensive reading teachers' practical knowledge and formal knowledge. These two goals framed the study's three research questions. To answer these three questions, the study was conducted in two phases. Phase one included 32 participants, all of whom worked in the role of a K-2 intensive reading intervention teacher. Each of these 32 participants completed a background questionnaire and a paper/pencil Teacher Knowledge Assessment (TKA). The TKA measured participants' formal knowledge of beginning reading concepts. Participants' scores on the TKA were then rank-ordered from lowest to highest to help guide the selection of phase two participants. Eight teachers in all participated in phase two of the study, dedicated to the study of teachers' practical knowledge of reading. Participants' practical knowledge of reading was explored through three activities: a semi-structured interview, a concept-mapping activity and a videotaped reading lesson. Data analysis revealed several important findings. Intensive reading intervention teachers in this study's sample differed in their formal knowledge of reading, measured by the TKA, and in their practical knowledge of reading, explored through interviews, concept maps and reading lessons. The TKA revealed that study participants held more formal knowledge of concepts related to phonology and phonics and less formal knowledge of concepts related to morphology and syllable types. Related to practical knowledge, data analysis revealed that the teachers in this sample differed in their knowledge of beginning reading, with subject-matter knowledge accounting for most of the differences. These gaps in subject-matter knowledge also affected this sample of teachers' use of instructional strategies and purposes of instruction. Data analysis also revealed insight into the relationships between this sample of teachers' formal and practical reading knowledge. In this sample, intensive reading intervention teachers with more formal knowledge of reading concepts as measured on the TKA demonstrated more evidence of these concepts within their instruction provided to at-risk first grade readers. The participants in this sample who had less formal knowledge of beginning reading as measured by the TKA demonstrated less evidence of these concepts within their instruction provided to at-risk first grade readers. Participants with less formal knowledge did accurately calibrate their knowledge of the concepts tested on the TKA but did not equate the lower scores to their practical knowledge and overall teaching efficacy. The findings from this study added several important contributions to the literature on teacher knowledge and beginning reading instruction. First, the study was unique in its focus on intensive reading intervention teachers, thus contributing new findings related to a specialized group of teachers. Secondly, this study contributed descriptions of teachers' practical knowledge with regard to beginning reading instruction; such descriptions are relatively absent in the current literature on teacher knowledge. Thirdly, the results from this study supported earlier findings in favor of a specialized body of subject-matter knowledge, especially related to beginning reading skills and concepts. Finally, the results contributed insight into the relationships between teachers' formal reading knowledge and practical reading knowledge.

150. ElektroCHAT: A Knowledge Base-Driven Dialogue System for Electrical Engineering Students: A Proposal for Interactive Tutoring / ElektroCHAT: Ett Kunskapsbaserat Dialogsystem för Ingenjörsstudenter Inom Elektroteknik: Ett Förslag för Interaktiv Handledning. Gölman, Fredrik, January 2023. (has links)
Universities worldwide face challenges with students dropping out of educational programmes and with repetitive questions directed toward teaching staff, both of which consume resources and result in delays. Recent progress in natural language processing (NLP) introduces the possibility of more sophisticated dialogue systems that could help alleviate the situation. Dialogue systems in education are complex to construct for multiple reasons: domain-specific data is often not readily available, and extending an existing system often requires reconfiguring it and re-training its models. In this thesis, a graph-based knowledge base (KB) is proposed as the foundation of a heavily rule-based dialogue system. The core of the natural language understanding (NLU) component in the pipeline-based dialogue system is the transformer-based DIET classifier, used for intent classification and entity extraction. The custom logic of the dialogue system relies on contextual and distributional embeddings. While the proposed solution is used in electrical engineering specifically, the KB and the architecture of the dialogue system are designed with generalization in mind. An emphasis is placed on maintaining a low level of system maintenance after deployment, allowing teaching staff without expertise in computer science or machine learning to operate the system; the utilization of transfer learning with pre-trained language models helps achieve this objective. The findings suggest that the system is sufficiently sophisticated to improve learning environments for students while potentially alleviating the workload of teaching staff. They further indicate that computer science and machine learning expertise are not required to operate the system over time.
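To give a flavour of the architecture, here is a minimal Python sketch of the pipeline's core loop: classify the student's utterance, extract a concept entity, and answer from a graph-shaped KB. Everything here is invented for illustration; in particular, the keyword-rule classify() stands in for the transformer-based DIET classifier the thesis actually uses.

```python
# Toy graph-shaped KB: concept -> list of (relation, neighbour/value) edges.
knowledge_base = {
    "ohms law": [("states", "V = I * R"), ("prerequisite_of", "thevenin's theorem")],
    "thevenin's theorem": [("states", "any linear two-terminal circuit reduces "
                                      "to one source and one resistor")],
}

def classify(utterance: str):
    """Stub NLU step: keyword rules stand in for the DIET classifier, which the
    real system uses for intent classification and entity extraction."""
    text = utterance.lower()
    intent = "ask_definition" if "what" in text else "ask_related"
    entity = next((c for c in knowledge_base if c in text), None)
    return intent, entity

def respond(utterance: str) -> str:
    intent, entity = classify(utterance)
    if entity is None:
        return "Sorry, I don't know that concept yet."
    if intent == "ask_definition":
        facts = [o for rel, o in knowledge_base[entity] if rel == "states"]
        return f"{entity}: {facts[0]}" if facts else f"No definition stored for {entity}."
    related = [o for rel, o in knowledge_base[entity] if rel == "prerequisite_of"]
    return f"{entity} is a prerequisite of: {', '.join(related) or 'nothing recorded'}."

print(respond("What is Ohms law?"))   # -> "ohms law: V = I * R"
```

Because the domain knowledge lives entirely in the graph, teaching staff can extend coverage by adding nodes and edges, without touching classifier code, which is the low-maintenance property the abstract emphasizes.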