171

Classification of uncertain data in the framework of belief functions : nearest-neighbor-based and rule-based approaches / Classification des données incertaines dans le cadre des fonctions de croyance : la méthode des k plus proches voisins et la méthode à base de règles

Jiao, Lianmeng 26 October 2015 (has links)
In many classification problems, data are inherently uncertain. The available training data might be imprecise, incomplete, or even unreliable. Besides, partial expert knowledge characterizing the classification problem may also be available. These different types of uncertainty bring great challenges to classifier design. The theory of belief functions provides a well-founded and elegant framework to represent and combine a large variety of uncertain information. In this thesis, we use this theory to address uncertain data classification problems based on two popular approaches, i.e., the k-nearest neighbor (kNN) rule and rule-based classification systems. For the kNN rule, one concern is that imprecise training data in class-overlapping regions may greatly affect its performance. An evidential editing version of the kNN rule was developed based on the theory of belief functions in order to well model the imprecise information carried by samples in overlapping regions. Another consideration is that, sometimes, only an incomplete training data set is available, in which case the performance of the kNN rule degrades dramatically. Motivated by this problem, we designed an evidential fusion scheme for combining a group of pairwise kNN classifiers built on locally learned pairwise distance metrics. For rule-based classification systems, in order to improve their performance in complex applications, we extended the traditional fuzzy rule-based classification system in the framework of belief functions and developed a belief rule-based classification system to address uncertain information in complex classification problems.
Further, considering that in some applications, apart from training data collected by sensors, partial expert knowledge can also be available, a hybrid belief rule-based classification system was developed to make use of these two types of information jointly for classification.
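The evidential-kNN idea described above rests on two ingredients: each neighbor induces a basic belief assignment (part of its mass supports its class, the rest expresses ignorance), and the neighbors' assignments are pooled with Dempster's rule. A minimal sketch follows; the function names, the two-class frame, and the distance-decay parameters `alpha` and `gamma` are illustrative assumptions, not the thesis's actual formulation.

```python
import math

def neighbor_mass(distance, label, classes, alpha=0.8, gamma=1.0):
    """Basic belief assignment induced by one neighbor: part of the mass
    supports the neighbor's class, decaying with distance; the remainder
    goes to the whole frame of discernment (ignorance)."""
    m = {frozenset([c]): 0.0 for c in classes}
    support = alpha * math.exp(-gamma * distance ** 2)
    m[frozenset([label])] = support
    m[frozenset(classes)] = 1.0 - support
    return m

def dempster(m1, m2):
    """Dempster's rule of combination over focal sets encoded as frozensets:
    multiply masses of intersecting focal sets, renormalize by the conflict."""
    combined, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}
```

In use, one would fold `dempster` over the masses of the k nearest neighbors and pick the class with the largest mass on its singleton; a closer neighbor contributes more committed evidence than a distant one.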
172

A concept of an intent-based contextual chat-bot with capabilities for continual learning

Strutynskiy, Maksym January 2020 (has links)
Chat-bots are computer programs designed to conduct textual or audible conversations with a single user. The job of a chat-bot is to find the best response for any request the user issues. The best response is considered to answer the question and contain relevant information while following grammatical and lexical rules. Modern chat-bots often have trouble accomplishing all these tasks. State-of-the-art approaches, such as deep learning, and large datasets help chat-bots tackle this problem better. While there are a number of different approaches that can be applied to different kinds of bots, datasets of suitable size are not always available. In this work, we introduce and evaluate a method of expanding the size of datasets. This will allow chat-bots, in combination with a good learning algorithm, to achieve higher precision while handling their tasks. The expansion method uses a continual learning approach that allows the bot to expand its own dataset while holding conversations with its users. In this work we test continual learning with the IBM Watson Assistant chat-bot as well as a custom case-study chat-bot implementation. We conduct the testing using a smaller and a larger dataset to find out whether continual learning stays effective as the dataset size increases. The results show that the more conversations the chat-bot holds, the better it gets at guessing the intent of the user. They also show that continual learning works well for both larger and smaller datasets, but the effect depends on the specifics of the chat-bot implementation. While continual learning makes good results better, it also turns bad results into worse ones, so the chat-bot should be manually calibrated if the precision of the original results, measured before the expansion, decreases.
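The dataset-expansion loop described above can be sketched in a few lines: guess the intent, and when the user confirms the guess, add the utterance to that intent's examples so future guesses improve. This is a toy word-overlap matcher under assumed names (`IntentBot`, `guess`, `confirm`), not the IBM Watson Assistant API or the thesis's implementation.

```python
class IntentBot:
    """Toy intent classifier: picks the intent whose recorded examples share
    the most words with the user's message, and grows its own dataset from
    confirmed conversations (the continual-learning step)."""

    def __init__(self, dataset):
        # dataset maps intent name -> list of example utterances
        self.dataset = {k: list(v) for k, v in dataset.items()}

    def guess(self, message):
        words = set(message.lower().split())
        def score(intent):
            return max(len(words & set(ex.lower().split()))
                       for ex in self.dataset[intent])
        return max(self.dataset, key=score)

    def confirm(self, message, intent):
        """Called once the user confirms the bot's guess: the message joins
        the dataset, sharpening future guesses for this intent."""
        self.dataset[intent].append(message)
```

A confirmed utterance immediately changes later classifications, which also illustrates the risk the abstract notes: confirming a wrong guess makes bad results worse.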
173

Automatické ladění vah pravidlových bází znalostí / Automated Weight Tuning for Rule-Based Knowledge Bases

Valenta, Jan January 2009 (has links)
This dissertation thesis introduces new methods of automated knowledge-base creation and tuning in information and expert systems. The thesis is divided into the following two parts. The first part focuses on the legacy expert system NPS32, developed at the Faculty of Electrical Engineering and Communication, Brno University of Technology. The mathematical base of the system expresses rule uncertainty using two values, extending the information capability of the knowledge base with values for the absence of information and for conflict in the knowledge base. The expert system has been supplemented with a learning algorithm that sets the weights of the rules in the knowledge base using a differential evolution algorithm, trained on patterns acquired from an expert. The learning algorithm is limited to single-layer knowledge bases. The thesis gives a formal proof that the mathematical base of the NPS32 expert system cannot be used for gradient tuning of the weights in multilayer knowledge bases. The second part focuses on a multilayer knowledge-base learning algorithm. The knowledge base is built on a specific rule model with uncertainty factors, where the uncertainty factors of a rule represent the impact ratio of its information. To adjust the weights of every single rule in the knowledge-base structure, a modified back-propagation algorithm is used, adapted to the given knowledge-base structure and rule model. For the purpose of testing and verifying the learning algorithm for knowledge-base tuning, the expert system RESLA was developed in C#. With this expert system, a knowledge base from the field of medicine was created to verify the learning ability on complex knowledge bases. The knowledge base diagnoses heart malfunctions based on acquired ECG (electrocardiogram) parameters.
For comparison with existing knowledge bases created by an expert and a knowledge engineer, the expert system was compared with a professionally designed knowledge base from the field of agriculture, a decision-support system for choosing a suitable cultivar of winter wheat for planting. The presented algorithms speed up knowledge-base creation while keeping all the advantages that arise from using rules. Contrary to existing solutions based on neural networks, the presented algorithms for tuning knowledge-base weights are faster and simpler, because they do not need rule extraction from another type of knowledge representation.
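The first part's tuning scheme, rule weights optimized by differential evolution against expert-supplied patterns, can be sketched generically. This is a minimal DE/rand/1/bin loop over weight vectors in [0, 1], with assumed parameter names (`pop`, `gens`, `f`, `cr`); it is not the NPS32 system's actual two-valued uncertainty calculus.

```python
import random

def differential_evolution(loss, dim, pop=20, gens=60, f=0.8, cr=0.9, seed=0):
    """Minimal DE/rand/1/bin: evolve a population of weight vectors in
    [0, 1]^dim toward lower loss (e.g., error of the rule base on expert
    patterns). Returns the best vector and its loss."""
    rng = random.Random(seed)
    xs = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    fit = [loss(x) for x in xs]
    for _ in range(gens):
        for i in range(pop):
            # mutate three distinct other members, then binomial crossover
            a, b, c = rng.sample([x for j, x in enumerate(xs) if j != i], 3)
            trial = [min(1.0, max(0.0, a[d] + f * (b[d] - c[d])))
                     if rng.random() < cr else xs[i][d] for d in range(dim)]
            ft = loss(trial)
            if ft <= fit[i]:  # greedy selection keeps the better vector
                xs[i], fit[i] = trial, ft
    best = min(range(pop), key=fit.__getitem__)
    return xs[best], fit[best]
```

In the thesis's setting, `loss` would score a candidate weight vector by running the rule base on the expert's patterns; here any callable works, which is what makes DE attractive when gradients are unavailable, exactly the situation the formal proof in the first part establishes for NPS32.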
174

Extraction et sélection de motifs émergents minimaux : application à la chémoinformatique / Extraction and selection of minimal emerging patterns : application to chemoinformatics

Kane, Mouhamadou bamba 06 September 2017 (has links)
Pattern discovery is an important field of Knowledge Discovery in Databases. This work deals with the extraction of minimal emerging patterns. We propose a new, efficient method to extract minimal emerging patterns with or without a support constraint, unlike existing methods, which typically extract the most supported minimal emerging patterns at the risk of missing interesting but weakly supported patterns. Moreover, our method takes into account the absence of an attribute, which brings new and interesting knowledge. Considering the rules associated with highly supported emerging patterns as prototype rules, we have experimentally shown that this set of rules has good confidence on the covered objects but unfortunately does not cover a significant part of the objects, which is a disadvantage for their use in classification. We propose a prototype-based selection method that improves the coverage of the set of prototype rules without a significant loss in their confidence. We apply our prototype-based selection method to a chemical dataset relating to the aquatic environment, Aquatox. In a classification context, it allows chemists to better explain the classification of molecules which, without this selection method, would be predicted by the use of a default rule.
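An emerging pattern is usually characterized by its growth rate, the ratio of its support in one class to its support in the other; patterns whose growth rate exceeds a threshold "emerge" in the positive class. The sketch below shows these two standard measures on transactions represented as sets; the function names and the infinite-growth-rate convention for jumping patterns are illustrative, not the thesis's extraction algorithm.

```python
def support(pattern, transactions):
    """Fraction of transactions that contain every item of the pattern."""
    hits = sum(1 for t in transactions if pattern <= t)
    return hits / len(transactions)

def growth_rate(pattern, positives, negatives):
    """Growth rate from the negative to the positive class; emerging
    patterns are those whose growth rate exceeds a chosen threshold.
    A pattern absent from the negatives is a 'jumping' emerging pattern."""
    s_pos, s_neg = support(pattern, positives), support(pattern, negatives)
    if s_neg == 0.0:
        return float('inf') if s_pos > 0 else 0.0
    return s_pos / s_neg
```

The abstract's point about weakly supported patterns shows up directly here: a pattern can have a small `s_pos` yet an unbounded growth rate, which is why filtering only by support discards candidates that discriminate perfectly.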
175

Translation of keywords between English and Swedish / Översättning av nyckelord mellan engelska och svenska

Ahmady, Tobias, Klein Rosmar, Sander January 2014 (has links)
In this project, we have investigated how to perform rule-based machine translation of sets of keywords between two languages. The goal was to translate an input set, which contains one or more keywords in a source language, to a corresponding set of keywords, with the same number of elements, in the target language. However, some words in the source language may have several senses and may be translated to several, or no, words in the target language. If ambiguous translations occur, the best translation of the keyword should be chosen with respect to the context. In traditional machine translation, a word's context is determined by the phrase or sentences in which the word occurs. In this project, the set of keywords represents the context. By investigating traditional approaches to machine translation (MT), we designed and described models for the specific purpose of keyword translation. We have proposed a solution based on direct translation for translating keywords between English and Swedish. In the proposed solution, we also introduced a simple graph-based model for solving ambiguous translations.
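The core idea, letting the keyword set itself act as the context, can be illustrated with a small graph: candidate translations are nodes, relatedness links are edges, and each keyword takes the candidate best connected to the other keywords' candidates. This is a toy sketch with an invented lexicon and relatedness set, not the thesis's actual model or data.

```python
def disambiguate(keywords, lexicon, related):
    """Pick one translation per keyword: each candidate is scored by how many
    relatedness edges connect it to candidates of the *other* keywords, so the
    keyword set replaces the sentence as the disambiguating context."""
    choices = {}
    for kw in keywords:
        def score(cand):
            return sum(1 for other in keywords if other != kw
                       for c in lexicon[other]
                       if frozenset((cand, c)) in related)
        choices[kw] = max(lexicon[kw], key=score)
    return choices
```

For example, with the ambiguous English keyword "bank" (Swedish "strand" for a river bank, "bank" for the financial institution), a co-occurring keyword "river" tips the choice toward "strand" through its edge to "flod".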
176

De la conception prospective et innovante dans les organisations municipales québécoises : vers une régénération des routines en urbanisme ? / On prospective and innovative design in Quebec municipal organizations: toward a regeneration of urban-planning routines?

Lavoie, Nicolas 12 1900 (has links)
Ecological and digital transitions, along with concerns over social inequalities, signal the advent of complex new challenges for contemporary cities. These changes raise the issue of the dynamic capability of urban planners: more specifically, their ability to review their tools and planning routines in urban projects in order to explore the potential of new paradigms of collective action and foster innovative transition paths for cities.
European companies, especially in public transportation, have responded to this challenge, with convincing results, by developing tools based on innovative design theories. One of these methodological tools, the Definition-Knowledge-Concept-Proposition (DKCP) process, was used to generate a new range of planning options for three urban districts in Montreal, Canada. The traditional planner's routine generally focuses on a single activity in the process, the formulation of propositions (Phase P), by slightly adapting former projects to the local context and rules. However, the future urban planners' routine should include new capabilities for managing the upstream stages of projects in the form of a succession of DKCP phases. The need to tackle the complex challenges of the 21st-century city opens the way to a new professional identity: the "innovative urban planner".
177

Extending Artemis With a Rule-Based Approach for Automatically Assessing Modeling Tasks

Rodestock, Franz 27 September 2022 (has links)
The Technische Universität Dresden has multiple e-learning projects in use. The Chair of Software Technology uses Inloop to teach students object-oriented programming through automatic feedback. In recent years, interest has grown in giving students automated feedback on modeling tasks as well, which is why Hamann developed an extension to automate the assessment of modeling tasks in 2020. TU Dresden currently plans to replace Inloop with Artemis, a comparable system. Artemis currently supports the semi-automatic assessment of modeling exercises. In contrast, the system proposed by Hamann, called Inloom, is based on a rule-based approach and provides instant feedback. A rule-based system has certain advantages over a similarity-based system; one is the generally better feedback that such systems produce. To give instructors more flexibility and choice, this work identifies possible ways of extending Artemis with the rule-based approach of Inloom. In a second step, this thesis provides a proof-of-concept implementation. Furthermore, a comparison between the different systems is developed to help instructors choose the system best suited to their use case.
Contents: Introduction, Background, Related Work, Analysis, System Design, Implementation, Evaluation, Conclusion and Future Work, Bibliography, Appendix
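The contrast drawn above, rule-based versus similarity-based assessment, comes down to this: a rule-based checker evaluates predicates over the submitted model directly and emits the feedback attached to each failed rule, with no reference solution to align against. A minimal sketch follows; the model encoding, rule texts, and function names are illustrative assumptions, not Inloom's or Artemis's actual formats.

```python
def assess(model, rules):
    """Run each (check, feedback) rule against the submitted model and
    collect the feedback of every failed check - instant feedback with
    no reference-solution matching required."""
    return [feedback for check, feedback in rules if not check(model)]

# A submission encoded as a minimal class diagram.
submission = {
    "classes": ["Order", "Customer"],
    "associations": [("Order", "Customer")],
}

rules = [
    (lambda m: "Customer" in m["classes"],
     "The model should contain a Customer class."),
    (lambda m: ("Order", "Customer") in m["associations"]
               or ("Customer", "Order") in m["associations"],
     "Order and Customer should be associated."),
    (lambda m: "Invoice" in m["classes"],
     "An Invoice class is missing."),
]
```

Because each rule carries its own message, the feedback names the exact defect, which is the advantage over similarity scores that only say how far a submission is from a sample solution.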
178

Development of Multiple Linear Regression Model and Rule Based Decision Support System to Improve Supply Chain Management of Road Construction Projects in Disaster Regions

Anwar, Waqas January 2019 (has links)
Supply chain operations in the construction industry, including road projects in disaster regions, result in exceeded project budgets and timelines. In road construction projects, a poorly performing supply chain can affect the efficiency and completion time of the project; this is also the case for road projects in disaster areas. Disaster areas cover both natural and man-made disasters; a few examples of disaster zones are Pakistan, Afghanistan, Iraq, Sri Lanka, India, Japan, Haiti, and other countries with similar environments. The key factors affecting project performance and execution are insecurity, uncertainties in demand and supply, poor communication and technology, poor infrastructure, lack of political and government will, unmotivated organizational staff, restricted access to construction materials, legal hitches, the many challenges of hiring a labour force, and exponential construction rates due to the high-risk environment, along with multiple other factors. Managers at all tiers face the challenge of supply chain operations overrunning time and budget during both the planning and execution phases of development projects. The aim of this research is to develop a Multiple Linear Regression Model (MLRM) and a Rule-Based Decision Support System by incorporating, in order of importance, the various factors affecting supply chain management of road projects in disaster areas. This knowledge base (KB), giving the importance (coefficient) of each factor, will assist infrastructure managers of road projects and practitioners in disaster regions in decision making to minimize the effect of each factor, which will further help them improve projects. A literature review in the fields of disaster areas, supply chain operational environments of road projects, statistical techniques, Artificial Intelligence (AI), and types of research approaches has provided deep insights to the researchers.
An initial questionnaire was developed and distributed amongst participants as a pilot project, and the results were analysed. The analysis enabled the researcher to extract the key variables impacting supply chain performance of road projects. The results of the questionnaire analysis will facilitate development of the Multiple Linear Regression Model, which will eventually be verified and validated with real data from actual environments. The development of a Multiple Linear Regression Model and a Rule-Based Decision Support System incorporating all factors that affect supply chain performance of road projects in disaster regions is the most vital contribution of the research. The significance and novelty of this research lie in the methodology developed: the integration of the different methods that will be employed to measure the SCM performance of road projects in disaster areas.
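The MLRM side of the methodology fits coefficients that rank the factors by importance. A minimal sketch of ordinary least squares via the normal equations is shown below, solved with Gaussian elimination since a handful of questionnaire-derived factors keeps the system tiny; the function name and data layout are assumptions, not the thesis's actual model or dataset.

```python
def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X^T X) w = X^T y,
    solved by Gaussian elimination with partial pivoting. Each row of X is
    one observation (include a leading 1.0 for the intercept); the fitted
    weights then play the role of factor coefficients."""
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    for col in range(n):  # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n  # back substitution
    for i in reversed(range(n)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w
```

In the rule-based DSS layer, the fitted coefficients would feed threshold rules of the form "if a factor's coefficient exceeds a cutoff, flag it for mitigation", which is how the knowledge base ranks factors in order of importance.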
179

Myaamia Translator: Using Neural Machine Translation With Attention to Translate a Low-resource Language

Baaniya, Bishal 06 April 2023 (has links)
No description available.
180

Revisorernas inflytande : Komponentmetoden, en rättvisande bild och den institutionella teorins förklaring av mindre fastighetsföretags val av principbaserad redovisning / The auditors' influence: the component method, a true and fair view, and institutional theory's explanation of smaller real-estate companies' choice of principle-based accounting

Backman, Mikaela January 2016 (has links)
No description available.
