461

REPRESENTAÇÃO E AGREGAÇÃO DE CONTEÚDOS EM REPOSITÓRIO DE OBJETOS DE APRENDIZAGEM / REPRESENTATION AND AGGREGATION OF CONTENT IN A LEARNING OBJECT REPOSITORY

Silva, Roosewelt Lins 18 June 2007 (has links)
Education mediated by technology is a tool increasingly used in academic and corporate environments. With the advance of the Web, diverse teaching and learning environments have made possible the production and distribution of multimedia content for use by learners and teachers. However, access to this content is still one of the main obstacles to its use and sharing between different applications. Document representation in the Semantic Web relies on metadata to describe resources and is one solution to the content-access problem. In Web-based education, diverse metadata standards have been proposed to enable the sharing of learning resources in distributed form. The use of ontologies is expected to allow better conceptualization and representation of the domain, making possible the formalization of metadata schemas for learning object management. This work presents a Content Aggregation and Representation Model for the conceptualization of a Semantic Learning Object Repository. The Aggregation Model uses the LOM (Learning Object Metadata) standard to describe and aggregate educational content. The Content Representation Model is a classification scheme based on the SKOS (Simple Knowledge Organisation Systems) standard, intended for the specification of knowledge organisation systems in the Semantic Web. The METHONTOLOGY methodology and the OWL (Web Ontology Language) language were used to build the ontology, and the Jena framework to manipulate the ontological model. The work also discusses concepts associated with educational technologies, and the perspectives and challenges of representing knowledge on the Web and of developing the next generation of the Web.
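To make the LOM/SKOS pairing concrete, here is a minimal sketch in Python with rdflib (not the Jena framework the thesis actually uses); the namespace, concepts and titles are invented for illustration:

# Illustrative only: a tiny SKOS concept scheme plus a learning-object
# description in RDF. LOM category 9 ("classification") is approximated
# here with a Dublin Core subject property.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, RDF, SKOS

EX = Namespace("http://example.org/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)
g.bind("dc", DC)

# A minimal classification scheme (SKOS)
scheme, algebra = EX.MathScheme, EX.Algebra
g.add((scheme, RDF.type, SKOS.ConceptScheme))
g.add((algebra, RDF.type, SKOS.Concept))
g.add((algebra, SKOS.prefLabel, Literal("Algebra", lang="en")))
g.add((algebra, SKOS.inScheme, scheme))

# A learning object classified against the scheme
lo = EX.lesson42
g.add((lo, DC.title, Literal("Linear equations, lesson 1")))
g.add((lo, DC.subject, algebra))

print(g.serialize(format="turtle"))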
462

Proposition d'un cadre pour l'analyse automatique, l'interprétation et la recherche interactive d'images de bande dessinée / A framework for the automated analysis, interpretation and interactive retrieval of comic books' images

Guérin, Clément 24 November 2014 (has links)
Since the beginning of the twenty-first century, the cultural industry, both in France and worldwide, has been undergoing a massive, historic mutation. It has had to adapt to the emerging digital technology represented by the Internet and new handheld devices such as smartphones and tablets. Although some industries have successfully transferred part of their activity to the digital market and are close to finding a sound business model, the comic books industry is still looking for the right solution and has not yet produced anything as convincing as the music or movie offers. While many young authors and writers use their creativity to produce works designed specifically for digital media, others are focused on preserving and developing the existing heritage. So far, efforts have concentrated on the transfer from printed to digital support, with special attention given to the new media's specific features and how they can be used to create new reading conventions. There have also been concerns about content indexing, a hard task given the large amount of material created since the very beginning of comics history. From a scientific point of view, two issues arise from these goals. The first is to identify the underlying structure of a comic book page, through the extraction of the page's components and their validation and correction based on the representation and reasoning capacities of two ontologies: the first represents image-analysis concepts, the second the comic books domain knowledge. The second issue is the semantic enhancement of the extracted elements, based on their spatial relations to each other and on their own characteristics. These annotations can relate to single elements (e.g., the position of a panel in the reading sequence) or to the link between several elements (e.g., the text pronounced by a character).
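One of the annotations mentioned above, the place of a panel in the reading sequence, can be illustrated with a simplified rule for Franco-Belgian layouts (rows top to bottom, panels left to right within a row); the bounding boxes below are hypothetical, and the thesis's ontology-driven validation is considerably richer:

# Panels are (x, y, w, h) boxes. Group into rows by top-edge proximity,
# then order each row left to right.
def reading_order(panels, row_tolerance=30):
    panels = sorted(panels, key=lambda p: p[1])  # sort by y (top edge)
    rows, current = [], [panels[0]]
    for p in panels[1:]:
        if abs(p[1] - current[0][1]) <= row_tolerance:
            current.append(p)  # same row
        else:
            rows.append(sorted(current, key=lambda q: q[0]))  # by x
            current = [p]
    rows.append(sorted(current, key=lambda q: q[0]))
    return [p for row in rows for p in row]

page = [(310, 20, 280, 200), (10, 20, 290, 200), (10, 240, 580, 300)]
print(reading_order(page))  # top-left, top-right, then bottom strip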
463

Incomplete and uncertain information in relational databases

Zimanyi, Esteban 01 January 1992 (has links)
In real life it is very often the case that the available knowledge is imperfect, in the sense that it represents multiple possible states of the external world, while it is unknown which state corresponds to the actual situation of the world. Imperfect knowledge falls into two categories. Knowledge is incomplete if it represents different states, one of which is true in the external world. On the contrary, knowledge is uncertain if it represents different states which may be satisfied or are likely to be true in the external world.

Imperfect knowledge can be considered from two different perspectives, using either an algebraic or a logical approach. We present both approaches in relation to the standard relational model, providing the necessary background for the subsequent development.

The study of imperfect knowledge has been an active area of research, in particular in the context of relational databases. However, due to the complexity of manipulating imperfect knowledge, few practical results have been obtained so far. This thesis provides a survey of the field of incompleteness and uncertainty in relational databases; it can also serve as an introductory tutorial on the intuitive semantics of such imperfect knowledge and the problems encountered when representing and manipulating it. The survey concentrates on giving a unifying presentation of the different approaches and results found in the literature, thus providing a state of the art of the field.

The rest of the thesis studies in detail the manipulation of one type of incomplete knowledge, namely disjunctive information, and one type of uncertain knowledge, namely probabilistic information. We study both types of imperfect knowledge through similar approaches, that is, through an algebraic and a logical framework. The relational algebra operators are generalized for disjunctive and probabilistic relations, and we prove the correctness of these generalizations. In addition, disjunctive and probabilistic databases are formalized using appropriate logical theories, and we give sound and complete query evaluation algorithms.

A major implication of these studies is the conviction that viewing incompleteness and uncertainty as different facets of the same problem allows a deeper understanding of imperfect knowledge, which is absolutely necessary for building information systems capable of modeling complex real-life situations. / Doctorate in Sciences, Specialization in Computer Science / info:eu-repo/semantics/nonPublished
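As a rough illustration of what generalizing relational algebra to probabilistic relations means, the sketch below attaches a probability to each tuple and assumes tuple independence; the thesis's formal semantics and correctness proofs are more careful than this toy model:

# Each relation is a list of (tuple, probability) pairs.
def p_select(rel, pred):
    # Selection keeps qualifying tuples with their probability unchanged.
    return [(t, p) for (t, p) in rel if pred(t)]

def p_join(r, s, on):
    # Join multiplies probabilities of combined tuples (independence assumed).
    return [((t, u), pt * pu) for (t, pt) in r for (u, pu) in s if on(t, u)]

employees = [({"name": "ana", "dept": 1}, 0.9), ({"name": "bo", "dept": 2}, 0.6)]
depts = [({"dept": 1, "floor": 3}, 0.8)]
print(p_join(employees, depts, on=lambda t, u: t["dept"] == u["dept"]))
# -> [(({'name': 'ana', 'dept': 1}, {'dept': 1, 'floor': 3}), 0.72)]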
464

Decentralising the codification of rules in a decision support expert knowledge base

De Kock, Erika 04 March 2004 (has links)
The paradigm of Decision Support Systems (DSS) is to support decision-making, while an Expert System's (ES) major objective is to provide expert advice in specialised situations. Knowledge-Based DSS (KB-DSS), also called Intelligent Decision Support Systems (IDSS), integrate traditional DSS with the advances of ES. A KB-DSS's knowledge base usually contains knowledge expressed by an expert and captured by a knowledge engineer. This indirect transfer between the domain expert and the knowledge base, through a knowledge engineer, may lead to a long and inefficient knowledge acquisition process. This thesis compares 11 DSS packages in search of a (KB-)DSS generator with which domain experts can specify and maintain a Specific Decision Support System (SDSS) to assist users in making decisions. The proposed (KB-)DSS generator is tested with a university study-program prototype. Since courses and study plans change intermittently, the (KB-)DSS's knowledge base enables domain experts to set and maintain their course and study-plan rules without the assistance of a knowledge engineer. Criteria are set to govern the search for a (KB-)DSS generator. Example knowledge-base rules are inspected to determine whether domain experts will be able to maintain a set of production rules used in a student registration advice system. By developing a prototype and inspecting knowledge-base rules, it was found that domain experts would be able to maintain their knowledge in the decentralised knowledge base, on condition that the objects and attributes used in the rule base were first specified by a builder/programmer. / Dissertation (MSc Computer Science)--University of Pretoria, 2005. / Computer Science / unrestricted
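The kind of production rule the study expects domain experts to maintain can be sketched as follows; the course codes and rule structure here are invented, and in the thesis the underlying objects and attributes are first specified by a builder/programmer:

# Hypothetical registration-advice rules maintained by a domain expert.
rules = [
    {"if": {"passed": "COS110"}, "then": {"may_register": "COS212"}},
    {"if": {"passed": "COS212"}, "then": {"may_register": "COS301"}},
]

def advise(student_passed):
    # Fire every rule whose condition is met by the student's record.
    allowed = set()
    for rule in rules:
        if rule["if"]["passed"] in student_passed:
            allowed.add(rule["then"]["may_register"])
    return allowed

print(advise({"COS110"}))  # {'COS212'}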
465

Modéliser l'insertion territoriale du Miscanthus x giganteus à partir des décisions des agriculteurs : une approche exploitant le modèle du raisonnement à partir de cas / Modelling miscanthus allocation in farmland based on farmers’ decisions : a framework using the case-based reasoning model

Martin, Laura 01 December 2014 (has links)
Miscanthus x giganteus is a perennial crop newly produced in Europe. Although miscanthus is not yet widely grown, the crop is of great interest for energy use, and its allocation could produce a lasting reorganization of the landscape. Many studies therefore aim to model the land-use change caused by miscanthus in order to identify sustainable supply areas; our research belongs to this field. We propose a new framework for modeling farmers' decision-making processes that supports scaling out those processes from case studies to wider territories. More precisely, we use the case-based reasoning model, which solves problems by analogical reasoning. The research is structured around (i) a knowledge-acquisition step about farmers' decision-making processes, based on farm surveys conducted in the Côte d'Or department (Burgundy region), and (ii) the design and evaluation of an ad hoc case-based reasoning prototype. The knowledge-acquisition phase shows that the miscanthus allocation process is complex: it is closely related to land constraints, particularly the logistics and environmental protection of plots. These results lead us to discuss the selection of the biophysical and human variables included in current spatially explicit models. The design and evaluation phase shows that case-based reasoning is particularly well suited to modeling a contextual phenomenon. Evaluated on our survey data, these results lead us to discuss how the prototype could be applied in other miscanthus production areas.
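The retrieval step at the heart of case-based reasoning can be sketched in a few lines: a new plot is matched to the most similar surveyed case and inherits its allocation decision. Features, weights and decisions below are invented; the prototype works on much richer survey-derived cases:

# Weighted similarity over normalized plot features (all in [0, 1]).
def similarity(a, b, weights):
    return sum(w * (1.0 - abs(a[k] - b[k])) for k, w in weights.items())

cases = [  # (plot features, expert's recorded decision)
    ({"distance_to_farm": 0.9, "soil_quality": 0.2}, "allocate miscanthus"),
    ({"distance_to_farm": 0.1, "soil_quality": 0.9}, "keep food crop"),
]
weights = {"distance_to_farm": 0.6, "soil_quality": 0.4}

new_plot = {"distance_to_farm": 0.8, "soil_quality": 0.3}
best = max(cases, key=lambda c: similarity(new_plot, c[0], weights))
print(best[1])  # "allocate miscanthus": the retrieved analogous decision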
466

Système de Mesure Mobile Adaptif Qualifié / Mobile System for Adaptive Qualified Measurement

Bourgeois, Florent 21 March 2018 (has links)
Mobile devices offer measuring capabilities through embedded or connected sensors and are increasingly used in measuring processes. These measurements are critical: they must be reliable, because they may be used in demanding contexts. Despite real demand, there are relatively few applications that assist users with measuring processes based on these sensors. Such an assistant should propose methods for visualising and computing measuring procedures, together with communication functions for handling connected sensors and generating reports. Applications of this kind are rare because of the knowledge required to define correct measuring procedures. That knowledge comes from metrology and measurement theory and is rarely found in software development teams. Moreover, every user has measuring activities specific to their field of work, which implies developing many high-quality applications, possibly requiring expert certification. These premises lead to the research question the present work answers: what approach enables the design of applications suited to specific measurement procedures, given that the measurement procedures may be configured by the end user?

The work proposes a platform for developing measuring assistant applications. The platform ensures the conformity of measuring processes without involving metrology experts. It is built upon concepts from metrology, model-driven engineering and first-order logic. A study of metrology shows the need for expert evaluation of the measuring processes embedded in applications. This evaluation encompasses terms and rules that ensure the integrity and coherence of a process. A conceptual model of the metrology domain is proposed and then employed in the application development process: it is encoded into a first-order-logic knowledge scheme of the metrology concepts. The scheme makes it possible to verify that metrology constraints hold in a given measuring process. Verification is performed by confronting measuring processes with the knowledge scheme in the form of requests, described in a request language provided by the scheme.

Measuring assistant applications must present the user with a measuring process that sequences measuring activities. This implies describing a measuring process and defining its interactive interfaces and sequencing mechanism. An application editor is therefore proposed. The editor uses a domain-specific language dedicated to describing measuring assistant applications, built upon the concepts, formalisms and tools of the metamodeling environment Diagrammatic Predicate Framework (DPF). The language includes syntactic constraints that prevent construction errors at the software level while reducing the semantic gap between the software architect using it and a potential metrology expert.

Mobile platforms then need to execute behaviour conforming to what the editor describes. An implementation modelling language is proposed, describing measuring procedures as sequences of activities that measure, compute and present values. Quantities are all abstracted as numerical values, which eases their computation and the use of sensors. The implementation model is made up of software agents. A mobile application is also proposed, built upon a framework of agents, an agent-network composer and a runtime system. The application takes an implementation model and builds the corresponding agent network, offering behaviour that matches the end user's needs. This makes it possible to answer any user need, provided the user can access the implementation model, without downloading several applications.
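The verification idea, confronting a measuring process with metrology constraints expressed in first-order style, can be illustrated with one invented constraint (every measurement step must carry a unit); the platform's actual knowledge scheme and request language are far more expressive:

# A hypothetical measuring procedure as a sequence of steps.
procedure = [
    {"step": 1, "action": "measure", "quantity": "temperature", "unit": "K"},
    {"step": 2, "action": "measure", "quantity": "pressure", "unit": None},
]

def every_measurement_has_unit(proc):
    # forall s in proc: action(s) = measure -> unit(s) is defined
    return all(s["unit"] is not None for s in proc if s["action"] == "measure")

print(every_measurement_has_unit(procedure))  # False: step 2 violates the rule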
467

VIKA - Konzeptstudien eines virtuellen Konstruktionsberaters für additiv zu fertigende Flugzeugstrukturbauteile / VIKA - Concept studies of a virtual design advisor for additively manufactured aircraft structural components

Steffen, Johann 06 September 2021 (has links)
The subject of this work is the conceptual design of a virtual application that enables users in aircraft structural design, in the context of additive manufacturing, to make important decisions in the part development process interactively and intuitively. Depending on the use case, the application should adapt the information it provides to the particular requirements and needs of the user.
468

Learning-based Attack and Defense on Recommender Systems

Agnideven Palanisamy Sundar (11190282) 06 August 2021 (has links)
The internet is the home for massive volumes of valuable data constantly being created, making it difficult for users to find information relevant to them. In recent times, online users have been relying on the recommendations made by websites to narrow down the options. Online reviews have also become an increasingly important factor in the final choice of a customer. Unfortunately, attackers have found ways to manipulate both reviews and recommendations to mislead users. A Recommendation System is a special type of information filtering system adapted by online vendors to provide suggestions to their customers based on their requirements. Collaborative filtering is one of the most widely used recommendation systems; unfortunately, it is prone to shilling/profile injection attacks. Such attacks alter the recommendation process to promote or demote a particular product. On the other hand, many spammers write deceptive reviews to change the credibility of a product/service. This work aims to address these issues by treating the review manipulation and shilling attack scenarios independently. For the shilling attacks, we build an efficient Reinforcement Learning-based shilling attack method. This method reduces the uncertainty associated with the item selection process and finds the most optimal items to enhance attack reach while treating the recommender system as a black box. Such practical online attacks open new avenues for research in building more robust recommender systems. When it comes to review manipulations, we introduce a method that uses a deep structure embedding approach, preserving highly nonlinear structural information and the dynamic aspects of user reviews, to identify and cluster the spam users. It is worth mentioning that, in the experiment with real datasets, our method captures about 92% of all spam reviewers using an unsupervised learning approach.
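The defense side of the work, clustering users from structure-preserving embeddings, can be sketched as follows, with stand-in random embeddings in place of the deep structure embedding the thesis derives:

# Illustrative only: flag the suspiciously tight cluster as spam candidates.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genuine = rng.normal(0.0, 1.0, size=(95, 8))   # spread-out genuine users
spammers = rng.normal(3.0, 0.05, size=(5, 8))  # near-identical spam profiles
users = np.vstack([genuine, spammers])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(users)
# The cluster with the smallest internal spread is the spam candidate.
spreads = [users[km.labels_ == k].std() for k in range(2)]
print("suspected spam cluster:", int(np.argmin(spreads)))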
469

Data-driven Uncertainty Analysis in Neural Networks with Applications to Manufacturing Process Monitoring

Bin Zhang (11073474) 12 August 2021 (has links)
Artificial neural networks, including deep neural networks, play a central role in data-driven science due to their superior learning capacity and adaptability to different tasks and data structures. However, although quantitative uncertainty analysis is essential for training and deploying reliable data-driven models, the uncertainties in neural networks are often overlooked or underestimated in many studies, mainly due to the lack of a high-fidelity and computationally efficient uncertainty quantification approach. In this work, a novel uncertainty analysis scheme is developed. The Gaussian mixture model is used to characterize the probability distributions of uncertainties in arbitrary forms, which yields higher fidelity than presumed distribution forms, such as Gaussian, when the underlying uncertainty is multimodal, and is more compact and efficient than large-scale Monte Carlo sampling. The fidelity of the Gaussian mixture is refined through adaptive scheduling of the width of each Gaussian component, based on active assessment of the factors that could deteriorate the quality of the uncertainty representation, such as the nonlinearity of activation functions in the neural network.

Following this idea, an adaptive Gaussian mixture scheme of nonlinear uncertainty propagation is proposed to effectively propagate the probability distributions of uncertainties through layers in deep neural networks or through time in recurrent neural networks. An adaptive Gaussian mixture filter (AGMF) is then designed based on this uncertainty propagation scheme. By approximating the dynamics of a highly nonlinear system with a feedforward neural network, the adaptive Gaussian mixture refinement is applied at both the state prediction and Bayesian update steps to closely track the distribution of unmeasurable states. As a result, this new AGMF exhibits state-of-the-art accuracy with a reasonable computational cost on highly nonlinear state estimation problems subject to high magnitudes of uncertainties. Next, a probabilistic neural network with Gaussian-mixture-distributed parameters (GM-PNN) is developed. The adaptive Gaussian mixture scheme is extended to refine intermediate layer states and to ensure the fidelity of both linear and nonlinear transformations within the network, so that the predictive distribution of the output target can be inferred directly without sampling or approximate integration. The derivatives of the loss function with respect to all probabilistic parameters in this network are derived explicitly; therefore, the GM-PNN can be easily trained with any backpropagation method to address practical data-driven problems subject to uncertainties.

The GM-PNN is applied to two data-driven condition monitoring schemes for manufacturing processes. For tool wear monitoring in the turning process, a systematic feature normalization and selection scheme is proposed for engineering optimal feature sets extracted from sensor signals. Predictive tool wear models are established using two methods: a type-2 fuzzy network for interval-type uncertainty quantification, and the GM-PNN for probabilistic uncertainty quantification. For porosity monitoring in laser additive manufacturing processes, a convolutional neural network (CNN) is used to learn patterns directly from melt-pool images to predict porosity. Classical CNN models without consideration of uncertainty are compared with CNN models in which the GM-PNN is embedded as an uncertainty quantification module. For both monitoring schemes, experimental results show that the GM-PNN not only achieves higher prediction accuracies of process conditions than the classical models, but also provides more effective uncertainty quantification to facilitate process-level decision-making in the manufacturing environment.

Based on the developed uncertainty analysis methods and their proven successes in practical applications, some directions for future studies are suggested. Closed-loop control systems may be synthesized by combining the AGMF with data-driven controller design. The AGMF can also be extended from a state estimator to parameter estimation problems in data-driven models. In addition, the GM-PNN scheme may be expanded to directly build more complicated models such as convolutional or recurrent neural networks.
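The core propagation idea can be illustrated by pushing each mixture component through a nonlinear activation with first-order moment matching; the adaptive scheme described above would additionally split or reschedule components where the activation's curvature degrades this approximation. All numbers below are illustrative:

# Propagate a 1-D Gaussian mixture through tanh, component by component.
import numpy as np

def propagate_component(mu, var, f, df):
    # First-order moment matching: y ~ N(f(mu), f'(mu)^2 * var)
    return f(mu), (df(mu) ** 2) * var

tanh = np.tanh
dtanh = lambda x: 1.0 - np.tanh(x) ** 2  # derivative of tanh

mixture = [(0.6, -1.0, 0.20), (0.4, 1.5, 0.10)]  # (weight, mean, variance)
propagated = [(w, *propagate_component(m, v, tanh, dtanh)) for w, m, v in mixture]
for w, m, v in propagated:
    print(f"weight={w:.1f}  mean={m:+.3f}  var={v:.4f}")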
470

Fuzzy Petriho sítě pro expertní systémy / Fuzzy Petri Nets for Expert Systems

Maksant, Jindřich January 2009 (has links)
This thesis proposes and implements an expert system whose knowledge base is modelled by fuzzy Petri nets. The design builds on a theoretical analysis of diagnostic expert systems and fuzzy Petri nets, and is implemented in the C# programming language. The functions of the program are described, and a model consultation is carried out using two different knowledge bases.
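A minimal fuzzy Petri net inference step of the kind used in diagnostic expert systems can be sketched as below, in Python rather than the C# used in the thesis; the places, transition and certainty factor are invented:

# A transition fires with the minimum of its input token values,
# scaled by the rule's certainty factor.
def fire(transition, marking):
    inputs = [marking[p] for p in transition["inputs"]]
    degree = min(inputs) * transition["certainty"]
    out = transition["output"]
    marking[out] = max(marking.get(out, 0.0), degree)
    return marking

marking = {"fever": 0.8, "cough": 0.6}
t1 = {"inputs": ["fever", "cough"], "output": "flu", "certainty": 0.9}
print(fire(t1, marking))  # flu gets min(0.8, 0.6) * 0.9 = 0.54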
