1 |
Extending the Abstract Data Model. Winegar, Matthew Bryston, 07 May 2005 (has links) (PDF)
The Abstract Data Model (ADM) was developed by Sanderson [19] to model and predict semantic loss in data translation between computer languages. In this work, the ADM was applied to eight languages that were not considered as part of the original work. Some of the languages were found to support semantic features, such as the restriction semantics for inheritance found in languages like XML Schemas and Java, which could not be represented in the ADM. A proposal was made to extend the ADM to support these semantic features, and the requirements and implications of implementing that proposal were considered.
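To make this kind of analysis concrete, here is a toy sketch (ours, not Sanderson's actual taxonomy or the thesis's method) of checking a language's semantic features against what a model can represent:

```python
# Hypothetical sketch (not from the thesis): checking whether a target
# language's semantic features can be represented in a model like the ADM.
# Feature names here are illustrative, not Sanderson's actual taxonomy.

ADM_FEATURES = {"inheritance", "aggregation", "typed_attributes"}

LANGUAGE_FEATURES = {
    "XML Schema": {"inheritance", "typed_attributes", "inheritance_by_restriction"},
    "Java":       {"inheritance", "typed_attributes", "inheritance_by_restriction"},
}

def unrepresentable(language: str) -> set:
    """Return the features of `language` the model cannot capture."""
    return LANGUAGE_FEATURES[language] - ADM_FEATURES

# Both languages expose restriction-based inheritance, which falls outside
# the illustrative ADM feature set and would be lost in translation.
print(sorted(unrepresentable("XML Schema")))  # ['inheritance_by_restriction']
```

A gap detected this way is exactly the kind of semantic loss the thesis proposes to eliminate by extending the model.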
|
2 |
Functional Ontologies and Their Application to Hydrologic Modeling: Development of an Integrated Semantic and Procedural Knowledge Model and Reasoning Engine. Byrd, Aaron R., 01 August 2013 (has links)
This dissertation presents research and development of new concepts and techniques for modeling hydrologic knowledge, so that models can operate in terms of conceptual abstractions and have those abstractions translate to the data, tools, and models hydrologists use every day. This hydrologic knowledge includes conceptual (i.e., semantic) knowledge, such as the concepts and relationships of the hydrologic cycle, as well as functional (i.e., procedural) knowledge, such as how to compute the area of a watershed polygon, the average basin slope, or the topographic wetness index. This dissertation is presented as three papers and a reference manual for the software created. Because hydrologic knowledge includes both semantic and procedural aspects, we developed, in the first paper, a new form of reasoning engine and knowledge base that extends the general-purpose analysis and problem-solving capability of reasoning engines by incorporating procedural knowledge, represented as computer source code, into the knowledge base. The reasoning engine is able to compile the code and then, if need be, execute it as part of a query. The potential advantage of this approach is that it describes procedural knowledge in a form the reasoning engine can readily use to answer a query. Further, since procedural knowledge is represented as source code, it has the full capabilities of the underlying language. We use the term "functional ontology" to refer to the new semantic and procedural knowledge models. The first paper applies the new knowledge model to describing and analyzing polygons. The second and third papers address the application of the new functional-ontology reasoning engine and knowledge model to hydrologic applications.
The second paper models concepts and procedures, including running external software, related to watershed delineation. The third paper models a project scenario that involves integrating several models. A key advance demonstrated in the third paper is the use of functional ontologies to apply metamodeling concepts in a manner that both abstracts and fully utilizes computational models and data sets as part of the project modeling process.
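The core mechanism of the first paper (storing procedural knowledge as source code that the reasoning engine compiles and executes on demand) can be sketched minimally as follows; the names and API are ours, not the dissertation's:

```python
# Minimal sketch of the "functional ontology" idea described above: a
# knowledge base that stores procedural knowledge as source code, which a
# reasoning engine compiles and executes to answer a query. Names are
# illustrative, not the dissertation's actual API.

knowledge_base = {
    # Semantic knowledge: a plain fact.
    ("watershed", "is_a"): "polygon",
    # Procedural knowledge: source code for the shoelace area formula.
    ("polygon", "area"): (
        "def area(points):\n"
        "    n = len(points)\n"
        "    s = sum(points[i][0] * points[(i + 1) % n][1]\n"
        "            - points[(i + 1) % n][0] * points[i][1]\n"
        "            for i in range(n))\n"
        "    return abs(s) / 2\n"
    ),
}

def query(concept, property_name, *args):
    value = knowledge_base[(concept, property_name)]
    if isinstance(value, str) and value.startswith("def "):
        namespace = {}
        exec(compile(value, "<kb>", "exec"), namespace)  # compile the code...
        return namespace[property_name](*args)           # ...then run it
    return value

# A unit square has area 1.
print(query("polygon", "area", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```

Because the stored value is ordinary source code, the "procedure" has the full power of the host language, which is the advantage the abstract claims for this representation.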
|
3 |
Streams, Structures, Spaces, Scenarios, and Societies (5S): A Formal Digital Library Framework and Its Applications. Gonçalves, Marcos André, 08 December 2004 (links)
Digital libraries (DLs) are complex information systems and therefore demand formal foundations, lest development efforts diverge and interoperability suffer. In this dissertation, we propose the fundamental abstractions of Streams, Structures, Spaces, Scenarios, and Societies (5S), which allow us to define digital libraries rigorously and usefully. Streams are sequences of arbitrary items used to describe both static and dynamic (e.g., video) content. Structures can be viewed as labeled directed graphs, which impose organization. Spaces are sets with operations that obey certain constraints. Scenarios consist of sequences of events or actions that modify the states of a computation in order to accomplish a functional requirement. Societies are sets of entities and activities, and the relationships among them. Together these abstractions provide a formal foundation to define, relate, and unify concepts -- among others, digital objects, metadata, collections, and services -- required to formalize and elucidate "digital libraries". A digital library theory based on 5S is defined by proposing a formal ontology that defines the fundamental concepts, relationships, and axiomatic rules that govern the DL domain. The ontology is an axiomatic, formal treatment of DLs, which distinguishes it from other approaches that informally define a number of architectural invariants. The applicability, versatility, and unifying power of the 5S theory are demonstrated through its use in a number of distinct applications, including: 1) building and interpreting a DL taxonomy; 2) informal and formal analysis of case studies of digital libraries (NDLTD and OAI); 3) utilization as a formal basis for a DL description language, digital library visualization and generation tools, and a log format specific to DLs; and 4) defining a quality model for DLs. / Ph. D.
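The five abstractions can be rendered in a toy form (our illustration; the dissertation gives rigorous set-theoretic definitions, not Python objects):

```python
# A toy rendering of the 5S abstractions: streams as sequences, structures
# as labeled directed graphs, spaces as sets with operations, scenarios as
# event sequences that modify state, societies as entities plus roles.
# This is an editorial illustration, not the dissertation's formalism.

stream = ["page-1", "page-2", "page-3"]          # a sequence of items

structure = {                                     # labeled directed graph
    ("chapter-1", "contains"): ["page-1", "page-2"],
    ("chapter-2", "contains"): ["page-3"],
}

space = set(stream)                               # a set with operations

def scenario(state, events):                      # events modify state
    for event in events:
        state = event(state)
    return state

society = {"readers": {"alice"}, "curators": {"bob"}}

# A digital object can then be checked for consistency against the structure:
pages_in_chapters = [p for v in structure.values() for p in v]
print(sorted(space) == sorted(pages_in_chapters))  # True
```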
|
4 |
Technologies sémantiques pour un système actif d’apprentissage / Semantic Technologies for an Active Learning System. Szilagyi, Ioan, 26 March 2014 (has links)
Learning methods keep evolving, and new paradigms are being added to traditional teaching models, with information and communication systems, particularly the Web, as an essential part. To improve the information-processing capacity of these systems, the Semantic Web defines a model for describing resources (Resource Description Framework, RDF) and a language for defining ontologies (Web Ontology Language, OWL). Starting from learning concepts, methods, and theories, and following a systemic approach, we used Semantic Web technologies to build a learning platform able to enrich and personalize the learner's experience. The result of our work is a prototype for an Active Semantic Learning System (SASA, from the French "Système Actif et Sémantique d'Apprentissage"). After identifying and modeling the entities involved in the learning process, we built six ontologies capturing the characteristics of these entities: (1) a learner ontology, (2) a learning object ontology, (3) a learning objective ontology, (4) an evaluation object ontology, (5) an annotation object ontology, and (6) a learning framework ontology. Rules integrated into the declared ontologies, combined with the reasoning capabilities of the inference engines embedded in the semantic kernel of the SASA, allow learning content to be adapted to the characteristics of each learner. The use of semantic technologies also facilitates the identification of existing learning resources on the Web, as well as the interpretation and aggregation of those resources within the SASA.
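The rule-driven adaptation can be sketched as follows; the property names are invented, and SASA's actual ontologies are OWL models queried by an inference engine, not Python dicts:

```python
# Hedged sketch of rule-driven content adaptation: facts drawn from a
# "learner ontology" and a "learning object ontology" are combined by a
# simple rule to select content. All names here are invented for the
# example; they are not SASA's actual classes or properties.

learner = {"level": "beginner", "language": "fr"}

learning_objects = [
    {"id": "lo-1", "level": "beginner", "language": "fr"},
    {"id": "lo-2", "level": "advanced", "language": "fr"},
    {"id": "lo-3", "level": "beginner", "language": "en"},
]

def recommend(learner, objects):
    """Rule: an object suits a learner if level and language both match."""
    return [o["id"] for o in objects
            if o["level"] == learner["level"]
            and o["language"] == learner["language"]]

print(recommend(learner, learning_objects))  # ['lo-1']
```

In the real system the same selection falls out of inference over OWL ontologies and rules rather than explicit filtering code.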
|
5 |
Specification, Configuration and Execution of Data-intensive Scientific Applications. Kumar, Vijay Shiv, 14 December 2010 (has links)
No description available.
|
6 |
Modélisation des connaissances et raisonnement à base d'ontologies spatio-temporelles : application à la robotique ambiante d'assistance / Knowledge modeling and reasoning based on spatio-temporal ontologies: application to ambient assisted robotics. Ayari, Naouel, 15 December 2016 (has links)
In this thesis, we propose a generic framework for modeling and managing context in ambient intelligent and robotic systems. The contextual knowledge considered is of several types and derived from multimodal perception: spatial and/or temporal knowledge, changes in the states and properties of entities, and statements in natural language. To this end, we propose an extension of the Narrative Knowledge Representation and Reasoning (NKRL) language that achieves a unified representation of contextual knowledge, whether spatial, temporal, or spatio-temporal, and supports the associated reasoning. We exploit the expressiveness of the n-ary ontologies on which NKRL is based to address the problems encountered by approaches that represent spatial and dynamic knowledge with binary ontologies, as commonly used in ambient intelligence and robotics. The result is a richer, finer-grained, and more coherent model of context, enabling better adaptation of user-assistance services in ambient intelligent and robotic systems. The first contribution concerns the modeling of spatial and/or temporal knowledge and contextual changes, together with spatial, temporal, and spatio-temporal inference. The second contribution is a methodology for syntactic processing and semantic annotation that extracts spatial or temporal contextual knowledge in NKRL from natural-language statements. 
These contributions were validated and evaluated in terms of performance (processing time, error rate, and user-satisfaction rate) in scenarios involving different forms of service: well-being assistance, social assistance, and assistance with preparing a meal.
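The advantage of n-ary over binary representations that the thesis exploits can be illustrated as follows (our toy encoding, not actual NKRL syntax):

```python
# Illustration (ours, not NKRL syntax) of why n-ary structures suit
# contextual events: "the robot moves the cup to the kitchen at 10:00"
# is one predicate occurrence with named roles, whereas binary triples
# must reify an intermediate event node to say the same thing.

nary_event = {
    "predicate": "MOVE",
    "agent": "robot-1",
    "object": "cup-3",
    "destination": "kitchen",
    "time": "10:00",
}

# Binary-triple encoding of the same content (reified around event-42):
triples = [
    ("event-42", "type", "MOVE"),
    ("event-42", "agent", "robot-1"),
    ("event-42", "object", "cup-3"),
    ("event-42", "destination", "kitchen"),
    ("event-42", "time", "10:00"),
]

def roles(event_id, triples):
    """Reassemble the role fillers scattered across binary triples."""
    return {p: o for s, p, o in triples if s == event_id and p != "type"}

# Both encodings carry the same role fillers, but the n-ary form keeps
# the event together as a single structured unit.
print(roles("event-42", triples) == {k: v for k, v in nary_event.items()
                                     if k != "predicate"})  # True
```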
|
7 |
Spatial decision support in urban environments using machine learning, 3D geo-visualization and semantic integration of multi-source data / Aide à la décision spatiale dans les environnements urbains à l'aide du machine learning, de la géo-visualisation 3D et de l'intégration sémantique de données multi-sources. Sideris, Nikolaos, 26 November 2019 (has links)
The constantly increasing amount and availability of urban data derived from varied sources leads to an assortment of challenges, including the consolidation, visualization, and maximal exploitation of those data. A preeminent problem affecting urban planning is the appropriate choice of location to host a particular activity (either a commercial or a common-welfare service), or the correct use of an existing building or empty space. In this thesis we propose an approach that addresses these challenges with machine learning techniques, using the random forests classifier as the dominant method in a system that combines and merges various types of data from different sources, encodes them using a novel semantic model able to capture and utilize both low-level geometric information and higher-level semantic information, and subsequently feeds them to the random forests classifier. The data are also forwarded to alternative classifiers, and the results are appraised to confirm the advantage of the proposed method. The data stem from a multitude of sources, e.g. open-data providers and public organizations dealing with urban planning. Upon retrieval and inspection at various levels (e.g. import, conversion, geospatial), they are appropriately converted to comply with the rules of the semantic model and the technical specifications of the corresponding subsystems. 
Geometric and geographic calculations are performed and semantic information is extracted. Finally, the information from the earlier stages, along with the results of the machine learning techniques and the multicriteria methods, is integrated into the system and visualized in a front-end web environment able to execute and visualize spatial queries and to manage three-dimensional georeferenced objects, including their retrieval, transformation, and visualization, serving as a decision support system.
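The ensemble idea behind the random forests classifier can be sketched with a toy majority vote (invented features and thresholds; the thesis trains a real random forest on far richer multi-source urban data):

```python
# Toy ensemble sketch: several weak classifiers ("stumps") vote on whether
# a candidate location suits an activity, and the majority wins, which is
# the prediction rule a random forest uses over its trees. Feature names
# and thresholds are invented for this illustration.
from collections import Counter

def stump_footfall(site):    return site["footfall"] > 1000
def stump_rent(site):        return site["rent"] < 50
def stump_competitors(site): return site["competitors"] < 3

FOREST = [stump_footfall, stump_rent, stump_competitors]

def predict_suitable(site):
    votes = Counter(tree(site) for tree in FOREST)
    return votes[True] > votes[False]        # majority vote

site = {"footfall": 1500, "rent": 60, "competitors": 1}
print(predict_suitable(site))  # True (2 of 3 votes)
```

A real random forest additionally trains each tree on a bootstrap sample with random feature subsets, which is what makes the ensemble robust to noisy, heterogeneous inputs.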
|
8 |
IFC-Based Systems and Methods to Support Construction Cost Estimation. Temitope Akanbi (10776249), 10 May 2021 (has links)
Cost estimation is an integral part of any project, and accuracy in the cost estimation process is critical to a successful project. Manually computing cost estimates is mentally draining, difficult, and error-prone, and it requires experience. Automated techniques can improve the accuracy of estimates and vastly improve the cost estimation process. Two main gaps in the automation of construction cost estimation are: (1) the lack of interoperability between different software platforms, and (2) the need for manual input to complete quantity take-off (QTO) and cost estimation. To address these gaps, this research proposed new systems to support cost estimation using Model View Definition (MVD)-based checking, industry foundation classes (IFC) geometric analysis, logic-based reasoning, natural language processing (NLP), and automated 3D image generation, reducing or eliminating the labor-intensive, tedious manual effort needed to complete construction cost estimation. In this research, new IFC-based systems were developed: (1) Modeling, an automated IFC-based system for generating 3D information models from 2D PDF plans; (2) QTO, a construction MVD specification for IFC model checking in preparation for cost estimation analysis, together with a new algorithm development method that computes quantities through geometric analysis of wooden building objects in an IFC-based building information model (BIM) and extracts the material variables needed for cost estimation through NLP-based item matching; and (3) Costing, an ontology-based cost model that extracts design information from construction specifications and uses the extracted information to retrieve material pricing for robust cost information provision.

These systems were tested on different projects. Compared with the industry's current practice, the developed systems were more robust in the automated processing of drawings, specifications, and IFC models to compute material quantities and generate cost estimates. Experimental results showed that: (1) Modeling: the developed component can be used to build algorithms that generate 3D models and IFC output files from Portable Document Format (PDF) bridge drawings in a semi-automated fashion; the algorithms took 3.33% of the time required by the current state-of-the-art method to generate a 3D model, and the generated models were of comparable quality; (2) QTO: the results obtained with the developed component were consistent with state-of-the-art commercial software, but more robust across the different BIM authoring tools and workflows used; (3) Extraction: the extraction algorithms achieved 99.2% precision and 99.2% recall (i.e., 99.2% F1-measure) for extracted design information instances, and 100% precision and 96.5% recall (i.e., 98.2% F1-measure) for materials extracted from the database; and (4) Costing: the costing algorithms successfully computed the cost estimates and reduced the need for manual input in matching building components with cost items.
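Two of the QTO steps described above, quantity take-off from geometry and NLP-based item matching, can be sketched as follows (attribute names, prices, and the matching heuristic are invented for illustration; the thesis operates on real IFC models and cost databases):

```python
# Hedged sketch of two QTO steps: (1) computing a quantity from the
# geometry of a building object, (2) matching the object's material string
# to a cost-database item by word-token overlap. All values are invented.

def takeoff_volume(profile_area_m2, extrusion_depth_m):
    """Volume of an extruded solid (the shape of many framing members)."""
    return profile_area_m2 * extrusion_depth_m

COST_DB = {"dimension lumber, spruce": 420.0, "plywood sheathing": 310.0}

def match_item(material, db):
    """Pick the database item sharing the most word tokens with `material`."""
    tokens = set(material.lower().split())
    return max(db, key=lambda item: len(tokens & set(item.split())))

volume = takeoff_volume(0.035, 2.4)        # a wooden member, cubic metres
item = match_item("Spruce dimension lumber", COST_DB)
cost = volume * COST_DB[item]              # quantity x matched unit price
print(item)  # dimension lumber, spruce
```

The thesis's matching component uses proper NLP rather than token overlap, which is why it tolerates the varied wording found across BIM authoring tools.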
|
9 |
Representação de conhecimento : programação em lógica e o modelo das hiperredes / Knowledge representation: logic programming and the hypernets model. Palazzo, Luiz Antonio Moro, January 1991 (has links)
In spite of its inherent undecidability and the negation problem, extensions of first-order logic have been shown to overcome the problem of monotonicity, establishing knowledge representation schemata of virtually universal expressiveness. However, one still has to solve, or at least mitigate, the consequences of the control problem, which constrains the use of logic-based systems to small or medium-sized applications. Investigations in this direction [BOW 85] [MON 88] indicate that the key to overcoming the inferential explosion lies in proper structuring of knowledge, so that some control can be exercised over the possible derivations. The hypernets model [GEO 85] seems to reach this goal, given its high structural power and the features it offers for handling descriptive, operational, and organizational constructs. Besides, the simplicity and syntactic uniformity of its primitive entities allow a very clear semantic interpretation of the model, based, for instance, on graphs. This work is an attempt to associate logic programming with the hypernets formalism, in order to obtain a new model that preserves the expressiveness of the former while benefiting from the heuristic and structural power of the latter. First we seek a clear notion of the nature of knowledge and its mechanisms, in order to characterize the knowledge representation problem. Several knowledge representation schemata currently in use (production systems, semantic networks, frame systems, logic programming, and the Krypton language) are studied and characterized from the point of view of their expressiveness, heuristic power, and notational convenience. Logic programming is the subject of a deeper study, under both the model-theoretic and proof-theoretic approaches. 
Logic programming systems, in particular the Prolog language and meta-level extensions, are investigated as knowledge representation schemata, considering their syntactic and semantic aspects and their relation to database management systems. The hypernets model is presented, introducing, among others, the concepts of hypernode, hyperrelation, and prototype, as well as the particular properties of these entities. The Hyper language, for handling hypernets, is formally specified. Prolog is used as a formalism for representing knowledge bases structured according to the hypernets model. Under this approach a knowledge base is seen as a (possibly empty) set of structured objects, or pieces of knowledge, which are classified as hypernodes, hyperrelations, or prototypes. A top-down mechanism for producing inferences on hypernets is proposed, introducing the concepts of aspect and vision over hypernets, which are treated as first-class objects in the sense that they can be assigned as values to variables. We study the requirements a knowledge base management system must meet, from the points of view of the application, knowledge engineering, and implementation, to effectively support the concepts and abstractions (classification, generalization, association, and aggregation) associated with the proposed model. Based on these conclusions, a knowledge base management system (called Rhesus, in allusion to its experimental purpose) is proposed and specified, with the aim of confirming the technical viability of developing applications based on logic and hypernets.
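The pairing of logic programming with hypernets can be loosely transliterated into a procedural sketch (ours; the thesis works in Prolog with a formally specified Hyper language):

```python
# Rough transliteration (our sketch, not the thesis's Prolog code) of the
# hypernets idea: a knowledge base of hypernodes and hyperrelations, with
# a top-down inference that follows relations from a starting node.

hypernodes = {"mammal", "dog", "rex"}

hyperrelations = [
    ("is_a", ("dog", "mammal")),
    ("instance_of", ("rex", "dog")),
]

def derive(node, relation):
    """Top-down step: nodes reachable from `node` through `relation`."""
    return {dst for name, (src, dst) in hyperrelations
            if name == relation and src == node}

def ancestors(node):
    """Transitive closure over is_a/instance_of: a simple derivation."""
    found, frontier = set(), {node}
    while frontier:
        current = frontier.pop()
        for rel in ("is_a", "instance_of"):
            for parent in derive(current, rel) - found:
                found.add(parent)
                frontier.add(parent)
    return found

print(sorted(ancestors("rex")))  # ['dog', 'mammal']
```

Structuring the knowledge base this way is what lets the control strategy limit which derivations are attempted, the thesis's answer to the inferential explosion.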
|