371

Toward semantic interoperability for software systems

Lister, Kendall January 2008 (has links)
“In an ill-structured domain you cannot, by definition, have a pre-compiled schema in your mind for every circumstance and context you may find ... you must be able to flexibly select and arrange knowledge sources to most efficaciously pursue the needs of a given situation.” [57] / In order to interact and collaborate effectively, agents, whether human or software, must be able to communicate through common understandings and compatible conceptualisations. Ontological differences that occur either from pre-existing assumptions or as side-effects of the process of specification are a fundamental obstacle that must be overcome before communication can occur. Similarly, the integration of information from heterogeneous sources is an unsolved problem. Efforts have been made to assist integration, through both methods and mechanisms, but automated integration remains an unachieved goal. Communication and information integration are problems of meaning and interaction, or semantic interoperability. This thesis contributes to the study of semantic interoperability by identifying, developing and evaluating three approaches to the integration of information. These approaches have in common that they are lightweight in nature, pragmatic in philosophy and general in application. / The first work presented is an effort to integrate a massive, formal ontology and knowledge-base with semi-structured, informal heterogeneous information sources via a heuristic-driven, adaptable information agent. The goal of the work was to demonstrate a process by which task-specific knowledge can be identified and incorporated into the massive knowledge-base in such a way that it can be generally re-used. The practical outcome of this effort was a framework that illustrates a feasible approach to providing the massive knowledge-base with an ontologically-sound mechanism for automatically generating task-specific information agents to dynamically retrieve information from semi-structured information sources without requiring machine-readable meta-data. / The second work presented is based on reviving a previously published and neglected algorithm for inferring semantic correspondences between fields of tables from heterogeneous information sources. An adapted form of the algorithm is presented and evaluated on relatively simple and consistent data collected from web services in order to verify the original results, and then on poorly-structured and messy data collected from web sites in order to explore the limits of the algorithm. The results are presented via standard measures and are accompanied by detailed discussions on the nature of the data encountered and an analysis of the strengths and weaknesses of the algorithm and the ways in which it complements other approaches that have been proposed. / Acknowledging the cost and difficulty of integrating semantically incompatible software systems and information sources, the third work presented is a proposal and a working prototype for a web site to facilitate the resolving of semantic incompatibilities between software systems prior to deployment, based on the commonly-accepted software engineering principle that the cost of correcting faults increases exponentially as projects progress from phase to phase, with post-deployment corrections being significantly more costly than those performed earlier in a project’s life. The barriers to collaboration in software development are identified and steps taken to overcome them. 
The system presented draws on the recent collaborative successes of social and collaborative on-line projects such as SourceForge, Del.icio.us, digg and Wikipedia and a variety of techniques for ontology reconciliation to provide an environment in which data definitions can be shared, browsed and compared, with recommendations automatically presented to encourage developers to adopt data definitions compatible with previously developed systems. / In addition to the experimental works presented, this thesis contributes reflections on the origins of semantic incompatibility with a particular focus on interaction between software systems, and between software systems and their users, as well as detailed analysis of the existing body of research into methods and techniques for overcoming these problems.
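The field-correspondence idea behind the second work can be illustrated with a small sketch. This is not the thesis's algorithm; it is an assumed, simplified value-overlap heuristic, with made-up table and column names, meant only to show what "inferring semantic correspondences between fields" means in practice.

```python
# Illustrative sketch only: a simplified value-overlap heuristic for guessing
# which columns of two heterogeneous tables correspond. The thesis adapts a
# previously published algorithm; this is not that algorithm, just the idea.

def jaccard(a, b):
    """Jaccard similarity between two sets of cell values."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_columns(table_a, table_b):
    """Score every column pairing between two tables, best first.

    Each table is a dict mapping column name -> list of values.
    """
    scores = []
    for col_a, vals_a in table_a.items():
        for col_b, vals_b in table_b.items():
            scores.append((jaccard(vals_a, vals_b), col_a, col_b))
    return sorted(scores, reverse=True)

# Hypothetical data from two web sources describing the same books.
catalogue = {"title": ["Dune", "Emma"], "yr": [1965, 1815]}
shop      = {"name":  ["Dune", "Emma"], "price": [9.99, 7.50]}
print(match_columns(catalogue, shop)[0])  # -> (1.0, 'title', 'name')
```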
372

Αναπαράσταση γνώσης : επεκτάσεις στην αλλαγή πεποιθήσεων / Knowledge representation : extensions in belief change

Φωτεινόπουλος, Αναστάσιος Μιχαήλ 19 May 2011 (has links)
Η Αλλαγή Πεποιθήσεων είναι το πεδίο που ασχολείται, μελετά και τυποποιεί ένα πλήθος διαδικασιών της συλλογιστικής σκέψης. Οι θεμελιώδεις αρχές της βρίσκονται σε διάφορα φιλοσοφικά συστήματα της περιόδου της αρχαιότητας. Ωστόσο, η σύγχρονη προβληματική που αναπτύσσεται γύρω από το πεδίο αυτό και που καλείται να αντιμετωπίσει εντάσσεται στην ευρύτερη περιοχή της Αναπαράστασης της Γνώσης. Στα μέσα της δεκαετίας του 80 και ύστερα από την προσπάθεια μετάβασης σε πιο συστηματικές και μαθηματικές προσεγγίσεις, η Αλλαγή Πεποιθήσεων αποκτά την τελική της μορφή. Ο όρος Αλλαγή διαιρείται σε τρεις ευρείες υπό-ενότητες: την πρόσθεση, την αφαίρεση και την αναθεώρηση. Η πρόσθεση αναφέρεται στη συλλογή νέων πληροφοριών (επέκταση πεποιθήσεων), η αφαίρεση την απώλεια πληροφορίας, ενώ η αναθεώρηση ερμηνεύει τη μερική ή ολική αλλαγή στο σύνολο των πεποιθήσεών μας, εξαιτίας της εμφάνισης μίας νέας πεποίθησης. Κάθε διαδικασία Αλλαγής συνοδεύεται από ένα σύνολο ορθολογικών αξιωμάτων. Τα αξιώματα διατυπώθηκαν με κύριο σκοπό την ομαδοποίηση, ταξινόμηση και περιορισμό των συλλογιστικών μας ενεργειών. Εκτός από τους τύπους αλλαγών και τα σύνολα των αξιωμάτων που αναφέρθηκαν στο χώρο της Αλλαγής Πεποιθήσεων υπάρχουν και άλλες σημαντικές - συμπληρωματικές διαδικασίες. Μία από τις πιο γνωστές και επωφελείς είναι αυτή της Επαναλαμβανόμενης Αναθεώρησης. Ενώ η απλή αναθεώρηση ερμηνεύει καταστάσεις που προξενούνται από την εμφάνιση μίας και μόνο πληροφορίας, η επαναλαμβανόμενη αναθεώρηση διασαφηνίζει περιπτώσεις μάθησης μέσα από το φάσμα των διαδοχικών πεποιθήσεων. Η παρούσα διατριβή θα μπορούσε να διαιρεθεί σε τρεις μεγάλες κατηγορίες. Η πρώτη εξετάζει συστηματικά τις διάφορες μεθόδους και τεχνικές που αναφέρονται στη διεθνή βιβλιογραφία. Η δεύτερη περιλαμβάνει την κυριότερη ερευνητική μας συνεισφορά καθώς και οι προτάσεις μας πάνω σε ανοικτά προβλήματα της Αλλαγής των Πεποιθήσεων. Πιο συγκεκριμένα, στο αρχικό στάδιο της έρευνάς μας αποτυπώνεται η προσπάθεια σύνδεσης της αναθεώρησης με την επαναλαμβανόμενη αναθεώρηση πεποιθήσεων. Η σύνδεση αυτή επιτυγχάνεται με την εισαγωγή ενός νέου αξιώματος που ονομάζουμε αξίωμα επαναλαμβανόμενης ανάκτησης. Αποδεικνύεται ότι το αξίωμα της επαναλαμβανόμενης ανάκτησης μπορεί να χρησιμοποιηθεί σε πολλές περιπτώσεις κατά τις οποίες το δεύτερο αξίωμα (DP2) των Darwiche και Pearl χαρακτηρίζεται αρκετά ισχυρό. Αποδεικνύουμε επίσης την ορθότητα και πληρότητα του παραπάνω αξιώματος μέσα από το σύστημα σφαιρών του Adam Grove. Στη συνέχεια η έρευνά μας στρέφεται στην προσπάθεια σύνδεσης δύο πολύ σημαντικών περιοχών στην αλλαγή πεποιθήσεων: την Επαναλαμβανόμενη και τη Relevance-Sensitive αναθεώρηση πεποιθήσεων. Τα αποτελέσματα της απόδειξης αφενός αποκαλύπτουν την ύπαρξη μη-συνέπειας μεταξύ τους αξιώματος (P) για τη Relevanse-Sensitive αναθεώρηση πεποιθήσεων με κάθε ένα από τα (DP) αξιώματα της επαναλαμβανόμενης αναθεώρησης πεποιθήσεων, αφετέρου αξιώνουν μία αναγκαία και γενικότερη αποκατάσταση στα τυπικά μοντέλα της αλλαγής πεποιθήσεων. Ακόμη μπορεί να αποδοθεί στη δική μας έρευνα και κάτι διαφορετικό, σε σχέση με τις άλλες: ότι η διαδικασία της αφαίρεσης πεποιθήσεων βασίζεται σε Horn Clauses. Ωστόσο, η αμιγής ερευνητική μας προσπάθεια αναφέρεται την παροχή σημασιολογίας βασιζόμενη σε διατάξεις πιθανών κόσμων για τη διαδικασία του e-contraction που εισήγαγε ο James Delgrande. Η Τρίτη κατηγορία, τέλος, αποβλέπει στην παρουσίαση της κλασσικής θεωρίας της Αναθεώρησης Πεποιθήσεων μέσα από την εφαρμογή της στην επιστήμη των υπολογιστών και πιο συγκεκριμένα, μέσω του Σημασιολογικού Ιστού. 
/ Belief Change is the field that studies and formalizes a number of reasoning processes. The problems it confronts belong to the wider area of Knowledge Representation. In the mid-1980s, after a move toward more systematic and mathematical approaches, Belief Change acquired its final form. The term "change" splits into three broad subcategories: expansion, contraction and revision. Expansion concerns the acquisition of new information (belief expansion), while contraction concerns the loss of information. Finally, revision explains the partial or total change in our beliefs caused by the appearance of new information. Every change process is coupled with a set of rationality postulates, formulated mainly to group, classify and constrain our reasoning. Apart from the change operations and postulates mentioned above, the field of Belief Change includes other important, complementary processes. One of the best known and most useful is Iterated Revision. While simple revision explains situations induced by the arrival of a single piece of information, iterated revision clarifies cases of learning through a sequence of successive beliefs. The present dissertation is organized in three major parts. The first is a systematic study of the various methods and techniques found in the international literature. The second contains our main research contribution and our proposals regarding open problems in Belief Change. More specifically, the initial stage of our research is an effort to connect revision with iterated belief revision. This connection is achieved through the introduction of a new postulate, the iterated recovery postulate (IR). It is shown that the iterated recovery postulate can be used in many cases where the second postulate (DP2) of Darwiche and Pearl is considered too strong. We also prove that the postulate is sound and complete with respect to Adam Grove's system of spheres. Our research then turns to connecting two very important areas of Belief Change: iterated and relevance-sensitive belief revision. The results reveal an inconsistency between postulate (P) for relevance-sensitive belief revision and each of the (DP) postulates of iterated belief revision, and call for a necessary, more general repair of the formal models of belief change. A further contribution, distinct from the above, concerns belief contraction based on Horn clauses; our purely theoretical contribution here is the provision of possible-worlds-ordering semantics for the e-contraction operation introduced by James Delgrande. Finally, the third part presents the classical theory of belief revision through its application in computer science, and specifically the Semantic Web.
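For reference, the belief-change operations and postulates the abstract alludes to are usually stated as follows in the literature. These are the standard formulations only; the thesis's own iterated recovery postulate is not reproduced here, and DP2 is the postulate the abstract describes as rather strong.

```latex
% Standard formulations from the belief-change literature (not the thesis's
% own postulate). K is a belief set, \varphi a new belief, Cn logical closure.
\begin{align*}
  &\text{Expansion:}     && K + \varphi = Cn(K \cup \{\varphi\}) \\
  &\text{Levi identity:} && K \ast \varphi = (K \div \lnot\varphi) + \varphi \\
  &\text{Recovery:}      && K \subseteq (K \div \varphi) + \varphi \\
  &\text{(DP1)} && \text{if } \alpha \models \mu, \text{ then } (K \ast \mu) \ast \alpha = K \ast \alpha \\
  &\text{(DP2)} && \text{if } \alpha \models \lnot\mu, \text{ then } (K \ast \mu) \ast \alpha = K \ast \alpha \\
  &\text{(DP3)} && \text{if } K \ast \alpha \models \mu, \text{ then } (K \ast \mu) \ast \alpha \models \mu \\
  &\text{(DP4)} && \text{if } K \ast \alpha \not\models \lnot\mu, \text{ then } (K \ast \mu) \ast \alpha \not\models \lnot\mu
\end{align*}
```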
373

Contribuição ao desenvolvimento de ontologias para processos petroquímicos : estudo de caso em uma planta DEA / Contribution to the development of ontologies for petrochemical processes : a case study of a DEA plant

Diniz, Anthony Andrey Ramalho 03 September 2010 (has links)
In the last decades, the oil, gas and petrochemical industries have registered a series of major accidents. Influenced by this context, companies have felt the need to engage in processes to protect the external environment, which can be understood as an ecological concern. In the particular case of the nuclear industry, sustainable education and training, which depend heavily on the quality and applicability of the knowledge base, have been considered key points for the safe application of this energy source. As a consequence, this research was motivated by the use of the ontology concept as a tool to improve knowledge management in a refinery, through the representation of a fuel gas sweetening plant, bringing together the many pieces of information associated with its normal operation mode. In terms of methodology, this research can be classified as applied and descriptive: many pieces of information were analysed, classified and interpreted to create the ontology of a real plant. The DEA plant modeling was performed according to its process flow diagram, piping and instrumentation diagrams, descriptive documents of its normal operation mode, and the list of all the alarms associated with the instruments, complemented by an unstructured interview with a specialist in the plant's operation. The ontology was verified by comparing its descriptive diagrams with the original plant documents and discussing them with other members of the research group. All the concepts applied in this research can be extended to represent other plants in the same refinery or even in other kinds of industry. An ontology can be considered a knowledge base that, because of its formal nature, can be applied as one of the elements for developing tools to navigate through the plant, simulate its behavior and diagnose faults, among other possibilities. / Nas últimas décadas, o segmento de óleo & gás e petroquímica tem registrado uma série de grandes acidentes. Influenciadas por esse contexto, as empresas têm sentido a necessidade de se engajar em processos de proteção do ambiente externo, que se traduz na preocupação ecológica. No caso particular da indústria nuclear mundial, a educação sustentável e o treinamento, que dependem muito da qualidade e utilidade da base de conhecimento, têm sido considerados pontos chave para utilização desse tipo de energia com segurança. Dessa forma, a motivação dessa pesquisa foi aplicar o conceito de ontologia como ferramenta para melhorar a gestão do conhecimento em uma refinaria, através da representação de uma planta de adoçamento de gás combustível, condensando os vários tipos de informações associados com o seu modo de operação normal. Em termos de metodologia, este estudo pode ser classificado como uma pesquisa aplicada e descritiva, em que foram analisadas, classificadas e interpretadas informações que possibilitaram criar a ontologia descritiva de uma planta real. A modelagem da planta DEA foi realizada de acordo com os fluxogramas de processo, fluxogramas de tubulação e instrumentação, documentos descritivos de seu modo de operação e a relação de alarmes associados, que foram complementadas com uma entrevista não estruturada de um especialista em seu modo de operação.
A validação aconteceu através da comparação de grafos montados a partir da ontologia com a documentação original e debatidos com o grupo de trabalho. Os conceitos utilizados nesta pesquisa podem ser expandidos para representar outras plantas da própria refinaria ou mesmo de outras indústrias. A ontologia pode ser considerada uma base de conhecimento, que devido ao seu caráter formal, pode ser aplicada como um dos elementos no desenvolvimento de ferramentas de navegação da planta, simulação de comportamento, diagnóstico de falhas, dentre outras possibilidades.
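As a minimal sketch of what "representing plant knowledge as an ontology" can look like, the snippet below expresses a piece of equipment, an instrument and an alarm as RDF triples, assuming the rdflib library is available. The namespace, class and individual names are invented for illustration and are not taken from the dissertation's ontology.

```python
# Minimal sketch of plant equipment, instruments and alarms as ontology
# triples. All names (namespace, classes, individuals) are hypothetical.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

PLANT = Namespace("http://example.org/dea-plant#")
g = Graph()
g.bind("plant", PLANT)

# Declare a few classes.
for cls in ("Equipment", "Instrument", "Alarm"):
    g.add((PLANT[cls], RDF.type, RDFS.Class))

# An absorber column monitored by a pressure transmitter with a high alarm.
g.add((PLANT.AbsorberT100, RDF.type, PLANT.Equipment))
g.add((PLANT.PT101, RDF.type, PLANT.Instrument))
g.add((PLANT.PT101, PLANT.installedOn, PLANT.AbsorberT100))
g.add((PLANT.PAH101, RDF.type, PLANT.Alarm))
g.add((PLANT.PAH101, PLANT.raisedBy, PLANT.PT101))
g.add((PLANT.PAH101, RDFS.comment, Literal("High pressure in absorber")))

print(g.serialize(format="turtle"))
```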
374

Représentation et traitement des connaissances en logique multivalente : cas d'une répartition non uniforme des degrés de vérité / Representation and management of imperfect knowledge in multivalued logic : Case of unbalanced truth degrees

Chaoued, Nouha 30 November 2017 (has links)
Dans la plupart des activités quotidiennes, l’Homme a tendance à utiliser des connaissances imparfaites. L’imperfection se rapporte à trois volets : l’imprécision, l’incertitude et l’incomplétude. Nous thèse concerne les connaissances imprécises. En particulier, nous nous intéressons au traitement qualitatif de l’information imprécise dans les systèmes à base de connaissances. Diverses approches ont été proposées pour traiter les connaissances imprécises, en particulier, la logique floue et la logique multivalente. Les théories des ensembles flous et des multi-ensembles sont un moyen très approprié pour la représentation et la modélisation de l’imprécision.Notre travail s’inscrit dans le contexte de la logique multivalente. Celle-ci permet de représenter symboliquement des connaissances imprécises en utilisant des expressions adverbiales ordonnées du langage naturel. L’utilisation de ces degrés symboliques est plus compréhensible par les experts. Ce type de représentation de données est indépendant du type de leurs domaines de discours. Ainsi, la manipulation des connaissances abstraites ou faisant référence à des échelles numériques se fait de la même manière.Dans la littérature, le traitement de l’information imprécise repose sur une hypothèse implicite de la répartition uniforme des degrés de vérité sur une échelle de 0 à 1. Néanmoins, dans certains cas, un sous-domaine de cette échelle peut être plus informatif et peut inclure plus de termes. Dans ce cas, l’information est définie par des termes déséquilibrés, c’est-à-dire qui ne sont pas uniformément répartis et/ou symétriques par rapport à un terme milieu. Par exemple, pour l’évaluation des apprenants, il est possible de considérer un seul terme négatif F correspondant à l’échec. Quant à la réussite, elle est décrite par plusieurs valeurs de mention, i.e. D, C, B et A. Ainsi, si le terme D est le seuil de la réussite, il est considéré comme le terme milieu avec un seul terme à sa gauche et trois à sa droite. Il s’agit alors d’un ensemble non uniforme.Dans ce travail, nous nous concentrons sur l'extension de la logique multivalente au cas des ensembles non uniformes. En s'appuyant sur notre étude de l'art, nous proposons de nouvelles approches pour représenter et traiter ces ensembles de termes. Tout d'abord, nous introduisons des algorithmes qui permettent de représenter des termes non uniformes à l'aide de termes uniformes et inversement. Ensuite, nous décrivons une méthode pour utiliser des modificateurs linguistiques initialement définis pour les termes uniformes avec des ensembles de termes non uniformes. Par la suite, nous présentons une approche de raisonnement basée sur le modèle du Modus Ponens Généralisé à l'aide des Modificateurs Symboliques Généralisés. Les modèles proposés sont mis en œuvre dans un nouveau système de décision fondé sur des règles pour la reconnaissance de l'odeur de camphre. Nous développons également un outil pour le diagnostic de l'autisme infantile. Les degrés de sévérité de l'atteinte par ce trouble autistique sont représentés par l'échelle d'évaluation de l'autisme infantile (CARS). Il s'agit d'une échelle non uniforme. / In most daily activities, humans use imprecise information derived from appreciation instead of exact measurements to make decisions. Various approaches were proposed to deal with imperfect knowledge, in particular, fuzzy logic and multi-valued logic. 
In this work, we treat the particular case of imprecise knowledge. Computer systems take imprecise knowledge into account by representing it through linguistic variables, whose values form a set of words expressing the different nuances of the information being treated. For example, to judge the beauty of the Mona Lisa or the smell of a flower, it is not possible to give an exact value; an appreciation is given by a term such as "beautiful" or "floral". In the literature, dealing with imprecise information relies on an implicit assumption: the distribution of terms is uniform on a scale ranging from 0 to 1. Nevertheless, in some cases a sub-domain of this scale may be more informative and may include more terms. In this case, knowledge is represented by means of an unbalanced term set, that is, one that is neither uniformly nor symmetrically distributed. We have noticed that, in the context of fuzzy logic, many researchers have dealt with such term sets; this is not the case for multi-valued logic. Therefore, in our work, we aim to establish a methodology to represent and manage this kind of data in the context of multi-valued logic. Two aspects are treated. The first concerns the representation of terms within an unbalanced multi-set. The second deals with the treatment of this kind of imprecise knowledge, i.e. with symbolic modifiers and with the reasoning process. Based on our review of the state of the art, we propose new approaches to represent and treat such term sets. First, we introduce algorithms for representing unbalanced terms in terms of uniform ones and vice versa. Then, we describe a method for using linguistic modifiers within unbalanced multi-sets. Afterwards, we present a reasoning approach based on the Generalized Modus Ponens model using Generalized Symbolic Modifiers. The proposed models are implemented in a novel rule-based decision system for camphor odor recognition over an unbalanced multi-set. We also develop a tool for childhood autism diagnosis by means of the unbalanced severity degrees of the Childhood Autism Rating Scale (CARS).
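The grading example from the abstract (one failing term F to the left of the threshold term D, and three passing terms C, B, A to its right) can be made concrete with a toy numeric placement. This is an assumed, simplistic reading of what an unbalanced term set is; the thesis defines proper algorithms for converting between unbalanced and uniform sets, which are not reproduced here.

```python
# Toy illustration of an unbalanced linguistic term set: one term to the left
# of the middle (threshold) term and three to its right. The placement on
# [0, 1] below is an assumption for illustration only.

def place_terms(left, middle, right):
    """Place terms on [0, 1] with the middle term fixed at 0.5."""
    positions = {}
    for i, t in enumerate(left):                  # spread over (0, 0.5)
        positions[t] = 0.5 * (i + 1) / (len(left) + 1)
    positions[middle] = 0.5
    for i, t in enumerate(right):                 # spread over (0.5, 1]
        positions[t] = 0.5 + 0.5 * (i + 1) / len(right)
    return positions

grades = place_terms(left=["F"], middle="D", right=["C", "B", "A"])
print(grades)  # {'F': 0.25, 'D': 0.5, 'C': ~0.667, 'B': ~0.833, 'A': 1.0}
```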
375

[en] PATIENT-BUDDY-BUILD: CUSTOMIZED MOBILE MONITORING FOR PATIENTS WITH CHRONIC DISEASES / [pt] PATIENT-BUDDY-BUILD: ACOMPANHAMENTO REMOTO MÓVEL CUSTOMIZÁVEL DE PACIENTES COM DOENÇAS CRÔNICAS

VITOR PINHEIRO DE ALMEIDA 13 January 2017 (has links)
[pt] Este trabalho consiste do desenvolvimento de uma ferramenta para a geração de aplicativos móveis, que possibilita um monitoramento customizado, para o acompanhamento à distância de pacientes com doenças crônicas. A customização ocorre a partir de parâmetros e descrições formais, tais como: preferências do paciente, tipo da doença crônica, processo de acompanhamento desejado pelo seu médico, medicação prescrita e dados sobre o contexto (o entorno) do paciente, estes últimos obtidos de sensores. Com base nestes dados, o sistema irá determinar quais informações são mais relevantes para serem adquiridas do paciente através de questionários ou de sensores disponíveis no dispositivo móvel. Informações relevantes são informações que melhor ajudam a identificar possíveis alterações no processo de monitoramento de um paciente. Estas informações serão enviadas pelo dispositivo móvel, juntamente com os dados dos sensores, para o médico responsável. O processo de acompanhamento médico e a natureza da doença crônica definirão o conjunto de informações que serão coletadas. É importante ressaltar que o objetivo não é realizar diagnósticos, mas sim, prover informações atualizadas aos médicos sobre os seus pacientes, possibilitando assim, realizar um acompanhamento preventivo à distância. / [en] This thesis consists of the development of a tool for generating mobile applications that enables a customized form of remote monitoring of patients with chronic diseases. The customization is based on parameters and formal descriptions of patient preferences, the type of chronic disease, the monitoring procedure required by the doctor, the prescribed medication and information about the context (i.e. environment) of the patient, the latter obtained from sensors. Based on this data, the system determines which information is most relevant to acquire from the patient through questionnaires and sensors embedded in or connected to the smartphone. Relevant information is the information that best helps to identify possible changes in a patient's monitoring process. This information is sent by the mobile application to the responsible physician. The medical treatment and the kind of chronic disease define the set of information to be collected. It should be stressed that the goal is not to support automatic diagnosis, but only to provide means for physicians to obtain updated information about their patients, so as to allow preventive remote monitoring.
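A rough sketch of the kind of customization parameters the abstract lists (patient preferences, disease type, monitoring procedure, medication, sensed context) is given below. The field names and sample values are hypothetical and not taken from the tool itself; they merely illustrate how such a profile could drive app generation.

```python
# Illustrative structure only: hypothetical customization parameters for a
# generated monitoring app. Names and values are assumptions, not the tool's.
from dataclasses import dataclass
from typing import List

@dataclass
class MonitoringProfile:
    disease: str                      # type of chronic disease
    questionnaires: List[str]         # questions the app should ask
    sensors: List[str]                # device sensors to sample
    medications: List[str]            # prescribed medication to track
    report_interval_hours: int = 24   # how often data is sent to the physician

profile = MonitoringProfile(
    disease="hypertension",
    questionnaires=["Did you feel dizzy today?"],
    sensors=["heart_rate", "step_count"],
    medications=["losartan 50mg"],
)
print(profile)
```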
376

Vers une approche non orientée pour l'évaluation de la qualité des odeurs / Towards a non-oriented approach to the evaluation of odor quality

Medjkoune, Massissilia 30 March 2018 (has links)
Caractériser la qualité d’une odeur est une tâche complexe qui consiste à identifier un ensemble de descripteurs qui synthétise au mieux la sensation olfactive au cours de séances d’analyse sensorielle. Généralement, cette caractérisation est une liste de descripteurs extraite d’un vocabulaire imposé par les industriels d’un domaine pour leurs analyses sensorielles. Ces analyses représentent un coût significatif pour les industriels chaque année. En effet, ces approches dites orientées reposent sur l’apprentissage de vocabulaires, limitent singulièrement les descripteurs pour un public non initié et nécessitent de couteuses phases d’apprentissage. Si cette caractérisation devait être confiée à des évaluateurs naïfs, le nombre de participants pourrait être significativement augmenté tout en réduisant le cout des analyses sensorielles. Malheureusement, chaque description libre n’est alors plus associée à un ensemble de descripteurs non ambigus, mais à un simple sac de mots en langage naturel (LN). Deux problématiques sont alors rattachées à la caractérisation d’odeurs. La première consiste à transformer des descriptions en LN en descripteurs structurés ; la seconde se donne pour objet de résumer un ensemble de descriptions formelles proposées par un panel d’évaluateurs en une synthèse unique et cohérente à des fins industrielles. Ainsi, la première partie de notre travail se focalise sur la définition et l’évaluation de modèles qui peuvent être utilisés pour résumer un ensemble de mots en un ensemble de descripteurs désambiguïsés. Parmi les différentes stratégies envisagées dans cette contribution, nous proposons de comparer des approches hybrides exploitant à la fois des bases de connaissances et des plongements lexicaux définis à partir de grands corpus de textes. Nos résultats illustrent le bénéfice substantiel à utiliser conjointement représentation symbolique et plongement lexical. Nous définissons ensuite de manière formelle le processus de synthèse d’un ensemble de concepts et nous proposons un modèle qui s’apparente à une forme d’intelligence humaine pour évaluer les résumés alternatifs au regard d’un objectif de synthèse donné. L’approche non orientée que nous proposons dans ce manuscrit apparait ainsi comme l’automatisation cognitive des tâches confiées aux opérateurs des séances d’analyse sensorielle. Elle ouvre des perspectives intéressantes pour développer des analyses sensorielles à grande échelle sur de grands panels d’évaluateurs lorsque l’on essaie notamment de caractériser les nuisances olfactives autour d’un site industriel. / Characterizing the quality of smells is a complex process that consists in identifying a set of descriptors best summarizing the olfactory sensation. Generally, this characterization results in a limited set of descriptors provided by sensorial analysis experts. These sensorial analysis sessions are however very costly for industrials. Indeed, such oriented approaches based on vocabulary learning limit, in a restrictive manner, the possible descriptors available for any uninitiated public, and therefore require a costly vocabulary-learning phase. If we could entrust this characterization to neophytes, the number of participants of a sensorial analysis session would be significantly enlarged while reducing costs. However, in that setting, each individual description is not related to a set of non-ambiguous descriptors anymore, but to a bag of terms expressed in natural language (NL). 
Two issues then arise with this approach to smell characterization. The first is how to translate such NL descriptions into structured descriptors; the second is how to summarize a set of individual characterizations into a single, consistent and synthetic characterization meaningful for professional purposes. Hence, this work focuses first on the definition and evaluation of models that can be used to summarize a set of terms into unambiguous entity identifiers selected from a given ontology. Among the several strategies explored in this contribution, we compare hybrid approaches taking advantage of knowledge bases (symbolic representations) and of word embeddings derived from the analysis of large text corpora. The results we obtain highlight the relative benefits of mixing symbolic representations with classic word embeddings for this task. We then formally define the problem of summarizing sets of concepts and propose a model mimicking human-like intelligence for scoring alternative summaries with regard to a specific objective function. Interestingly, this non-oriented approach to characterizing the quality of odors amounts to a cognitive automation of the task performed today by expert operators in sensorial analysis. It therefore opens interesting perspectives for developing scalable sensorial analyses based on large panels of evaluators when assessing, for instance, olfactory pollution around industrial sites.
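The hybrid idea (embedding similarity combined with a symbolic knowledge-base signal) can be sketched as follows. The vectors, the tiny "knowledge base" and the weighting are all made up for illustration; a real system would use pretrained embeddings and a curated ontology, and the thesis's actual scoring models are not reproduced here.

```python
# Sketch: score candidate ontology concepts for a free-text odor descriptor
# by combining embedding similarity with a knowledge-base relatedness bonus.
import math

embeddings = {                      # toy 3-d word vectors (assumed)
    "smoky":  [0.9, 0.1, 0.0],
    "burnt":  [0.8, 0.2, 0.1],
    "floral": [0.0, 0.9, 0.3],
    "woody":  [0.7, 0.1, 0.3],
}
kb_related = {("smoky", "burnt"), ("floral", "rose")}   # toy symbolic links

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def score(term, concept, alpha=0.7):
    sim = cosine(embeddings[term], embeddings[concept])
    bonus = 1.0 if (term, concept) in kb_related or (concept, term) in kb_related else 0.0
    return alpha * sim + (1 - alpha) * bonus

candidates = ["burnt", "floral", "woody"]
print(max(candidates, key=lambda c: score("smoky", c)))  # expected: 'burnt'
```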
377

Organização de conhecimento e informações para integração de componentes em um arcabouço de projeto orientado para a manufatura / Organization of knowledge and information for component integration in a design-for-manufacturing framework

Ramos, André Luiz Tietböhl January 2015 (has links)
A constante evolução de métodos, tecnologias e ferramentas associadas na área de projeto fornece maior capacidade para o projetista. Entretanto, ela também aumenta os requisitos de interfaces e controle do conjunto de componentes de projeto consideravelmente. Tipicamente, este aspecto está presente na área de Projeto Orientado para a Manufatura (DFM) onde existem diversos distintos componentes. Cada um dos componentes existentes, ou futuros, pode ter foco diferente, consequentemente com requisitos de informação, utilização e execução distintos. Este trabalho propõe a utilização de padrões conceituais flexíveis de informação e controle de forma abrangente em uma arquitetura de Projeto Orientado para a Manufatura (DFM). O objetivo principal é auxiliar a análise e resolução de DFM, bem como dar suporte à atividade de projeto estruturando e propondo uma solução em relevantes aspectos em DFM: estruturação do contexto das informações (ou conhecimento) em DFM. A arquitetura utiliza as seguintes atividades de projeto em processos de usinagem: Tolerância, Custo, Acessibilidade da ferramenta, Disponibilidade de máquinas e ferramentas e Análise de materiais para demonstrar a relevância da correta contextualização e utilização da informação no domínio DFM. Sob forma geral, concomitantemente, as amplas necessidades de compreensão dos distintos tipos e formas da informação em DFM demandam que uma arquitetura de projeto tenha capacidade de gerenciar/administrar diferentes contextos de informações de projeto. Este é um tópico relevante tendo em vista que existem diferentes atividades DFM que eventualmente devem ser incluídas no ato de projetar. Tipicamente, cada uma delas tem requisitos distintos em termos de dados e conhecimento, ou contextualização do projeto, que idealmente poderiam ser gerenciados através da arquitetura de informação atual – STEP. A arquitetura proposta gerencia contextos de informações de projeto através de ontologias direcionadas no domínio DFM. Através dela, será possível compreender e utilizar melhor as intrínsecas interfaces existentes nas informações deste domínio, além de, através disto, aumentar a flexibilidade e eficácia de sistemas DFM. / This work proposes the use of industry standards to support the utilization of Design for Manufacturing (DFM) techniques on a comprehensive scale in the design field. The specific aspect considered in the architecture is the definition and structure of DFM information context. In order to demonstrate the research concepts, some design activities are implemented in the framework (which is focused on machining processes): a Tolerancing model, a Cost model based on material removal processes, a Tool Accessibility model taking into consideration the part being designed, an Availability of Machines and Tools model, and a Material analysis model. The broad needs of design-based frameworks, in general, require that their architectures have the capability to handle different framework design information utilization contexts, or information context concepts. This is a relevant aspect since there are several DFM components/activities that preferably should be included in the design process. Traditionally, each one of them might have distinct data and knowledge requirements, which can be handled by the current information architecture – STEP – only in part. Additionally, each one of them might have, or need, different forms of understanding DFM information (information context).
The framework handles information context concepts through the use of ontologies targeted at the DFM field. A better comprehension and usage of the intrinsic information interfaces existing in this domain is expected to be achieved, and through it, DFM systems that are more flexible and effective information-wise can be obtained.
378

MAS Ontology: uma ontologia de métodos orientados a agentes / MAS Ontology: an ontology of agent-oriented methods

Felipe Cordeiro de Paula 21 August 2014 (has links)
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro / A modelagem orientada a agentes surge como paradigma no desenvolvimento de software, haja vista a quantidade de iniciativas e estudos que remetem à utilização de agentes de software como solução para tratar de problemas mais complexos. Apesar da popularidade de utilização de agentes, especialistas esbarram na falta de universalidade de uma metodologia para construção dos Sistemas Multiagentes (MAS), pois estas acabam pecando pelo excesso ou falta de soluções para modelar o problema. Esta dissertação propõe o uso de uma Ontologia sobre Metodologias Multiagentes, seguindo os princípios da Engenharia de Métodos Situacionais que se propõe a usar fragmentos de métodos para construção de metodologias baseados na especificidade do projeto em desenvolvimento. O objetivo do estudo é sedimentar o conhecimento na área de Metodologias Multiagentes, auxiliando o engenheiro de software a escolher a melhor metodologia ou o melhor fragmento de metodologia capaz de modelar um Sistema Multiagentes. / Agent-oriented modeling is emerging as a paradigm in software development, given the number of initiatives and studies that turn to software agents as a solution for more complex problems. Despite the popularity of agents, practitioners run up against the lack of a universal methodology for building Multiagent Systems (MAS), since existing methodologies err either by an excess or by a lack of solutions for modeling the problem. This thesis proposes the use of an ontology of multiagent methodologies, following the principles of Situational Method Engineering, which proposes using method fragments to construct methodologies based on the specificity of the project under development. The aim of this work is to consolidate knowledge in the area of multiagent methodologies by helping the software engineer to choose the best methodology, or the best method fragment, capable of modeling a specific Multiagent System.
379

Um estudo sobre objetos com comportamento inteligente / A study on objects with intelligent behavior

Amaral, Janete Pereira do January 1993 (has links)
Diversos estudos têm sido realizados com o objetivo de definir estruturas para construção de ambientes de desenvolvimento de software. Alguns desses estudos indicam a necessidade de prover inteligência a tais ambientes, para que estes, efetivamente, coordenem e auxiliem o processo de desenvolvimento de software. O paradigma da orientação a objetos (POO) vem sendo utilizado na implementação de sistemas inteligentes, com diferentes enfoques. O POO tem sido experimentado, também, como estrutura para construção de ambientes. A abordagem da construção de sistemas, na qual a inteligência se encontra distribuída, como proposto por Hewitt, Minsky e Lieberman, suscita a idéia de modelar objetos que atuem como solucionadores de problemas, trabalhando cooperativamente para atingir os objetivos do sistema, e experimentar essa abordagem na construção de ambientes inteligentes. Nesta dissertação, é apresentado um estudo sobre a utilização do POO na implementação de sistemas inteligentes, e proposta uma extensão ao conceito de objeto. Essa extensão visa permitir flexibilidade no seu comportamento, autonomia nas suas ações, aquisição de novos conhecimentos e interação com o ambiente externo. A existência de objetos com tais características permite a construção de sistemas inteligentes, modularizados e evolutivos, facilitando, assim, seu projeto, implementação e manutenção. Visando esclarecer os termos utilizados no decorrer desta dissertação, são discutidos os conceitos básicos do POO e suas principais extensões. São apresentadas algumas abordagens sobre inteligência e comportamento inteligente, destacando-se a importância de conhecimento, aprendizado e comportamento flexível. Observa-se que este último decorre da aquisição de novos conhecimentos e da análise das condições do ambiente. Buscando fornecer embasamento para análise das características representacionais do POO, são apresentados os principais esquemas de representação de conhecimento e algumas estratégias para resolução de problemas, utilizados em sistemas inteligentes. E analisado o uso do POO como esquema de representação de conhecimento, destacando-se suas vantagens e deficiências. É sintetizado um levantamento de propostas de utilização do POO na implementação de sistemas inteligentes, realizado com o objetivo de identificar os mecanismos empregados na construção desses sistemas. Observa-se a tendência em apoiar a abordagem da inteligência distribuída, utilizando-se a estruturação do conhecimento propiciado pelo POO e características positivas de outros paradigmas. Propõe-se um modelo de objetos com comportamento inteligente. Nesse modelo, além dos aspectos declarativos e procedimentais do conhecimento, representados através de variáveis de instância e de métodos, são encapsulados mecanismos para prover autonomia e comportamento flexível, permitir a aquisição de novos conhecimentos, e propiciar a comunicação com usuários. Para prover autonomia, foi projetado um gerenciador de mensagens, que recebe solicitações enviadas ao objeto, colocando-as numa fila e atendendo-as segundo seu conhecimento e análise das condições do ambiente. Utilizando-se recursos da programação em lógica, são introduzidas facilidades para flexibilização do comportamento, através de regras comportamentais em encadeamento regressivo. A aquisição de novos conhecimentos é obtida através da inclusão/retirada de fatos, de procedimentos e de regras comportamentais na base de conhecimento do objeto. 
Para fornecer auxílio e relato de suas atividades, os objetos exibem o status da ativação de suas regras comportamentais, e listas das solicitações atendidas e das mantidas em sua fila de mensagens. Para experimentar o modelo proposto, é implementado um protótipo de um assistente inteligente para as atividades do processo de desenvolvimento de software. Sua implementação utiliza a linguagem Smalltalk/V com recursos da programação em lógica, integrados através de Prolog/V. A experiência obtida na utilização desse modelo mostrou a viabilidade da inclusão de características complementares ao modelo de objetos do POO, e a simplicidade de sua implementação, utilizando-se recursos multiparadigmáticos. Esse modelo constitui, assim, uma alternativa viável para construção de ambientes inteligentes. / Much research has been carried out with the aim of defining structures for Software Engineering Environments (SEE). Some of these results point out the need to provide intelligence so that such environments effectively coordinate and assist the software development process. The object-oriented paradigm (OOP) has been applied to the implementation of intelligent systems under several approaches, and has also been tried as a structure for building environments. The system-construction approach in which intelligence is distributed among the system's elements, as proposed by Hewitt, Minsky and Lieberman, suggests the idea of modelling objects that act as problem solvers, working cooperatively to reach the system's objectives, and of experimenting with this approach in the construction of intelligent environments. In this dissertation, a study of the use of the OOP in the implementation of intelligent systems is presented, and an extension to the object concept is proposed. This extension allows objects to exhibit flexible behavior, to act autonomously, to acquire new knowledge, and to interact with the external environment. Objects with these abilities enable the construction of modular, evolutionary intelligent systems, making their design, implementation and maintenance easier. The basic concepts of the OOP and its main extensions are discussed to clarify the terms used throughout this dissertation. Several approaches to intelligence and intelligent behavior are presented, emphasizing knowledge, learning and flexible behavior; flexible behavior derives from the acquisition of new knowledge and from the analysis of environment conditions. The main knowledge representation schemes and several problem-solving strategies used in intelligent systems are presented to provide background for the analysis of the representational characteristics of the OOP. The use of the OOP as a knowledge representation scheme is analyzed, emphasizing its advantages and shortcomings. A survey of proposals for using the OOP in the implementation of intelligent systems is synthesized in order to identify the mechanisms employed in building such systems; the survey shows a tendency to support the distributed-intelligence approach by combining the knowledge structuring provided by the OOP with positive characteristics of other paradigms. An object model with intelligent behavior is then proposed. In this model, besides the declarative and procedural aspects of knowledge, represented through instance variables and methods, mechanisms are encapsulated to provide autonomy and flexible behavior, to allow the acquisition of new knowledge, and to support communication with users. To provide autonomy, a message manager that receives requests from other objects was developed.
The message manager puts incoming messages in a queue and serves them according to the object's knowledge and its analysis of environment conditions. Using logic programming resources, facilities are introduced to obtain behavioral flexibility through behavioral rules evaluated by backward chaining. New knowledge is acquired by asserting or retracting facts, procedures and behavioral rules in the object's knowledge base. To provide assistance and report on their activities, objects display the firing status of their behavioral rules, together with lists of the requests already served and of those still held in their message queues. To explore the properties of the proposed model, a prototype of an intelligent assistant for the activities of the software development process was implemented, using the Smalltalk/V language with logic programming resources integrated through Prolog/V. The experience acquired in using this model showed the feasibility of adding complementary characteristics to the OOP object model and the simplicity of its implementation using multiparadigm resources. The model is therefore a viable alternative for the construction of intelligent environments.
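The two mechanisms the abstract describes, a message manager that queues incoming requests and behavioral rules evaluated by backward chaining, can be sketched as below. The class, rule syntax and example facts are invented for illustration; the dissertation's model is implemented in Smalltalk/V with Prolog/V, not Python.

```python
# Illustrative sketch only: a queued message manager plus simple backward
# chaining over behavioral rules. Names and rules are hypothetical.
from collections import deque

class IntelligentObject:
    def __init__(self, facts, rules):
        self.facts = set(facts)     # declarative knowledge
        self.rules = rules          # goal -> list of condition lists
        self.queue = deque()        # message manager's request queue

    def receive(self, request):
        """Message manager: requests are queued, not executed immediately."""
        self.queue.append(request)

    def prove(self, goal):
        """Backward chaining: a goal holds if it is a fact or if some rule
        for it has all of its conditions provable."""
        if goal in self.facts:
            return True
        return any(all(self.prove(c) for c in conds)
                   for conds in self.rules.get(goal, []))

    def dispatch(self):
        """Serve queued requests whose guard goal can be proved."""
        served = []
        while self.queue:
            goal, action = self.queue.popleft()
            if self.prove(goal):
                served.append(action)
        return served

agent = IntelligentObject(
    facts={"tests_passed", "review_done"},
    rules={"ready_to_release": [["tests_passed", "review_done"]]},
)
agent.receive(("ready_to_release", "tag new version"))
print(agent.dispatch())  # ['tag new version']
```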
380

SDIP: um ambiente inteligente para a localização de informações na internet / SDIP: an intelligent system to discover information on the internet

Fernandez, Luis Fernando Nunes January 1995 (has links)
A proposta do trabalho descrito detalhadamente neste texto é implementar um sistema inteligente, que seja capaz de auxiliar os seus usuários na tarefa de localizar e recuperar informações, dentro da rede Internet. Com o intuito de alcançar o objetivo proposto, construímos um sistema que oferece aos seus usuários duas formas distintas, porem integradas, de interfaces: língua natural e gráfica (baseada em menus, janelas etc.). Adicionalmente, a pesquisa das informações é realizada de maneira inteligente, ou seja, baseando-se no conhecimento gerenciado pelo sistema, o qual é construído e estruturado dinamicamente pelo próprio usuário. Em linhas gerais, o presente trabalho está estruturado logicamente em quatro partes, a saber: 1. Estudo introdutório dos mais difundidos sistemas de pesquisa e recuperação de informações, hoje existentes dentro da Internet. Com o crescimento desta rede, aumentaram enormemente a quantidade e a variedade das informações por ela mantidas, e disponibilizadas aos seus usuários. Concomitantemente, diversificaram-se os sistemas que permitem o acesso a este conjunto de informações, distribuídas em centenas de servidores por todo o mundo. Nesse sentido, com o intuito de situar e informar o leitor a respeito do tema, discutimos detidamente os sistemas Archie, gopher, WAIS e WWW; 2. Estudo introdutório a respeito da Discourse Representation Theory (DRT). Em linhas gerais, a DRT é um formalismo para a representação do discurso que faz use de modelos para a avaliação semântica das estruturas geradas, que o representam. Por se tratar de um estudo introdutório, neste trabalho discutiremos tão somente os aspectos relativos a representação do discurso que são propostos pela teoria, dando ênfase a, forma de se representar sentenças simples, notadamente aquelas de interesse do sistema; 3. Estudo detalhado da implementação, descrevendo cada um dos processos que formam o sistema. Neste estudo são abordados os seguintes módulos: Processo Archie: modulo onde está implementadas as facilidades que permitem ao sistema interagir com os servidores Archie; Processo FTP: permite ao SDIP recuperar arquivos remotos, utilizando o protocolo padrão da Internet FTP; Front-end e Interface SABI: possibilitam a realização de consultas bibliográficas ao sistema SABI, instalado na Universidade Federal do Rio Grande do Sul; Servidor de Correio Eletrônico: implementa uma interface alternativa para o acesso ao sistema, realizado, neste caso, por intermédio de mensagens do correio eletrônico; Interface Gráfica: oferece aos usuários um ambiente gráfico para a interação com o sistema; Processo Inteligente: Modulo onde está implementada a parte inteligente do sistema, provendo, por exemplo, as facilidades de interpretação de sentenças da língua portuguesa. 4. Finalmente, no epilogo deste trabalho, mostramos exemplos que ilustram a utilização das facilidades oferecidas pelo ambiente gráfico do SDIP. Descrevendo sucinta.mente o funcionamento do sistema, os comandos e consultas dos usuários podem ser formuladas de duas maneiras distintas. No primeiro caso, o sistema serve apenas como um intermediário para o acesso aos servidores Archie e SABI, oferecendo aos usuários um ambiente gráfico para a interação com estes dois sistemas. Na segunda modalidade, os usuários formulam as suas consultas ou comandos, utilizando-se de sentenças em língua natural. 
Neste último caso, quando se tratar de uma consulta, o sistema, utilizando-se de sua base de conhecimento, procurará aperfeiçoar a consulta efetuada pelo usuário, localizando, desta forma, as informações que melhor atendam as necessidades do mesmo. / The purpose of the work described in detail in this master's dissertation is to implement an intelligent system capable of helping its users locate and retrieve information on the Internet. To reach this goal, we built a system that offers its users two distinct but integrated types of interface: natural language and graphical (based on menus, windows, etc.). Furthermore, the search for information is carried out in an intelligent way, based on the knowledge managed by the system, which is built and structured dynamically by the users themselves. In general terms, the present work is logically structured in four parts: 1. An introductory study of the most widespread systems for searching and retrieving information currently available on the Internet. With the growth of this network, the quantity and variety of the information it keeps and makes available to its users have increased enormously, and many systems have appeared that give access to this information, distributed across hundreds of servers throughout the world. In this sense, in order to situate and inform the reader about the subject, we discuss the Archie, gopher, WAIS and WWW systems; 2. An introductory study of Discourse Representation Theory (DRT). DRT is the formalism used here for representing discourse; it relies on models for the semantic evaluation of the structures it generates. As an introductory study, this work discusses only the aspects of discourse representation proposed by the theory, emphasizing the way simple sentences are represented, notably those of interest to the system; 3. A detailed study of the implementation, describing each of the processes that compose the system. The following modules are covered: Archie Process: module implementing the facilities that allow the system to interact with Archie servers on the Internet; FTP Process: allows SDIP to retrieve remote files using the Internet's standard FTP (File Transfer Protocol); Front-end and SABI Interface: used by the system to perform bibliographic queries against the SABI system installed at Universidade Federal do Rio Grande do Sul; Electronic Mail Server: implements an alternative interface for accessing the system through electronic mail messages, which carry the user's query and the system's response; Graphical Interface: offers users a graphical environment for interacting with the system; Intelligent Process: module implementing the intelligent part of the system, providing, for instance, the facilities for interpreting sentences in Portuguese. 4. Finally, in the epilogue of this work, we show examples that illustrate the use of the facilities offered by SDIP's graphical environment. Briefly, users' commands and queries can be formulated in two distinct ways.
In the first case, the system serves only as an intermediary for access to the Archie servers and to SABI, offering its users a graphical environment for interacting with these two systems. In the second, users formulate their queries or commands using sentences in natural language. In this latter case, when the request is a query, the system uses its knowledge base to try to refine the user's query, thereby locating the information that best satisfies the user's needs.
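The query-refinement step can be illustrated with a minimal sketch: before forwarding a request to a search server, the terms are expanded using a small user-built knowledge base of related terms. The knowledge-base contents and function name are hypothetical, and the actual SDIP refinement logic is not reproduced here.

```python
# Sketch: expand a user's query with related terms from a small, user-built
# knowledge base before sending it to a search server (e.g. an Archie server).
knowledge_base = {
    "compactador": ["compress", "zip", "gzip"],   # hypothetical associations
    "editor":      ["emacs", "vi"],
}

def refine_query(user_terms, kb):
    """Return the original terms plus every related term known to the KB."""
    expanded = list(user_terms)
    for term in user_terms:
        expanded.extend(kb.get(term, []))
    return expanded

print(refine_query(["compactador"], knowledge_base))
# ['compactador', 'compress', 'zip', 'gzip']
```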
