1

Semantic Validation of T&E XML Data

Moskal, Jakub; Kokar, Mieczyslaw; Morgan, John, October 2015
ITC/USA 2015 Conference Proceedings; The Fifty-First Annual International Telemetering Conference and Technical Exhibition; October 26-29, 2015; Bally's Hotel & Convention Center, Las Vegas, NV

It is anticipated that XML will heavily dominate the next generation of telemetry systems. The syntax of XML-based languages can be constrained by a schema that describes the structure of valid documents. However, schemas cannot express all dependencies between XML elements and attributes, both within a single document and across multiple documents, so the XML validation process cannot be fully automated with standard schema processors. This paper presents an approach, based on the W3C Semantic Web technologies, that allows different vendors and system integrators to independently develop their own semantic validation rules. The rules are equipped with powerful semantics, which allows complex types of constraints to be specified and validated. The approach is not specific to a particular T&E standard and is entirely standards-based.
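To make the limitation concrete, here is a minimal sketch (in Python, with hypothetical element and attribute names) of a cross-reference constraint that an XML schema alone cannot enforce: every measurement must refer to a channel that is actually declared elsewhere in the document. The paper expresses such rules with Semantic Web technologies instead; this sketch only illustrates the kind of dependency involved.

```python
import xml.etree.ElementTree as ET

# Hypothetical T&E-style document: channel declarations and measurements that
# reference them. A schema can validate each element in isolation, but not the
# dependency between the two sections.
DOC = """
<telemetry>
  <channels>
    <channel id="CH-1" rate="100"/>
    <channel id="CH-2" rate="200"/>
  </channels>
  <measurements>
    <measurement name="pressure" source="CH-1"/>
    <measurement name="altitude" source="CH-9"/>
  </measurements>
</telemetry>
"""

def check_measurement_sources(xml_text: str) -> list[str]:
    """Return an error for every measurement whose source attribute does not
    refer to a declared channel id."""
    root = ET.fromstring(xml_text)
    declared = {ch.get("id") for ch in root.iter("channel")}
    errors = []
    for m in root.iter("measurement"):
        if m.get("source") not in declared:
            errors.append(f"measurement '{m.get('name')}' references "
                          f"undeclared channel '{m.get('source')}'")
    return errors

print(check_measurement_sources(DOC))  # reports the dangling 'CH-9' reference
```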
2

Endogenous consolidation of lexico-semantic networks: inference and annotation of relations, inference rules, and a dedicated language

Zarrouk, Manel, 03 November 2015
Developing lexico-semantic resources is a major issue in Natural Language Processing. By making explicit, among other things, knowledge that only humans possess, these resources aim to give NLP applications a reasonably fine-grained and complete understanding of text. Popular new strategies for building them, based on crowdsourcing, have emerged in NLP and have proved effective and relevant. However, the resulting resources are not free of erroneous information, nor of silences caused by the absence of relevant semantic relations that are essential to their quality. In this work, we take the lexico-semantic network of the JeuxDeMots project as a case study and propose an endogenous consolidation system for this type of network. The system is mainly based on enriching the network by inferring and annotating new relations from existing ones, and on extracting inference rules capable of (re)generating a large part of the network. Finally, a dedicated language for manipulating both the consolidation system and the lexico-semantic network is designed, and a first prototype has been implemented.
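As a rough illustration of the kind of endogenous inference involved (not the actual rules, weights, or annotation stages used in the thesis), the sketch below applies a simple deduction scheme to a toy network: if X is_a Y and Y holds a relation r to Z, then X is proposed to hold r to Z as a candidate relation to be annotated.

```python
from itertools import product

# Toy lexico-semantic network: a set of (source, relation, target) triples.
# The real JeuxDeMots network is weighted and much larger; this is only a sketch.
network = {
    ("cat", "is_a", "animal"),
    ("animal", "has_part", "head"),
    ("animal", "can", "breathe"),
    ("cat", "has_part", "whiskers"),
}

def infer_by_inheritance(triples):
    """Deduction scheme: (X is_a Y) and (Y r Z) => candidate (X r Z).
    Candidates would then be annotated (confirmed or rejected) rather than
    inserted blindly."""
    candidates = set()
    for (x, r1, y), (y2, r2, z) in product(triples, repeat=2):
        if r1 == "is_a" and y == y2 and r2 != "is_a":
            candidate = (x, r2, z)
            if candidate not in triples:
                candidates.add(candidate)
    return candidates

print(infer_by_inheritance(network))
# proposes e.g. ('cat', 'has_part', 'head') and ('cat', 'can', 'breathe')
```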
3

The Role of High-Level Reasoning and Rule-Based Representations in the Inverse Base-Rate Effect

Wennerholm, Pia, January 2001
The inverse base-rate effect is the observation that on certain occasions people classify new objects as belonging to rare base-rate categories rather than common ones (e.g., D. L. Medin & S. M. Edelson, 1988). This finding is inconsistent with normative prescriptions of rationality and provides an anomaly for current theories of human knowledge representation, such as exemplar-based models of categorization, which predict a consistent use of base rates (e.g., D. L. Medin & M. M. Schaffer, 1978). This thesis presents a novel explanation of the inverse base-rate effect. The proposal is that participants sometimes eliminate category options that are inconsistent with well-supported inference rules. These assumptions contrast with those of attentional theory (J. K. Kruschke, in press), according to which the inverse base-rate effect is the outcome of rapid attention shifts operating on cue-category associations. Studies I, II, and III verified seven qualitative predictions derived from the eliminative inference idea. None of these phenomena can be explained by attentional theory. The most important of these findings were that elimination of well-known, common categories, rather than the strongest cue-category associations, mediates the inverse base-rate effect (Study I), that only participants with a rule-based mode of generalization exhibit the inverse base-rate effect (Study II), and that rapid attentional shifts per se do not accelerate learning but rather decelerate it (Study III). In addition, Study I provided a quantitative implementation of the eliminative inference idea, ELMO, which demonstrated that this high-level reasoning process can produce the basic pattern of base-rate effects in the inverse base-rate design. Taken together, the empirical evidence of this thesis suggests that rule-based elimination is a powerful component of the inverse base-rate effect. However, previous studies have indicated that attentional shifts affect the inverse base-rate effect too. Therefore, a complete account of the inverse base-rate effect needs to integrate inductive and eliminative inferences operating on rule-based representations with attentional shifts. The Discussion of this thesis proposes a number of suggestions for such integrative work.
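A toy sketch of the eliminative inference idea (not the ELMO model itself, and with invented cue names) might look as follows: a classifier that knows the common category's cue pattern well eliminates that category when the presented pattern contradicts it, which yields the rare category on conflicting cues.

```python
# Toy illustration of rule-based elimination in the inverse base-rate design.
# Hypothetical training: cues {I, PC} -> "common" (seen often) and
# {I, PR} -> "rare" (seen rarely). Only the common category's rule is assumed
# to be well learned.
known_rules = {
    "common": {"I", "PC"},   # well-supported rule: common objects show I and PC
}

def classify_by_elimination(cues, categories=("common", "rare")):
    """Eliminate any category whose well-known cue pattern is contradicted by
    the presented cues; fall back to the common category otherwise."""
    surviving = []
    for cat in categories:
        rule = known_rules.get(cat)
        if rule is not None and not cues <= rule:
            continue  # cues outside the known pattern: eliminate this category
        surviving.append(cat)
    return surviving[0] if surviving else "common"  # base-rate fallback

print(classify_by_elimination({"PC", "PR"}))  # -> 'rare' (common is eliminated)
print(classify_by_elimination({"I", "PC"}))   # -> 'common'
```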
4

Max-resolution and learning for solving the maximum satisfiability (Max-SAT) problem

Abramé, André, 25 September 2015
This thesis addresses the Maximum Satisfiability (Max-SAT) optimization problem. We study in particular the mechanisms related to the detection of inconsistent subsets and their transformation by the max-resolution rule. In the context of branch-and-bound (BnB) solvers, we present several contributions related to the computation of the lower bound, ranging from the unit-propagation scheme used to detect inconsistent subsets, to the extension of learning criteria, to the evaluation of the impact of max-resolution transformations on solver efficiency. These contributions led to a new solver that is competitive with the best state-of-the-art solvers. They also give a better understanding of how branch-and-bound methods behave and provide theoretical elements that help explain the efficiency and the limits of existing solvers. This opens new perspectives for improvement, in particular on extending learning and on taking the internal structure of instances into account. We also present an example of using the max-resolution rule in a local search algorithm.
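As a rough illustration of one ingredient discussed here, the lower bound obtained from inconsistent subsets detected by unit propagation, the sketch below (a deliberately naive version, not the solver developed in the thesis) counts disjoint inconsistent subsets of a CNF formula: each such subset forces at least one falsified clause, so their number is a valid lower bound on the optimum.

```python
from typing import Dict, FrozenSet, List, Optional, Set

Clause = FrozenSet[int]   # a clause is a set of integer literals, e.g. frozenset({1, -2})

def find_inconsistent_subset(clauses: List[Clause]) -> Optional[Set[int]]:
    """Run unit propagation; on a conflict, return the indices of the clauses
    involved in it, otherwise None."""
    assignment: Set[int] = set()
    reason: Dict[int, int] = {}      # assigned literal -> index of the clause that forced it
    changed = True
    while changed:
        changed = False
        for idx, clause in enumerate(clauses):
            remaining = [l for l in clause if -l not in assignment]
            if not remaining:        # every literal falsified: conflict
                involved, stack = {idx}, [-l for l in clause]
                while stack:         # walk back through the implication reasons
                    lit = stack.pop()
                    if lit in reason and reason[lit] not in involved:
                        involved.add(reason[lit])
                        stack.extend(-l for l in clauses[reason[lit]] if l != lit)
                return involved
            if len(remaining) == 1 and remaining[0] not in assignment:
                assignment.add(remaining[0])
                reason[remaining[0]] = idx
                changed = True
    return None

def lower_bound(clauses: List[Clause]) -> int:
    """Count disjoint inconsistent subsets; each one costs at least one clause."""
    active, lb = list(clauses), 0
    while (conflict := find_inconsistent_subset(active)) is not None:
        lb += 1
        active = [c for i, c in enumerate(active) if i not in conflict]
    return lb

cnf = [frozenset(c) for c in ([1], [-1, 2], [-2], [3], [-3])]
print(lower_bound(cnf))   # 2: {1},{-1,2},{-2} and {3},{-3} are disjoint inconsistent subsets
```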
5

Security and privacy model for association databases

Kong, Yibing (date unknown)
With the rapid development of information technology, data availability has improved greatly: data may be accessed at any time, by people in any location. However, threats to data security and privacy have become one of the major problems in the development of information systems, especially those that contain personal information. An association database is a personal information system that stores associations between persons. In this thesis, we identify the security and privacy problems of association databases. To address them, we propose a new security and privacy model for association databases, equipped with both direct access control and inference control mechanisms. In this model, associations are classified according to multiple criteria, including not only confidentiality but also privacy and other aspects of security. Direct access control is based on the mandatory model, while inference control is based on both logical reasoning and probabilistic reasoning (belief networks). The contributions of this thesis include: identification of the security and privacy problems of association databases; a formal definition of the association database model; the representation of association databases as directed multigraphs; the development of axioms for direct access control; a specification of the unauthorized inference problem; and a method for unauthorized inference detection and control, comprising logical and probabilistic inference rules and the application of belief networks as a detection tool.
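A small sketch (with invented names, levels, and a deliberately simplified rule) of the kind of unauthorized inference such a model is meant to control: the associations form a directed multigraph, and a logical inference rule can combine two individually releasable associations into one that is classified above the requester's clearance.

```python
from collections import defaultdict

# Association database as a directed multigraph: several labelled edges may
# connect the same pair of entities. Each association carries a classification
# level (0 = public, higher = more sensitive). All names and levels are invented.
edges = defaultdict(list)          # (source, target) -> list of (label, level)
def add(src, dst, label, level):
    edges[(src, dst)].append((label, level))

add("Alice", "ClinicX", "visits", 0)
add("ClinicX", "Oncology", "specialises_in", 0)

# Hypothetical classification of composed (inferred) associations.
COMPOSED_LEVEL = {("visits", "specialises_in"): ("possibly_treated_for", 2)}

def unauthorized_inferences(clearance):
    """Detect associations a user could infer by chaining two released edges
    whose composed classification exceeds the user's clearance."""
    findings = []
    for (a, b), rels_ab in edges.items():
        for (b2, c), rels_bc in edges.items():
            if b != b2:
                continue
            for (l1, lev1) in rels_ab:
                for (l2, lev2) in rels_bc:
                    composed = COMPOSED_LEVEL.get((l1, l2))
                    if composed and max(lev1, lev2) <= clearance < composed[1]:
                        findings.append((a, composed[0], c))
    return findings

print(unauthorized_inferences(clearance=0))
# [('Alice', 'possibly_treated_for', 'Oncology')]: both base edges are releasable
# at clearance 0, but their combination is classified above it.
```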
