  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
281

Answering Object Queries over Knowledge Bases with Expressive Underlying Description Logics

Wu, Jiewen January 2013 (has links)
Many information sources can be viewed as collections of objects and descriptions about objects. The relationship between objects is often characterized by a set of constraints that semantically encode background knowledge of some domain. The most straightforward and fundamental way to access information in these repositories is to search for objects that satisfy certain selection criteria. This work considers a description logics (DL) based representation of such information sources and object queries, which allows for automated reasoning over the constraints accompanying objects. Formally, a knowledge base K=(T, A) captures constraints in the terminology (a TBox) T, and objects with their descriptions in the assertions (an ABox) A, using some DL dialect L. In such a setting, object descriptions are L-concepts and object identifiers correspond to individual names occurring in K. Object queries then amount to the well-known problem of instance retrieval in the underlying DL knowledge base K, which returns the identifiers of qualifying objects. This work generalizes instance retrieval over knowledge bases to provide users with answers in which both identifiers and descriptions of qualifying objects are given. The proposed query paradigm, called assertion retrieval, is favoured over instance retrieval since it provides more informative answers to users. A more compelling reason is related to performance: assertion retrieval enables a transfer of basic relational database techniques, such as caching and query rewriting, in the context of an assertion retrieval algebra. The main contributions of this work are twofold: one concerns optimizing the fundamental reasoning task that underlies assertion retrieval, namely instance checking, and the other establishes a query compilation framework based on the assertion retrieval algebra.
The former is necessary because an assertion retrieval query can entail a large volume of instance checking requests of the form K |= a:C, where "a" is an individual name and "C" is an L-concept. This work thus proposes a novel absorption technique, ABox absorption, to improve instance checking. ABox absorption handles knowledge bases that have an expressive underlying dialect L, for instance, one that requires disjunctive knowledge. It works particularly well when knowledge bases contain a large number of concrete domain concepts for object descriptions. This work further presents a query compilation framework based on the assertion retrieval algebra to make assertion retrieval more practical. In the framework, a suite of rewriting rules is provided to generate a variety of query plans, with a focus on plans that avoid reasoning w.r.t. the background knowledge bases when sufficient cached results of earlier requests exist. ABox absorption and the query compilation framework have been implemented in a prototypical system, dubbed the CARE Assertion Retrieval Engine (CARE). CARE also defines a simple yet effective cost model to search for the best plan generated by query rewriting. Empirical studies of CARE have shown that the proposed techniques make assertion retrieval practical over a variety of domains.
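The relationship between assertion retrieval and instance checking can be sketched in a few lines. The following is a toy illustration, not the thesis's algorithm: the knowledge-base representation, the `instance_check` oracle, and the answer vocabulary are hypothetical stand-ins (a real system would delegate each K |= a:C test to a DL reasoner, e.g. one using ABox absorption).

```python
def instance_check(kb, individual, concept):
    """Hypothetical oracle for K |= a:C.  A real system would invoke a
    DL reasoner here; this toy stand-in only looks up asserted facts."""
    tbox, abox = kb
    return (individual, concept) in abox

def assertion_retrieval(kb, query_concept, answer_vocabulary):
    """Sketch of the assertion retrieval paradigm: return each qualifying
    individual together with a description assembled from an answer
    vocabulary, rather than the bare identifier that plain instance
    retrieval would return."""
    tbox, abox = kb
    individuals = {a for (a, _) in abox}
    answers = {}
    for a in sorted(individuals):
        if instance_check(kb, a, query_concept):
            # One instance-checking request per candidate concept:
            # this is the large volume of K |= a:C tests the text mentions.
            answers[a] = [c for c in answer_vocabulary
                          if instance_check(kb, a, c)]
    return answers
```

Even this toy version shows why instance checking dominates the cost: each answer triggers one check per concept in the answer vocabulary.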
282

Action Logic Programs

Drescher, Conrad 12 May 2011 (has links)
We discuss a new concept of agent programs that combines logic programming with reasoning about actions. These agent logic programs are characterized by a clear separation between the specification of the agent's strategic behavior and the underlying theory about the agent's actions and their effects. This separation makes agent logic programs a generic, declarative agent programming language that can be combined with an action representation formalism of one's choice. We present a declarative semantics for agent logic programs along with (two versions of) a sound and complete operational semantics, which combines the standard inference mechanisms for (constraint) logic programs with reasoning about actions.
283

Digital evidence: representation and assurance

Schatz, Bradley Lawrence January 2007 (has links)
The field of digital forensics is concerned with finding and presenting evidence sourced from digital devices, such as computers and mobile phones. The complexity of such digital evidence is constantly increasing, as is the volume of data which might contain evidence. Current approaches to interpreting and assuring digital evidence rely implicitly on the use of tools and representations made by experts in addressing the concerns of juries and courts. Current forensics tools are best characterised as not easily verifiable, lacking in ease of interoperability, and burdensome on human process. The tool-centric focus of current digital forensics practice impedes access to and transparency of the information represented within digital evidence as much as it assists, owing to the tight binding between a particular tool and the information that it conveys. We hypothesise that a general and formal representational approach will benefit digital forensics by enabling higher degrees of machine interpretation, facilitating improvements in tool interoperability and validation. Additionally, such an approach will increase human readability. This dissertation summarises research which examines at a fundamental level the nature of digital evidence and digital investigation, in order that improved techniques which address investigation efficiency and assurance of evidence might be identified. The work follows three themes related to this: representation, analysis techniques, and information assurance. The first set of results describes the application of a general purpose representational formalism towards representing diverse information implicit in event-based evidence, as well as domain knowledge and investigator hypotheses. This representational approach is used as the foundation of a novel analysis technique which uses a knowledge-based approach to correlate related events into higher level events, which correspond to situations of forensic interest.
The second set of results explores how digital forensic acquisition tools scale and interoperate, while assuring evidence quality. An improved architecture is proposed for storing digital evidence, analysis results and investigation documentation in a manner that supports arbitrary composition into a larger corpus of evidence. The final set of results focuses on assuring the reliability of evidence. In particular, it addresses assuring that timestamps, which are pervasive in digital evidence, can be reliably interpreted to a real world time. Empirical results are presented which demonstrate how simple assumptions cannot be made about computer clock behaviour. A novel analysis technique for inferring the temporal behaviour of a computer clock is proposed and evaluated.
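As a hedged illustration of what modelling clock behaviour can involve (the thesis's inference technique is richer than this and the empirical results show real clocks are not simply linear), one simple baseline model relates machine timestamps to a trusted reference by a least-squares fit of offset and skew:

```python
def fit_clock_drift(samples):
    """Ordinary least-squares fit of machine clock readings against a
    trusted reference: machine ~= offset + skew * reference.
    `samples` is a list of (reference_time, machine_time) pairs.
    This is only a sketch of a linear baseline model."""
    n = len(samples)
    mean_r = sum(r for r, _ in samples) / n
    mean_m = sum(m for _, m in samples) / n
    cov = sum((r - mean_r) * (m - mean_m) for r, m in samples)
    var = sum((r - mean_r) ** 2 for r, _ in samples)
    skew = cov / var
    offset = mean_m - skew * mean_r
    return offset, skew

def machine_to_real(t_machine, offset, skew):
    """Invert the fitted model to map a recovered timestamp back to
    an estimated reference time."""
    return (t_machine - offset) / skew
```

A model like this makes the failure mode concrete: if skew is silently assumed to be exactly 1 and offset 0, recovered timestamps drift away from real-world time.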
284

A framework for modelling spatial proximity

Brennan, Jane, Computer Science & Engineering, Faculty of Engineering, UNSW January 2009 (has links)
The concept of proximity is an important aspect of human reasoning. Despite the diversity of applications that require proximity measures, the most intuitive notion is that of spatial nearness. The aim of this thesis is to investigate the underpinnings of the notion of nearness, propose suitable formalisations and apply them to the processing of GIS data. More specifically, this work offers a framework for spatial proximity that supports the development of more intuitive tools for users of geographic data processing applications. Many of the existing spatial reasoning formalisms do not account for proximity at all, while others stipulate it by using natural language expressions as symbolic values. Some approaches suggest the association of spatial relations with fuzzy membership grades to be calculated for locations in a map using Euclidean distance. However, distance is not the only factor that influences nearness perception. Hence, previous work suggests that nearness should be defined from a more basic notion of influence area. I argue that this approach is flawed, and that nearness should rather be defined from a new, richer notion of impact area that takes both the nature of an object and the surrounding environment into account. A suitable notion of nearness considers the impact areas of both objects whose degree of nearness is assessed. This is opposed to the common approach of taking only one of the two objects into consideration, treating it as a reference against which the nearness of the other is assessed. Cognitive findings are incorporated to make the framework more relevant to the users of Geographic Information Systems (GIS) with respect to their own spatial cognition. GIS users bring a wealth of knowledge about physical space, particularly geographic space, into the processing of GIS data. This is taken into account by introducing the notion of context.
Context represents either an expert in the context field or information from the context field as collated by an expert. In order to evaluate and to show the practical implications of the framework, experiments are conducted on a GIS dataset incorporating expert knowledge from the Touristic Road Travel domain.
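To make the contrast concrete, here is a minimal sketch under assumed numeric "impact radius" parameters (the thesis's impact areas are richer, context-dependent regions, not single radii): the first function captures the single-reference, distance-only view the thesis argues against; the second treats both objects' impact symmetrically.

```python
import math

def reference_nearness(p, q, influence_radius):
    """Classical influence-area view: membership decays with Euclidean
    distance from a single reference object's influence radius.  This is
    the asymmetric approach the thesis criticises."""
    d = math.dist(p, q)
    return max(0.0, 1.0 - d / influence_radius)

def impact_nearness(p, q, impact_p, impact_q):
    """Sketch of a symmetric, impact-based degree of nearness: the impact
    radii of BOTH objects (hypothetical scalar stand-ins for the thesis's
    context-dependent impact areas) contribute to the assessment."""
    d = math.dist(p, q)
    reach = impact_p + impact_q
    if reach <= 0:
        return 0.0
    return max(0.0, 1.0 - d / reach)
```

The point of the sketch is the signature, not the formula: nearness takes two impact parameters, so a small object next to a large landmark is not judged the same way as two small objects at the same distance.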
285

Decision mechanism, knowledge representation, and software architecture for an intelligent control system

Malaviya, Anoop Kumar January 1998 (has links)
[Truncated abstract] This thesis analyses the problem of Intelligent Control for large industrial plants and suggests a hierarchical, distributed, object-oriented architecture for Intelligent Control. The architecture is called the MLIAC (Multi Level Intelligent Adaptive Control) Architecture. The MLIAC architecture is inspired by biological control systems (which are flexible, and are capable of adapting to unstructured environments with ease) and the success of the distributed architecture of SCADA (Supervisory Control and Data Acquisition) systems. The MLIAC Architecture structures the decision and control mechanism for the real-time properties, namely safety, liveness, and timeliness ... In addition, three case studies have been reported. The case studies cover the control of a Flexible Manufacturing System and Mine Products Quality Control. The results show that the MLIAC Knowledge Representation model meets the requirements of the Hayes-Roth benchmark for Knowledge Representation. The decisions taken are logically tractable. The software architecture is effective and easily implemented. The actual performance has been found to depend upon a number of factors discussed in this thesis. For the specification and design of the Potline MLIAC software, a CASE package ("Software Through Pictures") has been used. The Potline MLIAC software has been developed using C/C++, SQL, 4GL and an RDBMS based on a client-server model. For computer simulation, the Potline MLIAC software has been integrated with the MATLAB/SIMULINK package.
286

An SLDNF formalization for updates and abduction

Lakkaraju, Sai Kiran. January 2001 (has links)
Thesis (M.Sc. (Hons.)) -- University of Western Sydney, 2001. "A thesis submitted for the degree of Master of Science (Honours) - Computing and Information Technology at University of Western Sydney." Bibliography: leaves 93-98.
287

Relational transfer across reinforcement learning tasks via abstract policies

Marcelo Li Koga 21 November 2013 (has links)
When designing intelligent agents that must solve sequential decision problems, often we do not have enough knowledge to build a complete model for the problems at hand. Reinforcement learning enables an agent to learn behavior by acquiring experience through trial-and-error interactions with the environment. However, knowledge is usually built from scratch and learning the optimal policy may take a long time. In this work, we improve the learning performance by exploring transfer learning; that is, the knowledge acquired in previous source tasks is used to accelerate learning in new target tasks. If the tasks present similarities, then the transferred knowledge guides the agent towards faster learning. We explore the use of a relational representation that allows description of relationships among objects. This representation simplifies the use of abstraction and the extraction of the similarities among tasks, enabling the generalization of solutions that can be used across different, but related, tasks. This work presents two model-free algorithms for online learning of abstract policies: AbsSarsa(λ) and AbsProb-RL. The former builds a deterministic abstract policy from value functions, while the latter builds a stochastic abstract policy through direct search on the space of policies. We also propose the S2L-RL agent architecture, containing two levels of learning: an abstract level and a ground level. The agent simultaneously builds a ground policy and an abstract policy; not only can the abstract policy accelerate learning on the current task, but it can also guide the agent in a future task. Experiments in a robotic navigation environment show that these techniques are effective in improving the agent's learning performance, especially during the early stages of the learning process, when the agent is completely unaware of the new task.
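The ground-level learner can be illustrated with plain tabular Sarsa(λ). This sketch deliberately omits the relational abstraction that distinguishes AbsSarsa(λ), and the environment interface (`reset`/`step`/`actions`) is an assumption made for illustration, not an API from the thesis:

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, s, actions, eps):
    """Pick a random action with probability eps, else the greedy one."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def sarsa_lambda(env, episodes, alpha=0.1, gamma=0.95, lam=0.8, eps=0.2):
    """Minimal tabular Sarsa(λ) with accumulating eligibility traces.
    Q maps (state, action) pairs to values; E holds the traces that let
    a reward propagate back along the whole recent trajectory."""
    Q = defaultdict(float)
    for _ in range(episodes):
        E = defaultdict(float)
        s = env.reset()
        a = epsilon_greedy(Q, s, env.actions, eps)
        done = False
        while not done:
            s2, r, done = env.step(s, a)
            a2 = epsilon_greedy(Q, s2, env.actions, eps)
            # TD error; the terminal state contributes no bootstrap value.
            delta = r + (0.0 if done else gamma * Q[(s2, a2)]) - Q[(s, a)]
            E[(s, a)] += 1.0
            for key in list(E):
                Q[key] += alpha * delta * E[key]
                E[key] *= gamma * lam   # decay every trace each step
            s, a = s2, a2
    return Q
```

In the S2L-RL architecture described above, a learner of roughly this shape would run at the ground level while a second, abstract-level policy is induced from the same experience.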
288

Shapes of Knowledge: A multimodal study of six Swedish upper secondary students' meaning making and transduction of knowledge across essays and audiovisual presentations

Florén, Henrika January 2018 (has links)
No description available.
289

The impact of disjunction on reasoning under existential rules

Morak, Michael January 2014 (has links)
Ontological database management systems are a powerful tool that combine traditional database techniques with ontological reasoning methods. In this setting, a classical extensional database is enriched with an ontology, or a set of logical assertions, that describe how new, intensional knowledge can be derived from the extensional data. Conjunctive queries are therefore answered against this combined knowledge base of extensional and intensional data. Many languages that represent ontologies have been introduced in the literature. In this thesis we will focus on existential rules (also called tuple-generating dependencies or Datalog± rules), and three established languages in this area, namely guarded-based rules, sticky rules and weakly-acyclic rules. The main goal of the thesis is to enrich these languages with non-deterministic constructs (i.e. disjunctions) and investigate the complexity of answering conjunctive queries under these extended languages. As is common in the literature, we will distinguish between combined complexity, where the database, the ontology and the query are considered as input, and data complexity, where only the database is considered as input. The latter case is relevant in practice, as usually the ontology and the query can be considered as fixed, and are usually much smaller than the database itself. After giving appropriate definitions to extend the considered languages to disjunctive existential rules, we establish a series of complexity results, completing the complexity picture for each of the above languages, and four different query languages: arbitrary conjunctive queries, bounded (hyper-)treewidth queries, acyclic queries and atomic queries. For the guarded-based languages, we show a strong 2EXPTIME lower bound for general queries that holds even for fixed ontologies, and establishes 2EXPTIME-completeness of the query answering problem in this case.
For acyclic queries, the complexity can be reduced to EXPTIME, if the predicate arity is bounded, and the problem even becomes tractable for certain restricted languages, if only atomic queries are used. For ontologies represented by sticky disjunctive rules, we show that the problem becomes undecidable, even in the case of data complexity and atomic queries. Finally, for weakly-acyclic rules, we show that the complexity increases from 2EXPTIME to coN2EXPTIME in general, and from tractable to coNP in case of the data complexity, independent of which query language is used. After answering the open complexity questions, we investigate applications and relevant consequences of our results for description logics and give two generic complexity statements, respectively, for acyclic and general conjunctive query answering over description logic knowledge bases. These generic results allow for an easy determination of the complexity of this reasoning task, based on the expressivity of the considered description logic.
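For concreteness, a disjunctive existential rule (disjunctive TGD) has the general shape below; the second, instantiated rule is an illustrative example, not drawn from the thesis:

```latex
% General shape of a disjunctive tuple-generating dependency: a
% conjunction of body atoms implies a disjunction of (possibly
% existentially quantified) head conjunctions.
\[
  \forall \bar{X}\,\forall \bar{Y}\;
  \varphi(\bar{X},\bar{Y}) \;\rightarrow\;
  \bigvee_{i=1}^{n} \exists \bar{Z}_i\; \psi_i(\bar{X},\bar{Z}_i)
\]
% Illustrative guarded instance: every person either has some (possibly
% unknown) recorded parent, or is flagged as having unknown ancestry.
\[
  \mathit{person}(X) \;\rightarrow\;
  \exists Y\,\mathit{parentOf}(Y,X) \;\vee\; \mathit{unknownAncestry}(X)
\]
```

The disjunction in the head is precisely the non-deterministic construct whose cost the thesis measures: a reasoner must consider every disjunct as a possible way of satisfying the rule.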
290

Taxonomy and tagging: an analysis of the processes of organization and representation of legal information on the web

Santos, Naiara Andrade Malta January 2014 (has links)
The research was carried out with the aim of analyzing the taxonomy and tagging employed in the organization and representation of legal information on Brazilian legal websites. To this end, the Brazilian legal websites ranked among the country's 500 most accessed in December 2013 were first mapped, locating two legal websites (the JusBrasil portal and the portal of the Court of Justice of the State of São Paulo), which were examined for the availability of the various types of legal documentation. Next, the levels of taxonomy and tagging employed in the organization and representation of knowledge on the selected websites were identified and compared. It was also verified whether the terms of the CAPES knowledge table for the field of Law appear in the taxonomy and tagging of the STF legal thesaurus. Data were collected through participant observation and a form; the study follows a qualitative approach, and its results present taxonomy and tagging as allies in the organization and representation of legal knowledge in the portals studied, alongside the users of the JusBrasil portal, who participate collaboratively in the organization and representation of the legal knowledge available in the portal.
