51 |
Advances in Answer Set Planning. Polleres, Axel. 27 August 2003 (has links) (PDF)
Planning has been a challenging research area since the early days of Artificial Intelligence. The planning problem is the task of finding a sequence of actions leading an agent from a given initial state to a desired goal state. Whereas classical planning adopts restrictive assumptions such as complete knowledge about the initial state and deterministic action effects, in real-world scenarios we often have to face incomplete knowledge and non-determinism. Classical planning languages and algorithms do not take these facts into account, so there is a strong need for formal languages describing such non-classical planning problems on the one hand, and for (declarative) methods for solving these problems on the other. In this thesis, we present the action language Kc, which is based on flexible action languages from the knowledge representation community and extends them with useful concepts from logic programming. We define two basic semantics for this language, which reflect optimistic and secure (i.e. sceptical) plans in the presence of incomplete information or nondeterminism. These basic semantics are then extended to planning with action costs, where each action can have an assigned cost value; here we address optimal plans as well as plans that stay within a certain overall cost limit. Next, we develop efficient (i.e. polynomial) transformations from planning problems described in our language Kc to disjunctive logic programs, which are then evaluated under the so-called Answer Set Semantics. In this context, we introduce a general new method for problem solving in Answer Set Programming (ASP) that takes the genuine "guess and check" paradigm of ASP into account and allows us to integrate separate "guess" and "check" programs into a single logic program. Based on these methods, we have implemented the planning system DLVK. We discuss problem solving and knowledge representation in Kc using DLVK by means of several examples. The proposed methods and the DLVK system are also evaluated experimentally and compared against related approaches. Finally, we present a practical application scenario from the area of design and monitoring of multi-agent systems. As we will see, this monitoring approach is not restricted to our particular formalism. / Austrian Science Funds (FWF)
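The thesis's own transformation targets disjunctive programs under DLV; as a hedged illustration only, the sketch below shows the basic "guess and check" idiom in ASP through the clingo Python API (clingo is assumed here purely as an available solver, and the maximum-independent-set toy problem is ours, not an example from the thesis).

```python
# Minimal sketch of the ASP "guess and check" idiom, assuming clingo as
# the solver (the thesis itself builds on DLV). We guess a subset of
# nodes and check that no two chosen nodes are joined by an edge.
import clingo

PROGRAM = """
node(1..4). edge(1,2). edge(2,3). edge(3,4).

% guess: any subset of nodes may be selected
{ in(X) : node(X) }.

% check: reject any candidate containing an edge
:- in(X), in(Y), edge(X,Y).

#maximize { 1,X : in(X) }.
#show in/1.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("candidate:", m.symbols(shown=True)))
```

The choice rule guesses a candidate, the integrity constraint checks it, and the optimization statement selects preferred answer sets, mirroring at small scale the guess/check/optimize structure the thesis exploits for planning.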
|
52 |
The DLVK System for Planning with Incomplete Knowledge. Polleres, Axel. 01 February 2001 (has links) (PDF)
This thesis presents the planning system DLVK, which supports the novel planning language K. The language allows AI planning problems to be represented in a declarative way and is capable of expressing incomplete knowledge as well as nondeterministic effects of actions. After explaining some basics, the syntax and semantics of this language will be formally described, and some results on its computational complexity will be given, proving that K is capable of expressing hard planning problems, possibly involving incomplete knowledge or uncertainty, such as secure (conformant) planning. A translation from various planning tasks specified in K to a logic programming framework will be shown subsequently. We have implemented a prototype of a planning system, DLVK, on top of the disjunctive logic programming system DLV to show the practical use of our translation. This prototype will be presented in detail. Finally, examples and experimental results will be given, together with an outlook on further research. / Austrian Science Funds (FWF)
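To make the flavor of such a translation concrete, here is a minimal, horizon-bounded planning encoding in ASP, again run through the clingo Python API as a stand-in solver. The toy domain (an agent moving between two rooms) and all predicate names are invented for illustration and do not reproduce the thesis's actual K-to-logic-program translation.

```python
# A schematic, horizon-bounded planning encoding in ASP: actions are
# guessed per time step, preconditions are checked by constraints, and
# fluents carry over by inertia. Toy data, assumed solver: clingo.
import clingo

PROGRAM = """
#const horizon = 2.
time(0..horizon).
door(a,b). door(b,a).

at(a,0).                                 % initial state

% guess at most one move action per step
{ move(R1,R2,T) : door(R1,R2) } 1 :- time(T), T < horizon.

:- move(R1,_,T), not at(R1,T).           % executability (precondition)
at(R2,T+1) :- move(_,R2,T).              % effect
moved(T)   :- move(_,_,T).
at(R,T+1)  :- at(R,T), T < horizon, not moved(T).   % inertia

:- not at(b,horizon).                    % the goal must hold at the end
#show move/3.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("plan:", m.symbols(shown=True)))
```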
|
53 |
Answer set programming probabilístico / Probabilistic Answer Set Programming. Morais, Eduardo Menezes de. 10 December 2012 (has links)
This dissertation introduces a technique called Probabilistic Answer Set Programming (PASP), which allows complex theories to be modeled and their consistency checked against a set of statistical data. We propose resolution methods based on a reduction to the probabilistic satisfiability problem (PSAT), as well as a Turing-reduction method to ASP.
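To illustrate the PSAT side of such a reduction, the following toy consistency check enumerates possible worlds and tests, by linear programming, whether any probability distribution over them matches the asserted probabilities. The three formulas and their probabilities are invented, and the brute-force enumeration is illustrative only; it does not scale.

```python
# Toy PSAT consistency check by linear programming: one LP column per
# possible world, one equality row per asserted formula probability.
from itertools import product
import numpy as np
from scipy.optimize import linprog

formulas = [lambda x, y: x,          # P(x)       = 0.7
            lambda x, y: y,          # P(y)       = 0.6
            lambda x, y: x and y]    # P(x and y) = 0.2
probs = [0.7, 0.6, 0.2]

worlds = list(product([False, True], repeat=2))
A_eq = np.array([[1.0 if f(*w) else 0.0 for w in worlds] for f in formulas]
                + [[1.0] * len(worlds)])       # the distribution sums to 1
b_eq = np.array(probs + [1.0])

res = linprog(c=np.zeros(len(worlds)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(worlds))
print("consistent" if res.success else "inconsistent")  # -> inconsistent
```

Here the assignment is inconsistent because P(x and y) must be at least P(x) + P(y) - 1 = 0.3, and the LP correctly reports infeasibility.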
|
54 |
Complétion combinatoire pour la reconstruction de réseaux métaboliques, et application au modèle des algues brunes Ectocarpus siliculosus / Combinatorial completion for metabolic network reconstruction, and application to the model organism for brown algae Ectocarpus siliculosus. Prigent, Sylvain. 14 November 2014 (has links)
In this thesis we focused on developing a comprehensive method for reconstructing metabolic networks of unconventional biological species for which little information is available. Classically, this reconstruction proceeds in three steps: creating a metabolic draft from a genome, completing the network, and verifying the result. We were particularly interested in the hard combinatorial optimization problem posed by the gap-filling (completion) step, which we solved with a constraint-programming paradigm: Answer Set Programming (ASP). Our modifications to a pre-existing method improved both the computation time needed to solve this combinatorial problem and the quality of the modeling. The entire reconstruction process was applied to the brown algal model Ectocarpus siliculosus, allowing us to reconstruct the first metabolic network of a brown macroalga. This network improved our understanding of the species' metabolism and the annotation of its genome.
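As a hedged sketch of the gap-filling step (not the thesis's actual tool chain), the following ASP encoding, run through the clingo Python API as an assumed solver, chooses a minimum set of candidate reactions from a reference database so that target metabolites become producible from seed metabolites. All facts are toy data.

```python
# Schematic ASP gap-filling: guess which reference reactions to add,
# derive metabolite producibility from seeds, require targets to be
# producible, and minimize the number of added reactions.
import clingo

PROGRAM = """
% draft network
reaction(r1). reactant(r1,seed1). product(r1,m1).

% reference database of candidate reactions
candidate(c1). reactant(c1,m1).    product(c1,m2).
candidate(c2). reactant(c2,m2).    product(c2,target).
candidate(c3). reactant(c3,seed1). product(c3,target).

seed(seed1).
target(target).

% guess which candidates to import
{ add(C) : candidate(C) }.
active(R) :- reaction(R).
active(C) :- add(C).

% a metabolite is producible if all reactants of an active reaction are
producible(M) :- seed(M).
producible(M) :- product(R,M); active(R); producible(N) : reactant(R,N).

:- target(M), not producible(M).
#minimize { 1,C : add(C) }.
#show add/1.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda m: print("completion:", m.symbols(shown=True)))
# the optimum adds only c3 (one reaction) rather than the pair c1+c2
```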
|
56 |
En webbundersökning med panel : Vilka variabler påverkar om, hur tidigt och vad panelmedlemmar svarar? / A web panel survey : Which variables influence if, how early and what panel members respond? Elmdahl, Martin; Tärnemark, Jonas. January 2014 (has links)
This report gives a background description of the data collection company Norstat and how it runs a tracking survey with an internet panel. It then investigates relationships between variables describing the persons in the survey and the way these persons answer it. The report also examines how long a survey needs to stay in the field and whether received answers differ depending on when a person responded. A detailed description of the data processing and of the variables in the data material is also given. Earlier research on panels and web surveys is reviewed to give the reader a nuanced picture of the pros and cons of web surveys. Logistic regression methods are used to examine which variables influence whether a person answers the survey at all, and which make a person answer early or late. Other methods used are descriptive statistics and χ2-tests. The results show that factors influencing how much spare time a person has have the greatest impact on whether and how early the survey is completed. A field period extending to six days after the invitation is sent out is often enough for all categories of persons to be roughly equally represented. The optimal field period differs depending on whether a study aims to describe the entire country's population or only specific categories of it; for a particular category of persons, a field period ending the day after the invitation is sent out can sometimes suffice to collect enough answers.
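As a sketch of the kind of analysis described, the following snippet fits a logistic regression of survey response on background variables. The data frame, the column names, and the effect sizes are entirely synthetic stand-ins for Norstat's panel data.

```python
# Minimal logistic-regression sketch: model whether a panel member
# responds as a function of invented background variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "employed": rng.integers(0, 2, n),   # crude proxy for spare time
})
# synthetic outcome: older and non-employed members respond more often
logit = -2.0 + 0.03 * df["age"] - 0.8 * df["employed"]
df["responded"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["age", "employed"]])
model = sm.Logit(df["responded"].astype(int), X).fit(disp=0)
print(model.summary())   # coefficients and p-values per covariate
```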
|
57 |
Innovative qPCR using interfacial effects to enable low threshold cycle detection and inhibition relief. Harshman, D. K., Rao, B. M., McLain, J. E., Watts, G. S., Yoon, J.-Y. 04 September 2015 (has links)
UA Open Access Publishing Fund / Molecular diagnostics offers quick access to information but fails to operate at the speed required for clinical decision-making. Our novel methodology, droplet-on-thermocouple silhouette real-time polymerase chain reaction (DOTS qPCR), uses interfacial effects for droplet actuation, inhibition relief, and amplification sensing. DOTS qPCR has sample-to-answer times as short as 3 min 30 s. In infective endocarditis diagnosis, DOTS qPCR demonstrates reproducibility, differentiation of antibiotic susceptibility, subpicogram limit of detection, and thermocycling speeds of up to 28 s/cycle in the presence of tissue contaminants. Langmuir and Gibbs adsorption isotherms are used to describe the decreasing interfacial tension upon amplification. Moreover, a log-linear relationship with low threshold cycles is presented for real-time quantification by imaging the droplet-on-thermocouple silhouette with a smartphone. DOTS qPCR resolves several limitations of commercially available real-time PCR systems, which rely on fluorescence detection, have substantially higher threshold cycles, and require expensive optical components and extensive sample preparation. Due to the advantages of low threshold cycle detection, we anticipate extending this technology to biological research applications such as single cell, single nucleus, and single DNA molecule analyses. Our work is the first demonstrated use of interfacial effects for sensing reaction progress, and it will enable point-of-care molecular diagnosis of infections.
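The log-linear relationship between threshold cycle and starting quantity can be sketched numerically as follows. The dilution series and Cq values below are hypothetical, not data from the paper.

```python
# Standard-curve sketch: the threshold cycle Cq falls linearly in log10
# of starting template, and the slope gives the amplification efficiency.
import numpy as np

quantity = np.array([1e6, 1e5, 1e4, 1e3])   # starting copies (dilution series)
cq = np.array([12.1, 15.4, 18.8, 22.2])     # hypothetical threshold cycles

slope, intercept = np.polyfit(np.log10(quantity), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1        # 1.0 would mean perfect doubling
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")

# quantify an unknown sample from its measured Cq
cq_unknown = 17.0
print("estimated copies:", 10 ** ((cq_unknown - intercept) / slope))
```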
|
58 |
Why-Query Support in Graph Databases. Vasilyeva, Elena. 28 March 2017 (has links) (PDF)
In the last few decades, database management systems have become powerful tools for storing large amounts of data and executing complex queries over it. Beyond this extended functionality, novel types of databases have appeared, such as triple stores and distributed databases. Graph databases implementing the property-graph model belong to this line of development and provide a new way of storing and processing data in the form of a graph, with nodes representing entities and edges describing the connections between them. This makes them suitable for keeping data without a rigid schema, for use cases like social-network processing or data integration. In addition to flexible storage, graph databases offer new querying possibilities in the form of path queries, detection of connected components, pattern matching, etc.
However, the schema flexibility and graph queries come with additional costs. With limited knowledge about the data and little experience in constructing complex queries, users can write queries that deliver unexpected results. Forced to debug queries manually and overwhelmed by the number of query constraints, users can become frustrated with graph databases. What is really needed is to improve the usability of graph databases by providing debugging and explanation functionality for such situations. We have to assist users in discovering the reasons for unexpected results and what can be done to fix them.
The unexpectedness of result sets can be expressed in terms of their size or their content. In the first case, users face the empty-answer, too-many-answers, or too-few-answers problems. In the second case, users care about the result content and miss some expected answers or wonder about the presence of unexpected ones. Since receiving no results or too many results is typical when querying graph databases, this thesis focuses on the problems of the first group, whose solutions are usually represented by why-empty, why-so-few, and why-so-many queries. Our objective is to extend graph databases with debugging functionality in the form of why-queries for unexpected query results, using pattern matching queries, one of the general graph-query types, as an example. We present a comprehensive analysis of existing debugging tools in the state-of-the-art research and identify their common properties.
From these, we formulate the features of why-queries discussed in this thesis: holistic support of different cardinality-based problems, explanation of unexpected results and query reformulation, comprehensive analysis of explanations, and non-intrusive user integration. To support different cardinality-based problems, we develop methods for explaining no, too few, and too many results. To cover different kinds of explanations, we present two types: subgraph-based and modification-based explanations. The first type identifies the reasons for unexpectedness in terms of query subgraphs and delivers differential graphs as answers. The second reformulates queries so that they produce better results. Since graph queries are complex structures with multiple constraints, we investigate different ways of generating explanations, from the most general, which considers only the query topology, through coarse-grained rewriting, up to fine-grained modification that allows fine changes of predicates and topology. To provide a comprehensive analysis of explanations, we propose comparing them on three levels: syntactic description, content, and result-set size. To deliver user-aware explanations, we discuss two models for non-intrusive user integration in the generation process.
With the techniques proposed in this thesis, we provide the fundamentals for debugging pattern-matching queries that deliver no, too few, or too many results in graph databases implementing the property-graph model.
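As a toy illustration of the why-empty idea (not the thesis's algorithm), the following snippet tests edge-deleted subpatterns of a failing pattern query to report which pattern edges prevent any match. The graph data, the pattern, and the one-edge-at-a-time strategy are all invented for this sketch.

```python
# Why-empty sketch: when a pattern has no subgraph-isomorphic match in
# the data graph, report which single pattern edge, once dropped, makes
# the reduced pattern matchable.
import networkx as nx
from networkx.algorithms import isomorphism

data = nx.Graph([("alice", "bob"), ("bob", "carol")])
pattern = nx.Graph([("x", "y"), ("y", "z"), ("z", "x")])  # a triangle

def matches(g, p):
    return isomorphism.GraphMatcher(g, p).subgraph_is_isomorphic()

if not matches(data, pattern):
    for edge in list(pattern.edges):
        sub = pattern.copy()
        sub.remove_edge(*edge)
        if matches(data, sub):
            print(f"query is empty; dropping pattern edge {edge} yields matches")
```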
|
59 |
SlimRank: um modelo de seleção de respostas para perguntas de consumidores / SlimRank: an answer selection model for consumer questions. Criscuolo, Marcelo. 16 November 2017 (has links)
The increasing availability of user-generated content in community Q&A sites has driven the advance of Question Answering (QA) models based on reuse. This approach can be implemented through the task of Answer Selection (AS), which consists of finding the best answer for a given question in a pre-selected pool of candidate answers. In recent years, approaches based on distributed word vectors and deep neural networks, in particular Convolutional Neural Networks (CNNs), have achieved good results on the AS task. However, most models are evaluated on corpora of short, objective, well-formed questions; complex text structures are rarely considered. Consumer questions, the main form of information seeking in community Q&A sites, forums, and customer services, can be quite complex: in general, they consist of multiple interrelated sentences that are subjective, use layman's terms, and often contain an excess of detail that may not be particularly relevant. These characteristics make the answer selection task harder. In this work, we propose an answer selection model for consumer questions. The contributions of this work are: (i) a definition of the research object, consumer questions; (ii) a new dataset of this kind of question, called MilkQA; and (iii) an answer selection model, called SlimRank. MilkQA was created from an archive of questions and answers collected by the customer service of a well-known public agricultural research institution (Embrapa). It contains 2.6 thousand question-answer pairs selected and anonymized by human annotators guided by the definition proposed in this work. The analysis of these questions led to the development of SlimRank, which combines the representation of texts as semantic graphs with CNN architectures. SlimRank was evaluated on MilkQA and compared with baselines and two state-of-the-art answer selection models. Its results were far superior to the baselines and comparable to the state of the art, but with a significant reduction in computational time. We believe that representing texts as semantic graphs combined with CNNs is a promising approach to the challenges posed by the unique characteristics of consumer questions.
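For contrast with the neural models discussed, a minimal answer-selection baseline can be written in a few lines: rank the candidates by TF-IDF cosine similarity to the question. This sketch is not SlimRank, and the question and candidate answers below are invented.

```python
# Baseline answer selection: score each candidate answer against the
# question with TF-IDF cosine similarity and pick the highest scorer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "How should raw milk be stored before pasteurization?"
candidates = [
    "Keep raw milk refrigerated at 4 degrees Celsius until pasteurization.",
    "Our institution was founded to support agricultural research.",
    "Pasteurized milk can be frozen for longer storage.",
]

vec = TfidfVectorizer().fit([question] + candidates)
scores = cosine_similarity(vec.transform([question]),
                           vec.transform(candidates))[0]
best = max(range(len(candidates)), key=scores.__getitem__)
print("best answer:", candidates[best])   # the refrigeration answer wins
```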
|
60 |
A proporcionalidade como princípio epocal do direito: o (des)velamento da discricionariedade judicial a partir da perspectiva da nova crítica do direito / Proportionality as an epochal principle of law: the (un)veiling of judicial discretion from the perspective of the new critique of law. Morais, Fausto Santos de. 18 February 2010 (has links)
Brazilian legal hermeneutics has been searching for alternatives to deal with the challenges imposed by neoconstitutionalism, under which fundamental-rights norms claim the greatest possible effectiveness and therefore have their concretization guaranteed by instruments available to constitutional jurisdiction. Yet, in the face of this concretizing revolution, Brazilian law lacks a more solid theorization of the role of sources, norms, and interpretation. Intending to fill this gap, legal hermeneutics in general, and Brazilian legal hermeneutics in particular, has adopted proportionality as the guiding hermeneutic criterion of legal thought, taking Robert Alexy as its theoretical mentor. Proportionality is thus used to solve the most diverse problems imposed on the law, serving, for instance, as a control of the convenience of legislative decisions, as a criterion for deciding the unconstitutionality of norms, and as an element for fixing the essential core of fundamental rights. / Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
|