  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Investigations into Satisfiability Search

Slater, Andrew, andrew.slater@csl.anu.edu.au January 2003 (has links)
In this dissertation we investigate theoretical aspects of some practical approaches used in solving and understanding search problems. We concentrate on the Satisfiability problem, which is a strong representative from search problem domains. The work develops general theoretical foundations to investigate some practical aspects of satisfiability search. This results in a better understanding of the fundamental mechanics for search algorithm construction and behaviour. A theory of choice or branching heuristics is presented, accompanied by results showing a correspondence of both parameterisations and performance when the method is compared to previous empirically motivated branching techniques. The logical foundations of the backtracking mechanism are explored alongside formulations for reasoning in relevant logics; this results in the development of a malleable backtracking mechanism that subsumes other intelligent backtracking proof construction techniques and allows the incorporation of proof rearrangement strategies. Moreover, empirical tests show that relevant backtracking outperforms all other forms of intelligent backtracking search tree construction methods. An investigation into modelling and generating real-world problem instances justifies a modularised problem model proposal, which is used experimentally to highlight the practicability of search algorithms for the proposed model and related domains.
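The branching-heuristic and backtracking ideas this abstract refers to can be illustrated with a minimal DPLL-style search. The sketch below is not from the dissertation: the clause representation (DIMACS-style signed integers) and the MOMS-like heuristic are standard textbook choices, chosen only to show where a pluggable branching heuristic sits in the search.

```python
def simplify(clauses, lit):
    """Assign lit True: drop satisfied clauses, delete the falsified literal."""
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def moms_heuristic(clauses):
    """Branch on the literal occurring most often in the shortest clauses,
    a simple empirically motivated heuristic of the kind the thesis analyses."""
    shortest = min(len(c) for c in clauses)
    counts = {}
    for c in clauses:
        if len(c) == shortest:
            for l in c:
                counts[l] = counts.get(l, 0) + 1
    return max(counts, key=counts.get)

def dpll(clauses, assignment=(), choose=moms_heuristic):
    if any(len(c) == 0 for c in clauses):
        return None                      # conflict: chronological backtrack
    if not clauses:
        return list(assignment)          # every clause satisfied
    for c in clauses:                    # unit propagation (forced choices)
        if len(c) == 1:
            return dpll(simplify(clauses, c[0]), assignment + (c[0],), choose)
    lit = choose(clauses)                # the branching heuristic decides here
    for branch in (lit, -lit):
        model = dpll(simplify(clauses, branch), assignment + (branch,), choose)
        if model is not None:
            return model
    return None

# (x1 v x2) & (-x1 v x3) & (-x2 v -x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]])  # a satisfying assignment, e.g. [1, 3, -2]
```

Swapping `choose` for another function is all it takes to compare branching strategies, which is the experimental setup a theory of branching heuristics needs.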
32

Noun and prepositional phrases in English and Vietnamese : a contrastive analysis

Bang, Nguyen, n/a January 1985 (has links)
This study aims to discuss the noun and prepositional phrases in English and in Vietnamese and their impact upon teaching and learning English in the Vietnamese situation. Attempts have been made to state the similarities and differences in noun and prepositional phrases in the two languages, and to raise and solve some difficulties and problems arising particularly from differences between English and Vietnamese. In this study, Contrastive Linguistics is concerned with the comparison of the two languages with a view to determining the differences and similarities between them. With this practical aim, the study tries to provide a model for comparison and to determine how and which of the phrases are comparable. It is hoped to provide as much information as is possible in a limited study of this kind, first on English noun and prepositional phrases, then on Vietnamese noun phrases. The study draws attention to differences, with examples. It analyses the heads of noun phrases in the two languages, as well as the pre- and postmodifications and their positions. It also analyses the uses of the prepositional phrases in the two languages. At the same time, it points out the kinds of errors made by Vietnamese learners in the above-mentioned areas and their causes. Finally, some suggestions are made for those who may be responsible for teaching English as a Foreign Language to younger pupils as well as adults, or to students at universities or colleges.
33

A General View of Normalisation through Atomic Flows

Gundersen, Tom 10 November 2009 (has links) (PDF)
Atomic flows are a geometric invariant of classical propositional proofs in deep inference. In this thesis we use atomic flows to describe new normal forms of proofs, of which the traditional normal forms are special cases; we also give several normalisation procedures for obtaining these normal forms. To present our results, we define and use a new deep-inference formalism called the functorial calculus, which is more flexible than the traditional calculus of structures. To our surprise, we are able to 1) normalise proofs without looking at their logical connectives or logical rules; and 2) normalise proofs in less than exponential time.
34

A probabilistic architecture for algorithm portfolios

Silverthorn, Bryan Connor 05 April 2013 (has links)
Heuristic algorithms for logical reasoning are increasingly successful on computationally difficult problems such as satisfiability, and these solvers enable applications from circuit verification to software synthesis. Whether a problem instance can be solved, however, often depends in practice on whether the correct solver was selected and its parameters appropriately set. Algorithm portfolios leverage past performance data to automatically select solvers likely to perform well on a given instance. Existing portfolio methods typically select only a single solver for each instance. This dissertation develops and evaluates a more general portfolio method, one that computes complete solver execution schedules, including repeated runs of nondeterministic algorithms, by explicitly incorporating probabilistic reasoning into its operation. This modular architecture for probabilistic portfolios (MAPP) includes novel solutions to three issues central to portfolio operation: first, it estimates solver performance distributions from limited data by constructing a generative model; second, it integrates domain-specific information by predicting instances on which solvers exhibit similar performance; and, third, it computes execution schedules using an efficient and effective dynamic programming approximation. In a series of empirical comparisons designed to replicate past solver competitions, MAPP outperforms the most prominent alternative portfolio methods. Its success validates a principled approach to portfolio operation, offers a tool for tackling difficult problems, and opens a path forward in algorithm portfolio design.
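The scheduling idea in this abstract — splitting a time budget across solvers using estimated performance distributions and dynamic programming — can be sketched in a toy form. This is not MAPP itself: the solver names, success probabilities, and discrete budget below are invented, and the runs are assumed independent so failure probabilities multiply.

```python
BUDGET = 10  # discrete time units

# Assumed P(solver finishes within t units), for t = 0..BUDGET (illustrative data)
success = {
    "solver_a": [0.0, 0.1, 0.3, 0.5, 0.6, 0.65, 0.7, 0.72, 0.74, 0.75, 0.76],
    "solver_b": [0.0, 0.05, 0.1, 0.2, 0.4, 0.6, 0.7, 0.75, 0.8, 0.82, 0.84],
}

def best_schedule(success, budget):
    """Maximise 1 - prod(1 - p_i(t_i)) subject to sum(t_i) <= budget,
    by a knapsack-style dynamic program over solvers and budget."""
    dp = [(0.0, {}) for _ in range(budget + 1)]  # (best prob, allocation) per budget
    for s in success:
        new = []
        for b in range(budget + 1):
            best = dp[b][0], {**dp[b][1], s: 0}  # option: give s no time
            for t in range(b + 1):
                prev_fail = 1.0 - dp[b - t][0]   # all earlier solvers fail
                prob = 1.0 - prev_fail * (1.0 - success[s][t])
                if prob > best[0]:
                    best = prob, {**dp[b - t][1], s: t}
            new.append(best)
        dp = new
    return dp[budget]

prob, alloc = best_schedule(success, BUDGET)  # e.g. 0.88 with {'solver_a': 4, 'solver_b': 6}
```

The DP works because maximising the success probability is the same as minimising the product of per-solver failure probabilities, which decomposes one solver at a time.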
35

Labels and Tags: A New Look at Naming

Slabey, Margaretta January 2007 (has links)
What meaning does a name have in a sentence? How do we escape the inevitable difficulties that arise in delineating an individual's meaning through one's speech? The need arises for a distinction between proper names on the basis of the kinds of objects to which they refer. This distinction can provide the theoretical tools needed to solve the problems of empty names, negative existential statements, cognitive significance and substitution failure. Through a study of these issues, the fallacies inherent in current theories of meaning for proper names become apparent, as they fail to provide adequate or complete solutions. By elucidating a distinction between two kinds of proper names, labels and tags, we are able to provide solutions to the problems of naming where other theories fail.
36

Solving MAXSAT by Decoupling Optimization and Satisfaction

Davies, Jessica 08 January 2014 (has links)
Many problems that arise in the real world are difficult to solve partly because they present computational challenges. Many of these challenging problems are optimization problems. In the real world we are generally interested not just in solutions but in the cost or benefit of these solutions according to different metrics. Hence, finding optimal solutions is often highly desirable and sometimes even necessary. The most effective computational approach for solving such problems is to first model them in a mathematical or logical language, and then solve them by applying a suitable algorithm. This thesis is concerned with developing practical algorithms to solve optimization problems modeled in a particular logical language, MAXSAT. MAXSAT is a generalization of the famous Satisfiability (SAT) problem that associates finite costs with falsifying various desired conditions, where these conditions are expressed as propositional clauses. Optimization problems expressed in MAXSAT typically have two interacting components: the logical relationships between the variables expressed by the clauses, and the optimization component involving minimizing the falsified clauses. The interaction between these components greatly contributes to the difficulty of solving MAXSAT. The main contribution of the thesis is a new hybrid approach, MaxHS, for solving MAXSAT. Our hybrid approach attempts to decouple these two components so that each can be solved with a different technology. In particular, we develop a hybrid solver that exploits two sophisticated technologies with divergent strengths: SAT for solving the logical component, and Integer Programming (IP) solvers for solving the optimization component. MaxHS automatically and incrementally splits the MAXSAT problem into two parts that are given to the SAT and IP solvers, which work together in a complementary way to find a MAXSAT solution.
The thesis investigates several improvements to the MaxHS approach and provides an empirical analysis of its behaviour in practice. The result is a new solver, MaxHS, which is shown to be the most robust existing solver for MAXSAT.
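The decoupling described above can be rendered as a toy implicit-hitting-set loop: a SAT oracle reports sets of soft clauses that cannot jointly hold, and an optimisation oracle finds a minimum-cost set of soft clauses to drop that hits every such set. Real MaxHS uses a CDCL SAT solver, an IP solver, and proper unsat-core extraction; here both oracles are brute force, the hard clauses are assumed satisfiable, and the instance is invented for illustration.

```python
from itertools import combinations

def satisfiable(clauses):
    """Brute-force SAT check (stand-in for the SAT solver)."""
    vars_ = sorted({abs(l) for c in clauses for l in c})
    for bits in range(2 ** len(vars_)):
        assign = {v: bool(bits >> i & 1) for i, v in enumerate(vars_)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def min_hitting_set(cores, n_soft):
    """Stand-in for the IP solver: smallest set of soft-clause indices
    intersecting every core, found by enumerating subsets by size."""
    for size in range(n_soft + 1):
        for subset in combinations(range(n_soft), size):
            if all(set(core) & set(subset) for core in cores):
                return set(subset)

def maxsat(hard, soft):
    """MaxHS-style loop: minimise the number of falsified soft clauses."""
    cores = []
    while True:
        drop = min_hitting_set(cores, len(soft))
        kept = [c for i, c in enumerate(soft) if i not in drop]
        if satisfiable(hard + kept):
            return drop  # dropping exactly these soft clauses is optimal
        # a real solver extracts a small unsat core; this toy oracle just
        # records all kept soft clauses as a (weak but valid) core
        cores.append([i for i in range(len(soft)) if i not in drop])

optimal_drop = maxsat([], [[1], [-1], [2]])  # drops one of the two contradictory units
```

The optimality argument mirrors the real algorithm: every feasible solution must hit every core, so the minimum hitting-set cost lower-bounds the optimum, and the first hitting set whose removal yields satisfiability attains it.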
38

Emotion and trauma : underlying emotions and trauma symptoms in two flooded populations

Nesbitt, Catherine January 2010 (has links)
Flood literature presents an inconsistent account of post-disaster distress, debating whether distress is pathological or normal and attempting to understand distress in terms of disaster variables. The literature therefore provides little guidance as to how to formulate difficulties in a clinically meaningful way reflective of individuals' experiences. The SPAARS model is presented as a model by which to reconcile these differences, and quantitative support for its concepts was studied within two flooded samples. Participants who were flooded in Carlisle in 2005 (n=32) and participants flooded in Morpeth in 2008 (n=29) provided two samples at different stages in flood recovery and facilitated a quasi-longitudinal sample for comparison of flood-related distress over time. Participants were asked to complete a survey pertaining to: basic emotions experienced during the flood event, basic emotions experienced after the flood, the Impact of Events Scale-Revised (IES-R), the Regulation of Emotions Questionnaire (REQ) and the Trauma Symptom Inventory (TSI). Findings suggest that a third of participants who were flooded experienced clinically significant levels of distress, even after four years. Both samples showed higher levels of impact symptoms on the IES-R compared to symptoms on the TSI. Anxiety and anger were significant in reported flood experiences both during and after the flooding. Flood-related variables and previous experiences had no effect on increased distress, but greater use of internal-dysfunctional emotion regulation strategies was related to increased impact and distress symptoms. Study findings and the SPAARS model are discussed in relation to previous flooding and PTSD literature, as well as clinical implications for the treatment of post-disaster distress and for the future management of flood-affected populations.
39

ON SIMPLE BUT HARD RANDOM INSTANCES OF PROPOSITIONAL THEORIES AND LOGIC PROGRAMS

Namasivayam, Gayathri 01 January 2011 (has links)
In the last decade, Answer Set Programming (ASP) and Satisfiability (SAT) have been used to solve combinatorial search problems and practical applications in which they arise. In each of these formalisms, a tool called a solver is used to solve problems. A solver takes as input a specification of the problem – a logic program in the case of ASP, and a CNF theory for SAT – and produces as output a solution to the problem. Designing fast solvers is important for the success of this general-purpose approach to solving search problems. Classes of instances that pose challenges to solvers can help in this task. In this dissertation we create challenging yet simple benchmarks for existing solvers in ASP and SAT. We do so by providing models of simple logic programs as well as models of simple CNF theories. We then randomly generate logic programs as well as CNF theories from these models. Our experimental results show that computing answer sets of random logic programs as well as models of random CNF theories with carefully chosen parameters is hard for existing solvers. We generate random logic programs with 2-literals, and our experiments show that it is hard for ASP solvers to obtain answer sets of purely negative and constraint-free programs, indicating the importance of these programs in the development of ASP solvers. An easy-hard-easy pattern emerges as we compute the average number of choice points generated by ASP solvers on randomly generated 2-literal programs with an increasing number of rules. We provide an explanation for the emergence of this pattern in these programs. We also theoretically study the probability of existence of an answer set for sparse and dense 2-literal programs. We consider simple classes of mixed Horn formulas with purely positive 2-literal clauses and purely negated Horn clauses. First we consider a class of mixed Horn formulas wherein each formula has m 2-literal clauses and k-literal negated Horn clauses.
We show that formulas generated from the phase-transition region of this class are hard for complete SAT solvers. The second class of mixed Horn formulas we consider is obtained from the completion of a certain class of random logic programs. We show the appearance of an easy-hard-easy pattern as we generate formulas from this class with increasing numbers of clauses, and that the formulas generated in the hard region can be used as benchmarks for testing incomplete SAT solvers.
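The benchmark-generation idea in this abstract — drawing fixed-shape instances uniformly at random and tuning hardness via a density parameter — can be sketched with a standard random k-CNF generator. This is not the dissertation's generator for 2-literal programs or mixed Horn formulas; it only illustrates the methodology. For random 3-CNF, the empirically hard region sits near a clause-to-variable ratio of about 4.27.

```python
import random

def random_k_cnf(n_vars, n_clauses, k=3, seed=None):
    """Uniform random k-CNF: each clause picks k distinct variables
    and negates each independently with probability 1/2."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in chosen])
    return clauses

# sweep the clause-to-variable ratio through the phase-transition region
formulas = {ratio: random_k_cnf(50, int(50 * ratio), seed=0)
            for ratio in (2.0, 4.27, 6.0)}
```

Plotting solver effort (e.g. choice points) against the ratio for such families is what produces the easy-hard-easy curves the abstract describes.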
40

Learning to think, thinking to learn : dispositions, identity and communities of practice : a comparative study of six N.Z. farmers as practitioners.

Allan, Janet K January 2002 (has links)
The aim of this research is to explore the question of how farmers learn, in constructing knowledge both in and for practice. It seeks to identify how they gain new ideas, make changes, develop to a level of expertise and who and what contribute to this process. The rapidity of change in a high-tech environment, combined with globalisation, the new economy and the knowledge age, means that farmers are living their lives in 'fast forward' mode. There is so much new technology, research and development available that the ability to identify information relevant to a particular farming practice and to process it to knowledge is an increasing challenge. Six central South Island (N.Z.) farmers were selected purposively as case studies. The range of case profiles provides for comparison and contrast of the relative importance of formal qualifications, differences between sheep/beef farmers and dairy farmers, levels of expertise, age and experiences. The self-rating of the farmers enables a comparison of lower and higher performers, identifying characteristics which enable insight into why some farmers consistently lead new practice and why others are reluctant followers. The research is qualitative in design and approached from a constructivist and interpretive paradigm. Socially and experientially based, it seeks to understand the experiences of the subjects through in-depth interviews and observations. This study identifies farmers as social learners although working independently, in relative geographical isolation and often, social isolation. It concludes that these farmers learn through participation in the practice of farming. This practice includes a constellation of communities of practice, which may be resource-rich or resource-poor, depending on the range and depth of the farmer's involvement.
Through full and committed participation in these practice communities and associated constellations, the practitioner's identity evolves, encouraging new practices, ideas and innovation. This study emphasises that expertise is not a permanent state but requires evolving identity, knowledge and dispositional ability for maintenance and growth within a culture of practice. Emergent grounded theory suggests that dispositional knowledge underpins the construction and use of all knowledge; that construction and use of higher-order propositional and procedural knowledge requires higher-order dispositional knowledge; and that mastery is developed through evolving identity, dispositions, leadership and learning, socioculturally constructed through resource-rich constellations of communities of practice.
