About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Multiagent classical planning

Crosby, Matthew David January 2014 (has links)
Classical planning problems consist of an environment in a predefined state; a set of deterministic actions that, under certain conditions, change the state of the environment; and a set of goal conditions. A solution to a classical planning problem is a sequence of actions that leads from the initial state to a state satisfying the problem’s goal conditions. There are many methods for finding solutions to classical planning problems, and a popular technique is to exploit structures that commonly occur. One such structure, apparent in many planning domains, is a breakdown of the problem into multiple agents. However, methods for finding and exploiting multiagent structure are not prevalent in the literature and are currently not competitive. This thesis sets out to rectify this problem.

Its first main contribution is a domain-independent algorithm for extracting multiagent structure from classical planning problems. The algorithm relies on identifying a generalisable property of agents in planning: agents are entities with an internal state, a part of the planning problem that, under a certain distribution of actions, only they can modify. Once this is appropriately formalised, the decomposition algorithm is introduced and is shown to produce identifiably multiagent decompositions over all of the classical planning domains used in the International Planning Competitions, in certain cases finding more detailed decompositions than those used by humans.

Solving multiagent planning problems can be challenging because a solution may require complex inter-agent coordination. The second main contribution of the thesis is a heuristic planning algorithm that effectively exploits the structure of decomposed domains. The algorithm transforms the coordination problem into a process of subgoal generation that can be solved efficiently under a well-known relaxation in planning. The generated subgoals guide the search so that it always works within a single agent’s subproblem at a time. The algorithm is evaluated and shown to greatly outperform current state-of-the-art planners over decomposable domains.

The thesis also discusses possible extensions of this work to the multiagent concepts of self-interested agents and concurrent actions. Results from the multiagent planning literature are improved upon, and a new solution concept is presented that accounts for the ‘farsightedness’ inherent in planning. A method is then presented that can find stable solutions for a certain class of multiagent planning problems. Finally, a new method is introduced for modelling concurrent actions that allows them to be written without requiring knowledge of every other agent in the domain, and it is shown how such problems can be solved by translation to single-agent planning.
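The core of the decomposition idea — an agent owns any part of the state that only its own actions can change — can be sketched in a few lines. The following is a minimal illustration, not the thesis algorithm: it assumes grounded actions have already been assigned to candidate entities, and simply checks which state variables are modified by exactly one entity.

```python
from collections import defaultdict

def decompose(actions):
    """Group candidate entities into agents by internal state.

    `actions` is an iterable of (entity, affected_vars) pairs, where
    `entity` is the candidate agent executing the action and
    `affected_vars` are the state variables its effects touch.
    A variable is *internal* to an entity if no other entity's actions
    ever modify it; an entity with a non-empty internal state
    qualifies as an agent.
    """
    modifiers = defaultdict(set)          # variable -> entities that modify it
    for entity, affected_vars in actions:
        for var in affected_vars:
            modifiers[var].add(entity)

    internal = defaultdict(set)           # entity -> its internal variables
    for var, ents in modifiers.items():
        if len(ents) == 1:
            internal[next(iter(ents))].add(var)

    return {e: vs for e, vs in internal.items() if vs}

# Toy logistics-like example: each truck's position is internal to it,
# while the package's location is shared (modified by both trucks).
actions = [
    ("truck1", {"at-truck1", "at-pkg1"}),
    ("truck2", {"at-truck2", "at-pkg1"}),
]
print(decompose(actions))  # {'truck1': {'at-truck1'}, 'truck2': {'at-truck2'}}
```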
2

Additive abstraction-based heuristics

Yang, Fan 06 1900 (has links)
In this thesis, we study additive abstraction-based heuristics, both theoretically and empirically. First, we present formal general definitions for abstractions that extend to general additive abstractions. We show that the general definition makes proofs of admissibility, consistency, and additivity easier, by proving that several previous methods for defining abstractions and additivity satisfy three simple conditions. Second, we investigate two general methods for defining additive abstractions and run experiments to determine the effectiveness of these methods on two benchmark state spaces: TopSpin and the Pancake puzzle. Third, we propose that the accuracy of the heuristics generated by abstraction can be improved by checking for infeasibility. Theory and experiments demonstrate how to detect infeasibility and how to apply this technique in different domains. Finally, we explore applications of additive abstraction-based heuristics in two state spaces with nonuniform edge costs: the Sequential Ordering Problem (SOP) and the weighted Pancake puzzle. We formalize a novel way of generating additive and non-additive heuristics for these state spaces, and investigate the key concepts behind generating good additive and non-additive abstractions. Experiments show that, compared to some simple alternative heuristics, well-chosen abstractions can enhance the quality of suboptimal solutions for large SOP instances and reduce search time for the weighted Pancake problems.
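The additivity requirement — each action's cost is counted by at most one abstraction, so the component heuristic values can be summed without losing admissibility — can be illustrated with the classic disjoint decomposition for the sliding-tile puzzle. The sketch below is illustrative only, not from the thesis: each abstraction charges only for moves of its own tiles, so per-group Manhattan distances for disjoint tile groups may be added.

```python
def manhattan_group(state, group, width=4):
    """Sum of Manhattan distances for the tiles in `group`.

    `state` maps tile -> current cell index; the goal cell of tile t is t.
    Only moves of tiles in `group` are charged, so heuristics for
    disjoint groups are additive (each move is counted at most once).
    """
    total = 0
    for tile in group:
        cur, goal = state[tile], tile
        total += abs(cur // width - goal // width) + abs(cur % width - goal % width)
    return total

def additive_h(state, groups, width=4):
    # Admissible because the groups partition the tiles.
    return sum(manhattan_group(state, g, width) for g in groups)

# 15-puzzle example: partition tiles 1..15 into two disjoint groups.
groups = [set(range(1, 8)), set(range(8, 16))]
state = {t: (t + 1) % 16 for t in range(1, 16)}   # every tile shifted one cell
print(additive_h(state, groups))
```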
3

Multimapping Abstraction and State-set Search Theory

Pang, Bo Unknown Date
No description available.
4

Additive abstraction-based heuristics

Yang, Fan Unknown Date
No description available.
5

Symbolic Bidirectional Breadth-First Heuristic Search

Richards, Simon Kim 11 December 2004 (has links)
A Reduced Ordered Binary Decision Diagram (BDD) is a symbolic data structure introduced to the model checking community by Bryant in 1986 to help verify properties of systems with very large state spaces. Recently, BDDs have been used in heuristic search algorithms as an approach to representing and solving search problems with very large state spaces. However, these algorithms are still not memory-efficient. This thesis presents a symbolic heuristic search algorithm that uses BDDs in a memory-efficient way by performing bidirectional breadth-first heuristic search. The approach is evaluated empirically against existing symbolic methods and is shown to provide a significant improvement in performance.
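The control structure of bidirectional breadth-first search is easy to sketch. The version below is a simplified, explicit-state illustration, not the thesis algorithm: it represents frontiers as Python sets, whereas a symbolic implementation would store each frontier as a BDD and expand it with a single relational image computation; the alternation and meet-in-the-middle test are the same.

```python
def bidirectional_bfs(start, goal, successors, predecessors):
    """Alternating breadth-first search from both ends (unit edge costs).

    `successors(s)` / `predecessors(s)` return neighbour sets. Each
    frontier is a set of states; in a symbolic (BDD-based) version the
    sets would be BDDs and the per-state loop an image operation.
    Returns the length of a shortest path, or None if there is none.
    """
    fwd, bwd = {start}, {goal}
    seen_f, seen_b = {start}, {goal}
    depth = 0
    while fwd and bwd:
        if fwd & bwd:                  # frontiers meet in the middle
            return depth
        # Expand the smaller frontier first (cheaper layer).
        if len(fwd) <= len(bwd):
            fwd = {n for s in fwd for n in successors(s)} - seen_f
            seen_f |= fwd
        else:
            bwd = {n for s in bwd for n in predecessors(s)} - seen_b
            seen_b |= bwd
        depth += 1
    return None

# Toy example: states 0..9 on a line, moves of +-1.
succ = pred = lambda s: {s - 1, s + 1} & set(range(10))
print(bidirectional_bfs(0, 9, succ, pred))  # 9
```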
6

Bootstrap Learning of Heuristic Functions

Jabbari Arfaee, Shahab 11 1900 (has links)
We investigate the use of machine learning to create effective heuristics for single-agent search. Our method aims to generate a sequence of heuristics from a given weak heuristic h_0 and a set of unlabeled training instances using a bootstrapping procedure. The training instances that can be solved using h_0 provide training examples for a learning algorithm that produces a heuristic h_1 that is expected to be stronger than h_0. If h_0 is so weak that it cannot solve any of the given instances, we use random walks backward from the goal state to create a sequence of successively more difficult training instances, starting with ones that are guaranteed to be solvable by h_0. The bootstrap process is then repeated using h_i instead of h_{i-1} until a sufficiently strong heuristic is produced. We test this method on the 15- and 24-sliding-tile puzzles; the 17-, 24-, and 35-pancake puzzles; Rubik's Cube; and the 15- and 20-blocks world. In every case our method produces heuristics that allow IDA* to solve randomly generated problem instances quickly, with solutions very close to optimal.

The total time for the bootstrap process to create strong heuristics for large problems is several days. To make the process efficient when only a single test instance needs to be solved, we look for a balance between the time spent learning better heuristics and the time needed to solve the test instance using the current set of learned heuristics. We alternate between the execution of two threads: the learning thread (to learn better heuristics) and the solving thread (to solve the test instance). The solving thread is split up into sub-threads. The first solving sub-thread aims at solving the instance using the initial heuristic. When a new heuristic is learned in the learning thread, an additional solving sub-thread is started which uses the new heuristic to try to solve the instance. The total time by which we evaluate this process is the sum of the times used by both threads up to the point when the instance is solved in one sub-thread. Experimental results on large search spaces demonstrate that single instances of large problems are solved substantially faster than the total time needed for the bootstrap process, while the solutions obtained are still very close to optimal.
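The bootstrap loop itself is compact. The sketch below is a schematic rendering of the procedure described above, not the thesis implementation; `solve_with` (bounded search under a heuristic), `train` (any regression learner mapping states to cost-to-go), and `features` are hypothetical stand-ins.

```python
import random

def bootstrap(h0, instances, solve_with, train, features, max_iters=10):
    """Iteratively strengthen a weak heuristic h0.

    `solve_with(h, inst)` returns a solution path or None within a
    time/node budget; `train(examples)` fits a new heuristic from
    (features(state), cost-to-go) pairs. Both are assumed interfaces.
    """
    h = h0
    unsolved = list(instances)
    for _ in range(max_iters):
        examples, still_unsolved = [], []
        for inst in unsolved:
            path = solve_with(h, inst)
            if path is None:
                still_unsolved.append(inst)
                continue
            # Every state on the path is labeled with its distance to goal.
            for i, state in enumerate(path):
                examples.append((features(state), len(path) - 1 - i))
        if not examples:
            break            # h too weak: fall back to random-walk instances
        h = train(examples)  # bootstrap: the new heuristic replaces the old
        unsolved = still_unsolved
        if not unsolved:
            break
    return h

def random_walk_instances(goal, predecessors, lengths):
    """Generate easier training instances by walking backward from the goal."""
    for k in lengths:        # increasing k gives successively harder instances
        s = goal
        for _ in range(k):
            s = random.choice(list(predecessors(s)))
        yield s
```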
7

Bootstrap Learning of Heuristic Functions

Jabbari Arfaee, Shahab Unknown Date
No description available.
8

Guided random-walk based model checking

Bui, Hoai Thang, Computer Science & Engineering, Faculty of Engineering, UNSW January 2009 (has links)
The ever-increasing use of computer systems in society brings emergent challenges to companies and system designers. The reliability of software and hardware can be financially critical, and lives can depend on it. The growth in the size and complexity of software, and increasing concurrency, compound the problem. The potential for errors is greater than ever before, and the stakes are higher than ever before. Formal methods, and model checking in particular, attempt to prove mathematically that a model of the behaviour of a product is correct with respect to certain properties; certain errors can therefore be proven never to occur in the model. This approach has tremendous potential in system development to provide guarantees of correctness. Unfortunately, in practice, model checking cannot handle the enormous sizes of the models of real-world systems, because the approach requires an exhaustive search of the model. While there are exceptions, in general model checkers do not scale well.

In this thesis, we deal with this scaling issue by using a guiding technique that avoids searching areas of the model that are unlikely to contain errors. This technique is based on a process of model abstraction in which a new, much smaller model is generated that retains certain important model information but discards the rest. This new model is called a heuristic. While model checking using a heuristic as a guide can be extremely effective, in the worst case (when the guide is of no help) it performs the same as exhaustive search, and hence it too does not scale well in all cases.

A second technique is employed to deal with the scaling issue, based on the concept of random walks. A random walk is simply a 'walk' through the model of the system, carried out by selecting states in the model randomly. Such a walk may encounter an error, or it may not. It is a non-exhaustive technique in the sense that only a manageable number of walks are carried out before the search is terminated. This technique cannot replace conventional model checking, as it can never guarantee the correctness of a model. It can, however, be a very useful debugging tool because it scales well. From this point of view, it relieves the system designer from the difficult task of dealing with the problem of size in model checking; using random walks, the effort goes instead into looking for errors.

The effectiveness of model checking can be greatly enhanced if the above two techniques are combined: a random walk is used to search for errors, but the walk is guided by a heuristic. This, in a nutshell, is the focus of this work. We should emphasise that the random walk approach uses the same formal model as model checking, and the same heuristic technique is used to guide the random walk as in a guided model checker. Together, guidance and random walks are shown in this work to result in vastly improved performance over conventional model checking. Verification has been sacrificed, of course, but the new technique is able to find errors far more quickly and deal with much larger models.
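A heuristic-guided random walk needs only a scoring function over successor states. The following is a minimal sketch under assumed interfaces (`successors`, `is_error`, and a `heuristic` estimating distance to an error state — all illustration, not the thesis code): it biases the walk toward lower heuristic values via softmax weighting rather than choosing uniformly, and restarts after a fixed number of steps.

```python
import math
import random

def guided_random_walk(init, successors, heuristic, is_error,
                       max_steps=1000, restarts=100, temperature=1.0):
    """Search for an error state with heuristic-biased random walks.

    `heuristic(s)` estimates distance to an error (lower = closer);
    successors are sampled with weights exp(-h/temperature), so the
    walk prefers, but is not locked to, promising directions.
    """
    for _ in range(restarts):
        s = init
        for _ in range(max_steps):
            if is_error(s):
                return s
            succ = list(successors(s))
            if not succ:
                break  # dead end: restart the walk
            weights = [math.exp(-heuristic(n) / temperature) for n in succ]
            s = random.choices(succ, weights=weights, k=1)[0]
    return None  # no error found (which does not prove absence of errors)
```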
