1 |
An Analysis of Path Planning Algorithms Focusing on A* and D*Reeves, Megan Clancy 30 May 2019 (has links)
No description available.
|
2 |
Theoretical and Practical Aspects of Ant Colony OptimizationBlum, Christian 23 January 2004 (has links)
Combinatorial optimization problems are of high academic as well as practical importance. Many instances of relevant combinatorial optimization problems are, due to their dimensions, intractable for complete methods such as branch and bound. Therefore, approximate algorithms such as metaheuristics have received much attention in the past 20 years. Examples of metaheuristics are simulated annealing, tabu search, and evolutionary computation. One of the most recent metaheuristics is ant colony optimization (ACO), which was developed by Prof. M. Dorigo (the supervisor of this thesis) and colleagues. This thesis deals with theoretical as well as practical aspects of ant colony optimization.
* A survey of metaheuristics. Chapter 1 gives an extensive overview of the most important present-day metaheuristics. This overview highlights two key concepts in metaheuristics: intensification and diversification.
* The hyper-cube framework. Chapter 2 introduces a new framework for implementing ACO algorithms. This framework brings two main benefits to ACO researchers. First, from the point of view of the theoretician: we prove that Ant System (the first ACO algorithm to be proposed in the literature) in the hyper-cube framework generates solutions whose expected quality monotonically increases with the number of algorithm iterations when applied to unconstrained problems. Second, from the point of view of the experimental researcher, we show through examples that the implementation of ACO algorithms in the hyper-cube framework increases their robustness and makes the handling of the pheromone values easier.
* Deception. In the first part of Chapter 3 we formally define the notions of first- and second-order deception in ant colony optimization. First-order deception corresponds to deception as defined in the field of evolutionary computation and is therefore a bias introduced by the problem (instance) to be solved. Second-order deception is an ACO-specific phenomenon: it describes the observation that the quality of the solutions generated by ACO algorithms may decrease over time in certain settings. In the second part of Chapter 3 we propose different ways of avoiding second-order deception.
* ACO for the KCT problem. In Chapter 4 we outline an ACO algorithm for the edge-weighted k-cardinality tree (KCT) problem. This algorithm is implemented in the hyper-cube framework and uses a pheromone model shown in Chapter 3 to work well. Together with the evolutionary computation and tabu search approaches that we develop in Chapter 4, this ACO algorithm belongs to the current state-of-the-art algorithms for the KCT problem.
* ACO for the GSS problem. Chapter 5 describes a new ACO algorithm for the group shop scheduling (GSS) problem, which is a general shop scheduling problem that includes, among others, the well-known job shop scheduling (JSS) and open shop scheduling (OSS) problems. This ACO algorithm, which is implemented in the hyper-cube framework and uses a new pheromone model that was experimentally tested in Chapter 3, is currently the best ACO algorithm for both the JSS and the OSS problem. In particular, when applied to OSS problem instances, this algorithm obtains excellent results, improving the best known solution for several OSS benchmark instances. A final contribution of this thesis is the development of a general method for the solution of combinatorial optimization problems which we refer to as Beam-ACO. This method is a hybrid between ACO and a tree search technique known as beam search. We show that Beam-ACO is currently a state-of-the-art method for existing OSS problem instances.
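The hyper-cube property described above — pheromone values confined to [0, 1], with solution quality non-decreasing in expectation on unconstrained problems — can be sketched on a toy unconstrained problem. The OneMax objective, parameter values, and function name below are illustrative assumptions, not the thesis's formulation:

```python
import random

def hcf_aco_onemax(n_bits=20, n_ants=10, rho=0.2, iters=50, seed=1):
    """Minimal hyper-cube-framework ACO sketch on a toy OneMax problem.

    Pheromone values tau[j] live in [0, 1] and are read directly as the
    probability of setting bit j; the update moves each tau[j] toward the
    iteration-best solution, which keeps it inside [0, 1].
    """
    rng = random.Random(seed)
    tau = [0.5] * n_bits                      # neutral initial pheromones
    best, best_val = None, -1
    for _ in range(iters):
        ants = [[1 if rng.random() < t else 0 for t in tau]
                for _ in range(n_ants)]
        ib = max(ants, key=sum)               # iteration-best ant
        if sum(ib) > best_val:
            best, best_val = ib[:], sum(ib)
        # hyper-cube update: convex combination stays in [0, 1]
        tau = [(1 - rho) * t + rho * b for t, b in zip(tau, ib)]
    return best, best_val
```

The update is a convex combination of the old pheromone vector and the iteration-best solution, which is what keeps every `tau[j]` inside [0, 1] regardless of solution quality — the normalization the hyper-cube framework makes automatic.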
|
3 |
Theory and techniques for synthesizing efficient breadth-first search algorithmsNedunuri, Srinivas 05 October 2012 (has links)
The development of efficient algorithms to solve a wide variety of
combinatorial and planning problems is a significant achievement in
computer science. Traditionally each algorithm is developed individually,
based on flashes of insight or experience, and then (optionally) verified
for correctness. While computer science has formalized the analysis
and verification of algorithms, the process of algorithm development remains largely
ad hoc, which is especially limiting when developing algorithms
for a family of related problems.
Guided program synthesis is an existing
methodology for systematic development of algorithms. Specific algorithms
are viewed as instances of very general algorithm schemas. For example,
the Global Search schema generalizes traditional branch-and-bound
search, and includes both depth-first and breadth-first strategies.
Algorithm development involves systematic specialization of the algorithm
schema based on problem-specific constraints to create efficient algorithms
that are correct by construction, obviating the need for a separate
verification step. Guided program synthesis has been applied to a
wide range of algorithms, but there is still no systematic process
for the synthesis of large search programs such as AI planners.
Our first contribution is the specialization of Global Search to a
class we call Efficient Breadth-First Search (EBFS), by incorporating
dominance relations to constrain the size of the frontier of the search
to be polynomially bounded. Dominance relations allow two search spaces
to be compared to determine whether one dominates the other, thus
allowing the dominated space to be eliminated from the search. We
further show that EBFS is an effective characterization of greedy
algorithms when the breadth bound is set to one. Surprisingly, the
resulting characterization is more general than the well-known characterization
of greedy algorithms, namely the Greedy Algorithm parametrized over
algebraic structures called greedoids.
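A dominance relation of this kind can be illustrated on a toy 0/1 knapsack search, where one partial state dominates another if it has at least the value at no more weight. The function below is a hedged sketch under that reading, not the thesis's formal EBFS schema; setting the breadth bound to one recovers a greedy algorithm, as described above.

```python
def ebfs_knapsack(items, capacity, breadth=None):
    """Breadth-first search over knapsack decisions with dominance pruning.

    Each frontier state is (value, weight) after deciding a prefix of the
    items. Dominated states (no more value at no less weight) are dropped
    each level; an optional breadth bound keeps only the best few states.
    """
    frontier = [(0, 0)]
    for value, weight in items:
        nxt = []
        for v, w in frontier:
            nxt.append((v, w))                # skip this item
            if w + weight <= capacity:
                nxt.append((v + value, w + weight))
        # dominance pruning: sort by weight, keep strictly rising value
        nxt.sort(key=lambda s: (s[1], -s[0]))
        pruned, best_v = [], -1
        for v, w in nxt:
            if v > best_v:
                pruned.append((v, w))
                best_v = v
        frontier = pruned if breadth is None else sorted(pruned, reverse=True)[:breadth]
    return max(v for v, _ in frontier)
```

With `breadth=None` the dominance-pruned frontier stays exact for this toy problem; with `breadth=1` only the single best partial state survives each level, giving greedy behaviour.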
Our second contribution is a methodology for systematically deriving
dominance relations, not just for individual problems but for families
of related problems. The techniques are illustrated on numerous well-known
problems. Combining this with the program schema for EBFS results
in efficient greedy algorithms.
Our third contribution is application of the theory and methodology
to the practical problem of synthesizing fast planners. Nearly all
the state-of-the-art planners in the planning literature are heuristic
domain-independent planners. They generally do not scale well and
their space requirements also become quite prohibitive.
Planners such as TLPlan that incorporate domain-specific information
in the form of control rules are orders of magnitude faster. However,
devising the control rules is a labor-intensive task that requires domain
expertise and insight. The correctness of the rules is also not guaranteed.
We introduce a method by which domain-specific dominance relations
can be systematically derived, which can then be turned into control
rules, and demonstrate the method on a planning problem (Logistics). / text
|
4 |
Computational investigations into the structure and reactivity of small transition metal clusters.Addicoat, Matthew January 2009 (has links)
This thesis presents a number of largely independent forays into developing an understanding of the unique chemistry of transition metal clusters. The first chapter of this thesis represents an initial foray into mapping the chemical reactivity of transition metal clusters - a monumental task that will doubtless continue for some time. The small slice undertaken in this work investigates the reactivity with CO of a series of the smallest possible metal clusters: 4d (Nb - Ag) homonuclear metal trimers. In Chapter 2, two known transition metal clusters were studied using CASSCF (MCSCF) and MRCI methods, only to find that DFT methods provided more accurate Ionisation Potentials (IPs). Thus Chapter 3 was devoted to optimising a density functional to predict IPs. As clusters get larger, the number of possible structures quickly grows too large for human intuition to handle, so Chapter 4 is devoted to the use of an automated stochastic algorithm, “Kick”, for structure elucidation. Chapter 5 improves on this algorithm by permitting chemically sensible molecular fragments to be defined and used. Chapter 6 then comes full circle and uses the new Kick algorithm to investigate the reaction of CO with a series of mono-substituted niobium tetramers (i.e. Nb₃X). / http://proxy.library.adelaide.edu.au/login?url= http://library.adelaide.edu.au/cgi-bin/Pwebrecon.cgi?BBID=1350246 / Thesis (Ph.D.) - University of Adelaide, School of Chemistry and Physics, 2009
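The stochastic "Kick" step can be caricatured as random atom placement with a distance filter. The function below is an illustrative sketch only — box size, distance threshold, and names are assumptions — and it omits the quantum-chemistry optimisation that each surviving geometry would undergo in practice:

```python
import random, math

def kick_candidates(n_atoms=4, box=3.0, min_dist=1.5, n_structs=20, seed=0):
    """Toy sketch of a Kick-style stochastic structure generator.

    Atoms are thrown at random coordinates inside a cubic box; candidates
    with any interatomic distance below min_dist are rejected. A real run
    would hand each surviving geometry to a quantum-chemistry code.
    """
    rng = random.Random(seed)
    structs = []
    while len(structs) < n_structs:
        coords = [tuple(rng.uniform(0, box) for _ in range(3))
                  for _ in range(n_atoms)]
        ok = all(math.dist(a, b) >= min_dist
                 for i, a in enumerate(coords) for b in coords[i + 1:])
        if ok:
            structs.append(coords)
    return structs
```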
|
6 |
ACCTuner: OpenACC Auto-Tuner For Accelerated Scientific ApplicationsAlzayer, Fatemah 17 May 2015 (has links)
We optimize parameters in OpenACC clauses for a stencil evaluation kernel executed on Graphical Processing Units (GPUs) using a variety of machine learning and optimization search algorithms, individually and in hybrid combinations, and compare execution time performance to the best possible obtained from brute force search. Several auto-tuning techniques – historic learning, random walk, simulated annealing, Nelder-Mead, and genetic algorithms – are evaluated over a large two-dimensional parameter space not satisfactorily addressed to date by OpenACC compilers, consisting of gang size and vector length. A hybrid of historic learning and Nelder-Mead delivers the best balance of high performance and low tuning effort.
GPUs are employed over an increasing range of applications due to the performance available from their large number of cores, as well as their energy efficiency. However, writing code that takes advantage of their massive fine-grained parallelism requires deep knowledge of the hardware, and is generally a complex task involving program transformation and the selection of many parameters. To improve programmer productivity, the directive-based programming model OpenACC was announced as an industry standard in 2011. Various compilers have been developed to support this model, the most notable being those by Cray, CAPS, and PGI. While the architecture and number of cores have evolved rapidly, the compilers have failed to keep
up in configuring the parallel program to run most efficiently on the hardware. Following successful approaches to obtain high performance in kernels for cache-based processors using auto-tuning, we approach this compiler-hardware gap in GPUs by employing auto-tuning for the key parameters “gang” and “vector” in OpenACC clauses. We demonstrate results for a stencil evaluation kernel typical of seismic imaging over a variety of realistically sized three-dimensional grid configurations, with different truncation error orders in the spatial dimensions. Apart from random walk and historic learning based on nearest neighbor in grid size, most of our heuristics, including the one that proves best, appear to be applied in this context for the first time. This work is a stepping-stone towards an OpenACC auto-tuning framework for
more general high-performance numerical kernels optimized for GPU computations.
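The random-walk heuristic over the (gang, vector) grid can be sketched as follows; `time_fn` is a hypothetical stand-in for compiling and timing the kernel at a given configuration, and all parameter values are illustrative:

```python
import random

def random_walk_tune(time_fn, gangs, vectors, steps=100, seed=0):
    """Toy random-walk auto-tuner over a (gang, vector) parameter grid.

    time_fn(g, v) stands in for compiling and timing the OpenACC kernel
    at gang size g and vector length v. The walk moves to a random
    neighbouring grid point each step and keeps the best configuration.
    """
    rng = random.Random(seed)
    i, j = rng.randrange(len(gangs)), rng.randrange(len(vectors))
    best = (time_fn(gangs[i], vectors[j]), gangs[i], vectors[j])
    for _ in range(steps):
        i = min(max(i + rng.choice((-1, 0, 1)), 0), len(gangs) - 1)
        j = min(max(j + rng.choice((-1, 0, 1)), 0), len(vectors) - 1)
        t = time_fn(gangs[i], vectors[j])
        if t < best[0]:
            best = (t, gangs[i], vectors[j])
    return best
```

A hybrid tuner in the spirit of the thesis would seed such a walk from historic timings of similar grid sizes before refining with Nelder-Mead.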
|
7 |
Analyses and Cost Evaluation of Code Tree Search AlgorithmsMohan, Seshadri 09 1900 (has links)
Codes with a tree structure find wide use in data compression and error correction. It is generally impractical to view and weigh all the branches in a code tree, so a search algorithm is employed which considers some but not others in a predetermined fashion. Traditionally, the efficiency of code tree search algorithms has been measured by the number of tree branches visited for a given level of performance. This measure does not indicate the true consumption of resources. Cost functions are defined based on the number of code tree paths retained, S, the length of the paths, L, and the number of code tree branches searched per branch released as output, E[C]. Using these cost functions, most of the existing algorithms as well as some new algorithms proposed here are compared.

These new algorithms include three metric-first algorithms. The first one, the merge algorithm, uses, in addition to the main list used by the stack algorithm, an auxiliary list to store paths. The merge algorithm reduces the dependence on S for the product resource cost from O(S^2) for the stack algorithm to O(S^(4/3)). A generalization of this algorithm reduces the product cost to O(S log S). The second algorithm uses a class of height-balanced trees, known as AVL trees, to store code tree paths, resulting in an alternate method to the merge algorithm achieving O(S log S) cost.

The third algorithm, using the concepts of dynamic hashing and trie searching, provides important modifications to the Jelinek bucket algorithm by incorporating dynamic splitting and merging of buckets. This strategy provides a balanced data structure and reduces the product cost still further compared to the first two algorithms.

We next turn to analysis of the number of nodes visited during a search. Using the theory of multitype branching processes in random environments, an equation for node computation is derived for asymmetric source coding by the single stack algorithm. This equation is shown to be the stochastic analog of an equation for symmetric sources. Simulation results, obtained by encoding the Hamming source by the single stack algorithm, are used to optimize the performance of the algorithm with respect to the bias factor, stack length, and limit on computation. A modification to the algorithm that raises the barrier during forward motion provides a better distortion performance.

The metric-first stack algorithm is used to encode a voiced speech sound. From experimental evidence, it is shown how to optimize the algorithm's SNR performance with respect to the algorithm's storage, execution time, and node computation. For each of these, the optimal parameterizing of the algorithm differs markedly. Similarities are pointed out between the results for speech and earlier theoretical results for the binary i.i.d. source with Hamming distortion measure. It is shown that metric-first algorithms may perform better with "real life" sources like speech than they do with artificial sources, and in view of this the algorithms proposed here take on added significance. / Thesis / Doctor of Philosophy (PhD)
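The metric-first (stack) search underlying these algorithms can be sketched with a binary heap over partial code-tree paths. This toy uses Python's heapq rather than the merge lists, AVL trees, or hash buckets the thesis analyses, and `branch_metric` is an illustrative stand-in for a real decoding metric:

```python
import heapq

def stack_search(depth, branch_metric):
    """Toy metric-first (stack-algorithm) search of a binary code tree.

    A priority queue holds partial paths ordered by metric; the best
    partial path is repeatedly popped and extended one branch, as in
    sequential decoding. branch_metric(path, bit) gives the metric
    increment for appending bit to path. Note the stack algorithm
    returns the first full-length path to reach the top of the stack,
    which need not be the globally best path in general.
    """
    heap = [(0.0, ())]                        # (negated metric, path)
    while heap:
        neg_m, path = heapq.heappop(heap)
        if len(path) == depth:
            return list(path), -neg_m
        for bit in (0, 1):
            m = -neg_m + branch_metric(path, bit)
            heapq.heappush(heap, (-m, path + (bit,)))
```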
|
8 |
Investigations into Satisfiability SearchSlater, Andrew, andrew.slater@csl.anu.edu.au January 2003 (has links)
In this dissertation we investigate theoretical aspects of some practical approaches used
in solving and understanding search problems. We concentrate on the Satisfiability
problem, which is a strong representative from search problem domains. The work develops
general theoretical foundations to investigate some practical aspects of satisfiability
search. This results in a better understanding of the fundamental mechanics for search
algorithm construction and behaviour. A theory of choice or branching heuristics is
presented, accompanied by results showing a correspondence of both parameterisations and
performance when the method is compared to previous empirically motivated branching
techniques. The logical foundations of the backtracking mechanism are explored
alongside formulations for reasoning in relevant logics, resulting in the development
of a malleable backtracking mechanism that subsumes other intelligent backtracking
proof construction techniques and allows the incorporation of proof rearrangement
strategies. Moreover, empirical tests show that relevant backtracking outperforms all
other forms of intelligent backtracking search tree construction methods. An
investigation into modelling and generating real-world problem instances justifies a
modularised problem model proposal, which is used experimentally to demonstrate the
practicability of search algorithms for the proposed model and related domains.
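The branching-heuristic idea can be illustrated with a minimal DPLL-style satisfiability search; the most-occurrences rule below is a crude stand-in for the choice heuristics analysed in the dissertation, and all names are illustrative:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL satisfiability search with a simple branching rule.

    Clauses are lists of non-zero integer literals (negative = negated).
    The branching rule picks the variable occurring most often in the
    remaining clauses.
    """
    if assignment is None:
        assignment = {}
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                          # clause already satisfied
        rest = [l for l in clause if abs(l) not in assignment]
        if not rest:
            return None                       # clause falsified: backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                     # all clauses satisfied
    counts = {}
    for clause in simplified:
        for l in clause:
            counts[abs(l)] = counts.get(abs(l), 0) + 1
    var = max(counts, key=counts.get)         # most-occurrences branching
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```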
|
9 |
Speeding Up the Convergence of Online Heuristic Search and Scaling Up Offline Heuristic SearchFurcy, David Andre 25 November 2004 (has links)
The most popular methods for solving the shortest-path problem in
Artificial Intelligence are heuristic search algorithms. The main
contributions of this research are new heuristic search algorithms
that are either faster or scale up to larger problems than existing
algorithms. Our contributions apply to both online and offline tasks.
For online tasks, existing real-time heuristic search algorithms learn
better informed heuristic values and in some cases eventually converge
to a shortest path by repeatedly executing the action leading to a
successor state with a minimum cost-to-goal estimate. In contrast, we
claim that real-time heuristic search converges faster to a shortest
path when it always selects an action leading to a state with a
minimum f-value, where the f-value of a state is an estimate of the
cost of a shortest path from start to goal via the state, just like in
the offline A* search algorithm. We support this claim by implementing
this new non-trivial action-selection rule in FALCONS and by showing
empirically that FALCONS significantly reduces the number of actions
to convergence of a state-of-the-art real-time search algorithm.
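The f-value action-selection rule above can be sketched in one step; the graph, cost functions, and function name are illustrative assumptions, and the real algorithm also updates its learned g- and h-values after each move:

```python
def falcons_style_step(state, successors, g, h):
    """One action-selection step in the spirit of FALCONS: move to the
    successor minimising f = g + h, an estimate of the cost of a
    shortest start-to-goal path through that successor. Toy sketch only.
    """
    return min(successors(state), key=lambda s: g(s) + h(s))
```

For example, with g = {'b': 1, 'c': 5} and h = {'b': 3, 'c': 2}, the minimum-f choice is 'b' (f = 4) while the minimum cost-to-goal estimate would pick 'c' (h = 2) — exactly the distinction the abstract draws.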
For offline tasks, we improve on two existing ways of scaling up
best-first search to larger problems. First, it is known that the WA*
algorithm (a greedy variant of A*) solves larger problems when it is
either diversified (i.e., when it performs expansions in parallel) or
committed (i.e., when it chooses the state to expand next among a
fixed-size subset of the set of generated but unexpanded states). We
claim that WA* solves even larger problems when it is enhanced with
both diversity and commitment. We support this claim with our MSC-KWA*
algorithm. Second, it is known that breadth-first search solves
larger problems when it prunes unpromising states, resulting in the
beam search algorithm. We claim that beam search quickly solves even
larger problems when it is enhanced with backtracking based on limited
discrepancy search. We support this claim with our BULB algorithm. We
show that both MSC-KWA* and BULB scale up to larger problems than
several state-of-the-art offline search algorithms in three standard
benchmark domains. Finally, we present an anytime variant of BULB and
apply it to the multiple sequence alignment problem in biology.
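The beam-search pruning described above can be sketched as a layered search that keeps only the beam_width most promising states per level. This toy deliberately omits the limited-discrepancy backtracking that BULB adds, so, unlike BULB, it can prune away the only path to the goal:

```python
import heapq

def beam_search(start, goal, successors, heuristic, beam_width=2, max_depth=50):
    """Toy beam search: breadth-first layers pruned to the beam_width
    states with the smallest heuristic value. Incomplete by design."""
    layer = [start]
    for _ in range(max_depth):
        if goal in layer:
            return True
        children = {s for state in layer for s in successors(state)}
        layer = heapq.nsmallest(beam_width, children, key=heuristic)
        if not layer:
            return False
    return False
```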
|
10 |
Combining search strategies for distributed constraint satisfactionMagaji, Amina Sambo-Muhammad January 2015 (has links)
Many real-life problems such as distributed meeting scheduling, mobile frequency allocation and resource allocation can be solved using multi-agent paradigms. The distributed constraint satisfaction problem (DisCSP) framework describes such problems in terms of related subproblems, called complex local problems (CLPs), which are dispersed over a number of locations, each with its own constraints on the values its variables can take. An agent knows the variables in its CLP plus the variables (and their current value) which are directly related to one of its own variables and the constraints relating them. It knows little about the rest of the problem. Thus, each CLP is solved by an agent which cooperates with other agents to solve the overall problem. Algorithms for solving DisCSPs can be classified as either systematic or local search, with the former being complete and the latter incomplete. The algorithms generally assume that each agent has only one variable, since DisCSPs with CLPs can be solved using “virtual” agents. However, in large DisCSPs where it is appropriate to trade completeness off against timeliness, systematic search algorithms can be expensive compared to local search algorithms, which generally converge more quickly to a solution (if one is found). A major drawback of local search algorithms is getting stuck at local optima. Significant research has focused on heuristics which can be used in an attempt to either escape or avoid local optima. This thesis makes significant contributions to local search algorithms for DisCSPs. Firstly, we present a novel combination of heuristics in DynAPP (Dynamic Agent Prioritisation with Penalties), which is a distributed synchronous local search algorithm for solving DisCSPs having one variable per agent. DynAPP combines penalties on values and dynamic agent prioritisation heuristics to escape local optima.
Secondly, we develop a divide and conquer approach that handles DisCSPs with CLPs by exploiting the structure of the problem. The divide and conquer approach prioritises finding variable instantiations that satisfy the constraints between agents, which are often more expensive to satisfy than constraints within an agent. The approach also exploits concurrency and combines the following search strategies: (i) both systematic and local searches; (ii) both centralised and distributed searches; and (iii) a modified compilation strategy. We also present an algorithm that implements the divide and conquer approach in Multi-DCA (Divide and Conquer Algorithm for Agents with CLPs). DynAPP and Multi-DCA were evaluated on several benchmark problems and compared to the leading algorithms for DisCSPs and DisCSPs with CLPs respectively. The results show that in the region of difficult problems, combining search heuristics and exploiting problem structure in distributed constraint satisfaction achieve significant benefits (i.e. generally using less computation time and lower communication costs) over existing competing methods.
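The penalties-on-values idea behind DynAPP can be caricatured in a centralised toy, with graph colouring standing in for the CSP; DynAPP itself is distributed and adds dynamic agent prioritisation, neither of which is modelled here, and all names are illustrative:

```python
import random

def penalty_local_search(neighbors, colors, iters=1000, seed=0):
    """Toy local search with penalties on values for graph colouring.

    A conflicted node moves to the colour minimising conflicts plus its
    accumulated penalty; when the current colour is already the best
    choice (a local optimum), that value's penalty grows, pushing the
    search off the optimum on later visits.
    """
    rng = random.Random(seed)
    assign = {v: rng.choice(colors) for v in neighbors}
    penalty = {(v, c): 0 for v in neighbors for c in colors}

    def conflicts(v, c):
        return sum(assign[u] == c for u in neighbors[v])

    for _ in range(iters):
        in_conflict = [v for v in neighbors if conflicts(v, assign[v]) > 0]
        if not in_conflict:
            return assign                     # solution found
        v = rng.choice(in_conflict)
        best = min(colors, key=lambda c: conflicts(v, c) + penalty[(v, c)])
        if best == assign[v]:                 # stuck: penalise current value
            penalty[(v, assign[v])] += 1
        assign[v] = best
    return None
```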
|