11

Learning Optimal Bayesian Networks with Heuristic Search

Malone, Brandon M 11 August 2012
Bayesian networks are widely used graphical models that formalize reasoning under uncertainty. Unfortunately, construction of a Bayesian network by an expert is time-consuming, and, in some cases, experts may not agree on the best structure for a problem domain. Additionally, for some complex systems, such as those present in molecular biology, experts with an understanding of the entire domain and how individual components interact may not exist. In these cases, we must learn the network structure from available data. This dissertation focuses on score-based structure learning. In this context, a scoring function is used to measure the goodness of fit of a structure to data. The goal is to find the structure which optimizes the scoring function. The first contribution of this dissertation is a shortest-path finding perspective for the problem of learning optimal Bayesian network structures. This perspective builds on earlier dynamic programming strategies but, as we show, offers much more flexibility. Second, we develop a set of data structures to improve the efficiency of many of the integral calculations for structure learning. Most of these data structures benefit not only our algorithms but also dynamic programming and other formulations of the structure learning problem. Next, we introduce a suite of algorithms that leverage the new data structures and the shortest-path finding perspective for structure learning. These algorithms take advantage of a number of new heuristic functions to ignore provably sub-optimal parts of the search space. They also exploit regularities in the search that previous approaches could not. All of the algorithms we present have their own advantages. Some minimize work in a provable sense; others use external memory, such as hard disk, to scale to datasets with more variables. Several of the algorithms quickly find solutions and improve them as long as they are given more resources. Our algorithms improve the state of the art in structure learning by running faster, using less memory, and incorporating other desirable characteristics, such as anytime behavior. We also pose unanswered questions to drive future research.
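To make the shortest-path formulation concrete, here is a minimal Python sketch of A* over the "order graph", where each node is the set of variables already placed in an ordering and each edge places one more variable with its best parent set drawn from the variables before it. The local scoring function `score(var, parents)` (lower is better, e.g. BIC) and the exhaustive parent-set enumeration are illustrative assumptions; the dissertation's algorithms use far more refined data structures and heuristics.

```python
import heapq
from itertools import combinations, count

def best_score(var, candidates, score):
    # Best local score for `var` over all parent sets drawn from `candidates`.
    # `score(var, parents)` is a hypothetical local scoring function; real
    # implementations prune this exponential enumeration aggressively.
    best = float("inf")
    for k in range(len(candidates) + 1):
        for parents in combinations(candidates, k):
            best = min(best, score(var, frozenset(parents)))
    return best

def astar_order_graph(variables, score):
    # A* over the order graph: a node is the set of already-placed variables;
    # an edge adds one variable with its best parents among those placed.
    all_vars = frozenset(variables)
    # Simple admissible heuristic: each unplaced variable takes its best
    # parents from all other variables, ignoring acyclicity.
    h_best = {v: best_score(v, all_vars - {v}, score) for v in all_vars}
    def h(placed):
        return sum(h_best[v] for v in all_vars - placed)
    tie = count()  # tiebreaker so sets themselves are never compared
    start = frozenset()
    g = {start: 0.0}
    frontier = [(h(start), next(tie), start)]
    while frontier:
        f, _, placed = heapq.heappop(frontier)
        if placed == all_vars:
            return g[placed]  # score of an optimal network
        for v in all_vars - placed:
            succ = placed | {v}
            new_g = g[placed] + best_score(v, placed, score)
            if new_g < g.get(succ, float("inf")):
                g[succ] = new_g
                heapq.heappush(frontier, (new_g + h(succ), next(tie), succ))
```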
12

A Greedy Search Algorithm for Maneuver-Based Motion Planning of Agile Vehicles

Neas, Charles Bennett 29 December 2010
This thesis presents a greedy search algorithm for maneuver-based motion planning of agile vehicles. In maneuver-based motion planning, vehicle maneuvers are solved offline and saved in a library to be used during motion planning. From this library, a tree of possible vehicle states can be generated through the search space. A depth-first, library-based algorithm called AD-Lib is developed and used to quickly provide feasible trajectories along the tree. AD-Lib combines greedy search techniques with hill climbing and effective backtracking to guide the search process rapidly towards the goal. Using simulations of a four-thruster hovercraft, AD-Lib is compared to existing suboptimal search algorithms in both known and unknown environments with static obstacles. AD-Lib is shown to be faster than existing techniques, at the expense of increased path cost. The motion planning strategy of AD-Lib along with a switching controller is also tested in an environment with dynamic obstacles. / Master of Science
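A minimal sketch of the kind of greedy, depth-first, library-based search the abstract describes: successors are tried in order of increasing heuristic value (hill climbing), and the search backtracks when a branch dead-ends. The `expand` and `heuristic` interfaces are assumptions, not AD-Lib's actual API.

```python
def greedy_maneuver_search(start, goal_test, expand, heuristic, max_depth=50):
    # `expand(state)` yields (maneuver, next_state) pairs from the maneuver
    # library; `heuristic(state)` estimates distance-to-goal.
    def dfs(state, plan, depth):
        if goal_test(state):
            return plan
        if depth == 0:
            return None
        # Hill climbing: most promising successor first.
        for maneuver, nxt in sorted(expand(state), key=lambda m: heuristic(m[1])):
            result = dfs(nxt, plan + [maneuver], depth - 1)
            if result is not None:  # backtrack on dead ends
                return result
        return None
    return dfs(start, [], max_depth)
```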
13

Automatic detection and classification of leukaemia cells

Ismail, Waidah Binti January 2012
Today, a substantial number of software packages and research groups focus on the development of image-processing software to extract useful information from medical images in order to assist and improve patient diagnosis. The work presented in this thesis is centred on the processing of images of blood and bone marrow smears of patients suffering from leukaemia, a common type of cancer. In general, cancer is due to aberrant gene expression, which is caused by either mutations or epigenetic changes in DNA. Poor diet and an unhealthy lifestyle may trigger or contribute to these changes, although the underlying mechanism is often unknown. Importantly, many cancer types, including leukaemia, are curable, and patient survival and treatment can be improved, subject to prompt diagnosis. In particular, this study focuses on Acute Myeloid Leukaemia (AML), which can be of eight distinct types (M0 to M7), with the main objective to develop a methodology to automatically detect and classify leukaemia cells into one of the above types. The data was collected from the Department of Haematology, Universiti Sains Malaysia, in Malaysia. Three main methods are employed: Cellular Automata, Heuristic Search, and classification using Neural Networks. In the case of Cellular Automata, an improved method based on the 8-neighbourhood and associated rules was developed to remove noise from images and estimate the radius of the potential blast cells contained in them. The proposed methodology selects the starting points, corresponding to potential blast cells, for the subsequent seeded heuristic search. The Seeded Heuristic employs a new fitness function for blast cell detection. Furthermore, the WEKA software is utilised for the classification of blast cells, and hence images, into AML subtypes. As a result, an accuracy of 97.22% was achieved in the classification of blasts into M3 versus other AML subtypes. Finally, these algorithms are integrated into an automated system for image processing. In brief, the research presented in this thesis involves the use of advanced computational techniques for the processing and classification of medical images, that is, images of blood samples from patients suffering from leukaemia.
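One plausible 8-neighbourhood cellular-automaton rule for cleaning a binary cell mask, as an illustration only; the thesis's actual rules and its radius-estimation step differ.

```python
import numpy as np

def ca_denoise(mask, steps=3, threshold=5):
    # A simple majority-style CA rule: a pixel becomes foreground iff at
    # least `threshold` of its 8 neighbours are foreground. Repeating this
    # a few steps removes isolated noise pixels from a binary mask.
    mask = mask.astype(np.uint8)
    for _ in range(steps):
        padded = np.pad(mask, 1)  # zero border
        neighbours = sum(
            padded[1 + dy : padded.shape[0] - 1 + dy,
                   1 + dx : padded.shape[1] - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        mask = (neighbours >= threshold).astype(np.uint8)
    return mask
```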
14

Structure and inference in classical planning

Lipovetzky, Nir 07 December 2012
Classical planning is the problem of finding a sequence of actions for achieving a goal from an initial state, assuming that actions have deterministic effects. The most effective approach for finding such plans is based on heuristic search, guided by heuristics extracted automatically from the problem representation. In this thesis, we introduce alternative approaches for performing inference over the structure of planning problems that do not appeal to heuristic functions, nor to reductions to other formalisms such as SAT or CSP. We show that many of the standard benchmark domains can be solved with almost no search or a polynomially bounded amount of search, once the structure of planning problems is taken into account. In certain cases we can characterize this structure in terms of a novel width parameter for classical planning.
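As an illustration of width-based inference, here is a sketch of breadth-first search with novelty-1 pruning in the spirit of this line of work: a state is kept only if it makes some atom true for the first time in the search. Representing states as frozensets of atoms and the `successors` interface are assumptions of this sketch.

```python
from collections import deque

def iw1(initial_state, goal_test, successors):
    # `successors(state)` yields (action, next_state) pairs, states are
    # frozensets of ground atoms. Pruning non-novel states bounds the
    # search polynomially in the number of atoms.
    seen_atoms = set(initial_state)
    queue = deque([(initial_state, [])])
    while queue:
        state, plan = queue.popleft()
        if goal_test(state):
            return plan
        for action, nxt in successors(state):
            novel = nxt - seen_atoms   # atoms never made true before
            if novel:                  # prune states with no novel atom
                seen_atoms |= novel
                queue.append((nxt, plan + [action]))
    return None
```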
15

Memory-efficient graph search applied to multiple sequence alignment

Zhou, Rong 06 August 2005
Graph search is used in many areas of computer science. It is well-known that the scalability of graph-search algorithms such as A* is limited by their memory requirements. In this dissertation, I describe three complementary strategies for reducing the memory requirements of graph-search algorithms, especially for multiple sequence alignment (a central problem in computational molecular biology). These search strategies dramatically increase the range and difficulty of multiple sequence alignment problems that can be solved. The first strategy uses a divide-and-conquer method of solution reconstruction, and one of my contributions is to show that when divide-and-conquer solution reconstruction is used, a layer-by-layer strategy for multiple sequence alignment is more memory-efficient than a best-first strategy. The second strategy is a new approach to duplicate detection in external-memory graph search that involves partitioning the search graph based on an abstraction of the state space. For graphs with sufficient local structure, it allows graph-search algorithms to use external memory, such as disk storage, almost as efficiently as internal memory. The third strategy is a technique for reducing the memory requirements of sub-alignment search heuristics that are stored in lookup tables. It uses the start and goal states of a problem instance to restrict the region of the state space for which a table-based heuristic is needed, making it possible to store more accurate heuristic estimates in the same amount of memory. These three strategies dramatically improve the scalability of graph search not only for multiple sequence alignment, but for many other graph-search problems, and generalizations of these search strategies for other graph-search problems are discussed throughout the dissertation.
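The layer-by-layer idea can be illustrated on pairwise alignment: the dynamic programming table is computed one row at a time, keeping only two rows in memory. Recovering the alignment itself, rather than just its cost, requires the divide-and-conquer solution reconstruction the abstract mentions (Hirschberg-style); this sketch computes the cost only.

```python
def alignment_cost_layered(a, b, mismatch=1, gap=1):
    # Row-by-row dynamic programming for minimum alignment cost between
    # sequences `a` and `b`, using O(len(b)) memory instead of a full table.
    prev = [j * gap for j in range(len(b) + 1)]   # row 0: all gaps
    for i in range(1, len(a) + 1):
        curr = [i * gap] + [0] * len(b)
        for j in range(1, len(b) + 1):
            sub = prev[j - 1] + (0 if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = min(sub, prev[j] + gap, curr[j - 1] + gap)
        prev = curr
    return prev[-1]

print(alignment_cost_layered("ACGT", "AGT"))  # one deletion -> cost 1
```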
16

Exact and heuristic algorithms for the Euclidean Steiner tree problem

Van Laarhoven, Jon William 01 July 2010
In this thesis, we study the Euclidean Steiner tree problem (ESTP), which arises in the field of combinatorial optimization. The ESTP asks for a network of minimal total edge length spanning a set of given terminal points in R^d, with the ability to add auxiliary connecting points (Steiner points) to decrease the overall length of the network. The graph theory literature contains extensive studies of exact, approximation, and heuristic algorithms for the ESTP in the plane, but less is known in higher dimensions. The contributions of this thesis include a heuristic algorithm and enhancements to an exact algorithm for solving the ESTP. We present a local search heuristic for the ESTP in R^d for d ≥ 2. The algorithm utilizes the Delaunay triangulation to generate candidate Steiner points for insertion, the minimum spanning tree to create a network on the inserted points, and second-order cone programming to optimize the locations of Steiner points. Unlike other ESTP heuristics relying on the Delaunay triangulation, the algorithm inserts Steiner points probabilistically into Delaunay triangles to achieve different subtrees on subsets of terminal points. The algorithm extends effectively into higher dimensions, and we present computational results on benchmark test problems in R^d for 2 ≤ d ≤ 5. We develop new geometric conditions, derived from properties of Steiner trees, which bound from below the number of Steiner points on paths between terminals in the Steiner minimal tree. We describe conditions for trees with a Steiner topology and show how these conditions relate to the Voronoi diagram. We describe more restrictive conditions for trees with a full Steiner topology and their implementation into the algorithm of Smith (1992). We present computational results on benchmark test problems in R^d for 2 ≤ d ≤ 5.
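A toy version of the insertion idea: try candidate Steiner points and keep those that shorten the minimum spanning tree. For brevity, this sketch draws candidates from centroids of terminal triples and never relocates them; the thesis instead samples Delaunay triangles probabilistically and optimizes point locations with second-order cone programming.

```python
import math
from itertools import combinations

def mst_length(points):
    # Prim's algorithm on the complete Euclidean graph.
    if not points:
        return 0.0
    in_tree = {0}
    total = 0.0
    while len(in_tree) < len(points):
        i, j = min(
            ((i, j) for i in in_tree
                    for j in range(len(points)) if j not in in_tree),
            key=lambda e: math.dist(points[e[0]], points[e[1]]),
        )
        total += math.dist(points[i], points[j])
        in_tree.add(j)
    return total

def insert_steiner_candidates(terminals):
    # Greedily insert a candidate point whenever it shortens the MST.
    points = list(terminals)
    best = mst_length(points)
    for tri in combinations(terminals, 3):
        centroid = tuple(sum(c) / 3 for c in zip(*tri))
        cand = mst_length(points + [centroid])
        if cand < best:
            best, points = cand, points + [centroid]
    return points, best
```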
17

Water quality modeling and rainfall estimation: a data driven approach

Roz, Evan Phillips 01 July 2011
Water is vital to man, and its quality is a serious topic of concern. Addressing sustainability issues requires new understanding of water quality and water transport. Past research in hydrology has focused primarily on physics-based models to explain hydrological transport and water quality processes. The widespread use of in situ hydrological instrumentation has provided researchers a wealth of data to use for analysis, and therefore the use of data mining for data-driven modeling is warranted. In fact, this relatively new field of hydroinformatics makes use of the vast data collection and communication networks that are prevalent in the field of hydrology. In this thesis, a data-driven approach for analyzing water quality is introduced. Although improvements in data collection systems have given researchers large volumes of information about various systems, these data must be used in conjunction with novel data-mining algorithms to build models and recognize patterns in large data sets. Since the mid-1990s, data mining has been successfully used for model extraction and for describing various phenomena of interest.
18

A comparison of SL- and unit-resolution search rules for stratified logic programs

Lagerqvist, Victor January 2010
There are two symmetrical resolution rules applicable to logic programs: SL-resolution, which yields a top-down refutation, and unit-resolution, which yields a bottom-up refutation. Both resolution principles need to be coupled with a search rule before they can be used in practice. The search rule determines in which order program clauses are used in the refutation and affects performance, completeness, and the quality of solutions. The thesis surveys exhaustive and heuristic search rules for SL-resolution and transformation techniques for (general) logic programs that make unit-resolution goal-oriented.

The search rules were implemented as meta-interpreters for Prolog and were benchmarked on a suite of programs incorporating both deterministic and nondeterministic code. Whenever deemed applicable, benchmark programs were permuted with respect to clause and goal ordering to see whether this affected the interpreters' performance and termination.

The evaluation led to the conclusion that alternative search rules for SL-resolution should not be used for performance gains but can in some cases greatly improve the quality of solutions, e.g. in planning or other applications where the quality of an answer correlates with the length of the refutation. It was also established that A* is more flexible than exhaustive search rules, since its behavior can be fine-tuned with weighting, and can in some cases be more efficient than both iterative deepening and breadth-first search. The bottom-up interpreter based on unit-resolution and magic transformation had several advantages over the top-down interpreters, notably for programs where subgoals are recomputed many times. However, the great disparity in implementation techniques made direct performance comparisons hard, and it is not clear whether even an optimized bottom-up interpreter is competitive against a top-down interpreter with tabling of answers.
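Bottom-up (unit-resolution-style) evaluation can be illustrated for ground propositional Horn clauses with a simple fixpoint loop; the magic transformation that makes it goal-directed is omitted, and the (body, head) representation is an assumption of this sketch.

```python
def forward_chain(facts, rules):
    # `rules` is a list of (body, head) pairs over propositional atoms.
    # Derived atoms are added until no rule fires, i.e. a fixpoint.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(atom in derived for atom in body):
                derived.add(head)
                changed = True
    return derived

# Example: transitive reachability as ground Horn clauses.
facts = {"edge_1_2", "edge_2_3"}
rules = [
    (("edge_1_2",), "path_1_2"),
    (("edge_2_3",), "path_2_3"),
    (("path_1_2", "path_2_3"), "path_1_3"),
]
print(forward_chain(facts, rules))  # derives path_1_3 bottom-up
```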
19

Admissible Heuristics for Automated Planning

Haslum, Patrik January 2006
The problem of domain-independent automated planning has been a topic of research in Artificial Intelligence since the very beginnings of the field. Due to the desire not to rely on vast quantities of problem-specific knowledge, the most widely adopted approach to automated planning is search. The topic of this thesis is the development of methods for achieving effective search control for domain-independent optimal planning through the construction of admissible heuristics. The particular planning problem considered is the so-called “classical” AI planning problem, which makes several restricting assumptions. Optimality with respect to two measures of plan cost is considered: in planning with additive cost, the cost of a plan is the sum of the costs of the actions that make up the plan, which are assumed independent, while in planning with time, the cost of a plan is the total execution time (makespan) of the plan. The makespan optimization objective cannot, in general, be formulated as a sum of independent action costs and therefore necessitates a problem model slightly different from the classical one. A further small extension to the classical model is made with the introduction of two forms of capacitated resources. Heuristics are developed mainly for regression planning, but based on principles general enough that heuristics for other planning search spaces can be derived on the same basis. The thesis describes a collection of methods, including the h^m, additive h^m, and improved pattern database heuristics, and the relaxed search and boosting techniques for improving heuristics through limited search. It also presents two extended experimental analyses of the developed methods, one comparing heuristics for planning with additive cost and the other concerning the relaxed search technique in the context of planning with time, aimed at discovering the characteristics of problem domains that determine the relative effectiveness of the compared methods. Results indicate that some plausible such characteristics have been found, but are not entirely conclusive.
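A compact sketch of the h^1 (h-max) member of the h^m family, for a generic STRIPS-like encoding: the cost of an atom is estimated by a fixpoint over relaxed actions, and a set of atoms costs as much as its most expensive member. The (preconds, adds, cost) action representation is an assumption, and the real h^m heuristics generalize this computation to sets of up to m atoms.

```python
def h_max(state, goal, actions):
    # `actions` is a list of (preconds, adds, cost) triples over ground atoms.
    # Atoms in `state` cost 0; an action's effects cost the max precondition
    # cost plus the action cost; iterate to a fixpoint (Bellman-Ford style).
    INF = float("inf")
    cost = {atom: 0.0 for atom in state}
    changed = True
    while changed:
        changed = False
        for pre, adds, c in actions:
            if all(p in cost for p in pre):
                pre_cost = max((cost[p] for p in pre), default=0.0) + c
                for a in adds:
                    if pre_cost < cost.get(a, INF):
                        cost[a] = pre_cost
                        changed = True
    # Admissible estimate: the most expensive goal atom.
    return max((cost.get(g, INF) for g in goal), default=0.0)
```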
20

Speeding Up the Convergence of Online Heuristic Search and Scaling Up Offline Heuristic Search

Furcy, David Andre 25 November 2004
The most popular methods for solving the shortest-path problem in Artificial Intelligence are heuristic search algorithms. The main contributions of this research are new heuristic search algorithms that are either faster or scale up to larger problems than existing algorithms. Our contributions apply to both online and offline tasks. For online tasks, existing real-time heuristic search algorithms learn better informed heuristic values and in some cases eventually converge to a shortest path by repeatedly executing the action leading to a successor state with a minimum cost-to-goal estimate. In contrast, we claim that real-time heuristic search converges faster to a shortest path when it always selects an action leading to a state with a minimum f-value, where the f-value of a state is an estimate of the cost of a shortest path from start to goal via the state, just like in the offline A* search algorithm. We support this claim by implementing this new non-trivial action-selection rule in FALCONS and by showing empirically that FALCONS significantly reduces the number of actions to convergence of a state-of-the-art real-time search algorithm. For offline tasks, we improve on two existing ways of scaling up best-first search to larger problems. First, it is known that the WA* algorithm (a greedy variant of A*) solves larger problems when it is either diversified (i.e., when it performs expansions in parallel) or committed (i.e., when it chooses the state to expand next among a fixed-size subset of the set of generated but unexpanded states). We claim that WA* solves even larger problems when it is enhanced with both diversity and commitment. We support this claim with our MSC-KWA* algorithm. Second, it is known that breadth-first search solves larger problems when it prunes unpromising states, resulting in the beam search algorithm. We claim that beam search quickly solves even larger problems when it is enhanced with backtracking based on limited discrepancy search. We support this claim with our BULB algorithm. We show that both MSC-KWA* and BULB scale up to larger problems than several state-of-the-art offline search algorithms in three standard benchmark domains. Finally, we present an anytime variant of BULB and apply it to the multiple sequence alignment problem in biology.
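A sketch of plain beam search, the starting point that BULB extends: each layer is pruned to a fixed number of states, which gains scalability but loses completeness. BULB restores completeness with limited-discrepancy backtracking over the pruned states; that machinery is omitted here, and the `successors` and `h` interfaces are assumptions.

```python
import heapq
from itertools import count

def beam_search(start, goal_test, successors, h, beam_width=100):
    # Breadth-first beam search: keep only the `beam_width` states with the
    # lowest heuristic values in each layer.
    tie = count()  # tiebreaker so states themselves are never compared
    layer = [(h(start), next(tie), start, [])]
    seen = {start}
    while layer:
        next_layer = []
        for _, _, state, plan in layer:
            if goal_test(state):
                return plan
            for action, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    next_layer.append((h(nxt), next(tie), nxt, plan + [action]))
        layer = heapq.nsmallest(beam_width, next_layer)  # prune to the beam
    return None
```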
