71

The Psychology of a Web Search Engine

Ogbonna, Antoine I. January 2011 (has links)
No description available.
72

Multiagent classical planning

Crosby, Matthew David January 2014 (has links)
Classical planning problems consist of an environment in a predefined state; a set of deterministic actions that, under certain conditions, change the state of the environment; and a set of goal conditions. A solution to a classical planning problem is a sequence of actions that leads from the initial state to a state satisfying the problem's goal conditions. There are many methods for finding solutions to classical planning problems, and a popular technique is to exploit structures that commonly occur. One such structure, apparent in many planning domains, is a breakdown of the problem into multiple agents. However, methods for finding and exploiting multiagent structure are not prevalent in the literature and are currently not competitive. This thesis sets out to rectify this problem.

Its first main contribution is a domain-independent algorithm for extracting multiagent structure from classical planning problems. The algorithm relies on identifying a generalisable property of agents in planning: agents are entities with an internal state, a part of the planning problem that, under a certain distribution of actions, only they can modify. Once this is appropriately formalised, the decomposition algorithm is introduced and shown to produce identifiably multiagent decompositions over all of the classical planning domains used in the International Planning Competitions, in certain cases finding more detailed decompositions than those used by humans.

Solving multiagent planning problems can be challenging because a solution may require complex inter-agent coordination. The second main contribution of the thesis is a heuristic planning algorithm that effectively exploits the structure of decomposed domains. The algorithm transforms the coordination problem into a process of subgoal generation that can be solved efficiently under a well-known relaxation in planning. The generated subgoals guide the search so that it always proceeds within a single single-agent subproblem at a time. The algorithm is evaluated and shown to greatly outperform current state-of-the-art planners over decomposable domains.

The thesis also discusses extensions of this work to the multiagent concepts of self-interested agents and concurrent actions. Results from the multiagent planning literature are improved upon, and a new solution concept is presented that accounts for the 'farsightedness' inherent in planning. A method is then presented that can find stable solutions for a certain class of multiagent planning problems. Finally, a new method is introduced for modelling concurrent actions that allows them to be written without requiring knowledge of every other agent in the domain, and it is shown how such problems can be solved by translation to single-agent planning.
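To make the decomposition idea in this abstract concrete, here is a minimal Python sketch, assuming a toy STRIPS-style encoding in which each action is described only by the set of state variables it modifies. This is not Crosby's actual algorithm, merely an illustration of the underlying intuition: actions that write to a shared variable are grouped into the same candidate agent, and the variables written by only one resulting group form that group's internal state. All names and data are hypothetical.

```python
# Illustrative sketch (not the thesis's algorithm): group actions into
# candidate "agents" so each agent owns variables no other agent modifies.

def decompose(actions):
    """actions: dict mapping action name -> set of variables it modifies.
    Returns a list of (action_set, internal_vars) candidate agents."""
    # Start with one singleton group per action, then merge any two groups
    # that write to a shared variable. Variables touched by only one final
    # group constitute that group's internal state.
    groups = [({name}, set(vars_)) for name, vars_ in actions.items()]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                if groups[i][1] & groups[j][1]:   # shared written variable
                    acts = groups[i][0] | groups[j][0]
                    vars_ = groups[i][1] | groups[j][1]
                    groups[i] = (acts, vars_)
                    del groups[j]
                    merged = True
                    break
            if merged:
                break
    return groups

# Toy logistics-style example: each truck's drive actions modify only that
# truck's location variable, so the trucks separate into two agents.
actions = {
    "drive-t1-a-b": {"at-t1"},
    "drive-t1-b-a": {"at-t1"},
    "drive-t2-a-b": {"at-t2"},
}
print(decompose(actions))
```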
73

Theory of optimization and a novel chemical reaction-inspired metaheuristic

Lam, Yun-sang, Albert, 林潤生. January 2009 (has links)
Published or final version / Electrical and Electronic Engineering / Master / Master of Philosophy
74

An application of genetic algorithms to chemotherapy treatment

Petrovski, Andrei January 1998 (has links)
The present work investigates methods for optimising cancer chemotherapy within the bounds of clinical acceptability, and for making this optimisation easily accessible to oncologists. Clinical oncologists wish to be able to improve existing treatment regimens in a systematic, effective and reliable way. To satisfy these requirements, a novel approach to chemotherapy optimisation has been developed which utilises Genetic Algorithms in an intelligent search for good chemotherapy treatments. The following chapters address various issues related to this approach.

Chapter 1 gives some biomedical background to the problem of cancer and its treatment. The complexity of the cancer phenomenon, as well as the multi-variable and multi-constrained nature of chemotherapy treatment, strongly supports the use of mathematical modelling for predicting and controlling the development of cancer. Some existing mathematical models, which describe the proliferation process of cancerous cells and the effect of anti-cancer drugs on this process, are presented in Chapter 2. Given the aim of controlling cancer development, optimisation and optimal control theory become relevant for achieving the optimal treatment outcome subject to the constraints of cancer chemotherapy. A survey of traditional optimisation methods applicable to the problem under investigation is given in Chapter 3, concluding that the constraints imposed on cancer chemotherapy, together with the general non-linearity of the optimisation functionals associated with the objectives of cancer treatment, often make these methods ineffective. In contrast, Genetic Algorithms (GAs), featuring the methods of evolutionary search and optimisation, have recently demonstrated in many practical situations an ability to quickly discover useful solutions to highly constrained, irregular and discontinuous problems that have been difficult to solve by traditional optimisation methods. Chapter 4 presents the essence of Genetic Algorithms, their salient features and properties, and prepares the ground for their use in optimising cancer chemotherapy treatment.

The particulars of chemotherapy optimisation using Genetic Algorithms are given in Chapters 5 and 6, which present the original work of this thesis. In Chapter 5 the optimisation problem of single-drug chemotherapy is formulated as a search task and solved by several numerical methods; the results obtained from the different optimisation methods are used to assess the quality of the GA solution and the effectiveness of Genetic Algorithms as a whole. Chapter 5 also develops a new approach to tuning GA factors, whereby the optimisation performance of Genetic Algorithms can be significantly improved. This approach is based on statistical inference about the significance of GA factors and on regression analysis of GA performance. Being less computationally intensive than existing methods of GA factor adjustment, the newly developed approach often gives better tuning results.

Chapter 6 deals with the optimisation of multi-drug chemotherapy, a more practical and challenging problem. Its practicality reflects oncologists' preference for administering anti-cancer drugs in various combinations in order to better cope with the occurrence of drug-resistant cells. However, the strict toxicity constraints imposed on combining various anti-cancer drugs make the optimisation problem of multi-drug chemotherapy very difficult to solve, especially when complex treatment objectives are considered. Nevertheless, the experimental results of Chapter 6 demonstrate that this problem is tractable to Genetic Algorithms, which are capable of finding good chemotherapeutic regimens in different treatment situations. On the basis of these results, Genetic Algorithms were encapsulated in an independent optimisation module, which was embedded into a more general and user-oriented environment, the Oncology Workbench; the particulars of this encapsulation and embedding are also given in Chapter 6.

Finally, Chapter 7 concludes the present work by summarising the contributions made and outlining directions for further investigation. The main contributions are: (1) a novel application of the Genetic Algorithm technique in the field of cancer chemotherapy optimisation; (2) the development of a statistical method for tuning the values of GA factors; and (3) the development of a robust and versatile optimisation utility for a clinically usable decision support system. The latter contribution creates an opportunity to widen the application domain of Genetic Algorithms within the field of drug treatments and to allow more clinicians to benefit from GA optimisation.
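As a rough illustration of the general approach this abstract describes, the sketch below runs a simple GA over multi-drug dose schedules, with toxicity handled as a penalty term. All model constants (efficacy and toxicity coefficients, the per-cycle toxicity cap, population sizes) are invented for illustration and bear no relation to the clinically derived models used in the thesis.

```python
# Hedged sketch of GA-based multi-drug schedule optimisation. A schedule is
# a list of treatment cycles, each giving a dose in [0, 1] for every drug.
import random

N_DRUGS, N_CYCLES, POP, GENS = 3, 8, 40, 60
EFFICACY = [1.0, 0.7, 0.5]      # hypothetical per-drug kill coefficients
TOXICITY = [0.8, 0.5, 0.3]      # hypothetical per-drug toxicity coefficients
MAX_TOX_PER_CYCLE = 1.5         # hypothetical clinical constraint

def fitness(schedule):
    """Reward tumour-kill effect, penalise cycles exceeding the toxicity cap."""
    kill = sum(EFFICACY[d] * dose
               for cycle in schedule for d, dose in enumerate(cycle))
    penalty = sum(max(0.0, sum(TOXICITY[d] * dose for d, dose in enumerate(cycle))
                      - MAX_TOX_PER_CYCLE)
                  for cycle in schedule)
    return kill - 10.0 * penalty

def random_schedule():
    return [[random.random() for _ in range(N_DRUGS)] for _ in range(N_CYCLES)]

def mutate(s):
    # Perturb one dose, clamped to the feasible range.
    c, d = random.randrange(N_CYCLES), random.randrange(N_DRUGS)
    s[c][d] = min(1.0, max(0.0, s[c][d] + random.gauss(0, 0.1)))

pop = [random_schedule() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]               # truncation selection
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N_CYCLES)  # one-point crossover on cycles
        child = [row[:] for row in a[:cut] + b[cut:]]
        mutate(child)
        children.append(child)
    pop = survivors + children

print("best fitness:", round(fitness(max(pop, key=fitness)), 3))
```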
75

The relationship between bureaucracy and the law : The fall and rise of the General Warrant

Lewis, S. J. January 1985 (has links)
No description available.
76

A Look at Learning in Repeated Search: The Role of Memory and Competition

Grant, Emily Nicole Skow January 2007 (has links)
The role of memory in repeated search tasks is contentious. Wolfe et al. (2000) argued that participants do not learn a repeated scene and continue to perform a time-consuming search process for hundreds of trials. In contrast, Chun and Jiang (1998, 1999) showed that search efficiency improves for repeated versus new scenes, and that this learning can occur for spatial layout independent of identity, or for identity independent of spatial layout. The experiments presented here demonstrate that participants learn a great deal about repeated search displays, including the location of a particular item (both identity and location), the relative probability with which an item occurs in a location, and the direction from the fixation point to the target. I argue that memory is established for these components and that the reactivation of these memories by a repeated search display produces competition. This competitive target verification process takes time and can result in positive search slopes, which have been taken as evidence for memory-free search; that inference, I argue, is logically flawed.
77

Resource allocation analysis of the stochastic diffusion search

Nasuto, Slawomir Jaroslaw January 1999 (has links)
The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem; as a special case, it is therefore capable of solving the transform-invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions in widely ranging search conditions. However, to date a systematic formal investigation of its properties had not been carried out. This thesis addresses that problem.

The thesis reports results pertaining to the global convergence of SDS and characterises its time complexity. The main emphasis of the work, however, is on the resource allocation behaviour of the Stochastic Diffusion Search. The thesis introduces a novel model of the algorithm, generalising the Ehrenfest urn model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the algorithm's response in terms of the parameters describing the search conditions in the case of a unique best-fit pattern in the search space. The model is further generalised to account for different search conditions: two solutions in the search space, and search for a unique solution in a noisy search space. An approximate solution for the case of two alternative solutions is also proposed and compared with the predictions of the extended Ehrenfest urn model.

The analysis enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space, showing that SDS is biased towards exploitation. This novel perspective led to an investigation of extensions of the standard SDS that strike a different balance between these two modes of search-space processing. Two novel algorithms were derived from the standard Stochastic Diffusion Search, 'context-free' and 'context-sensitive' SDS, and their properties were analysed with respect to resource allocation. They share some of the desired features of their predecessor but also possess properties not present in the classic SDS. The theory developed in the thesis is illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain enabling careful control of search conditions.
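For readers unfamiliar with SDS, the following minimal Python sketch implements the standard algorithm on the string best-fit task the thesis uses as its running example: each agent holds a hypothesised match position, becomes active if a random single-character micro-test passes, and inactive agents either copy an active agent's hypothesis or resample. Agent counts and iteration limits here are arbitrary choices, not values from the thesis.

```python
# Minimal standard SDS for best-fit string search.
import random

def sds(search_space, model, n_agents=100, iterations=200):
    n_positions = len(search_space) - len(model) + 1
    hyps = [random.randrange(n_positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iterations):
        # Test phase: each agent checks one randomly chosen model position.
        for i, h in enumerate(hyps):
            j = random.randrange(len(model))
            active[i] = (search_space[h + j] == model[j])
        # Diffusion phase: inactive agents poll a random agent and either
        # copy its hypothesis (if active) or resample uniformly.
        for i in range(n_agents):
            if not active[i]:
                k = random.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else random.randrange(n_positions)
    # The largest cluster of agents indicates the best-fit position.
    return max(set(hyps), key=hyps.count)

# Should converge to 11, the exact match, despite the near-match at 3.
print(sds("xxxhelyoxxxhelloxx", "hello"))
```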
78

Essays on the Search-Theoretic Approach to Macroeconomics

Potter, Tristan L. January 2016 (has links)
Thesis advisor: Sanjay Chugh / This dissertation studies unemployment, both its micro-level contours and its macro-level fluctuations, from a search-theoretic perspective. Guided by the structure of search theory, each constituent chapter employs a different set of empirical tools to confront a fundamental aspect of joblessness. / Thesis (PhD), Boston College, 2016. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: Economics.
79

No optimisation without representation : a knowledge based systems view of evolutionary/neighbourhood search optimisation

Tuson, Andrew Laurence January 1999 (has links)
In recent years, research into 'neighbourhood search' optimisation techniques such as simulated annealing, tabu search, and evolutionary algorithms has increased apace, resulting in a number of useful heuristic solution procedures for real-world and research combinatorial and function optimisation problems. Unfortunately, their selection and design remains a somewhat ad hoc procedure and very much an art. This shortcoming presents real difficulties for the future development and deployment of these methods. This thesis presents work aimed at resolving the issue of principled optimiser design. Driven by the needs of both the end-user and the designer, and by their knowledge of the problem domain and of the search dynamics of these techniques, a semi-formal, structured design methodology that makes full use of the available knowledge is proposed, justified, and evaluated. The methodology is centred on a Knowledge Based System (KBS) view of neighbourhood search, with a number of well-defined knowledge sources that relate to specific hypotheses about the problem domain. This viewpoint is complemented by a number of design heuristics that suggest a structured series of hillclimbing experiments, allowing results to be empirically evaluated and then transferred to other optimisation techniques if desired.

The thesis first reviews the techniques under consideration and makes the case for exploiting problem-specific knowledge in optimiser design. Optimiser knowledge is shown to derive either from the problem domain theory or from the theory of optimiser search dynamics. From this it is argued that the design process should be driven primarily by problem domain knowledge, as this makes best use of the available knowledge and results in a system whose behaviour is more likely to be justifiable to the end-user. The encoding and neighbourhood operators are shown to embody the main source of problem domain knowledge, and it is shown how forma analysis can be used to formalise the hypotheses about the problem domain that they represent. It should therefore be possible for the designer to evaluate hypotheses about the problem domain experimentally. To this end, design heuristics are proposed and justified that allow the transfer of results across optimisers based on a common hillclimbing class, and that can inform the choice of evolutionary algorithm recombination operators. This approach bears some similarity to KBS design; additional knowledge sources and roles are therefore described and discussed, with forma analysis again playing a key part in their formalisation. Design heuristics for many of these knowledge sources are then proposed and justified.

The methodology is evaluated by testing the validity of the proposed design heuristics in two sequencing case studies. The first, the flowshop sequencing problem, is a well-studied problem from operational research and provides a thorough test of many of the design heuristics proposed here; in addition, an idle-time move preference heuristic is proposed and demonstrated for both directed mutation and candidate list methods. The second case study applies the methodology to the design of a prototype system for resource redistribution in the developing world, a problem that can be modelled as a very large transportation problem with non-linear constraints and objective function.
The system, which combines neighbourhood search with a constructive algorithm that reformulates the problem as one of sequencing, was able to produce feasible shipment plans for problems derived from data from the World Health Organisation's TB programme in China, problems much larger than those tackled by the current 'state of the art' for transportation problems.
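The flavour of the structured hillclimbing experiments this methodology prescribes can be illustrated with a short sketch: two neighbourhood operators, each encoding a different hypothesis about a toy two-machine flowshop problem, are compared under an otherwise identical hillclimber. The problem data, operators, and parameter values are invented for illustration and do not come from the thesis's case studies.

```python
# Comparing neighbourhood operators as competing domain hypotheses.
import random

def makespan(perm, times):
    """Completion time of the last job in a 2-machine flowshop (toy model)."""
    m1 = m2 = 0
    for job in perm:
        m1 += times[job][0]                 # machine 1 runs jobs back to back
        m2 = max(m1, m2) + times[job][1]    # machine 2 waits for machine 1
    return m2

def hillclimb(times, neighbour, steps=5000):
    perm = list(range(len(times)))
    random.shuffle(perm)
    best = makespan(perm, times)
    for _ in range(steps):
        cand = neighbour(perm)
        cost = makespan(cand, times)
        if cost <= best:                    # accept improving/sideways moves
            perm, best = cand, cost
    return best

def swap_adjacent(p):                       # hypothesis: local order matters
    q = p[:]
    i = random.randrange(len(p) - 1)
    q[i], q[i + 1] = q[i + 1], q[i]
    return q

def swap_any(p):                            # hypothesis: global position matters
    q = p[:]
    i, j = random.sample(range(len(p)), 2)
    q[i], q[j] = q[j], q[i]
    return q

times = [(random.randint(1, 9), random.randint(1, 9)) for _ in range(20)]
for op in (swap_adjacent, swap_any):
    print(op.__name__, hillclimb(times, op))
```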
80

Methods for Distributed Information Retrieval

Craswell, Nicholas Eric, Nick.Craswell@anu.edu.au January 2001 (has links)
Published methods for distributed information retrieval generally rely on cooperation from search servers. But most real servers, particularly the tens of thousands available on the Web, are not engineered for such cooperation. This means that the majority of methods proposed, and evaluated in simulated environments of homogeneous cooperating servers, are never applied in practice.

This thesis introduces new methods for server selection and results merging. The methods do not require search servers to cooperate, yet are as effective as the best methods which do. Two large experiments evaluate the new methods against many previously published methods. In contrast to previous experiments, they simulate a Web-like environment, where servers employ varied retrieval algorithms and tend not to sub-partition documents from a single source.

The server selection experiment uses pages from 956 real Web servers, three different retrieval systems and TREC ad hoc topics. Results show that a broker using queries to sample servers' documents can perform selection over non-cooperating servers without loss of effectiveness. However, using the same queries to estimate the effectiveness of servers, in order to favour servers with high-quality retrieval systems, did not consistently improve selection effectiveness.

The results merging experiment uses documents from five TREC sub-collections, five different retrieval systems and TREC ad hoc topics. Results show that a broker using a reference set of collection statistics, rather than relying on cooperation to collate true statistics, can perform merging without loss of effectiveness. Since applying the reference statistics method requires that the broker download the documents to be merged, experiments were also conducted on effective merging based on partial documents. The new ranking method developed was not highly effective on partial documents, but showed some promise on fully downloaded documents.

Using the new methods, an effective search broker can be built, capable of addressing any given set of available search servers, without their cooperation.
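To sketch the reference-statistics idea in the merging experiment, the toy broker below re-scores downloaded result documents against a single fixed set of collection statistics, so that scores from non-cooperating servers become directly comparable. The TF-IDF-style scoring function and all data here are generic stand-ins, not the thesis's actual ranking method.

```python
# Toy results merging with a fixed reference collection's statistics.
import math

REFERENCE_DF = {"search": 5000, "distributed": 800, "retrieval": 1200}
REFERENCE_N = 100_000   # hypothetical reference collection size

def broker_score(query_terms, doc_text):
    """Score a downloaded document using reference (not per-server) statistics."""
    words = doc_text.lower().split()
    score = 0.0
    for t in query_terms:
        tf = words.count(t)
        df = REFERENCE_DF.get(t, 1)
        score += (tf / len(words)) * math.log(REFERENCE_N / df)
    return score

def merge(query_terms, per_server_results):
    """per_server_results: {server: [(doc_id, doc_text), ...]} -> merged ranking."""
    pool = [(broker_score(query_terms, text), server, doc_id)
            for server, docs in per_server_results.items()
            for doc_id, text in docs]
    return sorted(pool, reverse=True)

results = {
    "serverA": [("a1", "distributed retrieval over many servers"),
                ("a2", "web search basics")],
    "serverB": [("b1", "distributed search and retrieval methods")],
}
print(merge(["distributed", "retrieval"], results))
```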
