211 |
Estimating Auction Equilibria using Individual Evolutionary Learning. James, Kevin, 31 May 2019 (has links)
I develop the Generalized Evolutionary Nash Equilibrium Estimator (GENEE) library. The tool is designed to provide a generic computational library for running genetic algorithms and individual evolutionary learning in economic decision-making environments. Most importantly, I have adapted the library to estimate equilibrium bidding functions in auctions. I show that it produces highly accurate estimates across a large class of auction environments with known solutions. I then apply GENEE to estimate the equilibria of two additional auctions with no known solutions: first-price sealed-bid common value auctions with multiple signals, and simultaneous first-price auctions with subadditive values.
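A minimal sketch of individual evolutionary learning for a first-price sealed-bid auction with uniform private values, in the spirit of the approach described above. This is not the GENEE library; the bidder count, population size, mutation rate and the restriction to linear shading rules (bid = c * value) are assumptions made for the example, chosen so that the known risk-neutral equilibrium c = (n - 1)/n gives a benchmark for the learned behaviour.

```python
# Illustrative individual evolutionary learning (IEL) sketch, NOT the GENEE
# library: each bidder carries a population of candidate shading factors c,
# evaluates them by the payoff they would have earned against the last
# observed rival bids (foregone payoff), and evolves the population by
# selection plus Gaussian mutation. All parameters below are assumptions.
import random

N_BIDDERS = 4          # assumed number of bidders
POP_SIZE = 40          # candidate strategies each agent carries
GENERATIONS = 2000
MUTATION_SD = 0.05

def foregone_payoff(c, value, rival_bids):
    """Payoff a shading factor c would have earned against the given rival bids."""
    bid = c * value
    return (value - bid) if bid > max(rival_bids) else 0.0

def iel_run(seed=0):
    rng = random.Random(seed)
    pops = [[rng.random() for _ in range(POP_SIZE)] for _ in range(N_BIDDERS)]
    last_bids = [0.0] * N_BIDDERS
    chosen = [0.5] * N_BIDDERS
    for _ in range(GENERATIONS):
        values = [rng.random() for _ in range(N_BIDDERS)]
        # each agent plays the candidate it currently believes is best
        for i in range(N_BIDDERS):
            rivals = last_bids[:i] + last_bids[i + 1:]
            chosen[i] = max(pops[i], key=lambda c: foregone_payoff(c, values[i], rivals))
        bids = [chosen[i] * values[i] for i in range(N_BIDDERS)]
        # update each agent's population: selection on foregone payoff + mutation
        for i in range(N_BIDDERS):
            rivals = bids[:i] + bids[i + 1:]
            scored = sorted(pops[i], key=lambda c: foregone_payoff(c, values[i], rivals), reverse=True)
            parents = scored[: POP_SIZE // 2]
            children = [min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, MUTATION_SD)))
                        for _ in range(POP_SIZE - len(parents))]
            pops[i] = parents + children
        last_bids = bids
    return [sum(p) / len(p) for p in pops]   # mean learned shading per agent

if __name__ == "__main__":
    print("learned shading factors:", [round(c, 3) for c in iel_run()])
    print("risk-neutral benchmark :", (N_BIDDERS - 1) / N_BIDDERS)
```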
212 |
Numerical algorithms for the pole placement problem. Miminis, George S. January 1984 (has links)
No description available.
213 |
Heuristic algorithms for routing problems. Chong, Yen N. January 2001 (has links)
General routing problems deal with transporting commodities and/or travelling along the arcs of a given network in some optimal manner. In the modern world such problems arise in several contexts, such as the distribution of goods and the transportation of commodities and/or people. In this thesis we focus on two classical routing problems, namely the Travelling Salesman Problem (TSP) and the Vehicle Routing Problem (VRP). The TSP can be described as follows: a salesman travels from his home city, visits each of the other (n - 1) cities exactly once and returns to the home city such that the total distance travelled is minimised. The VRP may be stated as follows: a set of n customers (with known locations and demands for some commodity) is to be supplied from a single depot using a set of delivery vehicles, each with a prescribed capacity. A delivery route starts from the depot, visits some customers and returns to the depot. The VRP is to determine the delivery routes for the vehicles such that the total distance travelled by all the vehicles is minimised. These routing problems are simple to state in words, but they are very complex in terms of providing a suitable mathematical formulation and a valid procedure to solve them. They belong to the class of NP-hard (non-deterministic polynomial-time hard) problems. With present knowledge, it is believed that NP-hard problems are unlikely to have good algorithms for arriving at optimal solutions to a general instance. Hence researchers have focused their effort on (i) developing exact algorithms to solve as large problems as possible, and (ii) developing heuristics to produce near-optimal solutions. The exact algorithms for such problems have not performed satisfactorily, as they need an enormous amount of computational time to solve moderate-size problems. For instance, in the literature, TSPs with 225, 4461 and 7397 cities were solved using computational times of 1 year, 1.9 years and 4 years respectively (Junger et al., 1995). Thus heuristics, in particular probabilistic methods such as tabu search, play a significant role in obtaining near-optimal solutions. In the literature there is very little comparison between the various exact algorithms and heuristics. (Very often the real-life problems are too large and no optimal solution can be found in a reasonable time.) One of the problems with a probabilistic heuristic is that different implementations (runs) of the same heuristic on a given problem may produce distinct solutions of different quality; thus the desired quality and reproducibility of the solution cannot be ensured. Furthermore, the performance of a heuristic on benchmark problems provides no guarantee of the quality of solutions that can be obtained for the problem faced by a researcher. Most of the documentation on the performance of heuristics on literature problems provides no information regarding the computational effort (CPU time) spent in obtaining the claimed solution, the reproducibility of the claimed solution, or the hardware environment of the implementation. This thesis focuses on some of these deficiencies. Most of the heuristics for general combinatorial optimisation problems are based on neighbourhood search methods.
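As an illustration of the neighbourhood search idea just mentioned, the sketch below applies the classical 2-opt move (reversing a segment of the tour) to a small random Euclidean TSP instance and descends to a local optimum. It is a generic example, not the GENIUS or tabu search heuristics studied in the thesis; the instance size and random seed are arbitrary assumptions.

```python
# A minimal neighbourhood-search sketch for the TSP using 2-opt moves.
# Generic illustration only; instance and parameters are assumptions.
import random, math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly move to a better 2-opt neighbour until a local optimum is reached."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                # neighbour obtained by reversing the segment tour[i+1..j]
                candidate = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

if __name__ == "__main__":
    random.seed(1)
    pts = [(random.random(), random.random()) for _ in range(30)]
    dist = [[math.dist(p, q) for q in pts] for p in pts]
    start = list(range(len(pts)))
    local_opt = two_opt(start, dist)
    print(round(tour_length(start, dist), 3), "->", round(tour_length(local_opt, dist), 3))
```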
This thesis explores and provides a formal setup for defining neighbourhood structures, together with definitions of local and global optima. Furthermore, it highlights the dependence of such methods on the neighbourhood structure and their drawbacks. It is necessary to emphasise the need for a statistical analysis of the output to be part of any such probabilistic heuristic. Some of the statistical tools used for the two probabilistic heuristics for the TSP and the VRP are developed. Furthermore, these heuristics are part of a bigger class, tabu search heuristics, for combinatorial optimisation problems. Hence the thesis includes an overview of the TSP, the VRP and tabu search methods in Chapters 2, 3 and 4 respectively. Subsequently, in Chapters 5, 6, 7 and 8, the ideas of neighbourhood structure, local optimum, etc. are developed and the required statistical analysis for some heuristics on the TSP and VRP is demonstrated. In Chapter 9, the conclusion of this thesis is drawn and the direction of future work is outlined. The following is a brief outline of the contribution of this thesis. In Chapter 5, the ideas of neighbourhood structure, local optimum, global optimum and probabilistic heuristics for any combinatorial optimisation problem are developed. The drawbacks of the probabilistic heuristics for such problems are highlighted. Furthermore, the need to select the best heuristic on the basis of testing a statistical hypothesis, and the related statistical analysis, is emphasised. Chapter 6 illustrates some of the ideas presented in Chapter 5 using the GENIUS algorithm proposed for the TSP. Statistical analysis is performed for 36 variations of the GENIUS algorithm, based on different neighbourhood parameters, different types of insertion methods and two types of construction of starting solutions. The analysis is performed on 27 literature problems with sizes ranging from 100 cities to 532 cities and 20 randomly generated problems with sizes ranging from 100 cities to 480 cities. In both cases the best heuristic is selected. Furthermore, the fitting of the Weibull distribution to the objective function values of the heuristic solutions provides an estimate of the optimal objective function value and a corresponding confidence interval for both the literature and randomly generated problems. In both cases the estimates of the optimal objective function value are within 8.2% of the best known objective function value. Since the GENIUS algorithm proved to be efficient, a hybrid heuristic for the TSP, combining the branch-and-bound method and the GENIUS algorithm to solve large problems, is proposed. The algorithm is tested on both literature problems with sizes ranging from 575 cities to 724 cities and randomly generated problems with sizes ranging from 500 cities to 700 cities. Though problems could not be solved to optimality within the 10-hour time limit, solutions were found within 2.4% of the best known objective function value in the literature. In Chapter 7, a similar statistical analysis for the TABUROUTE algorithm proposed for the VRP is conducted. The analysis is carried out based on the different neighbourhood parameters and tested using 14 literature problems with sizes ranging from 50 cities to 199 cities and 49 randomly generated problems with sizes ranging from 60 cities to 120 cities. In both sets of problems, the statistical tests accepted the hypothesis that there is no significant difference in the solutions produced by the various parameter settings of the TABUROUTE algorithm.
By fitting the Weibull distribution to the objective function values of the locally optimal solutions, the optimal objective function value and a corresponding confidence interval for each problem are estimated. These estimates are within 6.1% and 18.3% of the best known values for the literature problems and the randomly generated problems respectively. In Chapter 8, the general neighbourhood search method for a general combinatorial optimisation problem is presented. Very often, the neighbourhood structure can be defined suitably only on a superset S' of the set of feasible solutions S; the solutions in S' \ S are then infeasible. Several questions are posed regarding the computational complexity of the solution space of a problem. These concepts are illustrated on the 199-city CDVRP, using the TABUROUTE algorithm. In addition, the idea of the complexity of the solution space is explored, based on samples collected over 140 runs. Some of the data collected include the number of solutions that are distance- and/or capacity-feasible, the number of feasible neighbourhood solutions encountered in one run, etc. Questions such as the following are answered: How many solutions are there for the 199-city problem? How many local minima are there for the 199-city problem? What is the size of the feasible region for the 199-city problem? Finally, the conclusion is drawn that this problem cannot be used as a benchmark, given the size of its feasible region and its many local minima. The conclusion of this thesis and directions for future work are discussed in Chapter 9. There are two appendices presented at the end of the thesis. Appendix A presents the details of the Friedman test, the expected utility function test and the estimation of the optimal objective function value based on the Weibull distribution. Appendix B presents a list of tables from Chapters 6, 7 and 8.
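The Weibull-based estimation used in Chapters 6 and 7 can be sketched as follows: fit a three-parameter Weibull distribution to the objective values returned by repeated runs of a probabilistic heuristic, and take the fitted location (shift) parameter as a point estimate of the optimal objective value. The sketch below uses synthetic run data and a standard fitting routine; it does not reproduce the confidence interval construction detailed in Appendix A, and all numbers are assumptions made for illustration.

```python
# Sketch of estimating an optimal objective value by fitting a three-parameter
# Weibull distribution to heuristic run results. Synthetic data; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_optimum = 1000.0
# pretend these are tour lengths from 100 independent heuristic runs
runs = true_optimum + rng.weibull(2.0, size=100) * 50.0

# fit shape, location and scale; start the location search just below the best run
shape, loc, scale = stats.weibull_min.fit(runs, loc=runs.min() - 1.0)
print(f"best value observed    : {runs.min():.1f}")
print(f"estimated optimal value: {loc:.1f}  (Weibull location parameter)")
```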
214 |
Algorithms in the Study of Multiperfect and Odd Perfect Numbers. January 2003 (has links)
A long standing unanswered question in number theory concerns the existence (or not) of odd perfect numbers. Over time many properties of an odd perfect number have been established and refined. The initial goal of this research was to improve the lower bound on the number of distinct prime factors of an odd perfect number, if one exists, to at least 9. Previous approaches to this problem involved the analysis of a carefully chosen set of special cases, with each then being eliminated by a contradiction. This thesis applies an algorithmic, factor chain approach to the problem. The implementation of such an approach as a computer program allows the speed, accuracy and flexibility of modern computer technology to not only assist but even direct the discovery process. In addition to considering odd perfect numbers, several related problems were investigated, concerned with (i) harmonic, (ii) even multiperfect and (iii) odd triperfect numbers. The aim in these cases was to demonstrate the correctness and versatility of the computer code and to fine tune its efficiency while seeking improved properties of these types of numbers. As a result of this work, significant improvements have been made to the understanding of harmonic numbers. The introduction of harmonic seeds, coupled with a straightforward procedure for generating most harmonic numbers below a chosen bound, expands the opportunities for further investigations of harmonic numbers and in particular allowed the determination of all harmonic numbers below 10^12 and a proof that there are no odd harmonic numbers below 10^15. When considering even multiperfect numbers, a search procedure was implemented to find the first 10-perfect number as well as several other new ones. As a fresh alternative to the factor chain search, a 0-1 linear programming model was constructed and used to show that all multiperfect numbers divisible by 2^a, for a <= 65, subject to a modest constraint, are known in the literature. Odd triperfect numbers (if they exist) have properties which are similar to, but simpler than, those for odd perfect numbers. An extended test on the possible prime factors of such a number was developed that, with minor differences, applies to both odd triperfect and odd perfect numbers. When applicable, this test allows an earlier determination of a contradiction within a factor chain and so reduces the effort required. It was also shown that an odd triperfect number must be greater than 10^128. While the goal of proving that an odd perfect number must have at least 9 distinct prime factors was not achieved, due mainly to practical limitations, the algorithmic approach was able to show that for an odd perfect number with 8 distinct prime factors, (i) if it is exactly divisible by 3^(2a) then a = 1, 2, 3, 5, 6 or a >= 31; (ii) if the special component is pi^alpha, with pi < 10^6 and pi^(alpha+1) < 10^40, then alpha = 1.
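The divisor-sum properties underlying multiperfect and harmonic numbers can be checked directly on small examples. The sketch below is not the factor-chain search code described in the abstract; it simply verifies, for a few well-known values, that sigma(n) is an integer multiple of n (multiperfect) and that the harmonic mean of the divisors is an integer (harmonic).

```python
# Small divisor-sum checks for multiperfect and harmonic (Ore) numbers.
# Illustrative only; this is not the thesis's factor-chain search code.
from fractions import Fraction

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def multiperfect_order(n):
    """Return k if sigma(n) = k * n for an integer k, else None."""
    s = sum(divisors(n))
    return s // n if s % n == 0 else None

def is_harmonic(n):
    """Harmonic (Ore) numbers: the harmonic mean of the divisors is an integer."""
    divs = divisors(n)
    hmean = Fraction(n * len(divs), sum(divs))   # equals d(n) / sum(1/d over divisors)
    return hmean.denominator == 1

for n in (6, 28, 120, 140, 672):
    print(n, "sigma(n)/n =", multiperfect_order(n), " harmonic:", is_harmonic(n))
```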
215 |
The deprioritised approach to prioritised algorithms. Howe, Stephen Alexander, Mathematics & Statistics, Faculty of Science, UNSW, January 2008 (has links)
Randomised algorithms are an effective method of attacking computationally intractable problems. A simple and fast randomised algorithm may produce results to an accuracy sufficient for many purposes, especially in the average case. In this thesis we consider average case analyses of heuristics for certain NP-hard graph optimisation problems. In particular, we consider algorithms that find dominating sets of random regular directed graphs. As well as providing an average case analysis, our results also determine new upper bounds on domination numbers of random regular directed graphs. The algorithms for random regular directed graphs considered in this thesis are known as prioritised algorithms. Each prioritised algorithm determines a discrete random process. This discrete process may be continuously approximated using differential equations. Under certain conditions, the solutions to these differential equations describe the behaviour of the prioritised algorithm. Applying such an analysis to prioritised algorithms directly is difficult. However, we are able to use prioritised algorithms to define new algorithms, called deprioritised algorithms, that can be analysed in this fashion. Defining a deprioritised algorithm based on a given prioritised algorithm, and then analysing the deprioritised algorithm, is called the deprioritised approach. The initial theory describing the deprioritised approach was developed by Wormald and has been successfully applied in many cases. However, not all algorithms are covered by Wormald's theory: for example, algorithms for random regular directed graphs. The main contribution of this thesis is the extension of the deprioritised approach to a larger class of prioritised algorithms. We demonstrate the new theory by applying it to two algorithms which find dominating sets of random regular directed graphs.
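A rough illustration of a prioritised (greedy) heuristic for dominating sets follows. The random d-out-regular digraph model, the notion of domination used (a vertex is dominated if it is chosen or has a chosen in-neighbour) and the priority rule are assumptions made for the example; they are not the specific algorithms, nor the deprioritised analysis, developed in the thesis.

```python
# Greedy ("prioritised") dominating-set sketch on a random out-regular digraph.
# Model, domination notion and priority rule are illustrative assumptions.
import random

def random_out_regular_digraph(n, d, seed=0):
    """Each vertex gets d distinct random out-neighbours."""
    rng = random.Random(seed)
    return {v: rng.sample([u for u in range(n) if u != v], d) for v in range(n)}

def greedy_dominating_set(out):
    n = len(out)
    undominated = set(range(n))
    chosen = set()
    while undominated:
        # priority: the vertex covering the most currently undominated vertices
        def gain(v):
            return len(({v} | set(out[v])) & undominated)
        v = max(range(n), key=gain)
        chosen.add(v)
        undominated -= {v} | set(out[v])
    return chosen

if __name__ == "__main__":
    g = random_out_regular_digraph(n=200, d=3)
    D = greedy_dominating_set(g)
    print("dominating set size:", len(D), "of", len(g), "vertices")
```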
216 |
Three-dimensional measurement using a single camera and target tracking. Iovenitti, Pio Gioacchino, piovenitti@swin.edu.au, January 1997 (has links)
This thesis involves the development of a three-dimensional measurement system
for digitising the surface of an object. The measurement system consists of a single
camera and a four-point planar target of known size. The target is hand-held, and is
used to probe the surface of the object being measured. The position of the target is
tracked by the camera, and the contact point on the object is determined. The
vision-based digitising technique can be used in the industrial and engineering design fields
during the product development phase.
The accuracy of measurement is an important criterion for establishing the success
of the 3-D measurement system, and the factors influencing the accuracy are
investigated. These factors include the image processing algorithm, the intrinsic
parameters of the camera, the algorithm to determine the position, and various
procedural variables. A new iterative algorithm is developed to calculate position.
This algorithm is evaluated, and its performance is compared to that of an analytic
algorithm. Simple calibration procedures are developed to determine the intrinsic
parameters, and mathematical models are constructed to justify these procedures.
The performance of the 3-D measurement system is established and compared to
that of existing digitising systems.
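The core geometric step of such a system, recovering the pose of a known planar four-point target from a single image given the camera intrinsics, can be sketched with a modern perspective-n-point solver. The code below uses OpenCV's generic solvePnP rather than the iterative algorithm developed in the thesis, and the target size, camera matrix and image coordinates are placeholder values, not data from the thesis.

```python
# Sketch: pose of a known four-point planar target from one image, via OpenCV.
# Not the thesis's algorithm; all numeric values are placeholder assumptions.
import numpy as np
import cv2

# four corners of a 60 mm square target, in the target's own coordinate frame (mm)
target_points = np.array([[0, 0, 0], [60, 0, 0], [60, 60, 0], [0, 60, 0]], dtype=np.float64)

# assumed pinhole intrinsics (focal length and principal point, in pixels)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)          # assume negligible lens distortion

# pixel coordinates of the four target corners found by the image-processing step
image_points = np.array([[310.0, 220.0], [420.0, 230.0], [415.0, 335.0], [305.0, 330.0]])

ok, rvec, tvec = cv2.solvePnP(target_points, image_points, camera_matrix, dist_coeffs)
if ok:
    print("target position in camera frame (mm):", tvec.ravel())
```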
217 |
Extensions to the probabilistic multi-hypothesis tracker for improved data association / Davey, Samuel Jarrod, January 2003 (has links)
"September 2003" / Bibliography: leaves 209-216. / xxvi, 216 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis has introduced a number of enhancements to the PMHT algorithm, motivated by the Over the Horizon Radar tracking problem. The two primary enhancements are the incorporation of classification information, and the introduction of a discrete state model for the assignment prior probability. The modified PMHT algorithms achieved through these enhancements are referred to as PMHT-c and PMHT-y respectively. / Thesis (Ph.D.)--University of Adelaide, School of Electrical and Electronic Engineering, 2003
218 |
A normal-mixture model with random-effects for RR-interval data / Ketchum, Jessica McKinney, January 2006 (has links)
Thesis (Ph. D.)--Virginia Commonwealth University, 2006. / Prepared for: Dept. of Biostatistics. Bibliography: leaves 189-198. Also available online via the Internet.
219 |
Robust minutia-based fingerprint verification. Deng, Huimin, January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.
220 |
The design of an immunity-based search and rescue system for humanitarian logistics. Ko, W. Y., Albert. January 2006 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2006. / Title proper from title frame. Also available in printed format.