1 |
Convergent algorithms in simulation optimization. Hu, Liujia. 27 May 2016 (has links)
It is frequently the case that deterministic optimization models could be made more practical by explicitly incorporating uncertainty. The resulting stochastic optimization problems are in general more difficult to solve than their deterministic counterparts, because the objective function cannot be evaluated exactly and/or because there is no explicit relation between the objective function and the corresponding decision variables. This thesis develops random search algorithms for solving optimization problems with continuous decision variables when the objective function values can be estimated with some noise via simulation. Our algorithms maintain a set of sampled solutions and use simulation results at these solutions to guide the search for better solutions.
In the first part of the thesis, we propose an Adaptive Search with Resampling and Discarding (ASRD) approach for solving continuous stochastic optimization problems. Our ASRD approach is a framework for designing provably convergent algorithms that are adaptive both in seeking new solutions and in keeping or discarding already sampled solutions. The framework improves on the Adaptive Search with Resampling (ASR) method of Andradottir and Prudius in that it spends less effort on inferior solutions (the ASR method does not discard already sampled solutions). We present conditions under which the ASRD method is convergent almost surely and carry out numerical studies comparing the algorithms. Moreover, we show that whether it is beneficial to resample depends on the problem, and analyze when resampling is desirable. Our numerical results show that the ASRD approach makes substantial improvements on ASR, especially for difficult problems with large numbers of local optima.
In traditional simulation optimization problems, noise is only involved in the objective functions. However, many real-world problems involve stochastic constraints. Such problems are more difficult to solve because of the added uncertainty about feasibility. The second part of the thesis presents an Adaptive Search with Discarding and Penalization (ASDP) method for solving continuous simulation optimization problems involving stochastic constraints. Rather than addressing feasibility separately, ASDP utilizes the penalty function method from deterministic optimization to convert the original problem into a series of simulation optimization problems without stochastic constraints. We present conditions under which the ASDP algorithm converges almost surely from inside the feasible region, and conditions under which it converges to the optimal solution but without a feasibility guarantee. We also conduct numerical studies assessing the efficiency and tradeoff under the two different convergence modes.
Finally, in the third part of the thesis, we propose a random search method named Gaussian Search with Resampling and Discarding (GSRD) for solving simulation optimization problems with continuous decision spaces. The method combines the ASRD framework with a sampling distribution based on a Gaussian process that not only utilizes the current best estimate of the optimal solution but also learns from past sampled solutions and their objective function observations. We prove that our GSRD algorithm converges almost surely, and carry out numerical studies of the effects of the Gaussian sampling strategy. Our numerical results show that the GSRD framework performs well when the underlying objective function is multi-modal; however, it takes much longer to sample solutions, especially in higher dimensions.
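The resampling-and-discarding idea behind ASRD can be illustrated with a toy one-dimensional sketch. Everything below (the Gaussian noise model, the 50/50 global/local sampling rule, the discard threshold) is an illustrative assumption, not the thesis's actual algorithm:

```python
import random

def asrd_sketch(f, lo, hi, iters=2000, discard_gap=1.0, noise=0.1, seed=0):
    """Toy 1-D adaptive random search with resampling and discarding.

    A pool of sampled points keeps running-average objective estimates;
    each iteration samples a new point, resamples the incumbent, and
    discards points whose average is far worse than the current best.
    """
    rng = random.Random(seed)
    pool = {}  # solution -> [sum of noisy observations, count]

    def observe(x):
        s = pool.setdefault(x, [0.0, 0])
        s[0] += f(x) + rng.gauss(0.0, noise)   # one simulation replication
        s[1] += 1

    def avg(x):
        s = pool[x]
        return s[0] / s[1]

    observe(rng.uniform(lo, hi))
    for _ in range(iters):
        best = min(pool, key=avg)
        # adaptive sampling: mix global draws with local steps around the best
        if rng.random() < 0.5:
            x_new = rng.uniform(lo, hi)
        else:
            x_new = min(hi, max(lo, best + rng.gauss(0.0, 0.1 * (hi - lo))))
        observe(x_new)
        observe(best)                          # resampling sharpens the incumbent
        cutoff = avg(best) + discard_gap
        for x in [y for y in pool if y != best and avg(y) > cutoff]:
            del pool[x]                        # discarding inferior solutions
    return min(pool, key=avg)

# Hypothetical test problem: minimize (x - 2)^2 under Gaussian noise
x_star = asrd_sketch(lambda x: (x - 2.0) ** 2, -5.0, 5.0)
```

Discarding keeps the pool small so the per-iteration bookkeeping stays cheap, which is the practical motivation the thesis gives for improving on plain resampling.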
|
2 |
Adaptive Java optimisation using machine learning techniques. Long, Shun. January 2004 (has links)
There is a continuing demand for higher performance, particularly in the area of scientific and engineering computation. In order to achieve high performance in the context of frequent hardware upgrading, software must be adaptable for portable performance. What is required is an optimising compiler that evolves and adapts itself to environmental change without sacrificing performance. Java has emerged as a dominant programming language widely used in a variety of application areas. However, its architecture-independent design means that it is frequently unable to deliver high performance, especially when compared to other imperative languages such as Fortran and C/C++. This thesis presents a language- and architecture-independent approach to achieve portable high performance. It uses the mapping notation introduced in the Unified Transformation Framework to specify a large optimisation space. A heuristic random search algorithm is introduced to explore this space in a feedback-directed iterative optimisation manner. It is then extended using a machine learning approach which enables the compiler to learn from its previous optimisations and apply the knowledge when necessary. Both the heuristic random search algorithm and the learning optimisation approach are implemented in a prototype Adaptive Optimisation Framework for Java (AOF-Java). The experimental results show that the heuristic random search algorithm can find, within a relatively small number of attempts, good points in the large optimisation space. In addition, the learning optimisation approach is capable of finding good transformations for a given program from its prior experience with other programs.
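The feedback-directed iterative optimisation loop can be sketched as a plain random search over a discrete transformation space. The parameter names and the cost model below are hypothetical stand-ins for a real timed benchmark run:

```python
import random

def iterative_search(cost, space, budget=50, seed=0):
    """Heuristic random search over a discrete optimisation space.

    `space` maps each transformation parameter to its candidate values
    (e.g. tile size, unroll factor); `cost` plays the role of timing a
    run of the transformed program and reporting it as feedback.
    """
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        c = cost(cfg)                   # feedback: evaluate the candidate
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg, best_cost

# Made-up cost model: best at tile=32, unroll=4
space = {"tile": [8, 16, 32, 64], "unroll": [1, 2, 4, 8]}
cost = lambda cfg: abs(cfg["tile"] - 32) + abs(cfg["unroll"] - 4)
cfg, c = iterative_search(cost, space)
```

A learning extension, as the abstract describes, would replace the uniform `rng.choice` with a distribution biased by configurations that worked well on previously seen programs.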
|
3 |
Application Of Controlled Random Search Optimization Technique In MMLE With Process Noise. Anilkumar, A K. 08 1900 (has links)
Generally, in most applications of estimation theory using the Method of Maximum Likelihood Estimation (MMLE) for dynamical systems, one deals with situations where only measurement noise is present. However, in many present-day applications where modeling errors and random state noise input conditions occur, it has become necessary for MMLE to handle measurement noise as well as process noise. Numerical algorithms accounting for both measurement and process noise require significantly (an order of magnitude) more computer time and memory. Furthermore, implementation difficulties and convergence problems are often encountered. Here one has to estimate the initial state error covariance matrix Po, the measurement noise covariance matrix R, the process noise covariance matrix Q, and the system parameter vector θ; the present work deals with the above.
Since the above problem is fairly involved, we need a good reference solution. For this purpose we utilize the approach and results of Gemson, who considered the above problem via the extended Kalman filter (EKF) route, to compare against the present results from the MMLE route. The EKF treats the unknown parameters as additional states, unlike MMLE, which uses only the system states.
Chapter 1 provides a brief historical perspective, followed by parameter identification in the presence of process and measurement noise. The earlier formulations, such as the natural, innovation, combined, and adaptive approaches, are discussed.
Chapter 2 deals with the heuristic adaptive tuning of the Kalman filter matrices Q and R by Myers and Tapley, originally developed for state estimation problems involving satellite orbit estimation. It turns out that for parameter estimation problems, apart from the above matrices, even the choice of the initial covariance matrix Po is crucial for obtaining proper parameter estimates with a finite amount of data; for this purpose the inverse of the information matrix is used for Po. This is followed by a description of the original Controlled Random Search (CRS) of Price and its variant as implemented and used in the present work to estimate or tune Q, R, and θ, which is the aim of the present work. The above helps the reader appreciate the setting under which the present study has been carried out.
Chapter 3 presents the results and analysis of the estimation procedure with respect to a specific case study of the lateral dynamics of an aircraft involving 15 unknown parameters. The reference results for the present work are those based on the approach of Gemson and Ananthasayanam (1998). The present work proceeds in two phases: in the first case (i), the EKF estimates for Po, Q, and R are used to obtain θ; in the second case (ii), the estimates of Po and Q, together with a reasonable choice of R, are utilized to obtain θ from the CRS algorithm. Thus one is able to assess the capability of the CRS to estimate only the unknown parameters.
Chapter 4 presents the results of utilizing the CRS algorithm, with R based on a reasonable choice and Po obtained from the inverse of the information matrix, to estimate both Q and θ. This brings out the efficiency of MMLE with the CRS algorithm in estimating unknown process noise characteristics and unknown parameters. Thus it demonstrates that the difficult Q can be estimated using the CRS technique without the attendant difficulties of the earlier MMLE formulations in dealing with process noise.
Chapter 5 discusses the implementation of CRS to estimate the unknown measurement noise covariance matrix R together with the unknown θ, utilizing the values of Po and Q obtained through the EKF route. The effect of variation of R on the parameter estimation procedure is also highlighted. This chapter further explores and establishes the importance of Po in the estimation procedure, a feature most earlier works do not appear to have recognized. It turned out that the CRS algorithm does not converge when an arbitrary value of Po is chosen; Po has to be obtained from a scouting pass of the EKF. Sensitivity studies based on variations of Po show its importance. Further studies show that the sequence of updates, the random nature of the process and measurement noise effects, and the deterministic nature of the parameters play a critical role in the convergence of the algorithm.
The final chapter, Chapter 6, presents the conclusions from the present work and suggestions for further work.
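Price's original CRS step, as described in Chapter 2, can be sketched in a minimal form. The population size, iteration budget, and toy test function below are illustrative assumptions; the thesis applies the variant to tuning Q, R, and θ, not to an analytic function:

```python
import random

def crs_sketch(f, lo, hi, dim=2, pop=25, iters=3000, seed=0):
    """Minimal sketch of Price's Controlled Random Search (CRS).

    A population is refined by reflecting a randomly chosen point
    through the centroid of `dim` other random points; the trial
    replaces the worst population member whenever it improves on it.
    """
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        pts = rng.sample(P, dim + 1)
        centroid = [sum(p[i] for p in pts[:-1]) / dim for i in range(dim)]
        trial = [2 * centroid[i] - pts[-1][i] for i in range(dim)]
        if all(lo <= t <= hi for t in trial):   # reject infeasible reflections
            worst = max(P, key=f)
            if f(trial) < f(worst):
                P[P.index(worst)] = trial
    return min(P, key=f)

# Smooth bowl standing in for the MMLE cost surface
sphere = lambda x: sum(t * t for t in x)
x_best = crs_sketch(sphere, -3.0, 3.0)
```

The appeal for the MMLE setting is that this loop needs only cost-function values, never gradients, so noisy or awkwardly shaped likelihood surfaces pose no structural difficulty.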
|
4 |
Direct Search Methods for Nonsmooth Problems using Global Optimization Techniques. Robertson, Blair Lennon. January 2010 (has links)
This thesis considers the practical problem of constrained and unconstrained local optimization. This subject has been well studied when the objective function f is assumed to be smooth. However, nonsmooth problems occur naturally and frequently in practice. Here f is assumed to be nonsmooth or discontinuous, without forcing smoothness assumptions near, or at, a potential solution. Various methods have been presented by others to solve nonsmooth optimization problems; however, only partial convergence results are possible for these methods.
In this thesis, an optimization method which uses a series of local and localized global optimization phases is proposed. The local phase searches for a local minimum and gives the method its numerical performance on parts of f which are smooth. The localized global phase exhaustively searches for points of descent in a neighborhood of cluster points. It is the localized global phase which provides strong theoretical convergence results on nonsmooth problems.
Algorithms are presented for solving bound constrained, unconstrained and constrained nonlinear nonsmooth optimization problems. These algorithms use direct search methods in the local phase as they can be applied directly to nonsmooth problems because gradients are not explicitly required. The localized global optimization phase uses a new partitioning random search algorithm to direct random sampling into promising subsets of ℝⁿ. The partition is formed using classification and regression trees (CART) from statistical pattern recognition. The CART partition defines desirable subsets where f is relatively low, based on previous sampling, from which further samples are drawn directly. For each algorithm, convergence to an essential local minimizer of f is demonstrated under mild conditions. That is, a point x* for which the set of all feasible points with lower f values has Lebesgue measure zero for all sufficiently small neighborhoods of x*. Stopping rules are derived for each algorithm giving practical convergence to estimates of essential local minimizers. Numerical results are presented on a range of nonsmooth test problems for 2 to 10 dimensions showing the methods are effective in practice.
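The partition-directed sampling idea can be illustrated in one dimension with the CART partition replaced by a simple fixed binning. The bin count, the exploit/explore split, and the test function below are all assumptions made for the sketch, not the thesis's algorithm:

```python
import random

def partition_sample(f, lo, hi, rounds=30, per_round=20, bins=10, seed=0):
    """Toy 1-D partition-directed random sampling.

    The interval is split into bins (a stand-in for a CART partition);
    each round draws most new samples from the bin whose sampled f
    values are lowest so far, plus a few global samples so no region
    is ever permanently excluded.
    """
    rng = random.Random(seed)
    width = (hi - lo) / bins
    pts = [rng.uniform(lo, hi) for _ in range(per_round)]
    for _ in range(rounds):
        # score each bin by the best f value sampled in it so far
        best_in_bin = [min((f(x) for x in pts
                            if lo + i * width <= x < lo + (i + 1) * width),
                           default=float("inf"))
                       for i in range(bins)]
        i_best = min(range(bins), key=lambda i: best_in_bin[i])
        a, b = lo + i_best * width, lo + (i_best + 1) * width
        # exploit the promising bin, keep some global exploration
        pts += [rng.uniform(a, b) for _ in range(per_round - 5)]
        pts += [rng.uniform(lo, hi) for _ in range(5)]
    return min(pts, key=f)

# Discontinuous test function: a step plus a quadratic well at x = 1
g = lambda x: (x - 1.0) ** 2 + (1.0 if x < 0 else 0.0)
x_best = partition_sample(g, -4.0, 4.0)
```

Because only function values at sampled points drive the partition, discontinuities in f cause no difficulty, which mirrors why direct sampling suits the nonsmooth setting of this thesis.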
|
5 |
Parallelization of random search global optimization algorithms. Lančinskas, Algirdas. 20 June 2013 (has links)
Global optimization problems are relevant in various fields of research and industry, such as chemistry, biology, biomedicine, operational research, etc. Normally it is easier to solve optimization problems having some specific properties of objective function such as linearity, convexity, differentiability, etc. However, there are a lot of practical problems that do not satisfy such properties or even cannot be expressed in an adequate mathematical form. Therefore, it is popular to use random search optimization methods in solving such optimization problems.
The dissertation deals with investigation of random search global optimization algorithms, their parallelization and application to solve practical problems. The work is focused on modification and parallelization of particle swarm optimization and genetic algorithms.
The modification of the particle swarm optimization algorithm, based on reduction of the search area, is proposed, and several strategies to parallelize the algorithm are investigated. The algorithm is applied to solve the Multiple Gravity Assist problem using a parallel computing system.
A hybrid global multi-objective optimization algorithm is developed by modifying a single-agent stochastic search strategy and incorporating it into a multi-objective optimization genetic algorithm. Several strategies to parallelize the multi-objective optimization genetic algorithm are proposed. The resulting parallel algorithms are experimentally investigated by solving competitive facility location... [to full text]
|
7 |
Some Population Set-Based Methods for Unconstrained Global Optimization. Kaelo, Professor. 16 November 2006 (has links)
Student Number: 0214677F. PhD thesis, School of Computational and Applied Mathematics, Faculty of Science.
Many real-life problems are formulated as global optimization problems with continuous variables. These problems are in most cases nonsmooth, nonconvex and often simulation based, making gradient-based methods impossible to use. Therefore, efficient, reliable and derivative-free global optimization methods for solving such problems are needed. In this thesis, we focus on improving the efficiency and reliability of some global optimization methods. In particular, we concentrate on improving some population set-based methods for unconstrained global optimization, mainly through hybridization. Hybridization has widely been recognized to be one of the most attractive areas of unconstrained global optimization. Experiments have shown that through hybridization, new methods that inherit the strength of the original elements but not their weaknesses can be formed. We suggest a number of new hybridized population set-based methods based on differential evolution (DE), controlled random search (CRS2) and the real coded genetic algorithm (GA).
We propose five new versions of DE. In the first version, we introduce a localization, called random localization, in the mutation phase of DE. In the second version, we propose a localization in the acceptance phase of DE. In the third version, we form a DE hybrid algorithm by probabilistically combining the point generation scheme of CRS2 with that of DE in the DE algorithm. The fourth and fifth versions are also DE hybrids; these versions hybridize the mutation of DE with the point generation rule of the electromagnetism-like (EM) algorithm. We also propose five new versions of CRS2. The first version modifies the point generation scheme of CRS2 by introducing a local mutation technique. In the second and third modifications, we probabilistically combine the point generation scheme of CRS2 with the linear interpolation scheme of a trust-region based method. The fourth version is a CRS hybrid that probabilistically combines the quadratic interpolation scheme with the linear interpolation scheme in CRS2. In the fifth version, we form a CRS2 hybrid algorithm by probabilistically combining the point generation scheme of CRS2 with that of DE in the CRS2 algorithm. Finally, we propose five new versions of the real coded genetic algorithm (GA) with arithmetic crossover. In the first version of GA, we introduce a local technique. We propose, in the second version, an integrated crossover rule that generates two children at a time using two different crossover rules. We introduce a local technique in the second version to obtain the third version. The fourth and fifth versions are based on the probabilistic adaptation of crossover rules.
The efficiency and reliability of the new methods are evaluated through numerical experiments using a large test suite of both simple and difficult problems from the literature. Results indicate that the new hybrids are much better than their original counterparts in both reliability and efficiency. Therefore, the new hybrids proposed in this study offer an alternative to many currently available stochastic algorithms for solving global optimization problems in which gradient information is not readily available.
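The "random localization" in the DE mutation phase (the first of the five DE versions) can be sketched roughly as follows. Taking the best of the three randomly selected members as the base point is our reading of the idea, and the population size, F, and CR settings are illustrative, not the thesis's:

```python
import random

def de_rl(f, lo, hi, dim=2, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """DE with random localization in mutation (sketch).

    Of three randomly chosen population members, the best serves as
    the base of the mutant; binomial crossover and greedy selection
    follow the standard DE scheme.
    """
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            base, d1, d2 = sorted([a, b, c], key=f)  # localization: best of three
            mutant = [base[k] + F * (d1[k] - d2[k]) for k in range(dim)]
            j_rand = rng.randrange(dim)              # guarantee one mutant gene
            trial = [mutant[k] if (rng.random() < CR or k == j_rand) else P[i][k]
                     for k in range(dim)]
            trial = [min(hi, max(lo, t)) for t in trial]
            if f(trial) <= f(P[i]):                  # greedy selection
                P[i] = trial
    return min(P, key=f)

sphere = lambda x: sum(t * t for t in x)
x_best = de_rl(sphere, -5.0, 5.0)
```

Compared with plain DE/rand/1, biasing the base toward the best of the sampled triple pulls mutants toward locally good regions, which is the kind of localization the abstract describes.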
|
8 |
Adaptive Random Search Methods for Simulation Optimization. Prudius, Andrei A. 26 June 2007 (has links)
This thesis is concerned with identifying the best decision among a set of possible decisions in the presence of uncertainty. We are primarily interested in situations where the objective function value at any feasible solution needs to be estimated, for example via a "black-box" simulation procedure. We develop adaptive random search methods for solving such simulation optimization problems. The methods are adaptive in the sense that they use information gathered during previous iterations to decide how simulation effort is expended in the current iteration. We consider random search because such methods assume very little about the structure of the underlying problem, and hence can be applied to solve complex simulation optimization problems with little expertise required from an end-user. Consequently, such methods are suitable for inclusion in simulation software.
We first identify desirable features that algorithms for discrete simulation optimization need to possess to exhibit attractive empirical performance. Our approach emphasizes maintaining an appropriate balance between exploration, exploitation, and estimation. We also present two new and almost surely convergent random search methods that possess these desirable features and demonstrate their empirical attractiveness.
Second, we develop two frameworks for designing adaptive and almost surely convergent random search methods for discrete simulation optimization. Our frameworks involve averaging, in that all decisions that require estimates of the objective function values at various feasible solutions are based on the averages of all observations collected at these solutions so far. We present two new and almost surely convergent variants of simulated annealing and demonstrate the empirical effectiveness of averaging and adaptivity in the context of simulated annealing.
Finally, we present three random search methods for solving simulation optimization problems with uncountable feasible regions. One of the approaches is adaptive, while the other two are based on pure random search. We provide conditions under which the three methods are convergent, both in probability and almost surely. Lastly, we include a computational study that demonstrates the effectiveness of the methods when compared to some other approaches available in the literature.
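The averaging idea (basing every decision on the running mean of all observations collected so far at a solution) can be sketched as follows; the noisy objective is a made-up example, not one of the thesis's test problems:

```python
import random

class AveragedEstimates:
    """Running-average objective estimates: every comparison between
    solutions uses the mean of all observations collected at each
    solution so far, rather than a single noisy sample."""

    def __init__(self, simulate):
        self.simulate = simulate          # noisy black-box evaluation
        self.stats = {}                   # solution -> (sum, count)

    def observe(self, x):
        s, n = self.stats.get(x, (0.0, 0))
        self.stats[x] = (s + self.simulate(x), n + 1)

    def mean(self, x):
        s, n = self.stats[x]
        return s / n

rng = random.Random(1)
# True objective (x - 3)^2 observed under unit-variance Gaussian noise
est = AveragedEstimates(lambda x: (x - 3) ** 2 + rng.gauss(0.0, 1.0))
for _ in range(200):                      # repeated observations shrink the noise
    est.observe(2)
    est.observe(3)
```

With a single sample, the noise (standard deviation 1) can easily invert the true ordering of f(2) = 1 and f(3) = 0; after 200 replications the standard error of each mean drops to about 0.07, so averaged comparisons become reliable, which is the point of the averaging frameworks.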
|
9 |
Interference cancellation for collocated wireless radios. Raghavan, Anand. 29 June 2007 (has links)
The area of deterministic noise cancellation in mobile radio communication systems is investigated and analyzed. Several interoperation problems in the mobile wireless radio space are identified and interference concerns for the Bluetooth - WLAN networks are characterized and quantified in the physical layer.
A mathematical framework has been created for describing interference in the 2.4 GHz band. An adaptive noise suppression system has been developed that is able to alleviate the encroachment of the aggressor signal on the victim without sacrificing any of the original signal. This system is demonstrated to improve the victim SNR in a spread spectrum communication scenario.
The research is extended to construct an interference canceller that is easy to assimilate into existing RF front-ends. A low-power, small form-factor analog active canceller has been designed in 0.18-µm Si-CMOS IC technology that delivers adequate noise suppression performance while operating in the RF domain. This includes novel implementations of phase rotator circuits based on delay interpolation and an integrated low-current quadrature modulator-based continuously variable analog phase shifter. This canceller is capable of up to 30 dB of in-band cancellation for the Bluetooth - WLAN problem. Other versions of the canceller are configured to protect GPS and DVB-H receivers from unintentional radiators transmitting in the vicinity. These demonstrate noise mitigation of at least 15 dB in their respective bands while generating very low broadband noise at the output. A simple low-power mixed-signal automatic control mechanism is also developed to operate the canceller adaptively.
The work described in this dissertation advances the state-of-the-art in the area of mobile wireless radio coexistence.
|
10 |
Bounded exhaustive generation of tests in model-based testing. Almajni, Nour Alhuda. January 2021 (has links)
There are some systems (or parts of systems) that are very critical and need especially good test suites. For these critical systems, exhaustive testing may be a good approach. Thus, we have implemented two versions of bounded exhaustive search (BES) algorithms in a model-based testing tool called Modbat. One of the BES versions (BESnL) visits each self-loop in the model only once; the other version (BESL) places no limit on the number of times it visits each self-loop. We have then compared the two BES algorithms with each other and with an already implemented algorithm in Modbat called random search (RS). We have run the three algorithms (BESL, BESnL and RS) on five different models and compared their performance in terms of time, coverage and finding faults. We have found that BESnL is faster than BESL and can miss some faults that BESL finds; however, BESnL can find errors faster than BESL. BESL sometimes performs better than BESnL in terms of branch coverage. In terms of other coverage criteria (such as state coverage, transition coverage and instruction coverage), BESL and BESnL perform very similarly. We have also found that the RS algorithm is, in general, faster than both BES algorithms at generating tests (given the same total number of tests generated) if the model has a clear end state. RS may also be faster at finding faults than the BES algorithms. However, the BES algorithms and the RS algorithm behave similarly with regard to coverage. Nevertheless, RS can sometimes reach higher coverage faster than the BES algorithms and with a smaller number of tests.
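A bounded exhaustive enumeration of test paths, with the self-loop-once restriction as an option, can be sketched as follows. The model encoding and function names are hypothetical illustrations, not Modbat's API:

```python
def bounded_paths(model, start, bound, loop_once=False):
    """Enumerate all transition paths of length up to `bound` in a
    state model given as {state: [successor states]}.  With
    `loop_once=True`, each state's self-loop is taken at most once
    per path, mimicking the BESnL variant."""
    paths = []

    def walk(state, path, used_loops):
        paths.append(path)                # every prefix is itself a test path
        if len(path) == bound:
            return
        for nxt in model.get(state, []):
            if nxt == state:              # self-loop transition
                if loop_once and state in used_loops:
                    continue
                walk(nxt, path + [(state, nxt)], used_loops | {state})
            else:
                walk(nxt, path + [(state, nxt)], used_loops)

    walk(start, [], frozenset())
    return paths

# Hypothetical model: 'a' has a self-loop, then moves to end state 'b'
model = {"a": ["a", "b"], "b": []}
all_paths = bounded_paths(model, "a", 3)
limited = bounded_paths(model, "a", 3, loop_once=True)
```

On this two-state model the unrestricted search yields every prefix path up to length three (seven paths including the empty one), while the self-loop-once variant prunes repeated loop transitions down to four, illustrating why BESnL is faster but can miss behaviours that only repeated looping exposes.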
|