1 |
Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems. Madabushi, Ananth R. 17 February 1997
This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems, such as discrete linear or polynomial programming problems and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case and polyhedral outer-approximation methods for the continuous case. These problems arise in applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations, so it becomes imperative to employ an efficient method for solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point procedure, can be inadequately slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian relaxation approach. With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent, a primal recovery, and a penalty function approach to recover a near-optimal and feasible pair of primal and dual solutions.
The proposed primal-dual approach comprises three stages. Stage I solves the Lagrangian dual problem using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS). In the latter, the deflection parameter is determined by projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction; this projected direction approximates the desired optimal direction as closely as possible, using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method, in which we start with a prescribed step-length and then decide whether to increase or decrease it based on the right-hand and left-hand derivative information available at each iteration. In the second stage (Stage II), a sequence of updated primal solutions is generated using convex combinations of the Lagrangian subproblem solutions. Alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions. Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to improve the obtained primal solution toward a near-feasible and near-optimal solution.
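As a rough illustration of the Stage I machinery described above, the following is a minimal sketch of deflected subgradient ascent on a Lagrangian dual (Python/NumPy). The dual oracle, the diminishing step-length rule, and the nonnegativity projection are placeholder assumptions for illustration; the thesis's MGT/M-ADS deflections and its quadratic-fit and directional-derivative line searches are not reproduced here.

    import numpy as np

    def deflected_subgradient_ascent(dual_oracle, u0, iters=200):
        # dual_oracle(u) is assumed to return (theta(u), g), where theta is the
        # Lagrangian dual function and g is a subgradient obtained from the
        # Lagrangian subproblem solution at u.
        u = np.array(u0, dtype=float)
        d_prev = None
        best_u, best_val = u.copy(), -np.inf
        for k in range(iters):
            val, g = dual_oracle(u)
            if val > best_val:
                best_val, best_u = val, u.copy()
            gnorm = np.linalg.norm(g)
            if gnorm < 1e-10:
                break
            if d_prev is None:
                d = g
            else:
                # ADS-style deflection: (roughly) bisect the angle between the
                # current subgradient and the previous direction.
                d = g + (gnorm / np.linalg.norm(d_prev)) * d_prev
            step = 1.0 / (k + 1.0)                # placeholder diminishing step
            u = np.maximum(u + step * d, 0.0)     # projection, assuming u >= 0
            d_prev = d
        return best_u, best_val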
We present computational experience using a set of randomly generated, structured, linear programming problems of the type that might typically arise in the context of discrete optimization. / Master of Science
|
2 |
Adaptive Sampling Line Search for Simulation Optimization. Ragavan, Prasanna Kumar 08 March 2017
This thesis is concerned with the development of algorithms for simulation optimization (SO), a special case of stochastic optimization in which the objective function can only be evaluated through noisy observations from a simulation. Deterministic techniques, when applied directly to simulation optimization problems, fail to converge because they cannot handle randomness, so more sophisticated algorithms are required. However, many existing algorithms dedicated to simulation optimization perform poorly in practice because they require extensive parameter tuning.
To overcome these shortfalls of existing SO algorithms, we develop ADALINE, a line-search-based algorithm that eliminates the need for user-defined parameters. ADALINE is designed to identify a local minimum on continuous and integer-ordered feasible sets. On a continuous feasible set it mimics deterministic line search algorithms, while on integer-ordered feasible sets it iterates between a line search and an enumeration procedure in its quest to identify a local minimum. ADALINE improves upon many existing SO algorithms by determining the sample size adaptively as a trade-off between the estimation error and the optimization error; that is, the algorithm expends simulation effort in proportion to the quality of the incumbent solution. We also show that ADALINE converges "almost surely" to the set of local minima. Finally, our numerical results suggest that ADALINE converges to a local minimum faster, outperforming other advanced SO algorithms that utilize variable sampling strategies.
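The adaptive-sampling idea, balancing estimation error against optimization error, can be sketched as follows (Python/NumPy). The function name, the specific balancing rule, and the constants are illustrative assumptions; ADALINE's actual sample-size recursion is more involved.

    import numpy as np

    def estimate_with_adaptive_n(simulate, x, opt_error, n0=8, n_max=4096, c=1.0):
        # simulate(x) returns one noisy observation of the objective at x.
        # Replications are added until the estimation error (standard error of
        # the sample mean) is small relative to a measure of the optimization
        # error, e.g. the norm of an estimated gradient or the observed decrease.
        obs = [simulate(x) for _ in range(n0)]
        while len(obs) < n_max:
            se = np.std(obs, ddof=1) / np.sqrt(len(obs))
            if se <= c * max(opt_error, 1e-12):
                break
            obs.append(simulate(x))
        return float(np.mean(obs)), len(obs)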
To demonstrate the performance of our algorithm on a practical problem, we apply ADALINE to a surgery rescheduling problem. In the rescheduling problem, the objective is to minimize the cost of disruptions to an existing schedule shared between multiple surgical specialties while accommodating semi-urgent surgeries that require expedited intervention. The disruptions to the schedule are determined using a threshold-based heuristic, and ADALINE identifies the threshold levels for the various surgical specialties that minimize the expected total cost of disruption. A comparison of the solutions obtained using a Sample Average Approximation (SAA) approach and ADALINE is provided. We find that the adaptive sampling strategy in ADALINE identifies a better solution more quickly than SAA. / Ph. D. / This thesis is concerned with the development of algorithms for simulation optimization (SO), where the objective function does not have an analytical form and can only be estimated through noisy observations from a simulation. Deterministic techniques, when applied directly to simulation optimization problems, fail to converge because they cannot handle randomness, so more sophisticated algorithms are required. However, many existing algorithms dedicated to simulation optimization perform poorly in practice because they require extensive parameter tuning.
To overcome these shortfalls of existing SO algorithms, we develop ADALINE, a line-search-based algorithm that minimizes the need for user-defined parameters. ADALINE is designed to identify a local minimum on continuous and integer-ordered feasible sets. On continuous feasible sets it mimics deterministic line search algorithms, while on integer-ordered feasible sets it iterates between a line search and an enumeration procedure in its quest to identify a local minimum. ADALINE improves upon many existing SO algorithms by determining the sample size adaptively as a trade-off between the estimation error and the optimization error; that is, the algorithm expends simulation effort in proportion to the quality of the incumbent solution. Finally, our numerical results suggest that ADALINE converges to a local minimum faster than the best available SO algorithm for this purpose.
To demonstrate the performance of our algorithm on a practical problem, we apply ADALINE to a surgery rescheduling problem. In the rescheduling problem, the objective is to minimize the cost of disruptions to an existing schedule shared between multiple surgical specialties while accommodating semi-urgent surgeries that require expedited intervention. The disruptions to the schedule are determined using a threshold-based heuristic, and ADALINE identifies the threshold levels for the various surgical specialties that minimize the expected total cost of disruption. A comparison of the solutions obtained using traditional optimization techniques and ADALINE is provided. We find that the adaptive sampling strategy in ADALINE identifies a better solution more quickly than traditional optimization.
|
3 |
Numerical Solution of the coupled algebraic Riccati equations. Rajasingam, Prasanthan 01 December 2013
In this paper we develop new and improved results on the numerical solution of the coupled algebraic Riccati equations. First, we provide improved matrix upper bounds on the positive semidefinite solution of the unified coupled algebraic Riccati equations. Our approach is largely inspired by recent results established by Liu and Zhang. Our main results tighten the estimates of the relevant dominant eigenvalues; also, by relaxing the key restriction, our upper bound applies to a larger number of situations. We also present an iterative algorithm to refine the new upper bounds and the lower bounds and to numerically compute the solutions of the unified coupled algebraic Riccati equations. This construction follows the approach of Gao, Xue and Sun, but we use different bounds, which leads to a different convergence analysis. In addition, we provide new matrix upper bounds for the positive semidefinite solution of the continuous coupled algebraic Riccati equations. By using an alternative primary assumption we present a new upper bound. We follow the idea of Davies, Shi and Wiltshire for the non-coupled equation and extend their results to the coupled case. We also present an iterative algorithm to improve our upper bounds. Finally, we improve the classical Newton method with a line search technique to compute the solutions of the continuous coupled algebraic Riccati equations. Newton's method for coupled Riccati equations is attributed to Salama and Gourishanar, but we construct the algorithm in a different way, using the Fréchet derivative, and we also include a line search. Our algorithm leads to faster convergence than the classical scheme. Numerical evidence is also provided to illustrate the performance of our algorithm.
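For context, one standard form of the continuous coupled algebraic Riccati equations, as they arise for instance in Markovian jump linear-quadratic control, is (the unified equations treated in this work may differ in detail):

    \[
      A_i^{T} X_i + X_i A_i + Q_i - X_i S_i X_i + \sum_{j=1}^{N} \pi_{ij} X_j = 0,
      \qquad i = 1, \dots, N,
    \]

where the coefficients \pi_{ij} couple the N equations and S_i collects the control weighting, for example S_i = B_i R_i^{-1} B_i^{T}.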
|
4 |
Programação quadratica sequencial e condições de qualificação / Sequential quadratic programming and constraint qualification. Nunes, Fernanda Téles 03 September 2009
Advisor: Maria Aparecida Diniz Ehrhardt / Dissertation (master's), Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: In the context of constrained optimization problems, we face optimality conditions and also constraint qualifications. Our aim is to study in detail several constraint qualifications, highlighting the constant positive linear dependence condition, and their influence on the convergence of Sequential Quadratic Programming algorithms. The relevance of this study lies in the fact that convergence results whose hypotheses involve weak constraint qualifications are stronger than those based on strong constraint qualifications. Numerical experiments will be carried out both to investigate the efficiency of these methods in solving problems with different constraint qualifications and to compare two different kinds of line search, monotone and nonmonotone. We want to confirm the hypothesis that algorithms based on a nonmonotone line search act against the Maratos effect, which commonly occurs when solving minimization problems with Sequential Quadratic Programming methods. / Master's in Applied Mathematics
|
5 |
Line search methods with variable sample size / Metodi linijskog pretrazivanja sa promenljivom velicinom uzorka. Krklec Jerinkić, Nataša 17 January 2014
The problem under consideration is an unconstrained optimization problem whose objective function has the form of a mathematical expectation. The expectation is with respect to a random variable that represents the uncertainty; therefore, the objective function is in fact deterministic. However, finding the analytical form of that objective function can be very difficult or even impossible, which is why the sample average approximation is often used. In order to obtain a reasonably good approximation of the objective function, we have to use a relatively large sample size. We assume that the sample is generated at the beginning of the optimization process, and therefore we can consider this sample average objective function as a deterministic one. However, applying a deterministic method to that sample average function from the start can be very costly. The number of evaluations of the function under expectation is a common way of measuring the cost of an algorithm. Therefore, methods that vary the sample size throughout the optimization process have been developed; most of them try to determine the optimal dynamics of increasing the sample size.

The main goal of this thesis is to develop a class of methods that can decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable; roughly speaking, we do not want to impose a large precision, i.e., a large sample size, when we are far away from the solution we are searching for. A detailed description of the new methods is presented in Chapter 4, together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by working with the full sample from the start.

Another important characteristic of the methods proposed here is the line search technique used for obtaining the subsequent iterates. The idea is to find a suitable direction and to search along it until we obtain a sufficient decrease in the function value. The sufficient decrease is determined by the line search rule. In Chapter 4 that rule is monotone, i.e., we impose a strict decrease of the function value. In order to decrease the cost of the algorithm even further and to enlarge the set of suitable search directions, we use nonmonotone line search rules in Chapter 5. Within that chapter, these rules are modified to fit the variable sample size framework; moreover, the conditions for global convergence and the R-linear rate are presented.

In Chapter 6, numerical results are presented. The test problems are varied: some of them are academic and some are real-world problems. The academic problems give us more insight into the behavior of the algorithms, while data coming from real-world problems test the practical applicability of the proposed algorithms. In the first part of that chapter, the focus is on the variable sample size techniques; different implementations of the proposed algorithm are compared to each other and to other sampling schemes. The second part is mostly devoted to the comparison of the various line search rules combined with different search directions in the variable sample size framework. The overall numerical results show that using the variable sample size can improve the performance of the algorithms significantly, especially when nonmonotone line search rules are used.

The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, basics of nonlinear optimization are presented, with the focus on line search, while Chapter 3 deals with the stochastic framework. These chapters review the relevant known results, while the rest of the thesis represents the original contribution.
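A basic nonmonotone line search rule of the kind referred to above is sketched below (Python/NumPy); it is a Grippo-Lampariello-Lucidi-type rule, with f standing for the current sample average objective. The thesis modifies such rules to fit the variable sample size framework, which this sketch does not attempt to reproduce.

    import numpy as np

    def nonmonotone_armijo_step(f, x, d, g, f_hist, eta=1e-4, beta=0.5, max_backtracks=30):
        # Accept a step length giving sufficient decrease relative to the
        # maximum of the recent function values in f_hist, rather than the
        # current value alone.
        f_ref = max(f_hist)
        slope = float(g @ d)          # directional derivative; assumed negative
        t = 1.0
        for _ in range(max_backtracks):
            if f(x + t * d) <= f_ref + eta * t * slope:
                break
            t *= beta
        return t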
|
6 |
Izbor parametara kod gradijentnih metoda za probleme optimizacije bez ograničenja / Choice of parameters in gradient methods for the unconstrained optimization problems. Đorđević, Snežana 22 May 2015
Partially, this investigation is<br />motivated by the existence of problems where the objective function<br />is the result of simulations.<br />Numerical results, presented in Chapter 6, show that the introduction<br />of a parameter is useful, i.e., such introduction results by the<br />acceleration of the known optimization method.<br />Further, one new hybrid conjugate gradient method is presented, in<br />which the conjugate gradient parameter is a convex combination of<br />two known conjugate gradient parameters.<br />In the first chapter, there is motivation and also the basic co ncepts<br />which are necessary for the other chapters.<br />The second chapter contains the survey of some first order and<br />second order gradient methods.<br />The fourth chapter contains the survey of some basic concepts and<br />results corresponding to conjugate gradient methods.<br />The first, the second and the fourth chapters are here to help in<br />considering of some known results, and the original results are<br />presented in the chapters 3,5 and 6.<br />In the third chapter, a modification of one unco nstrained optimization<br />method is presented, in which the randomly chosen multiplicative<br />parameter is used. Also, the linear convergence of such modification<br />is proved.<br />The fifth chapter contains the original results, corresponding to<br />conjugate gradient methods. Namely, one new hybrid conjugate<br />gradient method is presented, and this method is the convex<br />combination of two known conjugate gradient methods.<br />The sixth chapter consists of the numerical results, performed on a set<br />of test functions, corresponding to methods in the chapters 3 and 5.<br />Implementation of all considered algorithms is made in Mathematica.<br />The comparison criterion is CPU time.</p> / <p>The problem under consideration is an unconstrained optimization<br />problem. There are many different methods made in aim to solve the<br />optimization problems. The investigation made here is motivated by<br />the fact that the methods which converge fast are necessary.<br />The main goal is the systematization of some known results and also<br />theoretical and numerical analysis of the possibilities to int roduce<br />some parameters within gradient methods.<br />Firstly, the minimization problem is considered, where the objective<br />function is a convex, multivar iable function. This problem is solved<br />here without the calculation of Hessian, and such solution is very<br />important, for example, when the big dimension systems are solved,<br />and also for solving optimization problems with unknown values of<br />the objective function and its gradient. 
Partially, this investigation is<br />motivated by the existence of problems where the objective function<br />is the result of simulations.<br />Numerical results, presented in Chapter 6, show that the introduction<br />of a parameter is useful, i.e., such introduction results by the<br />acceleration of the known optimization method.<br />Further, one new hybrid conjugate gradient method is presented, in<br />which the conjugate gradient parameter is a convex combination of<br />two known conjugate gradient parameters.<br />In the first chapter, there is motivation and also the basic co ncepts<br />which are necessary for the other chapters.<br />Key Words Documentation 97<br />The second chapter contains the survey of some first order and<br />second order gradient methods.<br />The fourth chapter contains the survey of some basic concepts and<br />results corresponding to conjugate gradient methods.<br />The first, the second and the fourth chapters are here to help in<br />considering of some known results, and the original results are<br />presented in the chapters 3,5 and 6.<br />In the third chapter, a modification of one unco nstrained optimization<br />method is presented, in which the randomly chosen multiplicative<br />parameter is used. Also, the linear convergence of such modification<br />is proved.<br />The fifth chapter contains the original results, corresponding to<br />conjugate gradient methods. Namely, one new hybrid conjugate<br />gradient method is presented, and this method is the convex<br />combination of two known conjugate gradient methods.<br />The sixth chapter consists of the numerical results, performed on a set<br />of test functions, corresponding to methods in the chapters 3 and 5.<br />Implementation of all considered algorithms is made in Mathematica.<br />The comparison criterion is CPU time</p>
|
7 |
Numerical Methods for Wilcoxon Fractal Image Compression. Jau, Pei-Hung 28 June 2007
In this thesis, the Wilcoxon approach to linear regression problems is combined with fractal image compression to form a novel Wilcoxon fractal image compression. When the original image is corrupted by noise, we argue that the fractal image compression scheme should be insensitive to the outliers present in the corrupted image; this leads to the new concept of robust fractal image compression. The proposed Wilcoxon fractal image compression is the first attempt toward the design of robust fractal image compression. Four different numerical methods, i.e., steepest descent, line minimization based on quadratic interpolation, line minimization based on cubic interpolation, and least absolute deviation, are proposed to solve the associated linear Wilcoxon regression problem. The simulation results show that, compared with traditional fractal image compression, Wilcoxon fractal image compression has very good robustness against outliers caused by salt-and-pepper noise. However, it does not show a great improvement in robustness against outliers caused by Gaussian noise.
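A minimal sketch of the rank-based (Wilcoxon) fit together with one of the four numerical methods named above, line minimization by quadratic interpolation, is given below (Python/NumPy). The dispersion function follows Jaeckel's definition with Wilcoxon scores; treating the per-block contrast parameter s alone (the offset drops out because the scores sum to zero) and the bracketing interval are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    def wilcoxon_dispersion(s, x, y):
        # Jaeckel's rank-based dispersion of the residuals y - s*x with
        # Wilcoxon scores; ties in the residuals are broken arbitrarily.
        e = y - s * x
        n = len(e)
        ranks = np.argsort(np.argsort(e)) + 1
        scores = np.sqrt(12.0) * (ranks / (n + 1.0) - 0.5)
        return float(np.sum(scores * e))

    def parabolic_line_min(f, a, b, c, iters=30):
        # Successive quadratic (parabolic) interpolation on a bracket
        # a < b < c with f(b) < f(a) and f(b) < f(c).
        fa, fb, fc = f(a), f(b), f(c)
        for _ in range(iters):
            num = (b - a)**2 * (fb - fc) - (b - c)**2 * (fb - fa)
            den = (b - a) * (fb - fc) - (b - c) * (fb - fa)
            if den == 0.0:
                break
            u = b - 0.5 * num / den
            if not (a < u < c) or abs(u - b) < 1e-12:
                break
            fu = f(u)
            if fu < fb:                 # u becomes the new middle point
                if u < b:
                    c, fc = b, fb
                else:
                    a, fa = b, fb
                b, fb = u, fu
            else:                       # u tightens one end of the bracket
                if u < b:
                    a, fa = u, fu
                else:
                    c, fc = u, fu
        return b

    # Illustrative use on one range/domain block pair (x, y), assuming the
    # bracket [0, 1, 2] contains the minimizing contrast parameter:
    # s_hat = parabolic_line_min(lambda s: wilcoxon_dispersion(s, x, y), 0.0, 1.0, 2.0)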
|
8 |
Newton's method with exact line search for solving the algebraic Riccati equation. Benner, P.; Byers, R. 30 October 1998
This paper studies Newton's method for solving the algebraic Riccati equation combined with an exact line search. Based on these considerations we present a Newton-like method for solving algebraic Riccati equations. This method can improve the sometimes erratic convergence behavior of Newton's method.
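A minimal sketch of this idea for the standard continuous-time equation A^T X + X A - X G X + Q = 0 is given below (Python/NumPy, assuming SciPy is available and that the initial iterate X0 is stabilizing). Along the Newton step N, the residual satisfies R(X + tN) = (1 - t) R(X) - t^2 N G N, so the squared Frobenius norm of the residual is a quartic in t and the exact line search reduces to minimizing a scalar quartic.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def newton_care_exact_ls(A, G, Q, X0, iters=50, tol=1e-12):
        # Newton's method for A^T X + X A - X G X + Q = 0 with an exact
        # line search along each Newton step.
        X = np.array(X0, dtype=float)
        for _ in range(iters):
            R = A.T @ X + X @ A - X @ G @ X + Q
            if np.linalg.norm(R, 'fro') < tol:
                break
            Ac = A - G @ X                            # closed-loop matrix
            N = solve_continuous_lyapunov(Ac.T, -R)   # Ac^T N + N Ac = -R
            V = N @ G @ N
            a, b, c = np.sum(R * R), np.sum(R * V), np.sum(V * V)
            # f(t) = a(1-t)^2 - 2b(1-t)t^2 + c t^4 ; its derivative is a cubic.
            f = lambda t: a * (1 - t)**2 - 2.0 * b * (1 - t) * t**2 + c * t**4
            crit = np.roots([4.0 * c, 6.0 * b, 2.0 * (a - 2.0 * b), -2.0 * a])
            cand = [1.0] + [r.real for r in crit
                            if abs(r.imag) < 1e-10 and 0.0 < r.real <= 2.0]
            t = min(cand, key=f)
            X = X + t * N
        return X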
|
9 |
Conception d'heuristiques d'optimisation pour les problèmes de grande dimension : application à l'analyse de données de puces à ADN / Heuristics implementation for high-dimensional problem optimization: application in microarray data analysis. Gardeux, Vincent 30 November 2011
This PhD thesis addresses the recent issue of solving high-dimensional problems. We present methods designed to solve them, along with their applications, notably for feature selection in the data mining field. In the first part of this thesis, we introduce the stakes of solving high-dimensional problems. We mainly investigate line search methods, because we consider them to be particularly suitable for solving such problems. Then, we present the methods we developed, based on this principle: CUS, EUS and EM323. We emphasize, in particular, the very high convergence speed of CUS and EUS, and their simplicity of implementation. The EM323 method results from a hybridization of EUS with a one-dimensional optimization algorithm developed by F. Glover: the 3-2-3 algorithm. We show that the results of EM323 are more accurate, especially for non-separable problems, which are the weak point of line-search-based methods. In the second part, we focus on data mining problems, and especially on microarray data analysis. The objectives are to classify data and to predict the behavior of new samples. First, a collaboration with the Tenon Hospital in Paris allows us to analyze their private breast cancer data. To this end, we develop an exact method, called delta-test, enhanced by a method designed to automatically select the optimal number of variables.
Subsequently, we develop a heuristic variable selection method, named ABEUS, based on optimizing the performance of the DLDA classifier. The results obtained on publicly available data show that our methods select very small subsets of variables, which is an important criterion for avoiding overfitting.
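The DLDA classifier mentioned above (diagonal linear discriminant analysis) can be sketched as follows (Python/NumPy), in the form commonly used for microarray data: class means combined with a pooled, per-feature variance estimate. This is a generic sketch, not necessarily the exact variant used in the thesis.

    import numpy as np

    def dlda_fit(X, y):
        # Per-class means and a pooled, per-feature (diagonal) variance.
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        resid = X - means[np.searchsorted(classes, y)]
        var = resid.var(axis=0, ddof=len(classes)) + 1e-12
        return classes, means, var

    def dlda_predict(X, classes, means, var):
        # Assign each sample to the class whose mean is closest in the
        # variance-weighted squared distance.
        d = ((X[:, None, :] - means[None, :, :]) ** 2 / var).sum(axis=2)
        return classes[np.argmin(d, axis=1)]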
|
10 |
Estudo comparativo de passos espectrais e buscas lineares não monótonas / Comparative study of spectral steplengths and nonmonotone line searches. Camargo, Fernando Taietti 07 March 2008
The Spectral Gradient method, introduced by Barzilai and Borwein and analyzed by Raydan for unconstrained minimization, is a simple method whose performance is comparable to that of traditional methods such as conjugate gradients. Since the introduction of the method, as well as its extension to minimization on convex sets, several combinations of different spectral steplengths and different nonmonotone line searches have been introduced. From the numerical results presented in many studies it is not possible to infer whether there are significant differences in the performance of the various methods. It is also not clear whether nonmonotone line searches are relevant as a tool in themselves or whether, in fact, they are useful only to keep the method as similar as possible to the original Barzilai-Borwein method. The objective of this study is to compare the methods recently introduced as combinations of different nonmonotone line searches and different spectral steplengths, to find the best combination and, from there, to evaluate the numerical performance of the method.
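For reference, the two spectral (Barzilai-Borwein) steplengths discussed above can be written as the following minimal sketch (Python/NumPy); the usual safeguards, such as clipping the steplength to an interval and the nonmonotone acceptance test, are omitted.

    import numpy as np

    def bb_steplengths(x_new, x_old, g_new, g_old):
        # The two classical Barzilai-Borwein spectral steplengths, computed
        # from the last step s and the corresponding gradient change y.
        s = x_new - x_old
        yv = g_new - g_old
        alpha_bb1 = (s @ s) / (s @ yv)      # "long" BB step
        alpha_bb2 = (s @ yv) / (yv @ yv)    # "short" BB step
        return alpha_bb1, alpha_bb2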
|