Izbor parametara kod gradijentnih metoda za probleme optimizacije bez ograničenja / Choice of parameters in gradient methods for the unconstrained optimization problems
Đorđević, Snežana (22 May 2015)
The problem under consideration is unconstrained optimization. Many different methods have been devised for solving such problems; the investigation here is motivated by the need for fast-converging methods.

The main goal is the systematization of some known results, together with a theoretical and numerical analysis of the possibility of introducing parameters into gradient methods.

First, the minimization of a convex function of several variables is considered. This problem is solved here without computing the Hessian matrix, which matters particularly for large-scale systems, and also for optimization problems in which neither the exact value of the objective function nor the exact value of its gradient is available.
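As background, here is a minimal Python sketch of the generic first-order iteration x_{k+1} = x_k - t_k * grad f(x_k) that the methods above build on: it uses only gradient evaluations and never forms the Hessian. The function names, the fixed step size, and the stopping rule are illustrative choices, not the thesis's implementation.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.01, tol=1e-6, max_iter=10_000):
    """Generic first-order iteration: x_{k+1} = x_k - t_k * grad(x_k).

    Only gradient evaluations are used; the Hessian is never formed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop once the gradient is small
            break
        x = x - step * g              # fixed step; the thesis studies parameterized choices
    return x

# Example: minimize the convex quadratic f(x) = ||x||^2 / 2, whose gradient is x.
x_min = gradient_descent(lambda x: x, x0=[3.0, -4.0])
```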
Part of the motivation also lies in the existence of problems in which the objective function is the result of simulations.

The numerical results, presented in Chapter 6, show that the introduction of a parameter can be useful, i.e., it leads to an acceleration of the optimization method in question.

Further, a new hybrid conjugate gradient method is presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters.

The first chapter gives the motivation and the basic concepts needed for the remaining chapters. The second chapter surveys some first-order and second-order gradient methods. The fourth chapter surveys basic concepts and results related to conjugate gradient methods. These chapters review known results; the original contributions are presented in Chapters 3, 5, and 6.

The third chapter presents a modification of an unconstrained optimization method in which a randomly chosen multiplicative parameter is used, and proves the linear convergence of the modified method (a sketch of this idea follows below).

The fifth chapter contains the original results on conjugate gradient methods: the new hybrid conjugate gradient method, which is a convex combination of two known conjugate gradient methods (see the second sketch below).

The sixth chapter presents the numerical experiments, performed on a set of test functions, for the methods of Chapters 3 and 5. All considered algorithms were implemented in Mathematica; the comparison criterion is CPU time.
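The abstract does not state the exact update rule of the Chapter 3 method, so the following sketch only illustrates the general idea of scaling a gradient step by a randomly chosen multiplicative parameter. The uniform distribution, the interval, and the base step size are all assumptions made for illustration.

```python
import numpy as np

def random_scaled_gradient_method(grad, x0, base_step=0.05,
                                  theta_range=(0.5, 1.5),
                                  tol=1e-6, max_iter=10_000, seed=0):
    """Gradient descent whose step is rescaled each iteration by a random
    multiplicative parameter theta_k (distribution and range are guesses)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        theta = rng.uniform(*theta_range)   # randomly chosen multiplicative parameter
        x = x - theta * base_step * g
    return x
```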
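The abstract likewise does not name the two known conjugate gradient parameters that the hybrid method combines, so the sketch below uses the Fletcher-Reeves and Polak-Ribiere formulas purely as stand-ins; the fixed weight theta and the backtracking line search are also assumptions.

```python
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, tol=1e-6, max_iter=5_000):
    """Conjugate gradient iteration whose parameter is the convex combination
    beta = theta * beta_FR + (1 - theta) * beta_PR.  Which two parameters the
    thesis actually combines, and how theta is chosen, are not given here."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search, used here as a standard stand-in.
        t, fx, slope = 1.0, f(x), g @ d
        while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)             # Fletcher-Reeves
        beta_pr = (g_new @ (g_new - g)) / (g @ g)       # Polak-Ribiere
        beta = theta * beta_fr + (1 - theta) * beta_pr  # convex combination
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the convex quadratic f(x) = ||x||^2 / 2.
x_min = hybrid_cg(lambda x: 0.5 * x @ x, lambda x: x, x0=[3.0, -4.0])
```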