1.
Dirichlė L funkcijų jungtinis universalumas / The joint universality of Dirichlet L-functions. Mikalajūnaitė, Aušra. 04 July 2014.
The aim of this master's thesis is to give a complete proof of the joint universality theorem for Dirichlet L-functions. The work comprises four parts. The first, introductory part reviews the available results and states the main result of the work, the theorem to be proved. The second part states and proves a joint limit theorem: P_T converges weakly to P_L_ as T → ∞. The third part gives an explicit form of the support of the limit measure P_L_. In the fourth part, using the results already obtained together with a couple of auxiliary lemmas, the main result of the thesis is proved. It is the following assertion: suppose that χ_1, ..., χ_r are pairwise non-equivalent Dirichlet characters. For each 1 ≤ j ≤ r, let K_j ⊂ D be a compact subset with connected complement, and let f_j(s) be a continuous non-vanishing function on K_j that is analytic in the interior of K_j. Then, for every ε > 0, an inequality holds which means that the set of shifts of Dirichlet L-functions approximating the given collection of analytic functions is quite rich: it has positive lower density.
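The abstract only says that "an inequality holds". For readability, a sketch of the standard form of such a joint universality inequality is given below, assuming (as is usual in Voronin-type universality theorems) that D denotes the strip {s : 1/2 < Re(s) < 1} and that meas denotes Lebesgue measure on the real line; these assumptions are not spelled out in the abstract itself.

```latex
% Assumed standard form of the joint universality inequality: the set of
% shifts \tau for which all r approximations hold simultaneously has
% positive lower density.
\liminf_{T \to \infty} \frac{1}{T}\,
  \operatorname{meas}\Bigl\{ \tau \in [0,T] :
    \sup_{1 \le j \le r} \sup_{s \in K_j}
      \bigl| L(s + i\tau, \chi_j) - f_j(s) \bigr| < \varepsilon \Bigr\} > 0 .
```

Positive lower density means precisely that the limit inferior above is strictly positive, so the set of approximating shifts τ is large in a measure-theoretic sense.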
2.
Netolygūs įverčiai, aproksimuojant sudėtiniu Puasono dėsniu / Nonuniform estimates in approximation by the compound Poisson law. Andreikėnas, Giedrius. 11 August 2009.
This work studies nonuniform estimates for the approximation of the distribution of a random variable X by the compound Poisson law.
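The abstract does not state the bound itself. Purely as an illustration, a nonuniform estimate typically has the following shape, with a right-hand side that decays as |x| grows, in contrast to a uniform bound that is constant in x; the distribution function G and the constants C, k below are hypothetical placeholders, not results of the paper.

```latex
% Illustrative shape of a nonuniform estimate (assumed, not taken from the
% paper). F_X is the distribution function of X, G is the approximating
% compound Poisson distribution function, and C, k > 0 do not depend on x.
\bigl| F_X(x) - G(x) \bigr| \le \frac{C}{(1 + |x|)^{k}}
  \qquad \text{for all } x \in \mathbb{R}.
```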
3.
Optimizacija problema sa stohastičkim ograničenjima tipa jednakosti – kazneni metodi sa promenljivom veličinom uzorka / Optimization of problems with stochastic equality constraints – penalty variable sample size methods. Rožnjik, Andrea. 24 January 2019.
The thesis considers a stochastic programming problem with equality constraints, that is, a minimization problem whose constraints are given in the form of mathematical expectations. We propose two iterative methods for solving this problem. In each iteration, both procedures use a sample average in place of the mathematical expectation and employ the advantages of variable sample size methods based on adaptive updating of the sample size: the sample size for the next iteration is determined from information available at the current iteration, namely the current precision of the approximation of the expectation and the quality of the approximation of the solution. Since the problem is constrained, both procedures are based on a line search technique together with a quadratic penalty method adapted to the stochastic environment. The procedures rely on the same ideas, but the approaches differ.

In the first approach, the algorithm is designed to solve an SAA (sample average approximation) reformulation of the stochastic programming problem, i.e., an approximation of the original problem. The sample is fixed before the iterative procedure starts, so the convergence analysis is deterministic. We show that, under standard assumptions, the proposed algorithm generates a subsequence whose accumulation point is a KKT point of the SAA problem. The algorithm formed by the second approach solves the stochastic programming problem itself, so the convergence analysis is stochastic: under standard assumptions for stochastic optimization, it generates a subsequence whose accumulation point is almost surely a KKT point of the original problem.

The proposed algorithms are implemented on the same set of test problems. The numerical results show their efficiency in comparison with procedures in which the sample size is updated according to a scheme defined in advance; the number of function evaluations is used as the measure of efficiency. The results obtained on the set of tested problems suggest that adaptive sample size updating can reduce the number of function evaluations in the constrained case as well.

Since the considered problem is deterministic while the proposed procedures are stochastic, the first three chapters of the thesis contain the basic notions of deterministic and stochastic optimization, together with a short overview of definitions and theorems from other fields needed to follow the analysis of the original results more easily. The rest of the thesis presents the proposed algorithms, their convergence analysis, and the numerical implementation.
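As a rough illustration of the kind of procedure described above, the Python sketch below combines a sample-average quadratic penalty function, an Armijo line search, and a heuristic adaptive sample size update. The interface (F, G, xi_sampler) and the concrete growth rule for the sample size are assumptions made for the sketch; the precise update rules for the sample size and the penalty parameter are given in the thesis, not here.

```python
import numpy as np

def fd_gradient(fun, x, h=1e-6):
    """Central finite-difference gradient, used only to keep the sketch short."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

def penalty_vss_sketch(x0, xi_sampler, F, G, n0=50, n_max=5000,
                       mu=10.0, max_iter=200, tol=1e-4, seed=0):
    """Minimal sketch of a variable-sample-size quadratic penalty method.

    Hypothetical interface (not from the thesis): F(x, xi) is the random
    objective and G(x, xi) the random equality constraint, so that their
    expectations define the true problem; xi_sampler(n, rng) draws n
    scenarios. The adaptive sample-size rule below is a simple heuristic
    stand-in for the rule analysed in the thesis.
    """
    rng = np.random.default_rng(seed)
    x, n = np.asarray(x0, dtype=float), n0
    for _ in range(max_iter):
        xi = xi_sampler(n, rng)                      # current sample
        # Sample-average quadratic penalty function for the current sample.
        def phi(z):
            f = np.mean([F(z, w) for w in xi])
            g = np.mean([G(z, w) for w in xi])
            return f + 0.5 * mu * g ** 2
        grad = fd_gradient(phi, x)
        if np.linalg.norm(grad) <= tol and n >= n_max:
            break
        # Backtracking (Armijo) line search on the sampled penalty function.
        t, phi_x = 1.0, phi(x)
        while phi(x - t * grad) > phi_x - 1e-4 * t * (grad @ grad):
            t *= 0.5
            if t < 1e-12:
                break
        x = x - t * grad
        # Grow the sample when the optimization progress is small relative
        # to the sampling error, otherwise keep the current sample size.
        progress = phi_x - phi(x)
        if progress < 1.0 / np.sqrt(n):
            n = min(2 * n, n_max)
    return x, n
```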
4.
Izbor parametara kod gradijentnih metoda za probleme optimizacije bez ograničenja / Choice of parameters in gradient methods for the unconstrained optimization problems. Đorđević, Snežana. 22 May 2015.
The problem under consideration is an unconstrained optimization problem. Many different methods exist for solving such problems; the research here is motivated by the need for methods that converge fast. The main goal is the systematization of some known results, together with a theoretical and numerical analysis of the possibility of introducing parameters into gradient methods.

First, the minimization of a convex function of several variables is considered. The problem is solved without computing the Hessian matrix, which is particularly relevant for large-scale systems and for optimization problems in which neither exact objective function values nor exact gradient values are available. Part of the motivation also lies in problems in which the objective function is the result of simulations.

The numerical results, presented in Chapter 6, show that introducing a certain parameter can be useful, i.e., it accelerates the optimization method in question. A new hybrid conjugate gradient method is also presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters.

The first chapter describes the motivation and the basic concepts needed for the remaining chapters. The second chapter surveys some first-order and second-order gradient methods. The fourth chapter surveys basic concepts and some results on conjugate gradient methods. These chapters review known results, while the original contributions are presented in the third, fifth, and sixth chapters. The third chapter describes a modification of a certain method that uses a randomly chosen multiplicative parameter; linear convergence of the resulting method is proved. The fifth chapter contains original results on conjugate gradient methods, namely the new hybrid conjugate gradient method obtained as a convex combination of two known conjugate gradient methods. The sixth chapter gives the results of numerical experiments, performed on a set of test functions, for the methods of the third and fifth chapters. All considered algorithms are implemented in Mathematica; the comparison criterion is CPU time.
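The hybrid conjugate gradient idea, taking the conjugate gradient parameter as a convex combination of two known parameters, can be sketched as follows. The abstract does not name the two parameters being combined; Fletcher-Reeves and Polak-Ribiere-Polyak are used below purely as stand-ins, with a fixed weight theta. (The thesis implementation is in Mathematica; Python is used here only for illustration.)

```python
import numpy as np

def hybrid_cg_sketch(f, grad, x0, theta=0.5, max_iter=500, tol=1e-6):
    """Sketch of a hybrid CG method whose parameter beta is a convex
    combination of the Fletcher-Reeves and Polak-Ribiere-Polyak parameters
    (an illustrative choice; the abstract does not specify which two
    parameters are combined)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Backtracking Armijo line search along d.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5
            if t < 1e-14:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)            # Fletcher-Reeves
        beta_prp = (g_new @ (g_new - g)) / (g @ g)     # Polak-Ribiere-Polyak
        beta = theta * beta_fr + (1.0 - theta) * beta_prp   # convex combination
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Illustrative use on a simple quadratic:
if __name__ == "__main__":
    A = np.diag([1.0, 10.0, 100.0])
    print(hybrid_cg_sketch(lambda z: 0.5 * z @ A @ z, lambda z: A @ z, np.ones(3)))
```

With theta equal to 1 or 0 the sketch reduces to the plain Fletcher-Reeves or Polak-Ribiere-Polyak method, so the convex combination interpolates between the two.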
5.
Modifications of Stochastic Approximation Algorithm Based on Adaptive Step Sizes / Modifikacije algoritma stohastičke aproksimacije zasnovane na prilagođenim dužinama koraka. Kresoja, Milena. 25 September 2017.
The problem under consideration is an unconstrained minimization problem in a noisy environment. The common approach for solving such problems is the Stochastic Approximation (SA) algorithm. We propose a class of adaptive step size schemes for the SA algorithm in which the step size selection is based on the objective function values. At each iterate, an interval estimate of the optimal function value is constructed using a fixed number of previously observed function values. If the observed function value at the current iterate is larger than the upper bound of the interval, we reject the current iterate and use a step of length zero. If the observed function value is smaller than the lower bound of the interval, we suggest a larger step size in the next iterate. Otherwise, if the function value lies in the interval, we propose a small safe step size, obtained by a harmonic rule, in the next iterate. In this manner, faster progress of the algorithm is ensured when larger steps are expected to improve its performance, and small steps, in particular steps proportional to 1/k, are avoided as the number of iterations grows.

We propose two main schemes, which differ in the intervals constructed at each iterate. In the first scheme, we construct a symmetrical interval that can be viewed as a confidence-like interval for the optimal function value; its bounds are shifted means of the fixed number of previously observed function values. A generalization of this scheme using a convex combination instead of the mean is also presented. In the second scheme, we use the minimum and the maximum of the previous noisy function values as the lower and upper bounds of the interval, respectively. The step size sequences generated by the proposed schemes satisfy the step size convergence conditions for the SA algorithm almost surely.

Performance of the SA algorithm with the new step size schemes is tested on a set of standard test problems. The numerical results support the theoretical expectations and verify the efficiency of the algorithms in comparison with the classical SA algorithm with a deterministic step size sequence and with other relevant modifications of SA algorithms. Application of the algorithms to LASSO regression models is also considered: the algorithms are applied for estimating the regression parameters, where the objective function contains an L_1 penalty.
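The first of the two schemes can be sketched in Python as follows. The shift of the interval bounds, the size of the larger step, and the harmonic safe step a/k are illustrative choices; the abstract does not give the exact constants, and how rejected observations enter the interval in later iterations is simplified here.

```python
import numpy as np
from collections import deque

def sa_adaptive_step_sketch(f_noisy, grad_noisy, x0, m=5, a=1.0,
                            delta=0.5, max_iter=1000, seed=0):
    """Sketch of SA iterations with the first adaptive step size scheme:
    an interval built from the last m noisy function values decides whether
    to reject the step, enlarge it, or fall back to a safe harmonic step.
    f_noisy(x, rng) and grad_noisy(x, rng) return noisy observations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    history = deque(maxlen=m)            # last m observed function values
    for k in range(1, max_iter + 1):
        fk = f_noisy(x, rng)
        if len(history) < m:
            step = a / k                 # plain SA step until the window fills
        else:
            mean = np.mean(history)
            lower, upper = mean - delta, mean + delta   # shifted means
            if fk > upper:
                step = 0.0               # reject: step of length zero
            elif fk < lower:
                step = a                 # promising region: larger step
            else:
                step = a / k             # safe harmonic step
        history.append(fk)
        x = x - step * grad_noisy(x, rng)
    return x
```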