71

Line search methods with variable sample size / Metodi linijskog pretraživanja sa promenljivom veličinom uzorka

Krklec Jerinkić Nataša, 17 January 2014
The problem under consideration is an unconstrained optimization problem with the objective function in the form of a mathematical expectation. The expectation is taken with respect to the random variable that represents the uncertainty; therefore, the objective function is in fact deterministic. However, finding the analytical form of that objective function can be very difficult or even impossible. This is the reason why the sample average approximation is often used. In order to obtain a reasonably good approximation of the objective function, we have to use a relatively large sample size. We assume that the sample is generated at the beginning of the optimization process, and we can therefore consider the sample average objective function as deterministic. However, applying a deterministic method to that sample average function from the start can be very costly. The number of evaluations of the function under the expectation is a common way of measuring the cost of an algorithm. Therefore, methods that vary the sample size throughout the optimization process have been developed. Most of them try to determine the optimal dynamics of increasing the sample size.

The main goal of this thesis is to develop a class of methods that can decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable; roughly speaking, we do not want to impose a large precision, i.e. a large sample size, when we are far away from the solution we search for. A detailed description of the new methods is presented in Chapter 4 together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by dealing with the full sample from the start.

Another important characteristic of the methods proposed here is the line search technique used for obtaining the subsequent iterates. The idea is to find a suitable direction and to search along it until we obtain a sufficient decrease in the function value. The sufficient decrease is determined through the line search rule. In Chapter 4, that rule is assumed to be monotone, i.e. we impose a strict decrease of the function value. In order to decrease the cost of the algorithm even more and to enlarge the set of suitable search directions, we use nonmonotone line search rules in Chapter 5. Within that chapter, these rules are modified to fit the variable sample size framework. Moreover, the conditions for global convergence and the R-linear rate are presented.

Numerical results are presented in Chapter 6. The test problems are various: some of them are academic and some are real-world problems. The academic problems give us more insight into the behavior of the algorithms, while data coming from real-world problems test the practical applicability of the proposed algorithms. In the first part of that chapter, the focus is on the variable sample size techniques: different implementations of the proposed algorithm are compared to each other and to other sample schemes as well. The second part is mostly devoted to the comparison of various line search rules combined with different search directions in the variable sample size framework. The overall numerical results show that using the variable sample size can improve the performance of the algorithms significantly, especially when nonmonotone line search rules are used.

The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, the basics of nonlinear optimization are presented with the focus on line search, while Chapter 3 deals with the stochastic framework. These chapters review the relevant known results, while the rest of the thesis represents the original contribution.
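A minimal Python sketch may help fix the notions of a sample average objective, an Armijo line search and a varying sample size; the objective, the sample-size update rule and all numerical values below are illustrative assumptions, not the algorithm analyzed in the thesis.

```python
# Illustrative sketch: Armijo backtracking line search on a sample average
# approximation (SAA) objective whose sample size n can change between
# iterations. All names and the sample-size update rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.normal(size=(1000, 2))          # full sample of the random variable

def f_hat(x, n):
    """Sample average of F(x, xi) = ||x - xi||^2 over the first n scenarios."""
    return np.mean(np.sum((x - xi[:n]) ** 2, axis=1))

def grad_hat(x, n):
    return 2.0 * (x - np.mean(xi[:n], axis=0))

x, n, n_max = np.array([5.0, -3.0]), 100, len(xi)
for k in range(50):
    g = grad_hat(x, n)
    d = -g                                # steepest descent direction
    t, c = 1.0, 1e-4
    while f_hat(x + t * d, n) > f_hat(x, n) + c * t * (g @ d):   # Armijo rule
        t *= 0.5
    x = x + t * d
    # illustrative sample-size update: demand more precision as the gradient shrinks
    n = min(n_max, max(50, int(n_max * min(1.0, 1.0 / (np.linalg.norm(g) + 1e-8)))))
print(x, n)
```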
72

Izbor parametara kod gradijentnih metoda za probleme optimizacije bez ograničenja / Choice of parameters in gradient methods for the unconstrained optimization problems

Đorđević Snežana, 22 May 2015
The problem under consideration is an unconstrained optimization problem. There are many different methods designed to solve optimization problems; the investigation here is motivated by the need for methods that converge fast. The main goal is the systematization of some known results as well as a theoretical and numerical analysis of the possibilities of introducing parameters into gradient methods.

Firstly, the minimization of a convex function of several variables is considered. This problem is solved here without computing the Hessian matrix, which is particularly relevant for large-scale systems and for optimization problems in which neither the exact value of the objective function nor the exact value of its gradient is available. Part of the motivation also lies in the existence of problems in which the objective function is the result of simulations.

Numerical results, presented in Chapter 6, show that the introduction of a certain parameter can be useful, i.e. it accelerates the optimization method in question. A new hybrid conjugate gradient method is also presented, in which the conjugate gradient parameter is a convex combination of two known conjugate gradient parameters.

The first chapter describes the motivation and the basic concepts needed for the remaining chapters. The second chapter gives a survey of some first- and second-order gradient methods. The fourth chapter contains a survey of basic concepts and some results related to conjugate gradient methods. These chapters review known results, while the original contribution is presented in the third, fifth and sixth chapters.

The third chapter describes a modification of a particular method in which a multiplicative parameter, chosen at random, is used; linear convergence of the resulting method is proved. The fifth chapter contains original results concerning conjugate gradient methods: a new hybrid conjugate gradient method is presented which is a convex combination of two known conjugate gradient methods. The sixth chapter gives the results of numerical experiments, performed on a set of test functions, for the methods from the third and fifth chapters. All considered algorithms were implemented in Mathematica, and the comparison criterion is CPU time.
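The hybrid conjugate gradient idea, with the CG parameter taken as a convex combination of two known parameters, can be sketched as follows; the abstract does not state which two parameters are combined or how the weight is chosen, so the Fletcher-Reeves and Polak-Ribière choices and the fixed weight theta below are assumptions made only for illustration.

```python
# Illustrative hybrid CG: beta = theta * beta_PR + (1 - theta) * beta_FR.
import numpy as np

def hybrid_cg(f, grad, x0, theta=0.5, iters=200, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                       # safeguard: restart with steepest descent
            d = -g
        t = 1.0                              # Armijo backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta_fr = (g_new @ g_new) / (g @ g)                # Fletcher-Reeves
        beta_pr = max(0.0, g_new @ (g_new - g) / (g @ g))  # Polak-Ribiere (non-negative)
        beta = theta * beta_pr + (1.0 - theta) * beta_fr   # convex combination
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# usage on a small convex quadratic; the minimizer is the origin
A = np.array([[3.0, 0.5], [0.5, 1.0]])
print(hybrid_cg(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, [4.0, -2.0]))
```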
73

Modifications of Stochastic Approximation Algorithm Based on Adaptive Step Sizes / Modifikacije algoritma stohastičke aproksimacije zasnovane na prilagođenim dužinama koraka

Kresoja Milena, 25 September 2017
The problem under consideration is an unconstrained minimization problem in a noisy environment. The common approach for solving such problems is the Stochastic Approximation (SA) algorithm. We propose a class of adaptive step size schemes for the SA algorithm in which the step size selection is based on the objective function values. At each iterate, interval estimates of the optimal function value are constructed using a fixed number of previously observed function values. If the observed function value at the current iterate is larger than the upper bound of the interval, we reject the current iterate. If the observed function value at the current iterate is smaller than the lower bound of the interval, we suggest a larger step size for the next iterate. Otherwise, if the function value lies in the interval, we propose a small safe step size for the next iterate. In this manner, faster progress of the algorithm is ensured when larger steps are expected to improve its performance. We propose two main schemes, which differ in the intervals constructed at each iterate. In the first scheme, we construct a symmetric interval that can be viewed as a confidence-like interval for the optimal function value; the bounds of the interval are shifted means of the fixed number of previously observed function values. A generalization of this scheme, using a convex combination instead of the mean, is also presented. In the second scheme, we use the minimum and the maximum of the previous noisy function values as the lower and upper bounds of the interval, respectively. The step size sequences generated by the proposed schemes satisfy the step size convergence conditions for the SA algorithm almost surely. The performance of SA algorithms with the new step size schemes is tested on a set of standard test problems. Numerical results support the theoretical expectations and verify the efficiency of the algorithms in comparison with other relevant modifications of SA algorithms. An application of the algorithms to LASSO regression models is also considered: the algorithms are applied to the estimation of the regression parameters where the objective function contains an L1 penalty.
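The following sketch illustrates the flavor of such an adaptive step-size scheme on a one-dimensional noisy problem; the interval width, the update factors and the 1/k "safe" step are illustrative assumptions rather than the exact schemes proposed in the dissertation.

```python
# Illustrative adaptive-step SA: build an interval estimate of the optimal
# function value from the last m noisy observations and react to where the
# new observation falls (reject / enlarge / safe step).
import numpy as np

rng = np.random.default_rng(1)
f_noisy = lambda x: (x - 2.0) ** 2 + 0.1 * rng.normal()      # noisy objective
grad_noisy = lambda x: 2.0 * (x - 2.0) + 0.1 * rng.normal()  # noisy gradient

x, step, m, delta = 10.0, 0.5, 5, 0.5
history = [f_noisy(x)]
for k in range(1, 200):
    x_new = x - step * grad_noisy(x)
    fx = f_noisy(x_new)
    mean = np.mean(history[-m:])
    lower, upper = mean - delta, mean + delta    # confidence-like interval
    if fx > upper:
        step = 1.0 / k                           # reject the iterate, keep x
        continue
    elif fx < lower:
        step = min(1.0, 2.0 / k)                 # clear progress: larger step
    else:
        step = 1.0 / k                           # inside the interval: safe step
    x = x_new
    history.append(fx)
print(x)
```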
74

Negative Selection - An Absolute Measure of Arbitrary Algorithmic Order Execution / Negativna selekcija - Apsolutna mera algoritamskog izvršenja proizvoljnog naloga

Lončar Sanja, 18 September 2017
Algorithmic trading is an automated process of order execution on electronic stock markets. It can be applied to a broad range of financial instruments, and it is characterized by significant investor control over the execution of orders, with the principal goal of finding the right balance between the costs and the risk of not (fully) executing an order. Since the measurement of execution performance tells us whether best execution is achieved, a significant number of different benchmarks are used in practice. The most frequently used are price benchmarks: some are determined before trading (pre-trade benchmarks), some during the trading day (intraday benchmarks), and some after the trade (post-trade benchmarks). The two most dominant are VWAP and Arrival Price; the latter, along with other pre-trade price benchmarks, is known as the Implementation Shortfall (IS).

We introduce Negative Selection as an a posteriori measure of the performance of an execution algorithm. It is based on the concept of Optimal Placement, which represents the ideal order that could be executed in a given time window, where "ideal" means an order with the best execution price given the market conditions during that time window. Negative Selection is defined as the difference between the vectors of the optimal and the executed orders, with the vectors defined as quantities of shares at specified price positions in the order book. It is equal to zero when the order is optimally executed, negative if the order is not (completely) filled, and positive if the order is executed but at an unfavorable price.

Negative Selection is based on the idea of offering a new, alternative performance measure which enables us to find optimal trajectories and construct an optimal execution of an order.

The first chapter of the thesis includes a list of notation and an overview of the definitions and theorems used later in the thesis. Chapters 2 and 3 give a theoretical overview of concepts related to market microstructure, basic information regarding benchmarks, and the theoretical background of algorithmic trading. Original results are presented in Chapters 4 and 5. Chapter 4 includes the construction of the optimal placement and the definition and properties of Negative Selection; the results regarding the properties of Negative Selection are given in [35]. Chapter 5 contains the theoretical background for stochastic optimization, a model of optimal execution formulated as a stochastic optimization problem with respect to Negative Selection, as well as original work on a nonmonotone line search method [31], while the numerical results are in the last, sixth chapter.
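As a toy illustration of the Negative Selection idea, the sketch below represents the optimal and executed orders as share quantities per price level and returns zero, a negative value or a positive value in the three situations described above; the scalar summary it computes is a simplification made up for this example, not the exact definition given in the thesis.

```python
# Toy negative-selection sketch for a single buy order: negative when shares
# remain unfilled, positive when all shares are filled but at a higher cost
# than the optimal placement, zero when execution matches the optimum.
import numpy as np

def negative_selection(optimal, executed, prices):
    optimal, executed = np.asarray(optimal, float), np.asarray(executed, float)
    unfilled = optimal.sum() - executed.sum()
    if unfilled > 0:                          # order not (completely) filled
        return -unfilled
    # fully filled: measure the extra cost versus the optimal placement
    return float(prices @ executed - prices @ optimal)

prices = np.array([100.0, 100.5, 101.0])      # price per order-book level
optimal = np.array([300, 200, 0])             # ideal placement for the window
print(negative_selection(optimal, [300, 200, 0], prices))    # 0.0   (optimal)
print(negative_selection(optimal, [300, 100, 0], prices))    # -100  (unfilled)
print(negative_selection(optimal, [100, 200, 200], prices))  # > 0   (unfavorable price)
```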
75

Solução do problema de fluxo de potência ótimo com restrição de segurança e controles discretos utilizando o método primal-dual barreira logarítmica / Solution of the optimal power flow problem with security constraint and discrete controls using the primal-dual logarithmic barrier method

Costa, Marina Teixeira [UNESP], 16 December 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

The Optimal Power Flow problem determines the best operating condition of an electric power system. There are different classes of Optimal Power Flow problems according to the type of function to be optimized and the sets of controls and constraints used. Among them, the Optimal Power Flow problem with Security Constraint stands out; it is an important tool for transmission system operators, both for operational planning and for energy pricing. Its objective is to minimize the operational costs of power generation while taking into account the constraints arising from the operation of the system under a set of contingencies. It is formulated as a large-scale nonlinear, nonconvex optimization problem with continuous and discrete variables. In this work, the problem is investigated with respect to its formulation, computational difficulties and solution method. For a treatment of the problem closer to reality, some controls, namely the transformer taps, are modeled as discrete variables. These are handled by a method that penalizes the objective function when the discrete variables assume non-discrete values. In this way, the discrete nonlinear problem is transformed into a continuous problem, and the primal-dual logarithmic barrier method is used for its resolution. Computational tests are presented for the Optimal Power Flow problem with Security Constraint on the IEEE 14-bus test system in three test stages. The results obtained and the comparisons performed demonstrate the efficiency of the chosen resolution method.
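One common way to penalize non-discrete tap values, consistent with the treatment described above but not necessarily identical to it, is a sinusoidal term that vanishes on the discrete grid; the grid, step and weight below are illustrative assumptions.

```python
# Illustrative penalty for discrete transformer taps: zero whenever the tap
# sits on its discrete grid t_min + k * step, positive in between, so the
# relaxed continuous problem discourages non-discrete values.
import numpy as np

def tap_penalty(t, t_min=0.9, step=0.0125, mu=10.0):
    """Penalty added to the objective for tap value(s) t off the discrete grid."""
    return mu * np.sin(np.pi * (t - t_min) / step) ** 2

taps = np.array([0.9, 0.9125, 0.915, 0.95])
print(tap_penalty(taps))   # ~zero on grid points, positive for 0.915
```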
76

Tarification logit dans un réseau

Gilbert, François, 12 1900
The network pricing problem consists in finding tolls to set on a subset of a network's arcs so as to maximize a revenue expression. A fixed demand of commuters, traveling from their origins to their destinations, is assumed. Each commuter chooses a path of minimal "disutility", a measure of discomfort associated with the use of a path which takes into account fixed costs and tolls. A deterministic modelling of commuter behaviour is mostly found in the literature, according to which positive flow is only assigned to shortest paths. Even though the deterministic pricing model is amenable to global optimization through enumeration techniques, it has often been criticized for its lack of realism. In this thesis, we consider a probabilistic extension of this model involving a logit discrete choice model. This more realistic model is nonlinear and nonconcave, but it still possesses strong combinatorial features that we exploit for algorithmic purposes.

Our contribution spans three articles. In the first, we tackle the problem from a theoretical perspective for the case of a single origin-destination pair and develop a first-order analysis that exploits the analytical properties of the logit assignment. We show the validity of simplification rules for the network topology which reduce the dimensionality of the problem without changing its solution, and this enables us to establish the problem's unimodality for a wide class of topologies. We also establish a parallel with the product-line pricing problem, to which we generalize some of our results.

In the second article, we address the problem from a numerical point of view for the case with multiple origin-destination pairs. We develop algorithms that exploit both local information and the kinship between the probabilistic and deterministic formulations. One outcome of our analysis is a set of bounds on the error made by the combinatorial models when approximating the logit revenue. Our numerical experiments show that a very simple combinatorial approximation often identifies quasi-optimal solutions, and they clearly indicate that, under any reasonable setting, the logit pricing problem is much smoother and admits fewer optima than its deterministic counterpart.

The third article is concerned with an extension to a heterogeneous demand resulting from a mixed-logit discrete choice model, in which commuter price sensitivity is random. Under this model, the revenue expression admits no closed form and cannot be evaluated exactly. We devise nonlinear and combinatorial approximation schemes for its evaluation and optimization, which allow us to obtain quasi-optimal solutions. Numerical experiments indicate that the most realistic model yields the best solution, independently of how well the model can actually be solved. Finally, we illustrate the richness of the model through an economic interpretation and examine, in particular, the contributions of the various commuter groups to the revenue.
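A compact sketch of the logit revenue expression for a single origin-destination pair is given below; the paths, costs, demand and scale parameter are invented for illustration only.

```python
# Illustrative logit pricing objective: commuters are split among paths by a
# logit model of path disutility (fixed cost plus tolls on the arcs the path
# uses), and revenue is toll income weighted by the choice probabilities.
import numpy as np

fixed_cost = np.array([10.0, 12.0, 15.0])        # path disutility without tolls
toll_use = np.array([[1, 0], [1, 1], [0, 0]])    # which tolled arcs each path uses
demand, theta = 1000.0, 0.5                      # number of commuters, logit scale

def revenue(tolls):
    disutility = fixed_cost + toll_use @ tolls
    p = np.exp(-theta * disutility)
    p /= p.sum()                                 # logit path-choice probabilities
    return demand * p @ (toll_use @ tolls)       # expected toll income

print(revenue(np.array([2.0, 1.0])))
print(revenue(np.array([4.0, 0.5])))
```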
77

Nonlinear approaches for phase retrieval in the Fresnel region for hard X-ray imaging

Ion, Valentina, 26 September 2013
The development of highly coherent X-ray sources offers new possibilities to image biological structures at different scales by exploiting the refraction of X-rays. The coherence properties of third-generation synchrotron radiation sources enable efficient implementations of phase contrast techniques. One of the first measurements of the intensity variations due to phase contrast was reported in 1995 at the European Synchrotron Radiation Facility (ESRF). Phase imaging coupled with tomographic acquisition allows three-dimensional imaging with an increased sensitivity compared to absorption CT. This technique is particularly attractive for imaging samples with low-absorption constituents. Phase contrast has many applications, ranging from material science, paleontology and bone research to medicine and biology. Several methods to achieve X-ray phase contrast have been proposed during the last years. In propagation-based phase contrast, the measurements are made at different sample-to-detector distances. While the intensity data can be acquired and recorded, the phase information of the signal has to be "retrieved" from the modulus data only. Phase retrieval is thus an ill-posed nonlinear problem, and regularization techniques including a priori knowledge are necessary to obtain stable solutions. Several phase recovery methods have been developed in recent years; these approaches generally formulate the phase retrieval problem as a linear one, and nonlinear treatments have not been much investigated. The main purpose of this work was to propose and evaluate new algorithms, in particular taking into account the nonlinearity of the direct problem. In the first part of this work, we present a Landweber-type nonlinear iterative scheme to solve the propagation-based phase retrieval problem. This approach uses the analytic expression of the Fréchet derivative of the phase-intensity relationship and of its adjoint, which are presented in detail. We also study the effect of projection operators on the convergence properties of the method. In the second part of this thesis, we investigate the resolution of the linear inverse problem with an iterative thresholding algorithm in wavelet coordinates. The two former algorithms are then combined and compared with another nonlinear approach based on sparsity regularization and a fixed point algorithm. The performance of these algorithms is evaluated on simulated data for different noise levels. Finally, the algorithms were adapted to process real data sets obtained in phase CT at the ESRF in Grenoble.
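A generic Landweber-type iteration has the form x_{k+1} = x_k − τ F′(x_k)*(F(x_k) − y); the sketch below applies it to a toy forward operator, since the Fresnel phase-intensity relationship and its Fréchet derivative used in the thesis are not reproduced here.

```python
# Generic Landweber iteration on a toy nonlinear forward model (componentwise
# square); in phase retrieval, F would be the Fresnel propagation operator and
# jacobian(x).T its adjoint Frechet derivative.
import numpy as np

def F(x):
    return x ** 2

def jacobian(x):
    return np.diag(2.0 * x)

y = np.array([4.0, 9.0, 1.0])   # "measured" data
x = np.array([1.0, 1.0, 1.0])   # initial guess
tau = 0.05                      # relaxation parameter
for _ in range(500):
    r = F(x) - y                # residual
    x = x - tau * jacobian(x).T @ r
print(x)                        # converges towards [2, 3, 1]
```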
78

Mathématiques appliquées et traitement du signal pour l’évaluation de la dégradation de la biomasse lignocellulosique / Applied Mathematics and signal processing for the study of the evolution of plant litter during the biodegradation process

Rammal, Abbas 25 January 2016 (has links)
In this thesis we propose to apply methods of applied mathematics and signal processing to study, from infrared (IR) spectra, the evolution of plant litter during the biodegradation process. The degradation of plant biomass is characterized by FTIR spectroscopy, particularly in the mid-infrared (MIR) and near-infrared (NIR) ranges. We first present a new fuzzy c-means classification method, based on an unsupervised optimization using a covariance factor, which can classify IR data of spherical or non-spherical shape and is used to identify the best-suited pre-treatment methods and spectral ranges. We then develop mathematical tools and innovative algorithms for combining MIR and NIR spectral information in order to identify infrared spectroscopic markers that discriminate lignocellulosic residues according to their level of degradation. For this, we propose a stochastic optimization method based on a genetic algorithm with appropriately chosen parameters. We show that the joint analysis of the MIR and NIR spectra fused by the outer product discriminates the lignocellulosic biomass during the degradation process better than a separate analysis. We then propose a new nonlinear optimization approach based on the selection of a vector that highlights the weights of the spectral bands. Finally, we develop a mathematical modelling method based on an extension of the GA-PLS algorithm that combines the MIR and NIR spectral information by the outer product (OP-GA-PLS), which significantly improves the prediction of the state of degradation of the biomass.
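The outer-product fusion of MIR and NIR spectra can be sketched in a few lines; the spectra sizes and the random data below are placeholders, and the fused matrix would then feed a PLS or GA-PLS model as described above.

```python
# Illustrative outer-product (OP) data fusion: for each sample, the MIR and
# NIR spectra are combined into their outer-product matrix and unfolded into
# one long feature vector for a downstream regression/classification model.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_mir, n_nir = 20, 300, 150
mir = rng.random((n_samples, n_mir))     # mid-infrared spectra (placeholder data)
nir = rng.random((n_samples, n_nir))     # near-infrared spectra (placeholder data)

# outer product per sample, unfolded to a (n_mir * n_nir)-long fused vector
fused = np.einsum('im,in->imn', mir, nir).reshape(n_samples, -1)
print(fused.shape)                       # (20, 45000)
```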
79

A Nonlinear Stochastic Optimization Framework For RED

Patro, Rajesh Kumar, 12 1900
No description available.
80

Globale Optimierungsverfahren, garantiert globale Lösungen und energieeffiziente Fahrzeuggetriebe

Stöcker, Martin, 03 July 2014
This thesis focuses on methods for solving nonlinear optimization problems with the requirement that every global optimum is found with a guarantee and approximated to an accuracy fixed in advance. Closely connected with this deterministic optimization is the computation of bounds for the range of a function over a given hyperrectangle. Various approaches, e.g. based on interval arithmetic, are presented and analyzed. In particular, methods for generating bounds for multivariate polynomials and rational functions are further developed using their representation in the basis of Bernstein polynomials. The thesis then describes, step by step, the building blocks of a deterministic optimization method that uses the computed range bounds, and examines special aspects of the optimization of polynomial problems in more detail. The analysis and treatment of a problem from the development process of vehicle transmissions shows how the presented approaches for solving nonlinear optimization problems can support the search for energy-efficient transmissions with an optimal structure. Contact the author: [surname] [.] [first name] [@] gmx [.] de
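As a small illustration of one of the building blocks mentioned above, the sketch below bounds the range of a univariate polynomial on [0, 1] by its Bernstein coefficients; the thesis treats the multivariate and rational cases, which are not reproduced here.

```python
# Range enclosure via Bernstein coefficients: for p(x) = sum_i a[i] * x**i on
# [0, 1], the smallest and largest Bernstein coefficients bound the range of p,
# which a deterministic branch-and-bound optimizer can exploit.
from math import comb

def bernstein_bounds(a):
    """Lower/upper bound on p(x) = sum_i a[i] * x**i over [0, 1]."""
    n = len(a) - 1
    b = [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
         for j in range(n + 1)]
    return min(b), max(b)

# p(x) = x - x^2 has range [0, 0.25] on [0, 1]; the enclosure is [0.0, 0.5]
print(bernstein_bounds([0.0, 1.0, -1.0]))
```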
