  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
171

Rozhodovací úlohy a empirická data; aplikace na nové typy úloh / Decision Problems and Empirical Data; Applications to New Types of Problems

Odintsov, Kirill January 2013 (has links)
This thesis concentrates on different approaches to solving decision-making problems that involve randomness. The basic methodologies for converting stochastic optimization problems into deterministic ones are described, and the proximity between the solution of a problem and the solution of its empirical counterpart is shown; the empirical counterpart is used when the distribution of the random elements of the original problem is unknown. Distributions with heavy tails, stable distributions, and the relationship between them are described. Stochastic dominance is introduced, together with the possibility of formulating problems with stochastic dominance constraints. The proximity between the solution of a problem with second-order stochastic dominance and the solution of its empirical counterpart is proven, and a portfolio management problem with second-order stochastic dominance is solved by solving the equivalent empirical problem.
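As an illustration of the empirical counterpart idea above, the following sketch (toy data, not code from the thesis) tests second-order stochastic dominance (SSD) between two return samples using the standard characterization E[(t − X)⁺] ≤ E[(t − Y)⁺] for all thresholds t, evaluated on the empirical distributions:

```python
# Hypothetical sketch: empirical second-order stochastic dominance test.
# The true distributions are replaced by their empirical counterparts.

def expected_shortfall_below(sample, t):
    """Empirical estimate of E[(t - X)^+]."""
    return sum(max(t - x, 0.0) for x in sample) / len(sample)

def ssd_dominates(x_sample, y_sample):
    """True if X dominates Y in SSD on the pooled sample grid:
    E[(t - X)^+] <= E[(t - Y)^+] for every threshold t."""
    grid = sorted(set(x_sample) | set(y_sample))
    return all(
        expected_shortfall_below(x_sample, t)
        <= expected_shortfall_below(y_sample, t) + 1e-12
        for t in grid
    )

# A sure return of 3 dominates a 50/50 gamble on {1, 5} with the same mean.
print(ssd_dominates([3.0, 3.0], [1.0, 5.0]))  # True
print(ssd_dominates([1.0, 5.0], [3.0, 3.0]))  # False
```

Checking only the pooled sample points suffices because the empirical shortfall function is piecewise linear with kinks at sample values.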
172

[en] POWER GENERATION INVESTMENTS SELECTION / [pt] SELEÇÃO DE PROJETOS DE INVESTIMENTO EM GERAÇÃO DE ENERGIA ELÉTRICA

LEONARDO BRAGA SOARES 22 July 2008 (has links)
[en] The restructuring of the electric power sector, begun in the 1990s, had as one of its main implications the introduction of competition in the generation activity. The expansion of generation capacity, needed to ensure structural equilibrium between supply and demand, is stimulated by long-term contracts negotiated through lowest-tariff auctions. The investor must therefore offer a price cap that keeps the project competitive (so as to win the auction) while remaining sufficient to remunerate the investment and operating costs and, above all, to protect the investor against all risks intrinsic to the project. In this context, the two main contributions of this work are: (i) a risk-pricing methodology based on the Value at Risk (VaR) criterion, which gives the maximum loss admitted by a risk-averse investor at a specified confidence level, and (ii) the application of different portfolio selection models that incorporate the VaR criterion to optimize a portfolio of different power generation technologies. The risk-pricing results are useful for identifying the critical components of a project and for calculating the competitiveness (price) of each technology. The study of different portfolio selection methods aims to determine the most suitable model for the shape of the return distributions of generation projects, which exhibit asymmetry and high kurtosis (heavy tails).
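The VaR criterion mentioned above can be sketched with a minimal historical estimator (illustrative only, not the dissertation's model): VaR at confidence level c is the loss not exceeded with probability c, i.e. the negated empirical (1 − c)-quantile of the return sample.

```python
# Illustrative sketch: historical Value at Risk of a simulated return sample.
import math

def historical_var(returns, confidence=0.95):
    """VaR as the negated empirical quantile of returns at level 1 - confidence."""
    ordered = sorted(returns)
    # index of the (1 - confidence) empirical quantile
    k = max(0, math.ceil((1.0 - confidence) * len(ordered)) - 1)
    return -ordered[k]

returns = [-0.08, -0.03, 0.01, 0.02, 0.04, 0.05, 0.06, 0.07, 0.09, 0.12]
print(historical_var(returns, 0.90))  # 0.08: the 10% worst-case loss
```

In a portfolio selection model, such an estimate would enter either as a constraint (VaR below a budgeted loss) or in the objective alongside expected return.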
173

[en] STRATEGIC BIDDING FOR GENERATORS IN ENERGY CONTRACT AUCTIONS / [pt] ESTRATÉGIA DE OFERTA DE GERADORAS EM LEILÕES DE CONTRATAÇÃO DE ENERGIA

ALEXANDRE STREET DE AGUIAR 13 May 2005 (has links)
[en] The objective of this thesis is to develop a methodology for bidding strategies of generators in multi-unit auctions for long-term electricity power purchase agreements (PPAs). Considering a descending-price auction design, the goal of a generating agent is to determine the optimal amount of energy to offer in each contract, at the actual auction price in each round, so as to maximize the agent's revenue given its risk profile and the contracting risks involved. The main risk treated in this work is the so-called price-quantity risk, related to the negative correlation between the energy produced and the short-term (spot) price. The risk profile of each agent is modeled through utility functions. The methodology is applied to two types of auctions for existing energy: single-product (only one contract being auctioned) and multi-product (more than one product auctioned simultaneously). Case studies are presented with data from the Brazilian system. In particular, for the multi-product type, a case study is carried out for the Transition Auction of December 2004, in which 75% of the country's available generation (about 55 GW) was negotiated under the guidelines of the new Brazilian electric sector model.
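The price-quantity risk and utility-based contracting decision described above can be sketched on a toy problem (all numbers and the CARA utility are assumptions, not the thesis's model): choose the contracted quantity that maximizes expected utility of revenue when high production tends to coincide with low spot prices.

```python
# Toy sketch: expected-utility-optimal contract quantity under
# price-quantity risk. Contracted energy is paid at the contract price;
# the surplus or deficit is settled at the spot price.
import math
import random

random.seed(7)
# Negatively correlated scenarios: high production implies low spot price.
scenarios = []
for _ in range(2000):
    shock = random.gauss(0.0, 1.0)
    produced = max(0.0, 100.0 + 20.0 * shock)
    spot = max(1.0, 30.0 - 10.0 * shock + random.gauss(0.0, 2.0))
    scenarios.append((produced, spot))

def expected_utility(q, contract_price=28.0, risk_aversion=0.01):
    """CARA utility of revenue: q at the contract price, the imbalance
    (produced - q) settled at the spot price."""
    total = 0.0
    for produced, spot in scenarios:
        revenue = q * contract_price + (produced - q) * spot
        total += -math.exp(-risk_aversion * revenue)
    return total / len(scenarios)

q_star = max(range(0, 201, 5), key=expected_utility)
print(0 < q_star < 200)  # True: an interior contracted amount balances the risks
```

Contracting nothing leaves the agent exposed to low-price/high-production scenarios, while contracting everything is ruinous when production falls short and the shortfall must be bought at a high spot price, so the optimum is interior.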
174

Line search methods with variable sample size / Metodi linijskog pretrazivanja sa promenljivom velicinom uzorka

Krklec Jerinkić Nataša 17 January 2014 (has links)
The problem under consideration is an unconstrained optimization problem whose objective function is a mathematical expectation with respect to a random variable that represents the uncertainty. The objective function is therefore deterministic, but finding its analytical form can be very difficult or even impossible, which is why the sample average approximation is often used. To obtain a reasonably good approximation of the objective function, a relatively large sample size is needed. We assume that the sample is generated at the beginning of the optimization process, so the sample average objective function can be treated as deterministic. However, applying a deterministic method to that sample average function from the start can be very costly; the number of evaluations of the function under the expectation is a common way of measuring the cost of an algorithm. Methods that vary the sample size throughout the optimization process have therefore been developed, most of which try to determine the optimal dynamics of increasing the sample size.

The main goal of this thesis is to develop a class of methods that decrease the cost of an algorithm by decreasing the number of function evaluations. The idea is to decrease the sample size whenever it seems reasonable: roughly speaking, we do not want to impose a large precision, i.e. a large sample size, when we are far away from the solution we seek. A detailed description of the new methods is presented in Chapter 4, together with the convergence analysis. It is shown that the approximate solution is of the same quality as the one obtained by using the full sample from the start.

Another important characteristic of the proposed methods is the line search technique used for obtaining the subsequent iterates. The idea is to find a suitable direction and search along it until a sufficient decrease in the function value is obtained; the sufficient decrease is determined by the line search rule. In Chapter 4 that rule is monotone, i.e. a strict decrease of the function value is imposed. In order to decrease the cost of the algorithm even further and to enlarge the set of suitable search directions, nonmonotone line search rules are used in Chapter 5, where they are modified to fit the variable sample size framework. Moreover, conditions for global convergence and the R-linear rate are presented.

In Chapter 6, numerical results are presented on a variety of test problems, some academic and some from the real world. The academic problems give more insight into the behavior of the algorithms, while data from real-world problems test their practical applicability. The first part of that chapter focuses on the variable sample size techniques: different implementations of the proposed algorithm are compared with each other and with other sampling schemes. The second part is mostly devoted to comparing the various line search rules combined with different search directions in the variable sample size framework. The overall numerical results show that using a variable sample size can improve the performance of the algorithms significantly, especially when nonmonotone line search rules are used.

The first chapter of this thesis provides the background material for the subsequent chapters. In Chapter 2, basics of nonlinear optimization are presented, with the focus on line search, while Chapter 3 deals with the stochastic framework. These chapters review the relevant known results, while the rest of the thesis represents the original contribution.
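The interplay of sample average approximation, an Armijo line search, and a varying sample size can be sketched on a toy problem (the 1-D quadratic objective and the crude sample-size update rule below are assumptions for illustration, not the thesis's algorithm):

```python
# Toy sketch: minimize a sample average approximation of E[(x - xi)^2]
# with monotone Armijo backtracking, growing the sample size only as the
# iterate approaches a solution.
import random

random.seed(0)
full_sample = [random.gauss(2.0, 1.0) for _ in range(1000)]

def saa_value_and_grad(x, n):
    """Value and gradient of the sample average over the first n points."""
    sub = full_sample[:n]
    val = sum((x - xi) ** 2 for xi in sub) / n
    grad = sum(2.0 * (x - xi) for xi in sub) / n
    return val, grad

def variable_sample_descent(x=10.0, n=10, iters=30):
    for _ in range(iters):
        val, grad = saa_value_and_grad(x, n)
        step, d = 1.0, -grad
        # monotone Armijo rule: backtrack until sufficient decrease holds
        while saa_value_and_grad(x + step * d, n)[0] > val + 1e-4 * step * grad * d:
            step *= 0.5
        x += step * d
        # crude precision update: enlarge the sample as the gradient shrinks
        if abs(grad) < 1.0 / n:
            n = min(2 * n, len(full_sample))
    return x, n

x_star, n_final = variable_sample_descent()
print(abs(x_star - 2.0) < 0.2)  # True: close to the mean of the full sample
```

Early iterations use a cheap 10-point approximation; the full 1000-point sample is only reached near the solution, which is the cost-saving idea the abstract describes.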
175

Modifications of Stochastic Approximation Algorithm Based on Adaptive Step Sizes / Modifikacije algoritma stohastičke aproksimacije zasnovane na prilagođenim dužinama koraka

Kresoja Milena 25 September 2017 (has links)
The problem under consideration is an unconstrained minimization problem in a noisy environment. The common approach for solving such problems is the Stochastic Approximation (SA) algorithm. We propose a class of adaptive step-size schemes for the SA algorithm in which the step-size selection is based on the objective function values. At each iterate, an interval estimate of the optimal function value is constructed using a fixed number of previously observed function values. If the observed function value at the current iterate is larger than the upper bound of the interval, the current iterate is rejected. If it is smaller than the lower bound, a larger step size is suggested for the next iterate. Otherwise, if the function value lies in the interval, a small safe step size is proposed for the next iterate. In this manner, faster progress of the algorithm is ensured when larger steps are expected to improve its performance. We propose two main schemes, which differ in the intervals constructed at each iterate. In the first scheme, we construct a symmetric interval that can be viewed as a confidence-like interval for the optimal function value; its bounds are shifted means of the fixed number of previously observed function values. A generalization of this scheme using a convex combination instead of the mean is also presented. In the second scheme, the minimum and maximum of the previous noisy function values serve as the lower and upper bounds of the interval, respectively. The step-size sequences generated by the proposed schemes satisfy the step-size convergence conditions for the SA algorithm almost surely. The performance of SA algorithms with the new step-size schemes is tested on a set of standard test problems. Numerical results support the theoretical expectations and verify the efficiency of the algorithms in comparison with other relevant modifications of SA algorithms. Application of the algorithms to LASSO regression models is also considered: the algorithms are applied to estimate the regression parameters when the objective function contains an L1 penalty.
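The interval-based step-size rule described above can be sketched on a toy problem (a 1-D quadratic with Gaussian noise; the constants and the specific long/safe step choices are assumptions, not the thesis's exact scheme):

```python
# Toy sketch of an interval-based adaptive step size for Stochastic
# Approximation: compare the current noisy function value with an
# interval built from recent observations.
import random
from collections import deque

random.seed(1)
noisy_f = lambda x: x * x + random.gauss(0.0, 0.1)
noisy_grad = lambda x: 2.0 * x + random.gauss(0.0, 0.1)

def adaptive_sa(x=5.0, iters=400, memory=10, delta=0.3):
    history = deque(maxlen=memory)
    for k in range(1, iters + 1):
        f_now = noisy_f(x)
        if len(history) == memory:
            center = sum(history) / memory
            if f_now > center + delta:      # worse than the interval: reject
                step = 0.0
            elif f_now < center - delta:    # clear progress: longer step
                step = 10.0 / k
            else:                           # inside the interval: safe step
                step = 1.0 / k
        else:
            step = 1.0 / k                  # warm-up with the classical 1/k rule
        x = x - step * noisy_grad(x)
        history.append(f_now)
    return x

x_final = adaptive_sa()
print(abs(x_final) < 0.5)  # True: the iterate settles near the minimizer 0
```

Rejecting an iterate corresponds to a step of length zero, and the longer steps only fire when the noisy value signals genuine progress, which is the mechanism the abstract describes.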
176

Negative Selection - An Absolute Measure of Arbitrary Algorithmic Order Execution / Negativna selekcija - Apsolutna mera algoritamskog izvršenja proizvoljnog naloga

Lončar Sanja 18 September 2017 (has links)
Algorithmic trading is an automated process of order execution on electronic stock markets. It can be applied to a broad range of financial instruments and is characterized by significant investor control over the execution of orders, with the principal goal of finding the right balance between costs and the risk of not (fully) executing an order. Since measuring execution performance indicates whether best execution is achieved, a significant number of different benchmarks are used in practice. The most frequently used are price benchmarks, some determined before trading (pre-trade benchmarks), some during the trading day (intraday benchmarks), and some after the trade (post-trade benchmarks). The two most dominant are VWAP and the Arrival Price, which, along with other pre-trade price benchmarks, is known as the Implementation Shortfall (IS).

We introduce Negative Selection as an a posteriori measure of execution algorithm performance. It is based on the concept of Optimal Placement, which represents the ideal order that could be executed in a given time window, where "ideal" means the order with the best execution price given the market conditions during that window. Negative Selection is defined as the difference between the vectors of the optimal and the executed orders, with the vectors defined as quantities of shares at specified price positions in the order book. It is equal to zero when the order is optimally executed, negative if the order is not (completely) filled, and positive if the order is executed but at an unfavorable price.

Negative Selection is based on the idea of offering a new, alternative performance measure that enables finding optimal trajectories and constructing the optimal execution of an order.

The first chapter of the thesis includes a list of notation and an overview of the definitions and theorems used further in the thesis. Chapters 2 and 3 follow with a theoretical overview of concepts related to market microstructure, basic information regarding benchmarks, and the theoretical background of algorithmic trading. Original results are presented in Chapters 4 and 5. Chapter 4 includes the construction of the optimal placement and the definition and properties of Negative Selection; the results regarding these properties are given in [35]. Chapter 5 contains the theoretical background for stochastic optimization, a model of optimal execution formulated as a stochastic optimization problem with respect to Negative Selection, and original work on a nonmonotone line search method [31], while numerical results are given in the last, sixth chapter.
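The sign convention of Negative Selection can be illustrated with a deliberately simplified scalar version for a buy order (the thesis's vector formulation over order-book price positions is richer than this hypothetical sketch): unfilled quantity counts as negative, quantity filled at worse-than-optimal prices as positive, and an optimal execution scores zero.

```python
# Hypothetical scalar illustration of Negative Selection for a buy order:
# compare the executed fills with the optimal placement for the window.

def negative_selection(optimal, executed):
    """optimal, executed: lists of (price, quantity) fills for a buy order."""
    opt_qty = sum(q for _, q in optimal)
    exe_qty = sum(q for _, q in executed)
    shortfall = opt_qty - exe_qty                  # shares left unfilled
    worst_optimal_price = max(p for p, _ in optimal)
    overpaid = sum(q for p, q in executed if p > worst_optimal_price)
    if shortfall > 0:
        return -shortfall        # order not (completely) filled
    return overpaid              # zero when filled at optimal prices

optimal = [(10.0, 100), (10.1, 50)]
print(negative_selection(optimal, optimal))                    # 0
print(negative_selection(optimal, [(10.0, 100)]))              # -50
print(negative_selection(optimal, [(10.2, 100), (10.3, 50)]))  # 150
```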
177

[en] OPTIMAL CONTRACTING OF TRANSMISSION SYSTEM USAGE AMOUNTS VIA FLEXIBLE STATIC EQUIVALENTS AND PROBABILISTIC LOAD FLOW. / [pt] CONTRATAÇÃO ÓTIMA DOS MONTANTES DE USO DO SISTEMA DE TRANSMISSÃO VIA EQUIVALENTES ESTÁTICOS FLEXÍVEIS E FLUXO DE POTÊNCIA PROBABILÍSTICO

NATASHA SOARES MONTEIRO DA SILVA 24 January 2019 (has links)
[en] In Brazil during the 1990s, the electricity sector was dominated by vertically integrated companies belonging to the state and federal governments, which, in the course of the restructuring and privatization process, had their activities unbundled into generation, transmission, distribution, and commercialization. After this privatization process began, the National Electric Energy Agency (ANEEL) was created, responsible for regulating the activities of the Brazilian electricity sector. These changes led to different market models characterized by intensive use of the transmission systems. In this scenario, ANEEL determined that distribution concessionaires must pay the transmission companies for the use of their facilities, through the Transmission System Usage Charge (EUST). To this end, the Transmission System Usage Amount (MUST) must be declared for each connection point and tariff period by means of the Transmission System Usage Contract (CUST); if the contracted amounts are exceeded by more than a stipulated percentage, the contractor must pay a penalty. This dissertation presents a new methodology for determining the optimal value of MUST, based on flexible static equivalents, probabilistic power flow, and stochastic optimization techniques, so as to balance the cost of energy transport against the cost of the penalty. First, a flexible and accurate network reduction technique is used. Second, the uncertainties arising from the loads, generation, and topology of the network are mapped onto the connection points under analysis. Third, a simple stochastic optimization technique is used to obtain the MUST to be contracted by the electricity distributor at each boundary bus. Finally, the proposed methodology is applied to the academic IEEE RTS system to demonstrate its efficiency, and the results obtained are discussed at length.
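The tariff-versus-penalty trade-off at a single connection point has a newsvendor-like structure, sketched below with assumed numbers (scenario flows, tariff, penalty rate, and tolerance are all illustrative, not the dissertation's data or model):

```python
# Toy sketch: pick the contracted MUST that minimizes average cost over
# peak-flow scenarios, trading the usage tariff against overrun penalties.

def expected_cost(must, scenarios, tariff=1.0, penalty=3.0, tolerance=0.05):
    """Average cost over scenarios: tariff on the contracted amount plus a
    penalty on usage exceeding the contract by more than `tolerance`."""
    limit = must * (1.0 + tolerance)
    over = sum(max(flow - limit, 0.0) for flow in scenarios) / len(scenarios)
    return tariff * must + penalty * over

def best_must(scenarios, grid):
    return min(grid, key=lambda m: expected_cost(m, scenarios))

scenarios = [80.0, 90.0, 100.0, 110.0, 120.0]   # peak flows at one boundary bus
grid = [m / 2.0 for m in range(160, 261)]       # candidate MUST values 80..130
m_star = best_must(scenarios, grid)
print(80.0 <= m_star <= 120.0)  # True: the optimum lies inside the demand range
```

In the dissertation the scenario flows would come from the probabilistic power flow through the reduced (flexible static equivalent) network rather than from a fixed list.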
178

Learning with Sparsity: Structures, Optimization and Applications

Chen, Xi 01 July 2013 (has links)
The development of modern information technology has enabled collecting data of unprecedented size and complexity. Examples include web text data, microarray & proteomics, and data from scientific domains (e.g., meteorology). To learn from these high dimensional and complex data, traditional machine learning techniques often suffer from the curse of dimensionality and unaffordable computational cost. However, learning from large-scale high-dimensional data promises big payoffs in text mining, gene analysis, and numerous other consequential tasks. Recently developed sparse learning techniques provide us with a suite of tools for understanding and exploring high dimensional data from many areas in science and engineering. By exploiting sparsity, we can learn a parsimonious and compact model which is more interpretable and computationally tractable at application time. When the underlying model is indeed sparse, sparse learning methods can provide a more consistent model and much improved prediction performance. However, the existing methods are still insufficient for modeling complex or dynamic structures of the data, such as those evidenced in pathways of genomic data, gene regulatory networks, and synonyms in text data. This thesis develops structured sparse learning methods along with scalable optimization algorithms to explore and predict high dimensional data with complex structures. In particular, we address three aspects of structured sparse learning: (1) efficient and scalable optimization methods with fast convergence guarantees for a wide spectrum of high-dimensional learning tasks, including single- or multi-task structured regression, canonical correlation analysis, and online sparse learning; (2) learning dynamic structures of different types of undirected graphical models, e.g., conditional Gaussian or conditional forest graphical models; (3) demonstrating the usefulness of the proposed methods in various applications, e.g., computational genomics and spatial-temporal climatological data. In addition, we design specialized sparse learning methods for text mining applications, including ranking and latent semantic analysis. In the last part of the thesis, we present future directions for high-dimensional structured sparse learning from both computational and statistical perspectives.
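The kind of sparse estimation this abstract refers to can be illustrated with a minimal sketch: lasso regression solved by proximal gradient descent (ISTA), where the l1 soft-thresholding step drives most coefficients exactly to zero. The toy data, regularization weight, and step size below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrinks each coefficient toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    # Proximal gradient (ISTA) for: min_w 0.5*||Xw - y||^2 + lam*||w||_1.
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy example: only the first 2 of 10 features are truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.standard_normal(100)
w = lasso_ista(X, y, lam=1.0)
print(np.nonzero(np.abs(w) > 1e-6)[0])  # indices of the surviving features
```

The l1 penalty zeroes out the eight inactive coefficients exactly, which is the "parsimonious and compact model" behavior the abstract describes.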
179

Optimisation des horaires des agents et du routage des appels dans les centres d’appels / Optimization of agent schedules and call routing in call centers

Chan, Wyean 09 1900 (has links)
Nous étudions la gestion de centres d'appels multi-compétences, ayant plusieurs types d'appels et groupes d'agents. Un centre d'appels est un système de files d'attente très complexe, où il faut généralement utiliser un simulateur pour évaluer ses performances. Tout d'abord, nous développons un simulateur de centres d'appels basé sur la simulation d'une chaîne de Markov en temps continu (CMTC), qui est plus rapide que la simulation conventionnelle par événements discrets. À l'aide d'une méthode d'uniformisation de la CMTC, le simulateur simule la chaîne de Markov en temps discret imbriquée de la CMTC. Nous proposons des stratégies pour utiliser efficacement ce simulateur dans l'optimisation de l'affectation des agents. En particulier, nous étudions l'utilisation des variables aléatoires communes. Deuxièmement, nous optimisons les horaires des agents sur plusieurs périodes en proposant un algorithme basé sur des coupes de sous-gradients et la simulation. Ce problème est généralement trop grand pour être optimisé par la programmation en nombres entiers. Alors, nous relaxons l'intégralité des variables et nous proposons des méthodes pour arrondir les solutions. Nous présentons une recherche locale pour améliorer la solution finale. Ensuite, nous étudions l'optimisation du routage des appels aux agents. Nous proposons une nouvelle politique de routage basé sur des poids, les temps d'attente des appels, et les temps d'inoccupation des agents ou le nombre d'agents libres. Nous développons un algorithme génétique modifié pour optimiser les paramètres de routage. Au lieu d'effectuer des mutations ou des croisements, cet algorithme optimise les paramètres des lois de probabilité qui génèrent la population de solutions. Par la suite, nous développons un algorithme d'affectation des agents basé sur l'agrégation, la théorie des files d'attente et la probabilité de délai. Cet algorithme heuristique est rapide, car il n'emploie pas la simulation. 
La contrainte sur le niveau de service est convertie en une contrainte sur la probabilité de délai. Par après, nous proposons une variante d'un modèle de CMTC basé sur le temps d'attente du client à la tête de la file. Et finalement, nous présentons une extension d'un algorithme de coupe pour l'optimisation stochastique avec recours de l'affectation des agents dans un centre d'appels multi-compétences. / We study the management of multi-skill call centers, with multiple call types and agent groups. A call center is a very complex queueing system, and we generally need to use simulation in order to evaluate its performance. First, we develop a call center simulator based on the simulation of a continuous-time Markov chain (CTMC) that is faster than traditional discrete-event simulation. Using a uniformization method, it simulates the embedded discrete-time Markov chain of the CTMC. We propose strategies to use this simulator efficiently within a staffing optimization algorithm. In particular, we study the use of common random numbers. Second, we propose an algorithm, based on subgradient cuts and simulation, to optimize the shift scheduling problem. Since this problem is usually too big to be solved by integer programming, we relax the integer variables and propose methods to round the solutions. We also present a local search to improve the final solution. Next, we study the call routing optimization problem. We propose a new routing policy based on weights, call waiting times, and agent idle times or the number of idle agents. We develop a modified genetic algorithm to optimize all the routing parameters. Instead of doing mutations and crossovers, this algorithm refines the parametric distributions used to generate the population of solutions. We also develop a staffing algorithm based on aggregation, queueing theory and delay probability. This heuristic algorithm is fast, because it does not use simulation. 
The service level constraint is converted into a delay probability constraint. Moreover, we propose a variant of a CTMC model based on the waiting time of the customer at the head of the queue. Finally, we design an extension of a cutting-plane algorithm to optimize the stochastic version with recourse of the staffing problem for multi-skill call centers.
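The uniformization idea behind the simulator can be sketched on a single M/M/1 queue, a stand-in for the multi-skill center the thesis treats in full generality: transitions of the CTMC are subordinated to a Poisson clock of rate Λ = λ + μ, and only the embedded discrete-time chain is simulated. The rates and horizon below are illustrative assumptions.

```python
import numpy as np

def simulate_mm1_uniformized(lam, mu, horizon, rng):
    """Estimate the time-average number in an M/M/1 system by simulating
    the embedded discrete-time chain of the uniformized CTMC."""
    Lambda = lam + mu                        # uniformization rate
    n_steps = rng.poisson(Lambda * horizon)  # Poisson clock ticks over the horizon
    u = rng.random(n_steps)                  # one uniform draw per DTMC step
    p_up = lam / Lambda
    state, total = 0, 0.0
    for i in range(n_steps):
        total += state        # uniformized epochs see time averages (PASTA)
        if u[i] < p_up:
            state += 1        # arrival
        elif state > 0:
            state -= 1        # service completion; in state 0 this is a self-loop
    return total / max(n_steps, 1)

rng = np.random.default_rng(1)
est = simulate_mm1_uniformized(lam=0.5, mu=1.0, horizon=200_000, rng=rng)
print(est)  # theory: rho/(1-rho) = 1.0 for rho = 0.5
```

Because the clock rate is state-independent, each step costs one uniform draw and one comparison, which is the source of the speedup over event-list-based discrete-event simulation that the abstract mentions.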
180

Enhancing supervised learning with complex aggregate features and context sensitivity / Amélioration de l'apprentissage supervisé par l'utilisation d'agrégats complexes et la prise en compte du contexte

Charnay, Clément 30 June 2016 (has links)
Dans cette thèse, nous étudions l'adaptation de modèles en apprentissage supervisé. Nous adaptons des algorithmes d'apprentissage existants à une représentation relationnelle. Puis, nous adaptons des modèles de prédiction aux changements de contexte. En représentation relationnelle, les données sont modélisées par plusieurs entités liées par des relations. Nous tirons parti de ces relations avec des agrégats complexes. Nous proposons des heuristiques d'optimisation stochastique pour inclure des agrégats complexes dans des arbres de décisions relationnels et des forêts, et les évaluons sur des jeux de données réelles. Nous adaptons des modèles de prédiction à deux types de changements de contexte. Nous proposons une optimisation de seuils sur des modèles à scores pour s'adapter à un changement de coûts. Puis, nous utilisons des transformations affines pour adapter les attributs numériques à un changement de distribution. Enfin, nous étendons ces transformations aux agrégats complexes. / In this thesis, we study model adaptation in supervised learning. Firstly, we adapt existing learning algorithms to the relational representation of data. Secondly, we adapt learned prediction models to context change. In the relational setting, data is modeled by multiple entities linked with relationships. We handle these relationships using complex aggregate features. We propose stochastic optimization heuristics to include complex aggregates in relational decision trees and Random Forests, and assess their predictive performance on real-world datasets. We adapt prediction models to two kinds of context change. Firstly, we propose an algorithm to tune thresholds on pairwise scoring models to adapt to a change of misclassification costs. Secondly, we reframe numerical attributes with affine transformations to adapt to a change of attribute distribution between a learning and a deployment context. Finally, we extend these transformations to complex aggregates.
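The cost-change adaptation step described in this abstract can be sketched as a simple threshold search: given validation scores and labels, pick the cutoff that minimizes total misclassification cost under the new cost setting. The scores, labels, and costs below are toy assumptions, and the exhaustive search over candidate thresholds stands in for the thesis's tuning algorithm.

```python
import numpy as np

def tune_threshold(scores, labels, cost_fp, cost_fn):
    # Pick the decision threshold on a scoring model that minimizes total
    # misclassification cost under a (possibly new) cost setting.
    best_t, best_cost = None, float("inf")
    for t in np.unique(scores):
        pred = scores >= t
        cost = (cost_fp * np.sum(pred & (labels == 0))     # false positives
                + cost_fn * np.sum(~pred & (labels == 1)))  # false negatives
        if cost < best_cost:
            best_t, best_cost = t, cost
    return float(best_t), int(best_cost)

scores = np.array([0.1, 0.3, 0.4, 0.6, 0.7, 0.9])
labels = np.array([0, 0, 1, 0, 1, 1])
print(tune_threshold(scores, labels, cost_fp=1, cost_fn=1))  # → (0.4, 1)
print(tune_threshold(scores, labels, cost_fp=5, cost_fn=1))  # → (0.7, 1)
```

Making false positives five times more expensive pushes the optimal cutoff up from 0.4 to 0.7: the model itself is unchanged, only the decision threshold is reframed for the new deployment context.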
