11 |
Portfolio optimization problems : a martingale and a convex duality approach
Tchamga, Nicole Flaure Kouemo 12 1900 (has links)
Thesis (MSc (Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: The first approach initiated by Merton [Mer69, Mer71] to solve utility maximization portfolio
problems in continuous time is based on stochastic control theory. The idea of Merton
was to interpret the maximization portfolio problem as a stochastic control problem where
the trading strategies are considered as a control process and the portfolio wealth as the
controlled process. Merton derived the Hamilton-Jacobi-Bellman (HJB) equation and for
the special case of power, logarithmic and exponential utility functions he produced a closed-form solution. A principal disadvantage of this approach is the requirement of the Markov property for the stock prices. The so-called martingale method represents the second
approach for solving utility maximization portfolio problems in continuous time. It was
introduced by Pliska [Pli86], Cox and Huang [CH89, CH91] and Karatzas et al. [KLS87]
in different variants. It is constructed upon convex duality arguments and allows one to
transform the initial dynamic portfolio optimization problem into a static one and to resolve
it without requiring any "Markov" assumption. A definitive answer (necessary and sufficient conditions) to the utility maximization portfolio problem for terminal wealth has
been obtained by Kramkov and Schachermayer [KS99]. In this thesis, we study the convex
duality approach to the expected utility maximization problem (from terminal wealth) in
continuous time stochastic markets, which as already mentioned above can be traced back
to the seminal work by Merton [Mer69, Mer71]. Before we detail the structure of our
thesis, we would like to emphasize that the starting point of our work is based on Chapter 7 of the recent textbook by Pham [P09]. However, as the careful reader will notice, we have
deepened and added important notions and results (such as the study of the upper (lower)
hedge, the characterization of the essential supremum of all the possible prices, compare
Theorem 7.2.2 in Pham [P09] with our Theorem 2.4.9, the dynamic programming equation 2.31, the superhedging theorem 2.6.1, ...) and we have made a considerable effort in the proofs. Indeed, several proofs of theorems in Pham [P09] have serious gaps (not to mention typos) and even flaws (see, for example, the proof of Proposition 7.3.2 in Pham [P09] and our proof of Proposition 3.4.8). In the first chapter, we state the expected utility maximization problem and motivate the convex dual approach following an illustrative example by Rogers [KR07, R03]. We also briefly review the von Neumann-Morgenstern
Expected Utility Theory. In the second chapter, we begin by formulating the superreplication
problem as introduced by El Karoui and Quenez [KQ95]. The fundamental result in
the literature on super-hedging is the dual characterization of the set of all initial endowments
leading to a super-hedge of a European contingent claim. El Karoui and Quenez
[KQ95] first proved the superhedging theorem 2.6.1 in an Itô diffusion setting, and Delbaen
and Schachermayer [DS95, DS98] generalized it to, respectively, a locally bounded
and unbounded semimartingale model, using a Hahn-Banach separation argument. The
superreplication problem inspired a very nice result, called the optional decomposition
theorem for supermartingales 2.4.1, in stochastic analysis theory. This important theorem
introduced by El Karoui and Quenez [KQ95], and extended in full generality by Kramkov
[Kra96] is stated in Section 2.4 and proved at the end of Section 2.7. The third chapter
forms the theoretical core of this thesis and it contains the statement and detailed
proof of the famous Kramkov-Schachermayer Theorem that addresses the duality of utility
maximization portfolio problems. Firstly, we show in Lemma 3.2.1 how to transform the
dynamic utility maximization problem into a static maximization problem. This is done
thanks to the dual representation of the set of European contingent claims, which can be
dominated (or super-hedged) almost surely from an initial endowment x and an admissible
self-financing portfolio strategy given in Corollary 2.5, and obtained as a consequence of the optional decomposition of supermartingales. Secondly, under some assumptions on the
utility function, the existence and uniqueness of the solution to the static problem is given
in Theorem 3.2.3. Because the solution of the static problem is not easy to find, we will
look at it in its dual form. We therefore synthesize the dual problem from the primal
problem using convex conjugate functions. Before we state the Kramkov-Schachermayer
Theorem 3.4.1, we present the Inada Condition and the Asymptotic Elasticity Condition
for Utility functions. For the sake of clarity, we divide the long and technical proof of
Kramkov-Schachermayer Theorem 3.4.1 into several lemmas and propositions of independent
interest, where the required assumptions are clearly indicated for each step of the proof. The key argument in the proof of the Kramkov-Schachermayer Theorem is an infinite-dimensional version of the minimax theorem (the classical method of finding a saddle point for the Lagrangian is not enough in our situation), which is central in the theory of Lagrange multipliers. For this, we have stated and proved the technical Lemmata 3.4.5 and 3.4.6. The main steps in the proof of the Kramkov-Schachermayer Theorem 3.4.1 are:
- We show in Proposition 3.4.9 that the solution to the dual problem exists, and we characterize it in Proposition 3.4.12.
- From the construction of the dual problem, we find a set of necessary and sufficient conditions (3.1.1), (3.1.2), (3.3.1) and (3.3.7) for the primal and dual problems to each have a solution.
- Using these conditions, we can show the existence of the solution to the given problem and characterize it in terms of the market parameters and the solution to the dual problem.
In the last chapter we will present and study concrete examples of the utility maximization
portfolio problem in specific markets. First, we consider the complete markets case, where
closed-form solutions are easily obtained. The detailed solution to the classical Merton
problem with power utility function is provided. Lastly, we deal with incomplete markets
under Itô processes and the Brownian filtration framework. The solution to the logarithmic
utility function as well as to the power utility function is presented.
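As a concrete illustration of the complete-market case treated in the thesis's last chapter, the classical Merton problem with power utility U(x) = x^p / p admits the closed-form optimal risky fraction pi* = (mu - r) / (sigma^2 (1 - p)), obtained by maximizing the exponent of the lognormal expected utility. The sketch below (parameter values are illustrative assumptions, not taken from the thesis) recovers the analytic optimizer by grid search:

```python
import numpy as np

# Assumed illustrative market parameters (not from the thesis).
mu, r, sigma, p = 0.10, 0.02, 0.25, 0.5   # drift, riskless rate, volatility, power in U(x) = x^p / p
T, x0 = 1.0, 1.0

# Closed-form Merton fraction of wealth held in the risky asset, 0 < p < 1:
pi_star = (mu - r) / (sigma ** 2 * (1.0 - p))

# For a constant fraction pi, wealth is lognormal and
# E[U(X_T)] = (x0^p / p) * exp(p * (r + pi*(mu - r) - 0.5*pi^2*sigma^2*(1 - p)) * T),
# so the optimizer can also be found by a simple grid search.
def expected_utility(pi):
    growth = r + pi * (mu - r) - 0.5 * pi ** 2 * sigma ** 2 * (1.0 - p)
    return (x0 ** p / p) * np.exp(p * growth * T)

grid = np.linspace(-1.0, 3.0, 40001)
pi_grid = grid[np.argmax(expected_utility(grid))]
print(pi_star, pi_grid)   # both 2.56
```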
|
12 |
Asynchronous Parallel Algorithms for Big-Data Nonconvex Optimization
Loris Cannelli (6933851) 13 August 2019 (has links)
<div>The focus of this dissertation is to provide a unified and efficient solution method for an important class of nonconvex, nonsmooth, constrained optimization problems. Specifically, we are interested in problems where the objective function can be written as the sum of a smooth, nonconvex term plus a convex, but possibly nonsmooth, regularizer. The presence of nonconvex constraints is also considered. This kind of structure arises in many large-scale applications as diverse as information processing, genomics, machine learning, and imaging reconstruction.</div><div></div><div>We design the first parallel, asynchronous, algorithmic framework with convergence guarantees to stationary points of the class of problems under exam. The method we propose is based on Successive Convex Approximation techniques; it can be implemented with both fixed and diminishing stepsizes; and it enjoys a sublinear convergence rate in the general nonconvex case, and a linear convergence rate under strong convexity or under less stringent standard error bound conditions. The algorithmic framework we propose is very abstract and general and can be applied to different computing architectures (e.g., message-passing systems, clusters of computers, shared-memory environments), always converging under the same set of assumptions. </div><div></div><div>In the last chapter we consider the case of distributed multi-agent systems. Indeed, in many practical applications the objective function has a favorable separable structure. In this case, we generalize our framework to take into consideration the presence of different agents, where each one of them knows only a portion of the overall function, which they want to minimize cooperatively. The result is the first fully decentralized asynchronous method for the setting described above. The proposed method achieves a sublinear convergence rate in the general case, and a linear convergence rate under standard error bound conditions.</div><div></div><div>Extensive simulation results on problems of practical interest (MRI reconstruction, LASSO, matrix completion) show that the proposed methods compare favorably to state-of-the-art schemes.</div>
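The structure described above, a smooth term plus a convex nonsmooth regularizer, is conveniently illustrated on a LASSO-type instance. The sketch below is not the dissertation's SCA framework itself, but a minimal sequential emulation of the block-wise proximal updates that such asynchronous schemes parallelize (all names and parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy composite problem: F(x) = 0.5*||Ax - b||^2 + lam*||x||_1
# (a smooth term plus a convex nonsmooth regularizer, the structure above;
# the smooth term is kept convex here only to make the outcome easy to check).
m, n, lam = 30, 10, 0.1
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

# Random block-coordinate proximal updates: a sequential stand-in for workers
# that each pick one block and update it. In a truly asynchronous run the
# partial gradient may be computed at a slightly stale iterate, which the
# convergence theory of such frameworks accounts for.
x = np.zeros(n)
for _ in range(20000):
    i = rng.integers(n)                    # block chosen by a "worker"
    g = A[:, i] @ (A @ x - b)              # partial gradient w.r.t. block i
    x[i] = soft_threshold(x[i] - step * g, step * lam)

print(objective(x))
```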
|
13 |
Uma metaheurística baseada em interação social para otimização não-linear de domínios contínuos
Schmidt, Vinicius José 29 February 2016 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This work presents a metaheuristic model based on the social interaction of intelligent agents, using the social learning of a population of agents to optimize problems. The method described here is based on human social interaction, such as: comparison of social status, societal trends driven by the most influential people, knowledge exchange, less influential people following the most influential ones in their group, and the search of a person for the place where he or she achieves the best performance. An agent's influence is measured through its social status; therefore, each agent has an influence radius proportional to its status.
These concepts were modeled to create the technique; learning between agents occurs when a less influential agent is located inside the influence region of a more successful agent. Test results, on both benchmark and real-life problems, are presented and discussed. These tests indicate that the model is a promising population-based metaheuristic.
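The abstract gives no pseudocode, but the mechanisms it lists (status, influence radius, less influential agents learning from more influential ones) can be rendered as a hypothetical minimal sketch on a standard benchmark function; every modeling choice below is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                            # benchmark objective to minimize
    return np.sum(x ** 2, axis=-1)

n_agents, dim, iters = 30, 5, 300
pos = rng.uniform(-5, 5, (n_agents, dim))
init_best = sphere(pos).min()

for _ in range(iters):
    cost = sphere(pos)
    status = cost.max() - cost                            # lower cost -> higher social status
    radius = 1e-9 + 5.0 * status / (1e-9 + status.max())  # influence radius proportional to status
    for i in range(n_agents):
        for j in range(n_agents):
            # agent i learns from a strictly more successful agent j whose
            # influence region contains it (knowledge exchange / following)
            if status[j] > status[i] and np.linalg.norm(pos[i] - pos[j]) < radius[j]:
                pos[i] = pos[i] + rng.uniform(0, 1, dim) * (pos[j] - pos[i])
                break

best = sphere(pos).min()
print(init_best, best)   # the population best never worsens
```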
|
14 |
Solving Hard Combinatorial Optimization Problems using Cooperative Parallel Metaheuristics / Utilisation de méta-heuristiques coopératives parallèles pour la résolution de problèmes d'optimisation combinatoire difficiles
Munera Ramirez, Danny 27 September 2016 (has links)
Combinatorial Optimization Problems (COP) are widely used to model and solve real-life problems in many different application domains. These problems represent a real challenge for the research community due to their inherent difficulty, as many of them are NP-hard. COPs are difficult to solve with exact methods due to the exponential growth of the problem’s search space with respect to the size of the problem. Metaheuristics are often the most efficient methods to make the hardest problems tractable. However, some hard and large real-life problems are still out of the scope of even the best metaheuristic algorithms.
Parallelism is a straightforward way to improve metaheuristic performance. The basic idea is to perform concurrent explorations of the search space in order to speed up the search process. Currently, the most advanced techniques implement some communication mechanism to exchange information between metaheuristic instances in order to increase the probability of finding a solution. However, designing an efficient cooperative parallel method is a very complex task, and many issues about communication must be solved. Furthermore, it is known that no unique cooperative configuration can efficiently tackle all problems. This is why there are currently efficient cooperative solutions dedicated to some specific problems, or more general cooperative methods but with limited performance in practice. In this thesis we propose a general framework for Cooperative Parallel Metaheuristics (CPMH). This framework includes several parameters to control the cooperation. CPMH organizes the explorers into teams; each team aims at intensifying the search in a particular region of the search space and uses intra-team communication. In addition, inter-team communication is used to ensure search diversification. CPMH allows the user to tune the trade-off between intensification and diversification. Moreover, our framework supports different metaheuristics and metaheuristic hybridization. We also provide X10CPMH, an implementation of our CPMH framework developed in the X10 parallel language. To assess the soundness of our approach we tackle two hard real-life COPs: hard variants of the Stable Matching Problem (SMP) and the Quadratic Assignment Problem (QAP). For both problems we propose new sequential and parallel metaheuristics, including a new Extremal Optimization-based method and a new hybrid cooperative parallel algorithm for QAP. All algorithms are implemented with X10CPMH.
A complete experimental evaluation shows that the cooperative parallel versions of our methods scale very well, providing high-quality solutions within a limited timeout. On hard and large variants of SMP, our cooperative parallel method reaches super-linear speedups. Regarding QAP, the cooperative parallel hybrid algorithm performs very well on the hardest instances, and improves the best known solutions of several instances.
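The team-based organization described above (intra-team intensification, inter-team diversification) can be sketched in a purely sequential and simplified form. This is not the X10CPMH implementation; all explorers, parameters, and the restart rule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def rastrigin(x):                 # standard multimodal benchmark
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

teams, per_team, dim, iters = 4, 5, 3, 400
X = rng.uniform(-5.12, 5.12, (teams, per_team, dim))

def team_best(t):
    costs = [rastrigin(x) for x in X[t]]
    k = int(np.argmin(costs))
    return X[t, k].copy(), costs[k]

init_best = min(team_best(t)[1] for t in range(teams))

for it in range(iters):
    for t in range(teams):
        leader, _ = team_best(t)
        for k in range(per_team):
            # intra-team communication: explorers intensify around their leader
            cand = leader + rng.normal(0.0, 0.3, dim)
            if rastrigin(cand) < rastrigin(X[t, k]):
                X[t, k] = cand
    if it % 50 == 0:
        # inter-team communication: the currently worst team is restarted
        # elsewhere in the search space (diversification)
        scores = [team_best(t)[1] for t in range(teams)]
        X[int(np.argmax(scores))] = rng.uniform(-5.12, 5.12, (per_team, dim))

best = min(team_best(t)[1] for t in range(teams))
print(init_best, best)
```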
|
15 |
Análise não suave e aplicações em otimização
Costa, Tiago Mendonça de [UNESP] 28 February 2011 (has links) (PDF)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / In this work we are interested in the presentation of an approach relating nonsmooth analysis to optimization. First, we study concepts of nonsmooth analysis such as normal cones, Bouligand's tangent cone, and the proximal, strict and limiting subdifferentials, as well as Clarke's subdifferential.
After these, we exhibit a series of results: for example, a characterization of Lipschitz functions; sum, product and maximum rules for subdifferentials of lower semicontinuous functions; and a nonsmooth version of Lagrange's multiplier rule, that is, a first-order necessary optimality condition for nonsmooth optimization problems. We also study second-order optimality conditions for nonsmooth optimization problems; to do so, it was necessary to present further concepts and properties, such as the generalized Hessian, the approximate Jacobian and the approximate Hessian. After presenting these concepts and results, we analyze two theorems that provide, with different approaches, second-order sufficient conditions for nonsmooth optimization problems. Finally, this dissertation concludes with the exposition of a result that is considered a unification of these two theorems.
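For orientation, the Clarke constructions named above, and the shape of the nonsmooth Lagrange multiplier rule, can be stated compactly (these are standard definitions from the general literature, not formulas quoted from this dissertation):

```latex
% Clarke directional derivative and subdifferential of a locally Lipschitz f:
f^{\circ}(x; v) \;=\; \limsup_{\substack{y \to x \\ t \downarrow 0}}
  \frac{f(y + t v) - f(y)}{t},
\qquad
\partial_C f(x) \;=\; \{\, \xi \in \mathbb{R}^n :
  \langle \xi, v \rangle \le f^{\circ}(x; v) \ \ \forall v \in \mathbb{R}^n \,\}.

% Nonsmooth Lagrange multiplier rule (first-order necessary condition):
% if \bar{x} is a local minimizer of f subject to g_i(x) \le 0, then under a
% suitable constraint qualification there exist multipliers \lambda_i \ge 0 with
0 \;\in\; \partial_C f(\bar{x}) + \sum_{i} \lambda_i \, \partial_C g_i(\bar{x}),
\qquad \lambda_i \, g_i(\bar{x}) = 0.
```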
|
16 |
Hardware implementation of autonomous probabilistic computers
Ahmed Zeeshan Pervaiz (7586213) 31 October 2019 (has links)
<pre><p>Conventional digital computers are built using stable deterministic units known as "bits".
These conventional computers have greatly evolved into sophisticated machines;
however, there are many classes of problems, such as optimization, sampling and
machine learning, that still cannot be addressed efficiently with conventional
computing. Quantum computing, which uses q-bits that are in a delicate
superposition of 0 and 1, is expected to perform some of these tasks
efficiently. However, decoherence, requirements for cryogenic operation and
limited many-body interactions pose significant challenges to scaled quantum
computers. Probabilistic computing is another unconventional computing paradigm
which introduces the concept of a probabilistic bit or "p-bit": a robust
classical entity that fluctuates between 0 and 1 and can be interconnected
electrically. The primary contribution of this thesis is the first experimental
proof-of-concept demonstration of p-bits built by slight modifications to the
magnetoresistive random-access memory (MRAM) operating at room temperature.
These p-bits are connected to form a clock-less autonomous probabilistic
computer. We first set the stage by demonstrating a high-level emulation of
p-bits which establishes important rules of operation for autonomous
p-computers. The experimental demonstration is then followed by a low-level
emulation of MRAM based p-bits which will allow further study of device
characteristics and parameter variations for proper operation of p-computers.
We lastly demonstrate an FPGA-based scalable synchronous probabilistic computer
which uses almost 450 digital p-bits to demonstrate large p-circuits.</p>
</pre>
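The p-bit update rule commonly used in this line of work, setting the state to +1 with probability (1 + tanh I)/2 where I is the synaptic input, can be emulated in a few lines. The coupling values below are assumptions chosen so that two coupled p-bits anticorrelate; hardware p-bits update autonomously and clock-lessly, which is serialized here for simplicity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two p-bits with an antiferromagnetic coupling (assumed weights): in steady
# state they should spend most of their time in opposite states.
J = np.array([[0.0, -2.0],
              [-2.0, 0.0]])       # coupling weights
h = np.zeros(2)                   # biases
m = np.ones(2)                    # bipolar states in {-1, +1}

samples = []
for _ in range(20000):
    i = rng.integers(2)           # serialized stand-in for clock-less autonomous updates
    I = J[i] @ m + h[i]           # synaptic input to p-bit i
    # p-bit rule: m_i = +1 with probability (1 + tanh I)/2
    m[i] = 1.0 if rng.uniform(-1.0, 1.0) < np.tanh(I) else -1.0
    samples.append(m.copy())

corr = float(np.mean([s[0] * s[1] for s in samples]))
print(corr)   # strongly negative (anticorrelated p-bits)
```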
|
17 |
Robust Discrete Optimization
Bertsimas, Dimitris J., Sim, Melvyn 01 1900 (has links)
We propose an approach to address data uncertainty for discrete optimization problems that allows controlling the degree of conservatism of the solution, and is computationally tractable both practically and theoretically. When both the cost coefficients and the data in the constraints of an integer programming problem are subject to uncertainty, we propose a robust integer programming problem of moderately larger size that allows one to control the degree of conservatism of the solution in terms of probabilistic bounds on constraint violation. When only the cost coefficients are subject to uncertainty and the problem is a 0-1 discrete optimization problem on n variables, then we solve the robust counterpart by solving n + 1 instances of the original problem. Thus, the robust counterpart of a polynomially solvable 0-1 discrete optimization problem remains polynomially solvable. Moreover, we show that the robust counterpart of an NP-hard α-approximable 0-1 discrete optimization problem remains α-approximable. / Singapore-MIT Alliance (SMA)
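The n + 1-instance reduction can be checked on a toy 0-1 problem. In the sketch below the feasible set (choose exactly k of n items) and all data are assumptions for illustration; the nominal problem is then solvable by sorting, and brute force confirms that the minimum over the n + 1 modified instances equals the exact robust optimum:

```python
import itertools
import numpy as np

# Toy feasible set: choose exactly k of n items (a 0-1 problem whose nominal
# version is trivially solvable by sorting). Costs c_j may deviate upward by
# at most d_j, with at most Gamma deviations realized (assumed data).
rng = np.random.default_rng(4)
n, k, Gamma = 7, 3, 2
c = rng.integers(1, 20, n).astype(float)
d = rng.integers(0, 10, n).astype(float)

def nominal(cost):
    """Solve min cost.x over x selecting exactly k items."""
    idx = np.argsort(cost)[:k]
    x = np.zeros(n); x[idx] = 1.0
    return cost @ x, x

def robust_value(x):
    """Worst case: the Gamma largest deviations among chosen items are realized."""
    chosen = sorted(d[x > 0], reverse=True)
    return c @ x + sum(chosen[:Gamma])

# Bertsimas-Sim reduction: solve n+1 nominal instances with modified costs,
# one per distinct deviation threshold (sorted descending, plus zero).
order = np.argsort(-d)
dvals = list(d[order]) + [0.0]
best = np.inf
for dl in dvals:
    mod = c + np.maximum(d - dl, 0.0)   # inflate costs of high-deviation items
    val, _ = nominal(mod)
    best = min(best, Gamma * dl + val)

# Brute-force check over all feasible 0-1 solutions.
bf = min(robust_value(np.array([1.0 if i in S else 0.0 for i in range(n)]))
         for S in itertools.combinations(range(n), k))
print(best, bf)   # equal
```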
|
18 |
Global Optimization with Polynomials
Han, Deren 01 1900 (has links)
The class of POP (Polynomial Optimization Problems) covers a wide range of optimization problems such as 0-1 integer linear and quadratic programs, nonconvex quadratic programs and bilinear matrix inequalities. In this paper, we review some methods for solving the unconstrained case: minimize a real-valued polynomial p(x) : R^n → R, as well as the constrained case: minimize p(x) on a semialgebraic set K, i.e., a set defined by polynomial equalities and inequalities. We also summarize some questions that we are currently considering. / Singapore-MIT Alliance (SMA)
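For the unconstrained univariate case, minimizing a polynomial reduces to comparing its values at the real critical points (and note that 0-1 constraints such as x_j in {0, 1} are themselves the polynomial equations x_j^2 - x_j = 0, which is how integer programs fall into the POP class). A minimal sketch with an assumed example polynomial:

```python
import numpy as np

# Unconstrained univariate POP: minimize p(x) = x^4 - 4x^2 + 3 (an assumed
# example) by inspecting the real roots of p'; the positive leading
# coefficient guarantees p -> +inf at infinity, so the global minimum is
# attained at one of the critical points.
p = np.polynomial.Polynomial([3.0, 0.0, -4.0, 0.0, 1.0])
crit = p.deriv().roots()
real_crit = crit[np.isclose(crit.imag, 0.0)].real
x_star = real_crit[np.argmin(p(real_crit))]
print(x_star, p(x_star))   # |x_star| = sqrt(2), p(x_star) = -1
```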
|
19 |
Evolutionary Optimization Algorithms for Nonlinear Systems
Raj, Ashish 01 May 2013 (links)
Many real-world problems in science and engineering can be treated as optimization problems with multiple objectives or criteria. The demand for fast and robust stochastic algorithms to cater to these optimization needs is very high. When the cost function for the problem is nonlinear and non-differentiable, direct search approaches are the methods of choice. Many such approaches use the greedy criterion, which is based on accepting the new parameter vector only if it reduces the value of the cost function. This could result in fast convergence, but also in misconvergence, where the vectors get trapped in local minima. Inherently, parallel search techniques have more exploratory power. These techniques discourage premature convergence; consequently, some candidate solution vectors do not converge to the global minimum solution at any point in time, but rather constantly explore the whole search space for other possible solutions. In this thesis, we concentrate on benchmarking three popular algorithms: the Real-valued Genetic Algorithm (RGA), Particle Swarm Optimization (PSO), and Differential Evolution (DE). The DE algorithm is found to outperform the other algorithms in convergence speed and in attaining low cost-function values. The DE algorithm is selected and used to build a model for forecasting auroral oval boundaries during a solar storm event. This is compared against an established model by Feldstein and Starkov. As an extended study, the ability of DE is further put to the test in another example of a nonlinear system study, by using it to study and design phase-locked loop circuits. In particular, the algorithm is used to obtain circuit parameters when frequency steps are applied at the input at particular instances.
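A minimal DE/rand/1/bin sketch (hyperparameter values are assumptions, not those used in the thesis) shows the differential mutation, binomial crossover and greedy selection steps on a smooth benchmark:

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):                    # smooth benchmark cost function
    return np.sum(x ** 2)

# Assumed DE hyperparameters: population size, dimension, scale F, crossover CR.
NP, dim, F, CR, gens = 20, 5, 0.8, 0.9, 300
pop = rng.uniform(-5, 5, (NP, dim))
cost = np.array([sphere(x) for x in pop])

for _ in range(gens):
    for i in range(NP):
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])          # differential mutation
        cross = rng.uniform(size=dim) < CR               # binomial crossover mask
        cross[rng.integers(dim)] = True                  # ensure at least one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if sphere(trial) <= cost[i]:                     # greedy selection
            pop[i], cost[i] = trial, sphere(trial)

print(cost.min())   # near 0
```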
|
20 |
Geometric Approximation Algorithms in the Online and Data Stream Models
Zarrabi-Zadeh, Hamid January 2008 (has links)
The online and data stream models of computation have recently attracted considerable research attention due to many real-world applications in various areas such as data mining, machine learning, distributed computing, and robotics. In both these models, input items arrive one at a time, and the algorithms must make decisions based on the partial data received so far, without any reliable information about the data that will arrive in the future.
In this thesis, we investigate efficient algorithms for a number of fundamental geometric optimization problems in the online and data stream models. The problems studied in this thesis can be divided into two major categories: geometric clustering and computing various extent measures of a set of points.
In the online setting, we show that the basic unit clustering problem admits non-trivial algorithms even in the simplest one-dimensional case: we show that the naive upper bounds on the competitive ratio of algorithms for this problem can be beaten using randomization. In the data stream model, we propose a new streaming algorithm for maintaining "core-sets" of a set of points in fixed dimensions, and also, introduce a new simple framework for transforming a class of offline algorithms to their equivalents in the data stream model. These results together lead to improved streaming approximation algorithms for a wide variety of geometric optimization problems in fixed dimensions, including diameter, width, k-center, smallest enclosing ball, minimum-volume bounding box, minimum enclosing cylinder, minimum-width enclosing spherical shell/annulus, etc. In high-dimensional data streams, where the dimension is not a constant, we propose a simple streaming algorithm for the minimum enclosing ball (the 1-center) problem with an improved approximation factor.
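The flavor of such small-space streaming algorithms can be seen in the simple one-pass enclosing-ball rule analyzed by Zarrabi-Zadeh and Chan, which stores only the current center and radius and is known to be a 3/2-approximation in the worst case. The sketch below is an illustrative rendering of that rule, not the thesis's core-set algorithms:

```python
import numpy as np

def stream_meb(points):
    """One-pass enclosing-ball heuristic: keep a single center/radius; when a
    point falls outside, grow the ball minimally to touch it, shifting the
    center toward the point. The old ball stays inside the new one, so every
    point seen so far remains enclosed."""
    it = iter(points)
    center = np.asarray(next(it), float)
    radius = 0.0
    for p in it:
        p = np.asarray(p, float)
        dist = np.linalg.norm(p - center)
        if dist > radius:
            shift = (dist - radius) / 2.0
            center = center + shift * (p - center) / dist
            radius = (dist + radius) / 2.0
    return center, radius

rng = np.random.default_rng(6)
pts = rng.standard_normal((1000, 3))
c, r = stream_meb(pts)
print(max(np.linalg.norm(q - c) for q in pts) <= r + 1e-9)   # True: all enclosed
```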
|