1 |
Portfoliooptimierung im Bereich niedrigen Risikos. Lorenz, Nicole, 19 May 2008
In banks, the Markowitz model of portfolio optimization is increasingly used as a sales-promotion instrument. However, this model provides only a theoretical basis for portfolio construction and accounts for neither transaction costs nor the particular characteristics of small investors.
The thesis introduces the topic of portfolio optimization and, guided by practical considerations about the cost structure, develops a model framework for determining the expected (utility of) terminal wealth. The Black-Scholes model is used in simulations to derive recommendations for action that take the specific characteristics of small investors into account and to analyse the influence of costs on terminal wealth. To determine optimal portfolios, the martingale method is employed to solve a dynamic optimization problem.
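A minimal sketch of the kind of simulation described above (all parameters, the cost rate, and the buy-and-hold strategy are illustrative assumptions, not the thesis' actual model): terminal wealth under the Black-Scholes model, with a hypothetical proportional transaction cost deducted from the stock purchase, compared by expected log-utility.

```python
import numpy as np

rng = np.random.default_rng(0)

S0, mu, sigma, r, T = 100.0, 0.07, 0.2, 0.02, 1.0   # assumed market parameters
w, V0 = 0.3, 10_000.0       # stock weight and initial wealth (assumed)

n = 100_000
Z = rng.standard_normal(n)
# Geometric Brownian motion: S_T = S_0 * exp((mu - sigma^2 / 2) * T + sigma * sqrt(T) * Z)
ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

def expected_log_utility(cost_rate):
    # The proportional cost shrinks the stock position bought at time 0.
    stock = w * V0 * (1.0 - cost_rate) * ST / S0
    bond = (1.0 - w) * V0 * np.exp(r * T)
    return np.log(stock + bond).mean()

# Transaction costs strictly reduce the expected utility of terminal wealth.
assert expected_log_utility(0.01) < expected_log_utility(0.0)
```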
|
3 |
Realisierbarer Portfoliowert in illiquiden Finanzmärkten. Baum, Dietmar, 23 July 2001
We study a continuous-time version of Jarrow's discrete-time model of an illiquid financial market, in which one can trade a bond and a stock. In standard models of liquid financial markets, the stochastic dynamics of the stock price is modelled by a fixed semimartingale. In contrast, the stock price in our model depends both on a fundamental semimartingale, which can be interpreted as the cumulative demand of many small investors, and, in a monotonically increasing way, on the stock holding of an economic agent's trading strategy. Because of the resulting feedback effect, it is no longer possible to use the well-known representation theorems of stochastic analysis to write random variables as stochastic integrals with respect to the discounted stock price process and, on that basis, to construct hedging strategies for derivatives. We define the realisable portfolio wealth as the discounted proceeds of an idealised liquidation strategy that is optimal in a certain sense. Using Itô's formula, we decompose the dynamics of the realisable portfolio wealth of self-financing strategies into a stochastic integral and a decreasing process. The integrator of the stochastic integral is a local martingale under an equivalent martingale measure and does not depend on the strategy considered. This decomposition yields a proof that the model is arbitrage-free. In particular, the decomposition theorem shows that the realisable portfolio wealth of continuous strategies of bounded variation is a local martingale under an equivalent martingale measure. We therefore prove an approximation theorem for stochastic integrals which allows the search for hedging strategies to be restricted to such strategies.
By combining the approximation theorem with the decomposition theorem, we can determine superreplication prices of derivatives and solve the relevant portfolio optimization problems.
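The gap between book value and realisable value caused by the feedback effect can be illustrated with a toy price-impact model (the linear impact function, its parameter, and the block-liquidation scheme are assumptions for the example, not the thesis' model): the quoted price increases with the agent's own holding, so the quoted value of a large position overstates what liquidating it can actually realise.

```python
import numpy as np

def price(fundamental, theta, impact=0.05):
    # Quoted price grows monotonically with the agent's stock holding theta.
    return fundamental * (1.0 + impact * theta)

fundamental, theta = 100.0, 10.0
book_value = theta * price(fundamental, theta)   # position marked at the quoted price

def liquidation_proceeds(n_blocks):
    # Sell in n equal blocks; each block trades at the price implied by the
    # holding remaining *after* the previous sales, so later blocks fetch less.
    remaining, proceeds = theta, 0.0
    block = theta / n_blocks
    for _ in range(n_blocks):
        remaining -= block
        proceeds += block * price(fundamental, remaining)
    return proceeds

# The realisable value lies strictly below the book value, and liquidating in
# many small blocks (approaching a continuous strategy) realises more than a
# single block sale.
assert liquidation_proceeds(1) < liquidation_proceeds(100) < book_value
```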
|
4 |
Profillinie 6: Modellierung, Simulation, Hochleistungsrechnen. Rehm, Wolfgang; Hofmann, Bernd; Meyer, Arnd; Steinhorst, Peter; Weinelt, Wilfried; Rünger, Gudula; Platzer, Bernd; Urbaneck, Thorsten; Lorenz, Mario; Thießen, Friedrich; Kroha, Petr; Benner, Peter; Radons, Günter; Seeger, Steffen; Auer, Alexander A.; Schreiber, Michael; John, Klaus Dieter; Radehaus, Christian; Farschtschi, Abbas; Baumgartl, Robert; Mehlan, Torsten; Heinrich, Bernd, 11 November 2005
At TU Chemnitz, the fields of computational science and of parallel and distributed high-performance computing have developed over more than two decades with increasing interconnection. Coordinating and bundling the corresponding research in Profillinie 6, “Modellierung, Simulation, Hochleistungsrechnen”, will make it possible to keep pace in the international competition of knowledge.
|
6 |
Proximal Splitting Methods in Nonsmooth Convex Optimization. Hendrich, Christopher, 25 July 2014
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, since some of the algorithms provided are originally designed to solve monotone inclusion problems.
After introducing basic notation and preliminary results from convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem, and an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it, and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a full splitting: in the iterative process, the resolvent of each maximally monotone operator occurring in the problem description is evaluated separately. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy, suitably reformulating the corresponding inclusion problem so that the Douglas–Rachford algorithm can be applied to it.
Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators.
The last part of this thesis deals with different numerical experiments in which we compare our methods against algorithms from the literature. The problems arising in this part are manifold and reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
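The splitting idea described above can be sketched on a toy problem (the objective, data, and step size are assumptions for the example): the Douglas–Rachford algorithm minimizes a sum of two convex functions by evaluating each function's proximal map (resolvent) separately. Here the problem min_x lam*||x||_1 + 0.5*||x - b||^2 has the closed-form solution given by soft-thresholding b at level lam, which lets us check the iterates.

```python
import numpy as np

def prox_l1(v, t):
    # Proximal map of t * ||.||_1: componentwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, b):
    # Proximal map of t * 0.5 * ||. - b||^2.
    return (v + t * b) / (1.0 + t)

b = np.array([3.0, -0.5, 1.2, -2.0])
lam, t = 1.0, 1.0

y = np.zeros_like(b)
for _ in range(200):
    x = prox_l1(y, t * lam)            # resolvent of the first operator
    z = prox_quad(2.0 * x - y, t, b)   # resolvent of the second operator
    y = y + z - x                      # Douglas-Rachford update

closed_form = np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)
assert np.allclose(x, closed_form, atol=1e-6)
```

Note that the two proximal maps are only ever evaluated on their own, never jointly; this is the "full splitting" property referred to in the abstract.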
|
8 |
Application of the Duality Theory: New Possibilities within the Theory of Risk Measures, Portfolio Optimization and Machine Learning. Lorenz, Nicole, 15 August 2012
The aim of this thesis is to present new results concerning duality in scalar optimization. We show how the theory can be applied to optimization problems arising in the theory of risk measures, portfolio optimization and machine learning.
First we give some notation and preliminaries needed within the thesis. After that we recall how the well-known Lagrange dual problem can be derived by using the general perturbation theory, and give some generalized interior point regularity conditions used in the literature. Using these facts we consider some special scalar optimization problems having a composed objective function and geometric (and cone) constraints. We derive their duals and give strong duality results and optimality conditions under some regularity conditions. Thus we complete and/or extend some results in the literature, especially by using the mentioned regularity conditions, which are weaker than the classical ones. We further consider a scalar optimization problem having single chance constraints and a convex objective function. We also derive its dual, give a strong duality result and further consider a special case of this problem. Thus we show how the conjugate duality theory can be used for stochastic programming problems and extend some results given in the literature.
In the third chapter of this thesis we consider convex risk and deviation measures. We present some more general measures than the ones given in the literature and derive formulas for their conjugate functions. Using these we calculate some dual representation formulas for the risk and deviation measures and correct some formulas in the literature. Finally we prove some subdifferential formulas for measures and risk functions by using the facts above.
The generalized deviation measures we introduced in the previous chapter can be used to formulate some portfolio optimization problems we consider in the fourth chapter. Their duals, strong duality results and optimality conditions are derived by using the general theory and the conjugate functions, respectively, given in the second and third chapter. Analogous calculations are done for a portfolio optimization problem having single chance constraints using the general theory given in the second chapter. Thus we give an application of the duality theory in the well-developed field of portfolio optimization.
We close this thesis by considering a general Support Vector Machines problem and deriving its dual using the conjugate duality theory. We give a strong duality result and necessary as well as sufficient optimality conditions. By considering different cost functions we obtain problems for Support Vector Regression and Support Vector Classification. We extend the results given in the literature by dropping the assumption of invertibility of the kernel matrix. We use a cost function that generalizes the well-known Vapnik ε-insensitive loss and consider the optimization problems that arise from using it. We show how the general theory can be applied to a real data set; in particular, we predict the concrete compressive strength by using a special Support Vector Regression problem.
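The ε-insensitive loss mentioned above can be stated in a few lines (the residual values and the choice ε = 0.5 are assumptions for the example): residuals inside the ε-tube incur no loss, while residuals outside it are penalised linearly in their distance to the tube.

```python
import numpy as np

def eps_insensitive(residuals, eps):
    # Vapnik's epsilon-insensitive loss: max(|r| - eps, 0) componentwise.
    return np.maximum(np.abs(residuals) - eps, 0.0)

residuals = np.array([-1.5, -0.2, 0.0, 0.3, 2.0])
loss = eps_insensitive(residuals, eps=0.5)
# Only the two residuals outside the tube [-0.5, 0.5] contribute to the loss.
assert np.allclose(loss, [1.0, 0.0, 0.0, 0.0, 1.5])
```

This flat region around zero is what makes the loss nondifferentiable and the associated Support Vector Regression dual an instance of the conjugate duality framework developed in the thesis.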
|