About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Metadata is collected from universities around the world; if you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
31

Riziková averze v eficienci portfolia / Risk aversion in portfolio efficiency

Puček, Samuel January 2019 (has links)
This thesis deals with selecting the optimal portfolio for a risk-averse investor. First, we present risk measures, specifically spectral risk measures, which capture the individual risk aversion of the investor. We then propose a diversification-consistent data envelopment analysis (DEA) model that searches for a portfolio efficient with respect to second-order stochastic dominance. The crux of the thesis is a model based on the theory of multi-criteria optimization and spectral risk measures, which finds an optimal portfolio suitable for an investor with a given risk aversion; the optimal portfolio is also consistent with second-order stochastic dominance efficiency. The practical part is a numerical study in which both models are implemented in MATLAB and applied to a dataset from real financial markets. The personal contribution lies in comparing the diversification-consistent DEA model and the model based on multi-criteria optimization, both with respect to second-order stochastic dominance efficiency.
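To illustrate how a spectral risk measure weights outcomes by individual risk aversion, here is a minimal sketch (in Python rather than the thesis's MATLAB; the exponential risk-aversion function and the parameter `k` are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def spectral_risk(returns, k=5.0):
    """Spectral risk measure with an exponential risk-aversion
    function phi(p) = k*exp(-k*p)/(1 - exp(-k)), discretised over
    equiprobable scenarios. Larger k = more risk-averse: more weight
    falls on the worst outcomes."""
    r = np.sort(np.asarray(returns, dtype=float))  # worst outcomes first
    n = len(r)
    p = (np.arange(n) + 0.5) / n                   # scenario midpoints in (0, 1)
    phi = k * np.exp(-k * p) / (1.0 - np.exp(-k))  # decreasing weight function
    w = phi / phi.sum()                            # normalise weights to sum to 1
    return -np.dot(w, r)                           # risk of the position
```

Because the weights are decreasing in the outcome rank, a more risk-averse investor (larger `k`) assigns a strictly higher risk to the same spread of returns.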
32

Using parameterized efficient sets to model alternatives for systems design decisions

Malak, Richard J., Jr. 17 November 2008 (has links)
The broad aim of this research is to contribute knowledge that enables improvements in how designers model decision alternatives at the systems level, i.e., how they model different system configurations and concepts. There are three principal complications: (1) design concepts and system configurations are partially defined solutions to a problem that correspond to a large set of possible design implementations, (2) each concept or configuration may operate on different physical principles, and (3) decisions typically involve tradeoffs between multiple competing objectives that can include "non-engineering" considerations such as production costs and profits. This research investigates a data-driven approach to modeling partially defined system alternatives that addresses these issues. The approach is based on a compositional strategy in which designers model a system alternative using abstract models of its components. The component models represent the rational tradeoffs available to designers when implementing the components; using these models, designers can predict key properties of the final implementation of each system alternative. A new construct, called a parameterized efficient set, is introduced as the decision-theoretic basis for generating the component-level tradeoff models. Appropriate efficiency criteria are defined for the cases of deterministic and uncertain data, and the model composition procedure is shown to be mathematically sound under reasonable assumptions for the case of deterministic data. This research also introduces an approach for describing the valid domain of a data-driven model based on the use of support-vector machines. Engineering examples include performing requirements allocation for a hydraulic log splitter and architecture selection for a hybrid vehicle.
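The efficiency criterion for the deterministic case reduces to standard Pareto dominance over component attribute data. A minimal sketch of such a filter (an illustration only, not the parameterized construct itself; all attributes are assumed to be minimized):

```python
import numpy as np

def pareto_efficient(points):
    """Return a boolean mask of Pareto-efficient rows, assuming every
    column is an attribute to be minimized. A point is inefficient if
    some other point is at least as good in every attribute and
    strictly better in at least one."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    efficient = np.ones(n, dtype=bool)
    for i in range(n):
        # a row dominates pts[i] if it is <= everywhere and < somewhere
        dominates = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
        if dominates.any():
            efficient[i] = False
    return efficient
```

Keeping only the efficient rows yields the tradeoff frontier that an abstract component model would represent.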
33

O desempenho dos hedge funds brasileiros a partir da não normalidade de seus retornos / The performance of Brazilian hedge funds under the non-normality of their returns

Risério, Guilherme Silva 26 February 2014 (has links)
Because Brazilian hedge funds employ distinct investment strategies characterized by the use of derivatives, leveraged operations, and short selling, the returns they generate exhibit significant non-normality. The usual performance-evaluation measures are therefore unable to provide results consistent with the true performance of hedge fund portfolios. This work uses two non-traditional, non-parametric methodologies, Almost Stochastic Dominance (ASD) and the Manipulation-Proof Performance Measure (MPPM), to analyze the performance of Brazilian hedge funds and determine which strategies outperform the stock market. The results show that Brazilian hedge funds do not outperform the benchmarks under first-order dominance, but under second-order dominance seven strategies outperformed the Ibovespa index.
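The MPPM of Goetzmann, Ingersoll, Spiegel, and Welch can be sketched in a few lines (a minimal implementation of the published formula; the monthly period length and the risk-aversion coefficient `rho = 3` are illustrative defaults, not values from this dissertation):

```python
import numpy as np

def mppm(returns, rf, rho=3.0, dt=1.0 / 12.0):
    """Manipulation-Proof Performance Measure: an annualised,
    risk-adjusted premium that cannot be gamed by manipulating the
    return distribution. `returns` and `rf` are per-period simple
    returns of the fund and the risk-free asset; `rho` is relative
    risk aversion; `dt` is the period length in years."""
    r = np.asarray(returns, dtype=float)
    f = np.asarray(rf, dtype=float)
    growth = ((1.0 + r) / (1.0 + f)) ** (1.0 - rho)
    return np.log(growth.mean()) / ((1.0 - rho) * dt)
```

A fund that merely earns the risk-free rate scores zero; consistently beating it yields a positive MPPM.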
34

Análise dos modelos baseados em lower partial moments: um estudo empírico para o Ibovespa e Dow Jones através da distância Hansen-Jagannathan / Analysis of models based on lower partial moments: an empirical study of the Ibovespa and Dow Jones using the Hansen-Jagannathan distance

Herrera, Christian Jonnatan Jacobsen Soto 01 March 2017 (has links)
This dissertation empirically tests, through in-sample optimizations, the downside risk models Sortino, Upside Potential Ratio, Omega, and Kappa, comparing them with the traditional CAPM derived from the mean-variance frontier, using the stocks listed in the Ibovespa and the Dow Jones (DJIA) to construct market portfolios for each model. The two classes of models differ in their assumptions and in how they measure risk: while the CAPM considers only the first two moments of the return distribution, the other measures take higher moments into account. The Hansen-Jagannathan distance, which measures the pricing error of the Stochastic Discount Factor (SDF) generated by each model, revealed a sharp distinction between the two markets: the CAPM performed better on the Dow Jones, while the downside risk models produced better results for the Ibovespa, suggesting an advantage in using such models in markets with lower liquidity and greater asymmetry.
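The lower-partial-moment family underlying the Sortino, Omega, and Kappa measures can be sketched as follows (the threshold `tau` and the moment order are illustrative defaults; this is a generic textbook formulation, not the dissertation's estimation code):

```python
import numpy as np

def lpm(returns, tau=0.0, order=2):
    """Lower partial moment of the given order around threshold tau:
    the average of the shortfalls (tau - r)^order for returns below tau."""
    r = np.asarray(returns, dtype=float)
    shortfall = np.maximum(tau - r, 0.0)
    return np.mean(shortfall ** order)

def kappa(returns, tau=0.0, order=2):
    """Kappa ratio: excess mean return over tau divided by the
    order-th root of the lower partial moment. order=2 recovers
    the Sortino ratio."""
    r = np.asarray(returns, dtype=float)
    return (r.mean() - tau) / lpm(r, tau, order) ** (1.0 / order)
```

Unlike variance, the LPM penalizes only returns below the threshold, which is why these measures respond to the asymmetry and fat tails the dissertation emphasizes.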
35

Úlohy stochastického programovaní pro řízení aktiv a pasiv / Stochastic Programming Problems in Asset-Liability Management

Rusý, Tomáš January 2017 (has links)
The main objective of this thesis is to build a multi-stage stochastic program for an asset-liability management problem of a leasing company. First, the business model of such a company is introduced and the stochastic programming formulation is derived. Three risk constraints, namely the chance constraint, the Value-at-Risk constraint, and the conditional Value-at-Risk constraint, along with the second-order stochastic dominance constraint, are then applied to the model to control the riskiness of the optimal strategy. Their properties and their effects on the optimal decisions are thoroughly investigated under various risk limits. To solve the problems, the random elements in the model formulation are approximated by scenarios; scenarios of future interest rates are generated with the Hull-White model, calibrated by a newly proposed method based on maximum likelihood estimation. Finally, the performance of the optimal solutions is inspected under unconsidered, unfavourable crisis scenarios. This stress-testing methodology has not previously been implemented in stochastic programming problems within asset-liability management.
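Interest-rate scenario generation with the Hull-White short-rate model dr = (theta(t) - a r) dt + sigma dW can be sketched with a plain Euler-Maruyama discretisation (a simplified illustration; the thesis's calibration and scenario-tree construction are not reproduced, and the parameter values below are arbitrary):

```python
import numpy as np

def hull_white_paths(r0, a, sigma, theta, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of the Hull-White short rate
    dr = (theta(t) - a*r) dt + sigma dW. `theta` is a callable giving
    the time-dependent drift level; a constant lambda works for a sketch."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, float(r0))
    paths = [r.copy()]
    for step in range(n_steps):
        t = step * dt
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        r = r + (theta(t) - a * r) * dt + sigma * dw
        paths.append(r.copy())
    return np.array(paths)  # shape (n_steps + 1, n_paths)
```

Each column is one interest-rate scenario; in a scenario-based stochastic program these paths would be reduced to a tree before optimization.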
36

Theory of rational decision-making and its applications to adaptive transmission

Kotelba, A. (Adrian) 21 September 2013 (has links)
Abstract In this thesis, adaptive transmission power control algorithms for reliable communication in channels with state are explored and further developed. In channels with state, strict adherence to Shannon-sense capacity may lead to very conservative system designs. In many practical systems, error-free communication is not required because these systems can cope with decoding errors. These considerations give rise to other information-theoretic notions where the rate of reliable communications is considered a random variable which depends not only on the statistical properties of the channel but also on the adaptive transmission strategy. Numerous studies on adaptive transmission in channels with state have already been conducted using expected value of communication rate or information outage probability as the relevant performance metrics. However, these metrics, although intuitively pleasing, have usually been introduced without rigorous justification. This thesis contributes to the state of the art in a number of ways. These include the development of new conceptual viewpoints on performance assessment of adaptive communication systems in channels with state as well as a new set of adaptive transmission power control algorithms. In particular, the models and methods of rational decision theory are introduced and systematically used in developing a unified framework for analysis and optimization of adaptive transmission in channels with state. The proposed framework properly addresses the limitation of finite coding length, takes into account the decision maker's preferences, considers uncertainties relevant in a given decision, and determines the optimal decision by maximizing some numerical index. A central finding of the theoretical studies is that many of the previously proposed performance metrics can be rigorously justified within the newly proposed framework. 
In addition, adaptive transmission power control in parallel Gaussian channels is considered with the aim of obtaining new classes of algorithms. The safety-first approach, risk theory, and expected utility theory are applied to derive novel transmission power control algorithms. The performance of the proposed power control algorithms is evaluated by computer simulations and compared against the performance of some other well-known algorithms.
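For power control in parallel Gaussian channels, the classical baseline against which such algorithms are usually compared is water-filling (this is the textbook capacity-achieving allocation, not one of the thesis's proposed decision-theoretic algorithms):

```python
import numpy as np

def water_filling(gains, total_power, tol=1e-12):
    """Water-filling power allocation over parallel Gaussian channels:
    maximize sum(log2(1 + p_i * g_i)) subject to sum(p_i) = P, p_i >= 0.
    Solved by bisection on the water level mu, with p_i = max(mu - 1/g_i, 0)."""
    inv = 1.0 / np.asarray(gains, dtype=float)  # noise-to-gain floor per channel
    lo, hi = 0.0, inv.max() + total_power       # bracket for the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)
```

Channels with strong gains sit low and receive more power; channels whose floor lies above the water level are switched off entirely.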
37

Performance evaluation of portfolio insurance strategies / L'évaluation de la performance des stratégies d'assurance de portefeuille

Tawil, Dima 10 November 2015 (has links)
This thesis evaluates and compares the performance of portfolio insurance strategies, trying to determine when and why a given strategy should be preferred by investors in practice. The main portfolio insurance strategies (OBPI, CPPI, synthetic put, and stop-loss) are compared with each other and with some benchmark strategies. The strategies are applied under different implementation scenarios and compared according to various criteria: 1. payoff functions, stochastic dominance, the level of protection, and the cost of insurance under bull and bear market conditions identified by Markov regime-switching models; 2. risk-adjusted performance measures that reflect different investor preferences toward risk and return; 3. the preferences of investors who act according to cumulative prospect theory (CPT).
Our results reveal a dominant role for the CPPI strategy in the majority of cases and according to the majority of comparison criteria.
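The CPPI mechanics compared in the thesis can be sketched in a few lines (the floor fraction, multiplier, and zero cash rate are illustrative assumptions, not the thesis's calibration):

```python
import numpy as np

def cppi(prices, floor_frac=0.8, multiplier=3.0, v0=100.0):
    """Constant Proportion Portfolio Insurance: at each rebalancing
    date, invest multiplier * (portfolio value - floor) in the risky
    asset and hold the rest in cash (zero rate here, for simplicity).
    The floor is a fixed fraction of the initial value."""
    prices = np.asarray(prices, dtype=float)
    floor = floor_frac * v0
    value = v0
    values = [value]
    for t in range(1, len(prices)):
        cushion = max(value - floor, 0.0)
        exposure = min(multiplier * cushion, value)  # cap: no leverage
        risky_ret = prices[t] / prices[t - 1] - 1.0
        value = value + exposure * risky_ret
        values.append(value)
    return np.array(values)
```

Note the gap risk inherent in discrete rebalancing: a large drop between rebalancing dates can push the portfolio below the floor before exposure is cut, which is one reason the cost and protection level of CPPI vary across the market regimes studied.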
38

Essays in Health Economics

Appiah Minta, Audrey 19 October 2022 (has links)
My doctoral thesis examines the broad question of the effect of some recent health policies on health, and also measures socioeconomic inequalities. The first essay investigates the effect of public health insurance on people in vulnerable health; the second analyses the effect of marijuana legalization on health; the third measures socioeconomic inequalities in health. In chapter 1, I study the evolution of access to health care for individuals in vulnerable health before and after the Affordable Care Act (ACA). I define leakage of health care as the aggregation of accessibility hurdles for individuals in vulnerable health. However, "being in vulnerable health" is a linguistic concept without a sharp mathematical definition. I draw on fuzzy set theory and assume a non-dichotomous membership function to capture this linguistic imprecision. Since the task of choosing the "right" membership function remains an issue, I use a stochastic dominance approach to test for changes in leakage. To establish causality, I exploit two quasi-experimental settings offered by the dependent coverage provision and the states in which Medicaid expansion took place; to use these quasi-experiments in a stochastic dominance framework, I extend the changes-in-changes approach of Athey and Imbens (2006) to a bivariate setting. Using data from the National Health Interview Survey, a before-and-after analysis shows that leakages are much lower in 2015 than in 2009 in the US, irrespective of a person's sex or socio-economic status. The causal analysis shows that leakages in not having insurance and access are reduced in Medicaid expansion states after the ACA. Chapter 2 analyzes the implications of recreational marijuana legalization (RML) for Body Mass Index (BMI) and some health behaviours.
I exploit the quasi-experimental nature of state marijuana legalization policies, using changes-in-changes and difference-in-differences approaches to identify the effect of these policies. Using data from the Behavioral Risk Factor Surveillance System (BRFSS), the results show that RML reduces BMI for the entire population, mainly in the middle and top of the BMI distribution. Subgroup analysis shows that the BMI reduction from RML is significant among women but not among men; for females, the effect is found both at the lower tail (being underweight) and at the upper tail (morbid obesity). While I found evidence of a reduction in being overweight for both whites and non-whites due to RML, the reduction in obesity and morbid obesity was only found for non-whites. In addition, RML reduces obesity for those below 45 years. I also found evidence that RML increases alcohol consumption, has no effect on tobacco smoking or binge drinking, but reduces the probability of doing any physical activity. The final chapter explores the measurement of socioeconomic inequality using ordinal variables. Most measures of socioeconomic inequality are developed for ratio-scale variables and use the mean as a reference point, which is not robust for categorical variables. This chapter extends the median-based approach of Allison and Foster (2004) to a bivariate case and provides conditions to robustly rank any two distributions of socioeconomic inequality in well-being or mental health. Using the Canadian Community Health Survey (CCHS), I provide robust orderings of socioeconomic inequalities in well-being and mental health for different sub-populations in 2015.
The results show that there is less socioeconomic inequality in life satisfaction, happiness, mental health, and general health status among employed males and females compared to their respective unemployed groups in 2015.
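The stochastic dominance approach used to compare leakage distributions can be illustrated with a minimal univariate first-order dominance check on two empirical samples (the thesis works with a bivariate extension; this simplified sketch is an assumption-laden stand-in):

```python
import numpy as np

def fsd_dominates(x, y):
    """First-order stochastic dominance check on two samples:
    X dominates Y if the empirical CDF of X lies at or below that of Y
    at every point (X yields higher outcomes), with strict inequality
    somewhere."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    grid = np.union1d(x, y)
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return bool(np.all(cdf_x <= cdf_y + 1e-12)) and bool(np.any(cdf_x < cdf_y - 1e-12))
```

Ranking distributions this way avoids committing to any one membership function, which is the point of the dominance approach described above.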
39

Rozhodovací úlohy a empirická data; aplikace na nové typy úloh / Decision Problems and Empirical Data; Applications to New Types of Problems

Odintsov, Kirill January 2013 (has links)
This thesis concentrates on different approaches to solving decision-making problems with an aspect of randomness. The basic methodologies for converting stochastic optimization problems into deterministic optimization problems are described, and the proximity of the solution of a problem to that of its empirical counterpart is shown; the empirical counterpart is used when the distribution of the random elements of the original problem is unknown. Heavy-tailed distributions, stable distributions, and their relationship are described. Stochastic dominance and the possibility of defining problems with stochastic dominance constraints are introduced. The proximity of the solution of a problem with second-order stochastic dominance to the solution of its empirical counterpart is proven, and a portfolio management problem with second-order stochastic dominance is solved by solving the equivalent empirical problem.
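For the empirical counterpart with equally likely scenarios, second-order stochastic dominance between two return samples reduces to comparing cumulative sums of sorted outcomes. A minimal sketch (valid only for equal-size, equiprobable samples; an illustration, not the thesis's formulation):

```python
import numpy as np

def ssd_dominates(x, y):
    """Second-order stochastic dominance test for two equiprobable
    samples of equal size: X dominates Y iff, for every k, the sum of
    the k worst outcomes of X is at least the sum of the k worst
    outcomes of Y, with strict inequality for some k."""
    cx = np.cumsum(np.sort(np.asarray(x, dtype=float)))
    cy = np.cumsum(np.sort(np.asarray(y, dtype=float)))
    return bool(np.all(cx >= cy - 1e-12)) and bool(np.any(cx > cy + 1e-12))
```

Every risk-averse expected-utility maximizer prefers a dominating sample; for example, a sure payoff of zero SSD-dominates a fair bet of the same mean.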
40

[en] POWER GENERATION INVESTMENTS SELECTION / [pt] SELEÇÃO DE PROJETOS DE INVESTIMENTO EM GERAÇÃO DE ENERGIA ELÉTRICA

LEONARDO BRAGA SOARES 22 July 2008 (has links)
[en] The restructuring of the Brazilian electric sector, consolidated by the end of the 1990s, had as one of its main implications the introduction of competition in the power generation activity. The expansion of generation capacity, necessary to ensure structural equilibrium between supply and demand, is stimulated by long-term contracts negotiated through least-tariff energy auctions. Therefore, the investor must offer a price low enough to be competitive (in order to win the auction) but also sufficient to remunerate the investment and operational costs and, especially, to protect against all the risks intrinsic to the project. In this context, the two main contributions of this work are: (i) a risk-pricing methodology using the Value at Risk (VaR) criterion, which gives the maximum loss admitted by a risk-averse investor at a specified confidence level, and (ii) the application of different portfolio selection models that incorporate the VaR criterion to optimize a portfolio with different power generation technologies. The risk-pricing results are useful for determining the critical components of a project and for calculating the competitiveness (price) of each technology. The study of different portfolio selection methods aims to determine the model best suited to the shape of the return distributions of generation projects, which exhibit asymmetry and high kurtosis (heavy tails).
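The VaR criterion at the heart of both contributions can be sketched with plain historical simulation (a generic estimator; the confidence level and sample below are illustrative, not results from this work):

```python
import numpy as np

def value_at_risk(returns, alpha=0.95):
    """Historical-simulation Value at Risk: the loss threshold that is
    exceeded with probability 1 - alpha, returned as a positive number.
    Uses the empirical (1 - alpha) quantile of the return sample."""
    r = np.asarray(returns, dtype=float)
    return -np.quantile(r, 1.0 - alpha)
```

Because generation-project returns are asymmetric and heavy-tailed, a quantile-based measure like this captures downside risk that a variance-based criterion would understate.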
