  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Can it be Good to be Bad? : Evidence on the performance of US sin stocks

Karlén, Anders, Poulsen, Sebastian January 2013 (has links)
Investment decisions grounded in personal values and societal norms have grown in importance over the last decades, to the point where large institutional investors abstain altogether from certain industries that share a specific characteristic. Affiliation with sinful industries that promote human vice is not viewed as socially responsible in the eyes of the public, which is one reason why socially responsible investment funds that screen out these companies have grown in popularity. This study investigates the performance of American sin stocks in an attempt to increase awareness of how these shunned industries have performed. While the existing literature provides evidence that sin stocks outperform the market, we provide further evidence concentrating on a mix of industries not previously examined. Additionally, we extend the observation period beyond what has been covered in the past. In this study, the definition of sin covers the alcohol, defense, gambling, and tobacco industries, and we investigate the performance of a survivorship-free sample of 159 companies between July 1973 and June 2012. As the performance measure, the four-factor model is employed to capture any abnormal performance relative to the market alongside three additional risk factors. In addition, we investigate the performance of the different industries individually, to determine whether any one of them acts as a driver of the overall performance. Further, we examine the persistence of the performance over time. We find that the sample outperforms the market by 5.8% annually; while the tobacco industry stands out with the highest abnormal return, the other industries grouped together still produce significant outperformance. The sinful index examined in this degree project shows persistent performance, with no obvious trend of growth or decline. Unlike previous research, the sample shows a substantial difference in performance depending on the weighting scheme applied, not only for the individual industries but also collectively.
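For reference, the "four factor model" in this literature is typically the Carhart (1997) specification; a sketch of the regression presumably estimated here (the thesis's exact factor definitions are not given in this abstract) is

$$ R_{p,t} - R_{f,t} = \alpha_p + \beta_p\,(R_{m,t} - R_{f,t}) + s_p\,SMB_t + h_p\,HML_t + m_p\,MOM_t + \varepsilon_{p,t}, $$

where $R_{p,t}$ is the sin-portfolio return, $R_{f,t}$ the risk-free rate, and $SMB$, $HML$, and $MOM$ are the three additional risk factors (size, value, and momentum); a significantly positive $\hat{\alpha}_p$, here about 5.8% per year, is the abnormal return relative to the market.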
82

Essays on Lifetime Uncertainty: Models, Applications, and Economic Implications

Zhu, Nan 07 August 2012 (has links)
My doctoral thesis “Essays on Lifetime Uncertainty: Models, Applications, and Economic Implications” addresses economic and mathematical aspects pertaining to uncertainties in human lifetimes. More precisely, I commence my research related to life insurance markets in a methodological direction by considering the question of how to forecast aggregate human mortality when the risk in the resulting projections is important. I then rely on the developed method to study relevant applied actuarial problems. In a second strand of research, I consider the uncertainty in individual lifetimes and its influence on secondary life insurance market transactions. Longevity risk is becoming increasingly crucial to recognize, model, and monitor for life insurers, pension plans, annuity providers, as well as governments and individuals. One key aspect to managing this risk is correctly forecasting future mortality improvements, and this topic has attracted much attention from academics as well as from practitioners. However, in the existing literature, little attention has been paid to accurately modeling the uncertainties associated with the obtained forecasts, even though having appropriate estimates of the risk in mortality projections, i.e. identifying the transiency of the different random sources affecting the projections, is important for many applications. My first essay “Coherent Modeling of the Risk in Mortality Projections: A Semi-Parametric Approach” deals with stochastically forecasting mortality. In contrast to previous approaches, I present the first data-driven method that focuses attention on uncertainties in mortality projections rather than uncertainties in realized mortality rates. Specifically, I analyze time series of mortality forecasts generated from arbitrary but fixed forecasting methodologies and historic mortality data sets. Building on the financial literature on term structure modeling, I adopt a semi-parametric representation that encompasses all models with transitions parameterized by a Normally distributed random vector to identify and estimate suitable specifications. I find that one to two random factors appear sufficient to capture most of the variation within all of the data sets. Moreover, I observe similar systematic shapes for their volatility components, despite stemming from different forecasting methods and/or different mortality data sets. I further propose and estimate a model variant that guarantees a non-negative process of the spot force of mortality. Hence, the resulting forward mortality factor models present parsimonious and tractable alternatives to the popular methods in situations where the appraisal of risks within medium or long-term mortality projections plays a dominant role. Relying on a simple version of the derived forward mortality factor models, I take a closer look at their applications in the actuarial context in the second essay “Applications of Forward Mortality Factor Models in Life Insurance Practice”. In the first application, I derive the Economic Capital for a stylized UK life insurance company offering traditional product lines. My numerical results illustrate that (systematic) mortality risk plays an important role for a life insurer's solvency. In the second application, I discuss the valuation of different common mortality-contingent embedded options within life insurance contracts.
Specifically, I present a closed-form valuation formula for Guaranteed Annuity Options within traditional endowment policies, and I demonstrate how to derive the fair option fee for a Guaranteed Minimum Income Benefit within a Variable Annuity Contract based on Monte Carlo simulations. Overall, my results exhibit the advantages of forward mortality factor models in terms of their simplicity and compatibility with classical life contingencies theory. The second major part of my doctoral thesis concerns the so-called life settlement market, i.e. the secondary market for life insurance policies. Evolving from so-called “viatical settlements” popular in the late 1980s that targeted severely ill life insurance policyholders, life settlements generally involve senior insureds with below average life expectancies. Within such a transaction, both the liability of future contingent premiums and the benefits of a life insurance contract are transferred from the policyholder to a life settlement company, which may further securitize a bundle of these contracts in the capital market. One interesting and puzzling observation is that although life settlements are advertised as a high-return investment with a low “Beta”, the actual market systematically underperformed relative to expectations. While the common explanation in the literature for this gap between anticipated and realized returns falls on the allegedly meager quality of the underlying life expectancy estimates, my third essay “Coherent Pricing of Life Settlements under Asymmetric Information” proposes a different viewpoint: The discrepancy may be explained by adverse selection. Specifically, by assuming information with respect to policyholders’ health states is asymmetric, my model shows that a discrepancy naturally arises in a competitive market when the decision to settle is taken into account for pricing the life settlement transaction, since the life settlement company needs to shift its pricing schedule in order to balance expected profits. I derive practically applicable pricing formulas that account for the policyholder’s decision to settle, and my numerical results reconfirm that---depending on the parameter choices---the impact of asymmetric information on pricing may be considerable. Hence, my results reveal a new angle on the financial analysis of life settlements due to asymmetric information. All in all, my thesis includes two distinct research strands that both analyze certain economic risks associated with the uncertainty of individuals’ lifetimes---the first at the aggregate level and the second at the individual level. My work contributes to the literature by providing both new insights about how to incorporate lifetime uncertainty into economic models, and new insights about what repercussions---that are in part rather unexpected---this risk factor may have.
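For context, a stylized forward mortality factor model of the kind described above (a sketch under general assumptions, not the thesis's exact specification) treats the time-$t$ projected force of mortality, $\mu_t(\tau, x)$, as a forward rate with dynamics

$$ d\mu_t(\tau, x) = \alpha(\tau, x)\,dt + \sigma(\tau, x)\,dW_t, $$

where $W_t$ is a $d$-dimensional Brownian motion ($d = 1$ or $2$ random factors suffice according to the abstract), $\sigma(\tau, x)$ carries the systematic volatility shapes estimated semi-parametrically from time series of mortality forecasts, and, as in HJM-type term structure models, a consistency condition ties the drift $\alpha$ to the volatility $\sigma$.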
83

Potential und Grenzen des Fünf-Faktoren-Modell basierten Prototypenansatzes

Herzberg, Philipp Yorck 19 September 2013 (has links) (PDF)
Starting from the four classical paradigms for measuring individual differences, the dominant variable-centered research orientation in differential psychology is questioned, and the case is made for complementing it with a person-centered approach. The person-centered approach is operationalized through a prototype approach based on the Five-Factor Model of personality, whose potential and limitations are examined in this thesis. First, the number of prototypes was investigated, and the resulting prototype solution was subsequently validated. Analyses based on two population-representative samples and a large Internet sample consistently showed that, according to the selected multiple decision criteria, a five-cluster solution is preferable to other cluster solutions. The replicability of the prototypes across samples differing in age, gender, regional origin, educational background, socioeconomic status, and health (general population, patient samples), across assessment instruments (self- and other-report measures, questionnaires, adjective lists, paper-and-pencil and Internet-based administration), and across extraction methods (cluster analysis, mixture models) shows that personality types offer a way of classifying people according to the similarity of their personality profiles. In four validation studies, the findings on emotional, cognitive, behavioral, and health-related differences between the prototypes in adulthood were replicated and extended. As in childhood and adolescence, the resilient prototype shows the best psychosocial adjustment in adulthood. For the overcontrolled and undercontrolled prototypes, the findings of high psychological distress likewise carry over into adulthood. The confident and the reserved prototypes occupy an intermediate position on the continuum of psychosocial adjustment between the resilient and the over- and undercontrolled prototypes. Furthermore, the variable-centered and person-centered approaches were compared with respect to their predictive performance. Using two large and heterogeneous samples, consistent associations between membership in a personality prototype and a variety of relevant road-traffic criteria were confirmed. Finally, the potential of the prototypes as moderators was examined. It was demonstrated that the prototypes moderate the relationship between the CRP level and the daily dose of prednisolone used to treat the symptoms of rheumatoid arthritis.
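As a toy illustration of the clustering step (hypothetical data; the thesis relies on multiple decision criteria and on both cluster analysis and mixture models rather than the single silhouette criterion used here):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical Big Five profiles: n persons x 5 traits (N, E, O, A, C)
profiles = rng.normal(size=(1000, 5))
z = StandardScaler().fit_transform(profiles)   # cluster standardized profiles

# Compare competing cluster solutions; the thesis favors a five-cluster solution
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=20, random_state=0).fit_predict(z)
    print(k, round(silhouette_score(z, labels), 3))
```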
84

Factor Models to Describe Linear and Non-linear Structure in High Dimensional Gene Expression Data

Mayrink, Vinicius Diniz January 2011 (has links)
An important problem in the analysis of gene expression data is the identification of groups of features that are coherently expressed. For example, one often wishes to know whether a group of genes, clustered because of correlation in one data set, is still highly co-expressed in another data set. For some microarray platforms there are many, relatively short, probes for each gene of interest. In this case, it is possible that a given probe is not measuring its targeted transcript, but rather a different gene with a similar region (called cross-hybridization). Similarly, the incorrect mapping of short nucleotide sequences to a target gene is a common issue related to the young technology producing RNA-Seq data. The expression pattern across samples is a valuable source of information, which can be used to address distinct problems through the application of factor models. Our first study is focused on the identification of the presence/absence status of a gene in a sample. We compare our factor model to state-of-the-art detection methods; the results suggest superior performance of the factor analysis for detecting transcripts. In the second study, we apply factor models to investigate gene modules (groups of coherently expressed genes). Variation in the number of copies of regions of the genome is a well known and important feature of most cancers. Copy number alteration is detected for a group of genes in breast cancer; our goal is to examine this abnormality in the same chromosomal region for other types of tumors (Ovarian, Lung and Brain). In the third application, the expression pattern related to RNA-Seq count data is evaluated through a factor model based on the Poisson distribution. Here, the presence/absence of coherent patterns is closely associated with the number of incorrect read mappings. The final study of this dissertation is dedicated to the analysis of multi-factor models with linear and non-linear structure of interactions between latent factors. The interaction terms can have important implications in the model; they represent relationships between genes which cannot be captured in an ordinary analysis.
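A minimal sketch of checking whether a candidate gene module is coherently expressed in a data set, using a one-factor model fit by maximum likelihood (synthetic data; the dissertation's Bayesian factor models, including the Poisson-based model for RNA-Seq counts, are considerably richer):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
# Synthetic expression matrix: 200 samples x 30 genes in one candidate module
activity = rng.normal(size=(200, 1))              # shared latent module activity
loadings = rng.uniform(0.5, 1.5, size=(1, 30))    # per-gene loadings
expr = activity @ loadings + rng.normal(scale=0.7, size=(200, 30))

fa = FactorAnalysis(n_components=1).fit(expr)
shared = fa.components_[0] ** 2                   # variance carried by the common factor
total = shared + fa.noise_variance_
print("median share of variance on the common factor:",
      round(float(np.median(shared / total)), 2)) # high value -> coherent module
```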
85

Bayesian Sparse Learning for High Dimensional Data

Shi, Minghui January 2011 (has links)
In this thesis, we develop Bayesian sparse learning methods for high dimensional data analysis. There are two important topics related to the idea of sparse learning -- variable selection and factor analysis. We start with the Bayesian variable selection problem in regression models. One challenge in Bayesian variable selection is to search the huge model space adequately, while identifying high posterior probability regions. In the past decades, the main focus has been on the use of Markov chain Monte Carlo (MCMC) algorithms for these purposes. In the first part of this thesis, instead of using MCMC, we propose a new computational approach based on sequential Monte Carlo (SMC), which we refer to as particle stochastic search (PSS). We illustrate PSS through applications to linear regression and probit models.
Besides the Bayesian stochastic search algorithms, there is a rich literature on shrinkage and variable selection methods for high dimensional regression and classification with vector-valued parameters, such as the lasso (Tibshirani, 1996) and the relevance vector machine (Tipping, 2001). Compared with the Bayesian stochastic search algorithms, these methods do not account for model uncertainty but are more computationally efficient. In the second part of this thesis, we generalize these ideas to matrix-valued parameters and focus on developing an efficient variable selection method for multivariate regression. We propose a Bayesian shrinkage model (BSM) and an efficient algorithm for learning the associated parameters.
In the third part of this thesis, we focus on the topic of factor analysis, which has been widely used in unsupervised learning. One central problem in factor analysis is the determination of the number of latent factors. We propose Bayesian model selection criteria for selecting the number of latent factors based on a graphical factor model. As illustrated in Chapter 4, our proposed method achieves good performance in correctly selecting the number of factors in several different settings. As for applications, we implement the graphical factor model for several different purposes, such as covariance matrix estimation, latent factor regression, and classification.
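Particle stochastic search itself is a sequential Monte Carlo algorithm; as a loose stand-in, the toy sketch below performs a simple random add/drop search over variable-inclusion indicators, scoring models by BIC rather than by exact posterior probabilities (all data and names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[[0, 3, 7]] = [2.0, -1.5, 1.0]    # true sparse signal
y = X @ beta + rng.normal(size=n)

def bic(gamma):
    """BIC of the OLS fit using the predictors flagged in gamma."""
    k = int(gamma.sum())
    Xg = X[:, gamma.astype(bool)]
    fit = Xg @ np.linalg.lstsq(Xg, y, rcond=None)[0] if k else 0.0
    resid = y - fit
    return n * np.log(resid @ resid / n) + k * np.log(n)

gamma = np.zeros(p)
best_score, best_gamma = bic(gamma), gamma.copy()
for _ in range(2000):                 # random add/drop moves with occasional uphill steps
    prop = gamma.copy()
    j = rng.integers(p)
    prop[j] = 1 - prop[j]
    if bic(prop) < bic(gamma) or rng.random() < 0.05:
        gamma = prop
    if bic(gamma) < best_score:
        best_score, best_gamma = bic(gamma), gamma.copy()
print("selected predictors:", np.flatnonzero(best_gamma))
```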
86

Bayesian Semi-parametric Factor Models

Bhattacharya, Anirban January 2012 (has links)
Identifying a lower-dimensional latent space for the representation of high-dimensional observations is of significant importance in numerous biomedical and machine learning applications. In many such applications, it is now routine to collect data where the dimensionality of the outcomes is comparable to or even larger than the number of available observations. Motivated in particular by the problem of predicting the risk of impending diseases from massive gene expression and single nucleotide polymorphism profiles, this dissertation focuses on building parsimonious models and computational schemes for high-dimensional continuous and unordered categorical data, while also studying theoretical properties of the proposed methods. Sparse factor modeling is fast becoming a standard tool for parsimonious modeling of such massive-dimensional data, and the content of this thesis is specifically directed towards methodological and theoretical developments in Bayesian sparse factor models.
The first three chapters of the thesis study sparse factor models for high-dimensional continuous data. A class of shrinkage priors on factor loadings is introduced with attractive computational properties, with operating characteristics explored through a number of simulated and real data examples. In spite of the methodological advances over the past decade, theoretical justifications for high-dimensional factor models are scarce in the Bayesian literature. Part of the dissertation focuses on exploring the estimation of high-dimensional covariance matrices using a factor model and studying the rate of posterior contraction as both the sample size and the dimensionality increase.
To relax the usual assumption of a linear relationship among the latent and observed variables in a standard factor model, extensions to a non-linear latent factor model are also considered.
Although Gaussian latent factor models are routinely used for modeling dependence in continuous, binary, and ordered categorical data, they lead to challenging computation and complex modeling structures for unordered categorical variables. As an alternative, a novel class of simplex factor models for massive-dimensional and enormously sparse contingency table data is proposed in the second part of the thesis. An efficient MCMC scheme is developed for posterior computation, and the methods are applied to modeling dependence in nucleotide sequences and prediction from high-dimensional categorical features. Building on a connection between the proposed model and sparse tensor decompositions, we propose new classes of nonparametric Bayesian models for testing associations between a massive-dimensional vector of genetic markers and a phenotypical outcome.
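The generic sparse factor model behind this line of work can be sketched as follows (the priors and extensions developed in the thesis go beyond this outline):

$$ y_i = \Lambda \eta_i + \epsilon_i, \qquad \eta_i \sim \mathrm{N}_k(0, I_k), \qquad \epsilon_i \sim \mathrm{N}_p(0, \Sigma), \quad \Sigma = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_p^2), $$

so that the marginal covariance of the $p$-dimensional observations is $\Omega = \Lambda\Lambda^{\top} + \Sigma$ with a $p \times k$ loadings matrix $\Lambda$, $k \ll p$. One shrinkage prior of the kind referred to above shrinks higher-numbered columns of $\Lambda$ increasingly aggressively, e.g.

$$ \lambda_{jh} \mid \phi_{jh}, \tau_h \sim \mathrm{N}\!\left(0, \phi_{jh}^{-1}\tau_h^{-1}\right), \qquad \tau_h = \prod_{l=1}^{h} \delta_l, $$

with gamma priors on the local scales $\phi_{jh}$ and on the global increments $\delta_l$, so that the effective number of factors is learned from the data rather than fixed in advance.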
87

Monetary transmission mechanism in Taiwan- Application of FAVECM model.

Lin, An-ni 06 July 2010 (has links)
This study discusses the monetary policy transmission mechanism through its different channels. The analysis is conducted using generalized impulse response functions derived from a factor-augmented vector error correction (FAVECM) model. The FAVECM methodology, as developed by Lee (2009), extends the factor-augmented vector autoregression (FAVAR) model to analyze the long-run and short-run dynamics of non-stationary variables. This recently derived FAVECM model combines the advantages of the factor model and the VECM. The estimations are conducted using 174 macroeconomic time series at monthly frequency for the period January 2000 to September 2009. Results indicate that the interbank call loan rate, the deposit rate, and the prime lending rate are cointegrated, which provides sufficient evidence of the existence of the credit channel in the monetary transmission system. Other GIRF results are generally consistent with the expected effectiveness of monetary policy.
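A rough sketch of the two-step factor-augmented workflow described above (extract a few common factors from the large macroeconomic panel, then estimate a VECM on the observed rates augmented with those factors) is given below with synthetic data; it illustrates the general approach, not Lee's (2009) estimator:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(3)
# Synthetic stand-in for the data: 117 months (2000:1 to 2009:9) of 174 macro series
panel = pd.DataFrame(rng.normal(size=(117, 174)).cumsum(axis=0))
rates = pd.DataFrame(rng.normal(scale=0.1, size=(117, 3)).cumsum(axis=0),
                     columns=["call_rate", "deposit_rate", "prime_rate"])

# Step 1: summarize the large panel with a handful of principal-component factors
factors = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(panel))

# Step 2: estimate a VECM on the interest rates augmented with the estimated factors
data = pd.concat([rates, pd.DataFrame(factors, columns=["f1", "f2", "f3"])], axis=1)
res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(res.beta.round(2))   # cointegrating relation(s), e.g. linking the three rates
```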
88

Purchasing power parity and exchange rate transmission channel analysis - Application of FAVECM

Pan, Ying-ying 15 July 2010 (has links)
This study revisits Purchasing Power Parity (PPP) and discusses the monetary policy transmission mechanism through the exchange rate channel. The analysis is conducted using generalized impulse response functions derived from a Factor-Augmented Vector Error Correction (FAVECM) model. The FAVECM methodology, as developed by Lee (2009), extends the Factor-Augmented Vector Autoregression (FAVAR) model to analyze the long-run and short-run dynamics of non-stationary variables. This recently derived FAVECM model combines the advantages of the factor model and the VECM. The estimations are conducted using 157 macroeconomic time series at monthly frequency for the period January 2000 to September 2009. Results indicate that PPP holds in Taiwan and that devaluation has an expansionary effect. Other GIRF results are generally consistent with the expected effectiveness of the exchange rate channel.
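The generalized impulse responses used here are conventionally computed following Pesaran and Shin (1998); a sketch of that standard formula (not necessarily the exact implementation in the thesis) is

$$ \psi_j^{g}(n) = \sigma_{jj}^{-1/2}\,\Phi_n\,\Sigma\,e_j, \qquad n = 0, 1, 2, \ldots, $$

where $\Phi_n$ are the moving-average coefficient matrices of the system, $\Sigma = (\sigma_{ij})$ is the residual covariance matrix, and $e_j$ is a selection vector for the shocked equation; unlike orthogonalized impulse responses, this measure does not depend on the ordering of the variables.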
89

The Application of Multi-factor Model on Enhanced electronic index fund construction

Lu, Shih-han 11 February 2011 (has links)
In Taiwan, the trading value of electronics-related stocks makes up over 60% of the Taiwan stock market and has grown gradually to a recent high of 70.03% in December 2009. The high correlation between the TAIEX and the TAIEX Electronic Index motivates us to construct an enhanced fund that aims to outperform the TAIEX Electronic Index with risk similar to that of the index. We are keen to investigate, through our empirical study, whether active management earns a higher return than passive management. This paper presents the combined effect of a multi-factor model for the electronic sector and illiquidity, where expected returns are increasing in illiquidity. The first major outcome is that we construct a single-industry Multi-Factor Model (MFM) and test its predictive ability. The other is that we form a proxy for illiquidity and incorporate it into the multi-factor model using Principal Component Analysis (PCA). The objective of this study is to discover mispriced stocks and make adjustments to build an enhanced fund targeting a 3% tracking error. We find that the most stable factors, based on cumulative return, for forecasting the electronic sector are Leverage, Value3, ValueToGrowth, and EarningQulity, respectively. The average explanatory power of the electronic multi-factor model (ELE-MFM) is around 52.4% over the sample from 2004/1 to 2009/12. For the illiquidity measure, we run cross-sectional regressions of stock returns on illiquidity and other stock characteristics over the period 2000/1 to 2009/12, and find that the evidence for the illiquidity effect is significant mainly in sub-periods. Combining the electronic multi-factor model and the illiquidity measure into a score given by the first principal component, we rank stocks accordingly. With appropriate constraint rules added to our quadratic programming, the portfolio built with the combined multi-factor and liquidity measures shows an IR of 0.69, a TE of 3%, and an Alpha of 2.04% over our sample period. The satisfactory results of the electronic Multi-Factor Model (MFM) and the illiquidity measure support the enhanced-indexing approach.
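A minimal sketch of the PCA step that combines the multi-factor score with the illiquidity proxy into a single ranking signal (hypothetical column names and data; the thesis's factor construction and quadratic-programming portfolio step are not reproduced here):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Hypothetical cross-section of electronic-sector stocks at one rebalancing date
n = 150
df = pd.DataFrame({
    "mfm_score": rng.normal(size=n),            # composite multi-factor (MFM) score
    "illiq": rng.lognormal(sigma=1.0, size=n),  # Amihud-style illiquidity proxy
})

# Standardize both signals, then combine them via the first principal component
z = (df - df.mean()) / df.std(ddof=0)
pc1 = PCA(n_components=1).fit_transform(z).ravel()
sign = np.sign(np.corrcoef(pc1, z["mfm_score"])[0, 1])   # orient PC1 with the MFM signal
df["combined_score"] = sign * pc1
df["rank"] = df["combined_score"].rank(ascending=False)  # top ranks are over-weight candidates
print(df.sort_values("rank").head())
```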
90

A Sector-Specific Multi-Factor Alpha Model- With Application in Taiwan Stock Market

Chen, Ting-Hsuan 27 June 2011 (has links)
This study constructs a quantitative stock selection model across multiple sectors using a Bayesian method. It employs factors from the Taiwan stock market that can explain stock returns. Under this structure, each sector, which may have a different set of significant factors, is fitted with its own sub-model. The factors are converted into alpha scores and used for stock selection. The integration of both intra- and inter-sector alpha scores into sector-specific composite alpha scores is therefore an important concept in this study. Furthermore, an enhanced index fund is built based on the model and compared with the benchmark to illustrate the power of the model. Once the contents of a portfolio are decided, the model can provide a stock selection criterion based on the predictive power for stock returns. Finally, the results demonstrate that this model is practical and flexible for local stock portfolio analysis.
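A simplified sketch of combining intra- and inter-sector alpha scores into one selection score (the factors, weights, and equal-weight combination below are hypothetical; the thesis derives the sector-specific weights with a Bayesian method):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
# Hypothetical cross-section: sector label plus two factor exposures per stock
df = pd.DataFrame({
    "sector": rng.choice(["electronics", "financials", "plastics"], size=300),
    "value": rng.normal(size=300),
    "momentum": rng.normal(size=300),
})
weights = {"value": 0.6, "momentum": 0.4}        # hypothetical factor weights

zscore = lambda s: (s - s.mean()) / s.std(ddof=0)
# Intra-sector alpha: score each stock against its own sector peers
intra = sum(w * df.groupby("sector")[f].transform(zscore) for f, w in weights.items())
# Inter-sector alpha: score the same exposures across the whole market
inter = sum(w * zscore(df[f]) for f, w in weights.items())
# Sector-specific integrated alpha score used for stock selection
df["alpha_score"] = 0.5 * intra + 0.5 * inter
print(df.nlargest(10, "alpha_score"))
```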
