21 |
Exploring language bias in the NEO-PI-R
Franklin, Dee Ross, 08 March 2010 (has links)
The study explores language bias in the NEO-PI-R both quantitatively and qualitatively. A sample of 28 postgraduate psychology student volunteers completed a questionnaire containing the NEO-PI-R and two open-ended questions about the instrument. Responses were then compared across English first-language and second-language speakers to explore issues of bias. Reliability of the NEO-PI-R appeared robust at the domain level. Facet reliability, while appropriate for the most part, still yielded low alpha coefficients for the Excitement-seeking, Actions, Values and Straightforwardness facets. ANOVAs at the domain and facet scale levels indicated no significant differences across home language. However, ANOVAs at the item level flagged 33 problematic items in total: 12 items significant at the 5% level and 21 items at the 10% level. These items were primarily from the N and E domains. Thematic analysis of the open-ended questions indicated that 26 items were difficult to understand and/or inappropriate for the South African context; these were primarily from the E and A domains. Two focus groups were then formed from the original sample of 28 volunteers. The focus groups explored several qualitative issues, including concepts of personality, language and culture, and the applicability of the NEO-PI-R for South African user groups. Participants perceived American socio-cultural references in the instrument's language, grammar and socio-cultural context. The results thus show evidence of language bias in the NEO-PI-R and identify particular aspects and items of the instrument that are especially problematic for a South African user group. The present study suggests that the NEO-PI-R would need to be revised for the South African context by changing the problematic items.
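The item-level screening described in this abstract amounts to a one-way ANOVA per item across home-language groups, flagged at the 5% and 10% thresholds. The sketch below illustrates the idea for a single item; the group sizes and Likert responses are simulated, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Simulated 5-point Likert responses to one NEO-PI-R item for two
# home-language groups (group sizes are illustrative, not the study's).
first_lang = rng.integers(1, 6, size=15)
second_lang = rng.integers(1, 6, size=13)

f_stat, p_value = f_oneway(first_lang, second_lang)

# Flag the item at the two thresholds used in the study.
sig_05 = p_value < 0.05
sig_10 = p_value < 0.10
print(f"F = {f_stat:.3f}, p = {p_value:.3f}, flag@5%={sig_05}, flag@10%={sig_10}")
```

In the study this test would be repeated over all items, with the per-item flags tallied by domain.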
|
22 |
Five Factor Personality Traits in Schizophrenics with a History of Violent Behavior
Lust, Ashley, 01 January 2017 (has links)
The diagnosis of schizophrenia has been associated with increased risk of violence and aggression. However, the extent of this association in relation to the personality traits displayed by individuals diagnosed with schizophrenia has not been fully investigated. This lack of research has made it impossible to determine why only some individuals with schizophrenia display violent tendencies while others do not. Guided by Costa and McCrae's five-factor model of personality and Eysenck's theory of personality and crime, the purpose of this study was to investigate the relationship between the five personality traits and the display of violence among individuals with schizophrenia, as well as the predictability of violence. A personality assessment was used to explore the personality of the participants (n = 111), obtained by convenience sampling of data originally collected by Ohi, Shimada, and Kawasaki. Each participant had been diagnosed with schizophrenia by at least two clinical physicians. One-way analyses of variance were performed for each of the five personality traits to identify any relationships. A binary logistic regression model was fitted to assess the predictability of violent behavior among individuals with schizophrenia. Results confirmed previous findings of a statistically significant relationship between neuroticism and violence. Adding to the research, neuroticism also contributed significantly to the prediction of violence among individuals with schizophrenia. Positive social change arising from these findings includes enabling practitioners to design specific treatment options for individuals diagnosed with schizophrenia based on personality.
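The prediction step described above is a binary logistic regression of a violence indicator on the five trait scores. A minimal sketch on simulated data (the sample of 111, the T-score scaling, and the dependence of violence on the first column, standing in for neuroticism, are assumptions added to mirror the reported result, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated five-factor scores (N, E, O, A, C) for 111 participants
# and a binary violence indicator. All values are illustrative.
n = 111
traits = rng.normal(50, 10, size=(n, 5))          # T-score-like scales
# Make the indicator depend mainly on column 0 (neuroticism),
# mirroring the study's reported finding.
logits = 0.1 * (traits[:, 0] - 50) - 0.5
violent = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(traits, violent)
probs = model.predict_proba(traits)[:, 1]         # fitted violence risk
print("coefficients:", model.coef_.round(3))
```

The fitted coefficients play the role of the study's significance tests: a reliably nonzero coefficient on the neuroticism column is what "significant contribution to the prediction of violence" means operationally.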
|
23 |
Youth Character Strengths, Peer Victimization, and Well-Being: Understanding Associations between Positive Traits, Social Experiences, and Positive Psychological Outcomes
Frank, Michael James, 31 December 2014 (has links)
The advent of positive psychology has increased awareness of factors that lead individuals to thrive in life, allowing for a more comprehensive model of mental health service delivery. However, while measurement and understanding of character strengths and well-being have improved over the last decade, the interaction of these factors with social risk factors is not entirely understood. The current study analyzed an archival dataset consisting of self-report data from 425 high school students to examine the extent to which students' specific character strengths (i.e., social competence, self-regulation, responsibility, and empathy) are associated with positive psychological outcomes (i.e., gratitude, life satisfaction, and hope), and moderate the relationships between positive psychological outcomes and relational and overt peer victimization. All measured character strengths were positively associated with life satisfaction and hope except for empathy, which was negatively associated with both in multivariate analyses. Social competence and self-regulation were positively associated with gratitude. Relational victimization (but not overt victimization) was inversely associated with life satisfaction and gratitude, and indirectly predicted hope as mediated by gratitude. Gratitude and hope predicted life satisfaction in both models, and partially mediated the effects of character strengths and relational victimization. For overt victimization, social competence served as a protective factor and self-regulation as a risk factor for gratitude. For relational victimization, self-regulation served as a protective factor for gratitude. Implications for research and practice are discussed.
|
24 |
The establishment of implicit perspectives of personality in Tshivenda-speaking South Africans / R.T. Ntsieni
Ntsieni, Rejoyce Talifhani, January 2006 (has links)
Thesis (M.A. (Industrial Psychology))--North-West University, Potchefstroom Campus, 2007.
|
25 |
Essays in Dynamic Macroeconometrics
Bańbura, Marta, 26 June 2009 (has links)
The thesis contains four essays covering topics in the field of macroeconomic forecasting.
The first two chapters consider factor models in the context of real-time forecasting with many indicators. Using a large number of predictors offers an opportunity to exploit a rich information set and is also considered a more robust approach in the presence of instabilities. On the other hand, it poses the challenge of how to extract the relevant information in a parsimonious way. Recent research shows that factor models provide an answer to this problem. The fundamental assumption underlying these models is that most of the co-movement of the variables in a given dataset can be summarized by only a few latent variables, the factors. This assumption seems to be warranted in the case of macroeconomic and financial data. Important theoretical foundations for large factor models were laid by Forni, Hallin, Lippi and Reichlin (2000) and Stock and Watson (2002). Since then, different versions of factor models have been applied to forecasting, structural analysis and the construction of economic activity indicators. Recently, Giannone, Reichlin and Small (2008) have used a factor model to produce projections of U.S. GDP in the presence of a real-time data flow. They propose a framework that can cope with large datasets characterised by staggered and nonsynchronous data releases (sometimes referred to as a “ragged edge”). This is relevant as, in practice, important indicators like GDP are released with a substantial delay and, in the meantime, more timely variables can be used to assess the current state of the economy.
The first chapter of the thesis, entitled “A look into the factor model black box: publication lags and the role of hard and soft data in forecasting GDP”, is based on joint work with Gerhard Rünstler and applies the framework of Giannone, Reichlin and Small (2008) to the euro area. In particular, we are interested in the role of “soft” and “hard” data in the GDP forecast and how it relates to their timeliness.
The soft data include surveys and financial indicators and reflect market expectations; they are usually promptly available. In contrast, the hard indicators of real activity directly measure certain components of GDP (e.g. industrial production) and are published with a significant delay. We propose several measures to assess the role of individual series or groups of series in the forecast while taking into account their respective publication lags. We find that, once their timeliness is properly accounted for, surveys and financial data contain important information for the GDP forecasts beyond the monthly real activity measures.
The second chapter entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data” is based on joint work with Michele Modugno. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data. In contrast to Giannone, Reichlin and Small (2008), we can handle datasets that are not only characterised by a “ragged edge”, but can also include e.g. mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area and other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. Applied to small factor models by e.g. Geweke (1977), Sargent and Sims (1977) and Watson and Engle (1983), it has been shown by Doz, Giannone and Reichlin (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of direct likelihood maximisation for large cross-sections, Doz, Giannone and Reichlin (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm (used for the small model by Watson and Engle, 1983). Our contribution is to modify the EM steps for the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases. We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data. In particular, we are able to examine the effect of quarterly variables and short-history monthly series, like the Purchasing Managers' surveys, on the forecast.
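The EM idea behind this chapter can be illustrated with a deliberately simplified one-factor version: alternate between extracting a factor from a completed panel and re-filling the missing cells from the factor fit. This is a toy Stock-Watson-style sketch on simulated data, not the authors' state-space implementation, and all dimensions and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated panel: T periods, N series driven by one common factor,
# with entries deleted at random ("arbitrary pattern of missing data").
T, N = 200, 20
factor = np.cumsum(rng.normal(size=T))            # persistent common factor
loadings = rng.normal(1.0, 0.3, size=N)
X = np.outer(factor, loadings) + rng.normal(scale=0.5, size=(T, N))
mask = rng.random((T, N)) < 0.15                  # ~15% of cells missing
X_obs = np.where(mask, np.nan, X)

# EM-style iteration: impute, extract the first principal component,
# re-impute the missing cells from the factor fit, repeat.
Z = np.where(mask, 0.0, X_obs)                    # initial crude imputation
for _ in range(50):
    Z_c = Z - Z.mean(axis=0)
    U, s, Vt = np.linalg.svd(Z_c, full_matrices=False)
    fhat = U[:, 0] * s[0]                         # estimated factor
    lam = Vt[0]                                   # estimated loadings
    fitted = np.outer(fhat, lam) + Z.mean(axis=0)
    Z = np.where(mask, fitted, X_obs)             # fill only the missing cells

corr = abs(np.corrcoef(fhat, factor)[0, 1])       # sign is not identified
print(f"correlation with true factor: {corr:.3f}")
```

The chapter's actual contribution, handling mixed frequencies and serially correlated idiosyncratic components inside a likelihood-based EM, goes well beyond this sketch, but the impute/re-estimate loop is the common core.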
The third chapter is entitled “Large Bayesian VARs” and is based on joint work with Domenico Giannone and Lucrezia Reichlin. It proposes an alternative approach to factor models for dealing with the curse of dimensionality, namely Bayesian shrinkage. We study Vector Autoregressions (VARs), which have the advantage over factor models that they allow structural analysis in a natural way. We consider systems including more than 100 variables; this is the first application in the literature to estimate a VAR of this size. Apart from the forecast considerations argued above, the size of the information set can also be relevant for structural analysis, see e.g. Bernanke, Boivin and Eliasz (2005), Giannone and Reichlin (2006) or Christiano, Eichenbaum and Evans (1999) for a discussion. In addition, many problems may require the study of the joint dynamics of many variables: many countries, sectors or regions. While we use standard priors as proposed by Litterman (1986), an important novelty of the work is that we set the overall tightness of the prior in relation to the model size. In this we follow the recommendation of De Mol, Giannone and Reichlin (2008), who study the case of Bayesian regressions. They show that with increasing model size one should shrink more to avoid overfitting, but that when data are collinear one is still able to extract the relevant sample information. We apply this principle to VARs. We compare the large model with smaller systems in terms of forecasting performance and of the structural analysis of the effect of a monetary policy shock. The results show that a standard Bayesian VAR model is an appropriate tool for large panels of data once the degree of shrinkage is set in relation to the model size.
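The size-dependent shrinkage principle can be sketched with a ridge-penalized one-lag VAR, a crude stand-in for the Minnesota-style prior of the chapter (the tightness rule `0.5 * n` and all data below are purely illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def bvar_ridge(Y, lam):
    """One-lag VAR estimated equation by equation with a ridge penalty,
    a simple stand-in for Minnesota-style shrinkage toward zero."""
    X, Z = Y[:-1], Y[1:]                       # regressors and targets
    k = X.shape[1]
    A = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Z)
    return A                                   # k x k coefficient matrix

# Simulated panel of n variables; the chapter's point is that the
# shrinkage parameter should grow with the system size n.
for n in (5, 20, 100):
    Y = rng.normal(size=(120, n))
    lam = 0.5 * n                              # illustrative size-dependent tightness
    A = bvar_ridge(Y, lam)
    print(f"n={n:3d}  mean |coef| = {np.abs(A).mean():.4f}")
```

With more variables and a fixed sample, the unpenalized estimates would overfit; tightening the penalty with `n` keeps the effective number of parameters under control, which is the De Mol, Giannone and Reichlin logic applied here.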
The fourth chapter entitled “Forecasting euro area inflation with wavelets: extracting information from real activity and money at different scales” proposes a framework for exploiting relationships between variables at different frequency bands in the context of forecasting. This work is motivated by the ongoing debate on whether money provides a reliable signal for future price developments. The empirical evidence on the leading role of money for inflation in an out-of-sample forecast framework is not very strong, see e.g. Lenza (2006) or Fisher, Lenza, Pill and Reichlin (2008). At the same time, e.g. Gerlach (2003) and Assenmacher-Wesche and Gerlach (2007, 2008) argue that money and output could affect prices at different frequencies; their analysis, however, is performed in-sample. This chapter investigates empirically which frequency bands, and of which variables, are most relevant for the out-of-sample forecast of inflation when information from prices, money and real activity is considered. A wavelet transform is applied to extract the different frequency components of a series; it provides a simple and intuitive framework for band-pass filtering and allows a decomposition of the series into different frequency bands. Its application in multivariate out-of-sample forecasting is novel in the literature. The results indicate that, indeed, different scales of money, prices and GDP can be relevant for the inflation forecast.
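The band-by-band decomposition can be illustrated with a Haar multiresolution analysis, the simplest wavelet (the chapter does not specify this particular wavelet; the series below is simulated, not euro-area inflation):

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_mra(x, levels):
    """Haar multiresolution analysis: split x (length a multiple of
    2**levels) into detail bands at successively lower frequencies plus
    a smooth trend. The returned components sum back to x exactly."""
    approx = x.astype(float)
    bands = []
    size = 2
    for _ in range(levels):
        block = approx.reshape(-1, size).mean(axis=1)  # coarser block means
        coarser = np.repeat(block, size)               # stretched to full length
        bands.append(approx - coarser)                 # detail at this scale
        approx = coarser
        size *= 2
    bands.append(approx)                               # smooth / trend component
    return bands

# Simulated monthly series: slow trend + business-cycle swing + noise.
t = np.arange(256)
series = 0.01 * t + np.sin(2 * np.pi * t / 48) + 0.3 * rng.normal(size=t.size)

bands = haar_mra(series, levels=4)
recon = np.sum(bands, axis=0)
print("bands:", len(bands), "max reconstruction error:", np.abs(recon - series).max())
```

Each band could then enter a forecast regression as a separate regressor, which is the sense in which the chapter asks "which frequency bands of which variables" matter for inflation.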
|
26 |
Credit Risk in the Swedish Economy – A quantitative study of default rates
Huseynov, Ruslan, January 2012 (has links)
The aim of this research is to produce a model for estimating credit risk at the aggregate and sector levels of the Swedish economy in response to key macroeconomic variables. To estimate the credit risk models for the Swedish economy, one-factor models were used on data covering the period from 2003 to 2011. One-factor model estimates for the sectors facilitate a comparison of the determinants of default rates between sectors. The analysis starts with the estimation of the credit risk model at the aggregate level and continues with the estimation of models for individual sectors. Ten sectors are analyzed, and default rate models are produced for all of them. Furthermore, the paper presents some examples of applying the estimated models to macro stress testing. The findings show that in the transport sector and the sector "others", the most significant macroeconomic indicators were GDP, interest rates and repo rates, while in all other sectors GDP, interest rates and inflation rates were the most significant. All coefficients were significant at the 5% level, both at the aggregate and at the sector level. Interest rates were positively related to default rates, while GDP and inflation rates showed the opposite relation. Cross-sector comparison indicated that, relative to other sectors, default rates in the financial sector depended strongly on GDP, while in the construction sector they depended only weakly on inflation rates. In addition, credit risk varied between sectors. In the education sector and the sector "others", default rates were low, fluctuating between 0 and 0.05%. In contrast, in the manufacturing, wholesale, transportation and finance sectors default rates were much higher, fluctuating between 0.03% and 0.16%.
Finally, the estimated models were used for a sensitivity analysis of default rates by applying shocks to the independent variables. These calculations showed that default rates in the financial activities sector were the most sensitive to a GDP shock, while default rates in the construction sector were the least sensitive to shocks in interest rates and inflation rates. To conclude, the results of this thesis help clarify the relationship between credit risk and macroeconomic indicators. The research shows how macroeconomic indicators influence default rates in the Swedish economy at both the aggregate and the sector level. The estimated models can be used for default rate prediction or stress testing.
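A reduced-form sketch of this kind of sector default-rate model and interest-rate stress test, on simulated data (the logit specification, the macro coefficients, and the shock size are illustrative assumptions, not the thesis's estimates):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated quarterly data, 2003-2011 style: GDP growth, interest rate,
# inflation, and a sector default rate. All values are illustrative.
T = 36
gdp, rate, infl = rng.normal(2, 1, T), rng.normal(3, 1, T), rng.normal(2, 0.5, T)
true_logit = -6 - 0.3 * gdp + 0.4 * rate - 0.2 * infl + 0.1 * rng.normal(size=T)
default_rate = 1 / (1 + np.exp(-true_logit))

# Estimate the macro-default link by OLS on the logit of the default rate,
# a common reduced form for one-factor credit-risk models.
X = np.column_stack([np.ones(T), gdp, rate, infl])
y = np.log(default_rate / (1 - default_rate))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predicted_dr(g, r, p):
    z = beta @ np.array([1.0, g, r, p])
    return 1 / (1 + np.exp(-z))

base = predicted_dr(2.0, 3.0, 2.0)
shocked = predicted_dr(2.0, 5.0, 2.0)          # +2pp interest-rate shock
print(f"baseline {base:.4%} -> shocked {shocked:.4%}")
```

The stress test is simply the fitted model evaluated at a shocked macro scenario; comparing the jump across sector models is what the thesis's sensitivity analysis does.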
|
27 |
The Market Sentiment-Adjusted Strategy under Stock Selecting of MFM Model
Lee, Chun-Yi, 25 July 2010 (has links)
The objective of this study is to uncover the non-linear effect of market sentiment on characteristic factor returns. We run quantile regressions to extract useful information and design an effective strategy. Based on the quantitative investment method, using the platform of the Multi-Factor Model (MFM), we attempt to construct a portfolio and enhance its performance. If the market-sentiment variables improve performance, we can conclude that some characteristic factors in a high-sentiment period will perform better or worse in the next period.
What is market or investor sentiment? It remains an open question in finance, with no common definition or consensus so far. We therefore collect indirect proxies, such as transaction data, price and volume data, and indicators used in other studies, such as the VIX and the put/call ratio. We then test the proxy variables independently and obtain some interesting results. Market turnover, the ratio of margin lending on funds to margin lending on securities, and the growth rate of the consumer confidence index have significant effects on some of the characteristic factors. This suggests that some market-sentiment variables can influence stocks with certain characteristics, and that the factor-timing approach can improve portfolio performance, as measured by the information ratio.
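Quantile regression at different tail probabilities can reveal exactly the kind of non-linear sentiment effect described here: the sentiment-return slope differing across the return distribution. A minimal sketch that fits the pinball (check) loss directly, on simulated sentiment and factor-return data (the heteroskedastic data-generating process is an assumption for illustration, not the study's data):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Simulated data: a standardized sentiment proxy (e.g. market turnover)
# and next-period returns of a characteristic factor, with dispersion
# that widens when sentiment is high to mimic a non-linear effect.
n = 500
sentiment = rng.normal(size=n)
factor_ret = 0.2 * sentiment + (1 + 0.5 * np.maximum(sentiment, 0)) * rng.normal(size=n)

def quantile_fit(x, y, q):
    """Linear quantile regression by minimizing the pinball loss."""
    def loss(b):
        resid = y - (b[0] + b[1] * x)
        return np.mean(np.maximum(q * resid, (q - 1) * resid))
    return minimize(loss, x0=np.zeros(2), method="Nelder-Mead").x

slope_10 = quantile_fit(sentiment, factor_ret, 0.1)[1]
slope_90 = quantile_fit(sentiment, factor_ret, 0.9)[1]
print(f"slope at 10th percentile: {slope_10:.3f}, at 90th: {slope_90:.3f}")
```

A mean regression would average these two slopes away; the gap between the tail slopes is the kind of information the study exploits for factor timing.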
|
28 |
Multi-Factor Model and Enhanced Index Fund Performance Analysis in China
Lee, Cheng-ju, 27 July 2010 (has links)
In recent years, economic exchanges between China and Taiwan have become more frequent; the Chinese financial market is therefore a market we should research and participate in actively.
This study follows the Barra Multi-Factor Model construction process to build a China Multi-Factor Model. We then apply the MFM to establish a Shanghai Stock Exchange 50 enhanced index fund.
The first objective of this study is to discover significant factors which can explain the excess returns of securities. The second is to use those significant factors to forecast stock returns and demonstrate the alpha effect in an enhanced index fund via a new weight-allocation model developed in this study.
The results show eight significant factors: Earning Quality, Efficiency, Growth, Momentum, Size, Trading Activity, Value, and Volatility. The performance of the enhanced index fund is better than that of the benchmark: the information ratio is 0.86, and the turnover rate of 213% is acceptable.
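The information ratio reported here is, in the usual definition, the mean active return over the tracking error, annualized. A minimal computation on simulated monthly returns (the return series and their parameters are illustrative, not the fund's):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated monthly returns of an enhanced index fund and its benchmark.
months = 36
bench = rng.normal(0.008, 0.04, months)
fund = bench + rng.normal(0.002, 0.01, months)    # small, noisy active return

active = fund - bench
tracking_error = active.std(ddof=1)               # monthly tracking error
ir = active.mean() / tracking_error * np.sqrt(12) # annualized information ratio
print(f"annualized IR = {ir:.2f}")
```

An IR of 0.86, as reported, means the fund earned active return at almost 0.9 units per unit of tracking risk taken against the SSE 50 benchmark.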
|
29 |
Enhanced Index Fund Performance Analysis under Multi-Factor Alpha Model
Hsu, Yu-hsiang, 28 July 2010 (has links)
The objective of this study is to build a complete process of quantitative stock-selection model construction that combines a Multi-Factor Model and information analysis. Based on the quantitative stock selection model, we construct an enhanced index fund that uses the Taiwan 50 index as its benchmark.
Stock prices change for a multitude of reasons, and these reasons may change over time. In this study, we use a Multi-Factor Model and information analysis to find the relationship between stock price behavior and a factor's condition. We can use this relationship as a basis for stock selection.
Moreover, the purpose of this study is to construct an enhanced index fund, hence we need to control the tracking error. We use an intuitive portfolio construction method, the original weight retention rate of the benchmark, to control tracking error. In addition, the turnover rate of a portfolio is also a significant problem, as it may cause the profit of a portfolio to decrease significantly. In this study, we use the smoothing alpha score method to control the turnover rate of our portfolio.
|
30 |
Synchronization of Economic Fluctuations across Countries---The Application of the Dynamic Factor Model in State Space
Wang, Bao-Huei, 27 July 2011 (has links)
In this thesis, we use the dynamic factor model in state space form proposed by Stock and Watson (1989) to estimate the fluctuations of a common factor from a large set of macroeconomic variables. In addition, combining this with the two-stage dynamic factor analysis proposed by Aruoba et al. (2010), we examine whether the correlation of economic fluctuations across countries changes across different time periods.
The thesis reaches three conclusions. First, the correlations of economic fluctuations across countries are significant, reflecting regional economic ties. Second, global or regional common shocks increase the correlations of economic fluctuations across countries. Finally, developed and emerging countries responded differently during the Financial Tsunami of 2008 to 2009.
|