81 |
A Study of Four Statistics, Used in Analysis of Contingency Tables, in the Presence of Low Expected Frequencies. Post, Jane R. 01 May 1975 (has links)
Four statistics used for the analysis of categorical data were observed in the presence of many zero cell frequencies in two-way classification contingency tables. The purpose of this study was to determine the effect of many zero cell frequencies upon the distribution properties of each of the four statistics studied. It was found that Light and Margolin's C and Pearson's Chi-square statistic closely approximated the Chi-square distribution as long as less than one-third of the table cells were empty. The mean and variance of Kullback's 2I were found to be larger than the expected values in the presence of few empty cells, while the mean of 2I became small in the presence of large numbers of empty cells. Ku's corrected 2I statistic was found, in the presence of many zero cell frequencies, to have a much larger mean value than would be expected under a Chi-square distribution. Kullback's 2I demonstrated a peculiar distributional change in the presence of large numbers of zero cell frequencies: 2I first increased, then decreased in average value.
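As a rough numerical companion to the comparison above (not taken from the thesis), the sketch below computes Pearson's Chi-square and Kullback's 2I (the likelihood-ratio statistic) for a small two-way table in which half of the cells are empty; the zero cells simply contribute nothing to the 2I sum. The table values are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def pearson_chi2_and_2I(table):
    """Pearson's chi-square and Kullback's 2I (likelihood-ratio) statistics
    for a two-way contingency table; zero cells contribute 0 to the 2I sum."""
    table = np.asarray(table, dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    chi2_stat = ((table - expected) ** 2 / expected).sum()
    nonzero = table > 0                              # treat 0 * ln(0/E) as 0
    two_i = 2.0 * (table[nonzero] * np.log(table[nonzero] / expected[nonzero])).sum()
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return chi2_stat, two_i, df

# Hypothetical sparse table: half of the cells are empty
obs = [[5, 0, 2, 0],
       [0, 7, 0, 3],
       [4, 0, 6, 0]]
x2, g2, df = pearson_chi2_and_2I(obs)
print(x2, g2, chi2.sf(x2, df), chi2.sf(g2, df))
```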
|
82 |
Elaborations on Multiattribute Utility Theory Dominance. Vairo, David L. 01 January 2019 (has links)
Major Director: Dr. Jason Merrick, Supply Chain Management and Analytics
Multiattribute Utility Theory (MAUT) is used to structure decisions with more than one factor (attribute) in play. These decisions become complex when the attributes are dependent on one another. Where linear modeling is concerned with how factors are directly related or correlated with each other, MAUT is concerned with how a decision maker feels about the attributes. This means that direct elicitation of value or utility functions is required. This dissertation focuses on expanding the types of dominance forms used within MAUT. These forms reduce the direct elicitation needed to help structure decisions. Out of this work comes support for current criticisms of gain/loss separability, which is assumed as part of Prospect Theory. As such, an alternative to Prospect Theory is presented, derived from within MAUT, by modeling the probability that an event occurs as an attribute.
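To make the setting concrete, here is a minimal additive MAUT sketch. It is illustrative only: the dissertation concerns dominance forms and dependent attributes rather than this particular functional form, and the attribute names, weights, and single-attribute utility functions below are hypothetical.

```python
from typing import Callable, Dict

def additive_utility(levels: Dict[str, float],
                     utils: Dict[str, Callable[[float], float]],
                     weights: Dict[str, float]) -> float:
    """U(x) = sum_i w_i * u_i(x_i), assuming additive independence of attributes."""
    return sum(weights[a] * utils[a](levels[a]) for a in levels)

# Hypothetical two-attribute example: cost (lower is better) and quality (0-10 scale)
utils = {"cost": lambda c: 1 - c / 100.0, "quality": lambda q: q / 10.0}
weights = {"cost": 0.4, "quality": 0.6}

alt_a = {"cost": 60.0, "quality": 8.0}
alt_b = {"cost": 40.0, "quality": 5.0}
print(additive_utility(alt_a, utils, weights), additive_utility(alt_b, utils, weights))
```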
|
83 |
Pricing Strategy and the Formation and Evolution of Reference Price Perceptions in New Product Categories. Lowe, Benjamin, n/a January 2006 (has links)
This study examines how pioneer and follower pricing strategies affect the formation and evolution of reference price perceptions in new product categories. It contributes to our understanding of pricing new products by integrating two important research streams in the field of marketing: reference price theory and the theory of pioneer brand advantage. This is the first research to address reference price effects for radically new product categories; prior research has focused solely on products in existing categories, typically fast moving consumer goods. Using three experiments to causally establish the consequences of pioneer and follower pricing strategies on consumer perceptions, three critical research issues are addressed for the first time, consistent with calls for research in the literature:

1. Which reference price do consumers utilise in new product categories?
2. What is the role of consumer confidence in reference price for new product categories?
3. How do reference price perceptions form and evolve as a result of pioneer and follower pricing strategy?

A frequently cited issue in the literature is the fragmented operationalisation of reference price perceptions. With little theory to guide researchers in terms of which measures should be used, experiment 1 provides new theory, finding, as hypothesised, that fair price perceptions rather than expected price perceptions are more likely to be evoked by consumers for new product categories. Experiment 1 also finds that using consumers' confidence in their reference price beliefs as an additional explanatory variable does not improve on current reference price models. Overconfidence, a robust consumer behavioural phenomenon (Alba and Hutchinson 2000), might explain this result.

Prior research has made several contributions to understanding reference price perceptions in established product categories, but little is known about how these perceptions initially form and evolve. Experiments 2 and 3 address this gap by simulating an emerging market and examining the role of pioneership in shaping reference price perceptions. Experiment 2 found that the pioneer, due to its perceptual prominence, is able to define the reference price and subsequently define perceptions of value. That is, the value consumers place on a product and their intentions to purchase it are about the same whether the pioneer follows a penetration (initial low price) or skimming (initial high price) strategy. Experiment 3 extends experiment 2 by examining what happens in the emerging market when a follower brand enters. The follower enters at a large or small discount to the pioneer, and the pioneer completes its penetration or skimming strategy, converging to a 'regular' price. As predicted, the pioneer's initial price frames subsequent price and value perceptions, signifying the importance of the pioneer as a referent brand. Lower initial prices erode value perceptions, whereas higher initial prices substantiate value perceptions. The follower's pricing strategy does not have as much influence as the pioneer's.

Other findings from experiment 3 relate to reference price theory in general. Specifically, there was strong evidence of an averaging process when forming reference prices. This adds theory to the measurement debate about operationalising reference price as some past price, such as the last price paid, or some average of past prices. Experiment 3 also provides a further measurement contribution by supporting the use of brand-specific measures of reference price rather than category-based measures. More generally, because of the causal research design, this thesis provides strong evidence of the use of reference prices in consumer decision making: a key concern emphasised by one of the area's seminal articles (Kalyanaram and Winer 1995), which stresses the need to provide evidence that consumers actually use reference prices, and not just act as if they do.
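One common operationalisation discussed in this stream, an exponentially smoothed average of observed prices, can be sketched as follows; the smoothing weight and the price series are purely illustrative and are not estimated from the experiments.

```python
def updated_reference_price(prior_reference: float, observed_price: float,
                            alpha: float = 0.5) -> float:
    """Adaptive-expectations update: RP_t = alpha * P_t + (1 - alpha) * RP_{t-1}."""
    return alpha * observed_price + (1 - alpha) * prior_reference

reference = 100.0                        # hypothetical initial (pioneer) price
for price in [100.0, 90.0, 95.0]:        # hypothetical observed prices over time
    reference = updated_reference_price(reference, price)
print(reference)
```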
|
84 |
NIG distribution in modelling stock returns with assumption about stochastic volatility : Estimation of parameters and application to VaR and ETL. Kucharska, Magdalena, Pielaszkiewicz, Jolanta January 2009 (has links)
We model Normal Inverse Gaussian distributed log-returns with the assumption of stochastic volatility. We consider different methods of parametrization of returns and, following the paper of Lindberg [21], we assume that the volatility is a linear function of the number of trades. In addition to Lindberg's paper, we suggest daily stock volumes and amounts as alternative measures of the volatility. As an application of the models, we perform Value-at-Risk and Expected Tail Loss predictions with Lindberg's volatility model and with our own suggested model. These applications are new and not described in the literature. For better understanding of our calculations, programmes and simulations, basic information on and properties of the Normal Inverse Gaussian and Inverse Gaussian distributions are provided. Practical applications of the models are implemented on the Nasdaq-OMX, where we have calculated Value-at-Risk and Expected Tail Loss for the Ericsson B stock data during the period 1999 to 2004.
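As an illustrative sketch only (not the authors' code, and using simulated placeholder returns instead of the Ericsson B data), fitting an NIG distribution to log-returns and reading off VaR and ETL by simulation could look like this:

```python
import numpy as np
from scipy.stats import norminvgauss

rng = np.random.default_rng(0)
log_returns = rng.standard_t(df=5, size=1500) * 0.02     # placeholder for real log-returns

# Maximum-likelihood fit of the NIG distribution to the return series
a, b, loc, scale = norminvgauss.fit(log_returns)
sims = norminvgauss.rvs(a, b, loc=loc, scale=scale, size=100_000, random_state=1)

for level in (0.95, 0.99):
    var = -np.quantile(sims, 1 - level)                  # Value-at-Risk as a loss quantile
    etl = -sims[sims <= -var].mean()                     # Expected Tail Loss beyond VaR
    print(f"{level:.0%}: VaR = {var:.4f}, ETL = {etl:.4f}")
```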
|
85 |
Determinants of population health : A panel data study on 24 countries. Larsson, Anders January 2007 (has links)
This study aims at investigating whether income inequality, ceteris paribus, is a determinant of population health measured by the infant mortality rate and average expected lifetime. Earlier research has found results pointing in different directions, but the income inequality hypothesis suggests that income inequality in itself is bad for population health. The study uses data on income distribution from the Luxembourg Income Study (LIS) and the World Income Inequality Database (WIID). Data on economic development and health indicators come from the OECD database. An econometric model which applies country fixed effects is specified, and the results indicate no effect of income inequality on the infant mortality rate but some indications of a negative effect on average expected lifetime.
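A minimal sketch of such a specification, using simulated placeholder data rather than the LIS/WIID/OECD panel, is a least-squares dummy-variable regression with country fixed effects; all variable names and values below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
countries = [f"c{i}" for i in range(24)]
years = range(1990, 2005)
df = pd.DataFrame([(c, y) for c in countries for y in years], columns=["country", "year"])
df["gini"] = rng.uniform(0.20, 0.40, len(df))             # income inequality (hypothetical)
df["gdp_pc"] = rng.normal(25_000, 5_000, len(df))         # GDP per capita (hypothetical)
df["infant_mortality"] = 5 + 0.0001 * (30_000 - df["gdp_pc"]) + rng.normal(0, 0.5, len(df))

# Country fixed effects via least-squares dummy variables
model = smf.ols("infant_mortality ~ gini + gdp_pc + C(country)", data=df).fit()
print(model.params[["gini", "gdp_pc"]])
```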
|
88 |
Topics in Delayed Renewal Risk Models. Kim, So-Yeun January 2007 (has links)
The main focus is to extend the analysis of ruin-related quantities, such as the surplus immediately prior to ruin, the deficit at ruin, or the ruin probability, to delayed renewal risk models.

First, the background for the delayed renewal risk model is introduced and two important equations that are used as frameworks are derived. These equations are extended from the ordinary renewal risk model to the delayed renewal risk model. The first equation is obtained by conditioning on the first drop below the initial surplus level, and the second equation by conditioning on the amount and the time of the first claim.

Then, we consider the deficit at ruin in particular among the many random variables associated with ruin, and six main results are derived. We also explore how the Gerber-Shiu expected discounted penalty function can be expressed in closed form when distributional assumptions are given for claim sizes or the time until the first claim.

Lastly, we consider a model in which the premium rate is reduced when the surplus level is above a certain threshold value, until it falls below that threshold. The amount of the reduction in the premium rate can also be viewed as a dividend rate paid out of the original premium rate when the surplus level is above the threshold. The constant barrier model is considered as a special case in which the premium rate is reduced to 0 when the surplus level reaches a certain threshold value. The dividend amount paid out during the life of the surplus process until ruin, discounted to the beginning of the process, is also considered.
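For intuition only (none of this is taken from the thesis, and every distribution and parameter is hypothetical), a crude Monte Carlo sketch of ruin in a delayed renewal risk model, where the waiting time until the first claim follows a different distribution from the later interclaim times, might look like this:

```python
import numpy as np

def ruin_probability(u=10.0, c=1.2, horizon=100.0, n_paths=5_000, seed=0):
    """Estimate P(ruin before `horizon`) when the first waiting time is
    Uniform(0, 2) (the 'delay') and later waiting times are Exponential(1);
    claim sizes are Exponential(1) and premiums accrue at rate c."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t = rng.uniform(0.0, 2.0)                   # delayed first interarrival
        surplus = u + c * t - rng.exponential(1.0)  # surplus just after first claim
        while surplus >= 0:
            wait = rng.exponential(1.0)             # ordinary interarrival
            t += wait
            if t > horizon:
                break
            surplus += c * wait - rng.exponential(1.0)
        ruined += surplus < 0
    return ruined / n_paths

print(ruin_probability())
```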
|
89 |
Evaluation of the Swedish Trade Council’s Business Opportunity Projects. Allerup, Jonas January 2010 (has links)
The purpose of this paper is to investigate the effects of the Business Opportunity Projects (BOPs) that the Swedish Trade Council uses when promoting exports for small enterprises. The Business Opportunity Projects have the same type of setup at all offices where the Swedish Trade Council is established and are subsidized by 60 percent from the government. A dataset on firms’ financial state over a ten-year period is used, together with survey interviews conducted in 2005/06 and 2007/08. From this data three types of methods are used: calculations of expected values of return, a panel data model, and a probit model. The results show that the expected return of one project is around 250 000 SEK and, if the project is successful, the average return is around 1 000 000 SEK. The governmental return is around 22 times the invested money. The probability of creating business volume, directly or indirectly, is around 45 percent. It is also shown that the projects have an impact on the export turnover of the participating firms. The effect comes after two years and increases until four years after the BOP. The interpretation of the exact effect should be made with caution due to estimation issues. The results also indicate that a BOP generates around 1.5 employees on average. The results show that participating firms gain no advantage from being larger, from being located in the middle region of Sweden, or from belonging to a specific branch when it comes to having a successful project. Firms from the northern part of Sweden have a slightly smaller chance of having a successful project, while projects made through Western European offices have a higher probability of succeeding compared to other offices.
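As a sketch of the kind of probit specification described, on hypothetical data and covariates rather than the Trade Council survey:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "log_employees": rng.normal(2.0, 1.0, n),      # hypothetical firm size
    "north": rng.integers(0, 2, n),                # hypothetical region dummy
})
latent = -0.1 + 0.05 * df["log_employees"] - 0.2 * df["north"] + rng.normal(0, 1, n)
df["success"] = (latent > 0).astype(int)           # project created business volume

X = sm.add_constant(df[["log_employees", "north"]])
probit = sm.Probit(df["success"], X).fit(disp=False)
print(probit.params)
print(probit.get_margeff().summary())
```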
|
90 |
Applying Value at Risk (VaR) analysis to Brent Blend Oil prices. Ali Mohamed, Khadar January 2011 (has links)
The purpose of this study is to compare four different VaR models in terms of accuracy, namely Historical Simulation (HS), Simple Moving Average (SMA), Exponentially Weighted Moving Average (EWMA) and Exponentially Weighted Historical Simulation (EWHS). These VaR models are applied to one underlying asset, Brent Blend Oil, using the confidence levels 95 %, 99 % and 99.9 %. Concerning the return of the asset, the models are studied under two different distributional assumptions, namely the Student's t-distribution and the normal distribution.
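A minimal sketch of three of these approaches, using simulated placeholder returns rather than Brent Blend prices and an assumed RiskMetrics-style decay factor of 0.94 for the EWMA:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.02, 1000)        # placeholder log-returns, not Brent data

level, window, lam = 0.99, 250, 0.94

# Historical Simulation: empirical quantile of the most recent `window` returns
hs_var = -np.quantile(returns[-window:], 1 - level)

# Simple Moving Average: equally weighted sample volatility over the window
sma_sigma = returns[-window:].std(ddof=1)
sma_var = -norm.ppf(1 - level) * sma_sigma

# EWMA: sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2
sigma2 = returns[0] ** 2
for r in returns[1:]:
    sigma2 = lam * sigma2 + (1 - lam) * r ** 2
ewma_var = -norm.ppf(1 - level) * np.sqrt(sigma2)

print(hs_var, sma_var, ewma_var)
```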
|