151 |
Monte Carlo Simulation of Boundary Crossing Probabilities for a Brownian Motion and Curved Boundaries. Drabeck, Florian, January 2005 (PDF)
We are concerned with the probability that a standard Brownian motion W(t) crosses a curved boundary c(t) on a finite interval [0, T]. Let this probability be denoted by Q(c(t); T). Due to recent advances in research, a new way of estimating Q(c(t); T) has become feasible: Monte Carlo simulation. Wang and Pötzelberger (1997) derived an explicit formula, in the form of an expectation, for the boundary crossing probability of piecewise linear boundaries. Based on this formula we proceed as follows: first we approximate the general boundary c(t) by a piecewise linear function c_m(t) on a uniform partition; then we simulate Brownian sample paths in order to evaluate the expectation in the authors' formula for c_m(t). The bias that results from estimating Q(c_m(t); T) rather than Q(c(t); T) can be bounded by a formula of Borovkov and Novikov (2005). Here the standard deviation (equivalently, the variance) is the main limiting factor when increasing the accuracy. The main goal of this dissertation is to find and evaluate variance reduction techniques in order to enhance the quality of the Monte Carlo estimator for Q(c(t); T). Among the techniques we discuss are antithetic sampling, stratified sampling, importance sampling, control variates, and transforming the original problem. We analyze each of these techniques thoroughly from a theoretical point of view. Further, we test each technique empirically through simulation experiments on several carefully chosen boundaries. In order to assess our results we set them in relation to a previously established benchmark. As a result of this dissertation we derive some very potent techniques that yield a substantial improvement in accuracy. Further, we provide a detailed record of our simulation experiments. (author's abstract)
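The Wang and Pötzelberger (1997) expectation lends itself directly to simulation: conditional on the Brownian path's values at the partition nodes, the non-crossing probability of a piecewise linear boundary is a product of terms 1 - exp(-2(c_{i-1} - W_{t_{i-1}})(c_i - W_{t_i})/Δ_i). The sketch below (the function name, grid size, and path count are illustrative choices, not the dissertation's) evaluates that expectation by Monte Carlo with antithetic sampling, one of the variance reduction techniques listed:

```python
import numpy as np

def crossing_prob_mc(c, T, m=50, n_paths=100_000, seed=0):
    """Estimate Q(c(t); T) for a standard Brownian motion via the
    Wang-Potzelberger formula on a piecewise linear approximation of c,
    using antithetic sampling. Returns (estimate, standard error)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, m + 1)
    dt = np.diff(t)
    cb = np.asarray(c(t), dtype=float)      # boundary at the grid nodes
    z = rng.standard_normal((n_paths, m))
    est = []
    for sign in (1.0, -1.0):                # antithetic pair: W and -W
        w = np.concatenate([np.zeros((n_paths, 1)),
                            np.cumsum(sign * z * np.sqrt(dt), axis=1)], axis=1)
        below = (w < cb).all(axis=1)        # path under the boundary at all nodes
        # conditional non-crossing probability on each subinterval
        p = 1.0 - np.exp(-2.0 * (cb[:-1] - w[:, :-1]) * (cb[1:] - w[:, 1:]) / dt)
        est.append(np.where(below, p.prod(axis=1), 0.0))
    no_cross = 0.5 * (est[0] + est[1])
    return 1.0 - no_cross.mean(), no_cross.std(ddof=1) / np.sqrt(n_paths)
```

For a linear boundary the piecewise linear approximation is exact, so the only error is the Monte Carlo standard error returned alongside the estimate.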
|
152 |
A Comparison of Two Differential Item Functioning Detection Methods: Logistic Regression and an Analysis of Variance Approach Using Rasch Estimation. Whitmore, Marjorie Lee Threet, 08 1900
Differential item functioning (DIF) detection rates were examined for the logistic regression and analysis of variance (ANOVA) DIF detection methods. The methods were applied to simulated data sets of varying test length (20, 40, and 60 items) and sample size (200, 400, and 600 examinees) for both equal and unequal underlying ability between groups as well as for both fixed and varying item discrimination parameters. Each test contained 5% uniform DIF items, 5% non-uniform DIF items, and 5% combination DIF (simultaneous uniform and non-uniform DIF) items. The factors were completely crossed, and each experiment was replicated 100 times. For both methods and all DIF types, a test length of 20 was sufficient for satisfactory DIF detection. The detection rate increased significantly with sample size for each method. With the ANOVA DIF method and uniform DIF, there was a difference in detection rates between discrimination parameter types, which favored varying discrimination and decreased with increased sample size. The detection rate of non-uniform DIF using the ANOVA DIF method was higher with fixed discrimination parameters than with varying discrimination parameters when relative underlying ability was unequal. In the combination DIF case, there was a three-way interaction among the experimental factors (discrimination type, relative ability, and sample size) for both detection methods. The error rate for the ANOVA DIF detection method decreased as test length increased and increased as sample size increased. For both methods, the error rate was slightly higher with varying discrimination parameters than with fixed parameters. For logistic regression, the error rate increased with sample size when relative underlying ability was unequal between groups. The logistic regression method detected uniform and non-uniform DIF at a higher rate than the ANOVA DIF method.
Because the type of DIF present in real data is rarely known, the logistic regression method is recommended for most cases.
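The nested-model logic of logistic regression DIF detection can be sketched compactly: adding a group effect to the matching score tests for uniform DIF, and adding a score-by-group interaction tests for non-uniform DIF, each via a likelihood-ratio chi-square test with one degree of freedom. This is an illustrative reconstruction, not the study's code; the Newton/IRLS fitter and the variable names are our own:

```python
import numpy as np
from math import erfc, sqrt

def fit_logistic(X, y, n_iter=25):
    """Logistic regression by Newton's method (IRLS); returns the log-likelihood."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p) + 1e-10
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1.0 - 1e-12)
    return float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

def dif_logistic(score, group, correct):
    """Likelihood-ratio tests for uniform and non-uniform DIF on one item."""
    ones = np.ones_like(score)
    ll0 = fit_logistic(np.column_stack([ones, score]), correct)
    ll1 = fit_logistic(np.column_stack([ones, score, group]), correct)
    ll2 = fit_logistic(np.column_stack([ones, score, group, score * group]), correct)
    # chi-square(1) upper tail: P(X > x) = erfc(sqrt(x / 2))
    p_uniform = erfc(sqrt(max(2.0 * (ll1 - ll0), 0.0) / 2.0))
    p_nonuniform = erfc(sqrt(max(2.0 * (ll2 - ll1), 0.0) / 2.0))
    return p_uniform, p_nonuniform
```

A small uniform-DIF p-value with a large interaction p-value points to uniform DIF only; a small interaction p-value flags non-uniform DIF.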
|
153 |
Using Real Time Statistical Data To Improve Long Term Voltage Stability In Stochastic Power Systems. Chevalier, Samuel, 01 January 2016
In order to optimize limited infrastructure, many power systems are frequently operated close to critical, or bifurcation, points. While operating close to such critical points can be economically advantageous, doing so increases the probability of a blackout. With the continued deployment of Phasor Measurement Units (PMUs), high-sample-rate data are dramatically increasing the real-time observability of power grids. Prior research has shown that the statistics of these data can provide useful information regarding network stability and associated bifurcation proximity. Currently, it is not common practice for transmission and distribution control centers to leverage the higher-order statistical properties of PMU data. If grid operators have the tools to determine when these statistics warrant control action, though, then the otherwise unused statistical data present in PMU streams can be transformed into actionable information.
In order to address this problem, we present two methods that aim to gauge and improve system stability using the statistics of PMU data. The first method shows how sensitivity factors associated with the spectral analysis of the reduced power flow Jacobian can be used to weight and filter incoming PMU data. We do so by demonstrating how the derived participation factors directly
predict the relative strength of bus voltage variances throughout a system. The second method leverages an analytical solver to determine a range of "critical" bus voltage variances. The monitoring and testing of raw statistical data in a highly observable load pocket of a large system are then used to reveal when control actions are needed to mitigate the risk of voltage collapse. A simple reactive power controller is then implemented that pushes the system back to a stable operating point. Full-order dynamic time-domain simulations are used to test this method on both the IEEE 39-bus system and the 2383-bus Polish system. We also compare this method to two other, more conventional, controllers: the first relies on voltage magnitude signals, and the second depends only on local control of a reactive power resource. This comparison illustrates how the use of statistical information from PMU measurements can substantially improve
the performance of voltage collapse mitigation methods.
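The first method's spectral step can be illustrated in a few lines: eigen-analysis of a reduced power-flow Jacobian gives bus participation factors for the weakest mode, and these predict which buses should exhibit the largest voltage variances in the PMU data. The matrix and function below are illustrative stand-ins, not taken from the thesis:

```python
import numpy as np

def participation_factors(J_R):
    """Bus participation factors for the weakest (smallest-eigenvalue) mode of
    a reduced power-flow Jacobian J_R; large factors flag the buses whose
    voltage variances should dominate as the system approaches bifurcation."""
    vals, right = np.linalg.eig(J_R)
    left = np.linalg.inv(right)             # rows are the left eigenvectors
    i = int(np.argmin(vals.real))           # index of the weakest mode
    p = (right[:, i] * left[i, :]).real     # p_k = xi_ki * eta_ik
    return p / p.sum()                      # normalize to sum to one
```

The normalized factors can then be used to weight and filter the incoming PMU variance estimates, concentrating attention on the most stressed buses.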
|
154 |
Moyenne conditionnelle tronquée pour un portefeuille de risques corrélés [Truncated conditional mean for a portfolio of correlated risks]. Ermilov, Andrey, January 2005
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
155 |
Modelling and Comparative Analysis of Volatility Spillover between US, Czech Republic and Serbian Stock Markets. Marković, Jelena, January 2015
This paper estimates Serbian, Czech and US stock market volatility. Few studies have analyzed stock market linkages for these three markets. The mean equation is estimated using a vector autoregression (VAR) model. The second moments are then estimated using different multivariate GARCH models. We find that the current conditional volatility of each stock is strongly affected by past innovations. Cross-market correlations are significant as well. However, the conditional correlation between the Czech and US stock market indices is higher than the conditional correlation between the Serbian and US stock indices.
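The two estimation stages, a VAR mean equation followed by GARCH modelling of the residuals, can be sketched as follows. This is a deliberately simplified illustration (quasi-MLE by coarse grid search with variance targeting, plus a constant-conditional-correlation step) standing in for the multivariate GARCH models the paper actually compares:

```python
import numpy as np

def fit_var1(returns):
    """OLS fit of a VAR(1) mean equation; returns coefficients and residuals."""
    Y = returns[1:]
    X = np.column_stack([np.ones(len(returns) - 1), returns[:-1]])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return B, Y - X @ B

def garch11_variance(eps, alpha, beta):
    """GARCH(1,1) recursion h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1},
    with omega fixed by variance targeting."""
    v = np.var(eps)
    omega = v * (1.0 - alpha - beta)
    h = np.empty_like(eps)
    h[0] = v
    for t in range(1, len(eps)):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

def fit_garch11(eps):
    """Coarse grid-search Gaussian quasi-MLE for (alpha, beta); a sketch only."""
    best, best_ll = (0.05, 0.90), -np.inf
    for alpha in np.arange(0.02, 0.30, 0.02):
        for beta in np.arange(0.50, 0.98, 0.02):
            if alpha + beta >= 0.999:       # enforce covariance stationarity
                continue
            h = garch11_variance(eps, alpha, beta)
            ll = -0.5 * np.sum(np.log(h) + eps ** 2 / h)
            if ll > best_ll:
                best_ll, best = ll, (alpha, beta)
    return best

def ccc_correlation(resid):
    """Constant conditional correlation of GARCH-standardized residuals."""
    cols = []
    for j in range(resid.shape[1]):
        a, b = fit_garch11(resid[:, j])
        cols.append(resid[:, j] / np.sqrt(garch11_variance(resid[:, j], a, b)))
    return np.corrcoef(np.column_stack(cols), rowvar=False)
```

A production analysis would replace the grid search with a proper optimizer and allow time-varying (DCC-type) correlations, but the pipeline shape is the same.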
|
156 |
The Impact of the U.S. and Mexican Monetary Policy on Mexican GDP and Prices. Rodríguez Hernández, Lorenzo, January 2015
No description available.
|
157 |
Variance Stabilization Revisited: A Case For Analysis Based On Data Pooling. Fowler, A. M., 07 1900
The traditional approach to standardizing tree-ring time series is to divide raw ring widths by a
fitted curve. Although the derived ratios are conceptually elegant and have a more homogeneous
variance through time than simple differences, residual heteroscedasticity associated with variance dependence on local mean ring width may remain. Incorrect inferences about climate forcing may result if this heteroscedasticity is not corrected for, or at least recognized (with appropriate caveats). A new variance stabilization method is proposed that specifically targets this source of heteroscedasticity. It is based on stabilizing the magnitude of differences from standardization curves to a common reference local mean ring width and uses data pooled from multiple radii. Application of the method to a multi-site kauri (Agathis australis (D. Don) Lindley) data set shows that (a) the heteroscedasticity issue addressed may be generic rather than radius-specific, at least for some species, (b) variance stabilization using pooled data works well for standardization curves of variable flexibility, (c) in the case of kauri, simple ratios do not appear to be significantly affected by this cause of heteroscedasticity, and (d) centennial-scale variance trends are highly sensitive to the analytical methods used to build tree-ring chronologies.
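The diagnostic and the stabilization step can be sketched as follows; the curve-fitting choice (a polynomial), the power-law variance model, and all names are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def standardize(rw, degree=3):
    """Fit a growth curve to raw ring widths; return the ratio index, the
    difference index, and the fitted curve (the local mean ring width)."""
    age = np.arange(len(rw), dtype=float)
    curve = np.polyval(np.polyfit(age, rw, degree), age)
    curve = np.maximum(curve, 1e-6)            # guard against non-positive fits
    return rw / curve, rw - curve, curve

def heteroscedasticity_check(resid, curve):
    """Correlation of |residual| with local mean ring width; values near zero
    mean the variance no longer depends on growth level."""
    return float(np.corrcoef(np.abs(resid), curve)[0, 1])

def stabilize(diff, curve, pooled_diff, pooled_curve, ref=None):
    """Rescale differences to a common reference local mean ring width using a
    power-law variance model fitted to data pooled across multiple radii."""
    ref = np.median(pooled_curve) if ref is None else ref
    b, _ = np.polyfit(np.log(pooled_curve), np.log(np.abs(pooled_diff) + 1e-9), 1)
    return diff * (ref / curve) ** b
```

In practice `pooled_diff` and `pooled_curve` would concatenate data from many radii, which is the point of the pooling approach: the variance model is estimated once, robustly, rather than radius by radius.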
|
158 |
Modern portfolio theory tools: a methodological design and application. Wang, Sin Han, 26 March 2009
A passive investment management model was developed via a critical literature review of
portfolio methodologies. This model was developed based on the fundamental models
originated by both Markowitz and Sharpe. The passive model was automated via the
development of a computer programme that can be used to generate the required outputs
as suggested by Markowitz and Sharpe. MATLAB was chosen for this computer programme, and the model's logic was designed and validated.
The designed programme is demonstrated using securities traded on the Johannesburg Securities Exchange. The selected portfolio was sub-categorised into six components with a total of twenty-seven shares. The shares were grouped into different components according to investors' preferences and investment time horizons. The results demonstrate that the test portfolio outperforms a risk-free money market instrument (the government R194 bond), but not the All Share Index, for the period under consideration. The design concludes that this is due in part to the use of the error term from Sharpe's single-index model. An investor following the framework proposed by this design may use it to determine the risk-return relationship for selected portfolios and, hopefully, achieve a real return.
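The Markowitz and Sharpe building blocks of such a programme reduce to a few linear-algebra steps, sketched here in Python rather than the MATLAB implementation the design describes; the names and parameters are illustrative:

```python
import numpy as np

def single_index_cov(returns, market):
    """Sharpe single-index model: per-share alpha/beta by OLS on the market,
    covariance Sigma = var(m) * beta beta' + diag(residual variances)."""
    X = np.column_stack([np.ones(len(market)), market])
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    alpha, beta = coef[0], coef[1]
    resid = returns - X @ coef
    sigma = market.var(ddof=1) * np.outer(beta, beta) \
        + np.diag(resid.var(axis=0, ddof=1))
    return alpha, beta, sigma

def tangency_weights(mu, sigma, rf=0.0):
    """Markowitz tangency (maximum Sharpe ratio) weights, fully invested."""
    w = np.linalg.solve(sigma, mu - rf)
    return w / w.sum()
```

The single-index covariance needs only one beta per share rather than a full sample covariance, which is the practical appeal of Sharpe's simplification for a twenty-seven-share portfolio.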
|
159 |
Sustainability for Portfolio Optimization. Anane, Asomani Kwadwo, January 2019
The 2007-2008 financial crash and the looming threats of climate change and global warming have heightened interest in sustainable investment. Whether the shift is a result of the financial crash or of a desire to preserve the environment, a sustainable investment might be desirable. However, to maintain this interest and to motivate investors to pursue sustainability, there is a need to show that it can yield positive returns. The main objective of the thesis is to investigate whether sustainable investment can lead to higher returns. The thesis focuses primarily on incorporating sustainability into Markowitz portfolio optimization. It looks into the essence of sustainability and its impact on companies by comparing different concepts. The analysis is based on the 30 constituent stocks of the Dow Jones Industrial Average, or simply the Dow, investigated from 2007-12-31 to 2018-12-31. The thesis compares the cumulative return of the Dow with that of the sustainable stocks in the Dow, selected on the basis of their environmental, social and governance (ESG) ratings. The results are then compared with the Dow Jones Industrial Average index (^DJI), which serves as the benchmark for the analysis. The constituent stocks are then optimized within the Markowitz mean-variance framework, and conclusions are drawn from the results for the full constituent set and for the ESG, environmental, governance and social screens. It was found that portfolios of stocks selected on their environmental and governance ratings were the highest performers. This could be because many investors base their selections on the environmental and governance performance of companies, so demand for stocks in those categories may have risen over the period, contributing significantly to their performance.
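The screening-and-comparison procedure (filter constituents by ESG rating, then compare cumulative returns against the unscreened set) can be sketched as follows; the tickers, scores, and threshold are made-up illustrative inputs:

```python
import numpy as np

def esg_screen(tickers, esg_scores, threshold):
    """Indices of constituents whose ESG score meets the threshold."""
    return [i for i, t in enumerate(tickers) if esg_scores[t] >= threshold]

def cumulative_return(returns, weights=None):
    """Cumulative return of an equally weighted (or given-weight) portfolio
    from a (periods x assets) matrix of simple returns."""
    n = returns.shape[1]
    w = np.full(n, 1.0 / n) if weights is None else weights
    return float(np.prod(1.0 + returns @ w) - 1.0)
```

The screened subset can then be fed into the same mean-variance optimizer as the full constituent set, so the only difference between the portfolios being compared is the sustainability filter.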
|
160 |
Transaction costs and resampling in mean-variance portfolio optimization. Asumeng-Denteh, Emmanuel, 30 April 2004
Transaction costs and resampling are two important issues that demand great attention in portfolio investment planning. In practice, costs are incurred whenever a portfolio is rebalanced. Every investor tries to avoid high transaction costs as much as possible. In this thesis, we investigated how transaction costs and resampling affect portfolio investment. We modified the basic mean-variance optimization problem to include the rebalancing costs incurred when transacting securities in the portfolio. We also reduced trading as much as possible by applying the resampling approach whenever the portfolio was rebalanced. Transaction costs are assumed to be a percentage of the amount of securities transacted. We applied the resampling approach and tracked the performance of portfolios over time, first assuming transaction costs are incurred and then assuming they are not. We compared how the portfolio is affected when the two issues outlined above are incorporated with the outcome of the basic mean-variance optimization.
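Both modifications can be sketched together: proportional transaction costs enter as a penalty on turnover, and a Michaud-style resampling step averages optimal weights over bootstrap resamples of the return history. The function names and the unconstrained mean-variance solver are illustrative simplifications of the thesis's setup:

```python
import numpy as np

def mv_weights(mu, sigma):
    """Unconstrained mean-variance (tangency-style) weights, normalized to sum to 1."""
    w = np.linalg.solve(sigma, mu)
    return w / w.sum()

def resampled_weights(returns, n_resamples=200, seed=0):
    """Resampling approach: average the optimal weights computed from
    bootstrap resamples of the return history, damping estimation error."""
    rng = np.random.default_rng(seed)
    T = len(returns)
    ws = []
    for _ in range(n_resamples):
        sample = returns[rng.integers(0, T, T)]
        ws.append(mv_weights(sample.mean(axis=0), np.cov(sample, rowvar=False)))
    return np.mean(ws, axis=0)

def net_rebalance_value(w_new, w_old, mu, cost_rate):
    """Expected one-period return net of proportional transaction costs,
    where costs are a percentage of the amount of securities transacted."""
    return float(w_new @ mu - cost_rate * np.abs(w_new - w_old).sum())
```

Because the resampled weights change less from period to period than weights re-optimized on each new sample, they also generate less turnover, which is exactly the interaction between the two issues the thesis studies.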
|