51 |
Evolutionary dynamics of coexisting species. Muir, Peter William. January 2000 (has links)
Ever since Maynard Smith and Price first introduced the concept of an evolutionarily stable strategy (ESS) in 1973, there has been a growing amount of work in and around this field. Many new concepts have been introduced, quite often several times over, with different acronyms by different authors. This led other authors to collect and collate the various terms (for example Lessard, 1990, and Eshel, 1996) in order to promote better understanding of the topic. It has been noticed that dynamic selection does not always lead to the establishment of an ESS. This led to the development of the concept of a continuously stable strategy (CSS), and the claim that dynamic selection leads to the establishment of an ESS if it is a CSS. It has since been proved that this is not always the case, as a CSS may not be able to displace its near neighbours in pairwise ecological competitions. The concept of a neighbourhood invader strategy (NIS) was introduced, and when used in conjunction with the concept of an ESS, produced the evolutionarily stable neighbourhood invader strategy (ESNIS), which is an unbeatable strategy. This work extends what has already been done in this field by investigating the dynamics of coexisting species, concentrating on systems whose dynamics are governed by Lotka-Volterra competition models. It is proved that an ESNIS coalition is an optimal strategy which will displace any size and composition of incumbent populations, and which will be immune to invasion by any other mutant population, because the ESNIS coalition, when it exists, is unique. It has also been shown that an ESNIS coalition cannot exist in an ecologically stable state with any finite number of strategies in its neighbourhood. The equilibrium population when the ESNIS coalition is the only population present is globally stable in an n-dimensional system (for finite n), where the ESNIS coalition interacts with n - 2 other strategies in its neighbourhood.
The dynamical behaviour of coexisting species was examined when the incumbent species interacted with various invading species. The different behaviour of the incumbent population when invaded by a coalition using either an ESNIS or an NIS phenotype underlines the difference between the various strategies. Similar simulations were intended for invaders using an ESS phenotype, but unfortunately the ESS coalition could not be found. If the invading coalition uses NIS phenotypes, the outcome is not certain. Some, but not all, of the incumbents might become extinct, and the degree to which the invaders flourish depends strongly on the nature of the incumbents. However, if the invading species form an ESNIS coalition, one is certain of the outcome. The invaders will eliminate the incumbents and stabilise at their equilibrium populations. This will occur regardless of the composition and number of incumbent species, as the ESNIS coalition forms a globally stable equilibrium point when it is at its equilibrium populations, with no other species present. The only unknown in this case is the number of generations that will pass before the system reaches the globally stable equilibrium consisting of just the ESNIS. For systems whose dynamics are not given by Lotka-Volterra equations, the existence of a unique, globally stable ESNIS coalition has not been proved. Moreover, simulations of a non-Lotka-Volterra system designed to determine the applicability of the proof were inconclusive, due to the ESS coalition not having unique population sizes. Whether the proof presented in this work can be extended to non-Lotka-Volterra systems remains to be determined. / Thesis (M.Sc.)-University of Natal, Pietermaritzburg, 2000.
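The Lotka-Volterra competition dynamics referred to above can be illustrated with a short simulation. The growth rates, carrying capacities and competition coefficients below are illustrative choices, not values from the thesis; the sketch shows two weakly competing species settling at a stable coexistence equilibrium.

```python
import numpy as np

def lotka_volterra_competition(r, K, A, n0, dt=0.01, steps=20000):
    """Euler integration of the Lotka-Volterra competition equations:
    dN_i/dt = r_i * N_i * (1 - sum_j A[i, j] * N_j / K_i)."""
    n = np.array(n0, dtype=float)
    for _ in range(steps):
        n = n + dt * r * n * (1.0 - (A @ n) / K)
        n = np.maximum(n, 0.0)  # populations cannot go negative
    return n

# Two competitors with weak interspecific competition (a12*a21 < 1),
# so the interior equilibrium A @ n* = K, here n* = (75, 50), is stable.
r = np.array([1.0, 0.8])
K = np.array([100.0, 80.0])
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
final = lotka_volterra_competition(r, K, A, n0=[10.0, 10.0])
```

With strong interspecific competition (a12\*a21 > 1) the same code instead exhibits competitive exclusion, which is the regime in which one coalition displaces another.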
|
52 |
Validation and Investigation of the Four Aspects of Cycle Regression: A New Algorithm for Extracting Cycles. Mehta, Mayur Ravishanker. 12 1900 (has links)
The cycle regression analysis algorithm is the most recent addition to a group of techniques developed to detect "hidden periodicities." This dissertation investigates four major aspects of the algorithm. The objectives of this research are:
1. To develop an objective method of obtaining an initial estimate of the cycle period; the present procedure of obtaining this estimate involves considerable subjective judgment.
2. To validate the algorithm's success in extracting cycles from multi-cyclical data.
3. To determine whether a consistent relationship exists among the smallest amplitude, the error standard deviation, and the number of replications of a cycle contained in the data.
4. To investigate the behavior of the algorithm in the prediction of major drops.
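The dissertation's cycle regression algorithm is not reproduced here, but the underlying idea of extracting a "hidden periodicity" by regressing on sine and cosine terms of a candidate period can be sketched as follows; the period, amplitude and noise level are illustrative.

```python
import numpy as np

def fit_cycle(y, period):
    """Least-squares fit of a single cycle of known period:
    y_t ~ c + a*sin(2*pi*t/period) + b*cos(2*pi*t/period).
    Returns (estimated amplitude, fitted series)."""
    t = np.arange(len(y))
    X = np.column_stack([np.ones(len(y)),
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    amplitude = np.hypot(coef[1], coef[2])  # sqrt(a^2 + b^2)
    return amplitude, X @ coef

# Recover a known 12-step cycle of amplitude 2 buried in noise.
rng = np.random.default_rng(0)
t = np.arange(240)
y = 5.0 + 2.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)
amp, fitted = fit_cycle(y, period=12)
```

The actual algorithm iterates over candidate periods and subtracts extracted cycles to handle multi-cyclical data; this sketch shows only the single regression step for one known period.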
|
53 |
Wenkriteria vir konvensionele landgevegte. Wagner, William John. 11 1900 (has links)
Text in Afrikaans / Hierdie studie is onderneem met die doel om 'n model te ontwikkel waarmee die wenner in 'n
konvensionele landgeveg voorspel kan word. Gegewe die omvang van die vakgebied oorlog,
is die studie beperk tot die taktiese vlak en fokus op landgevegte tydens konvensionele
oorlogvoering.
As eerste stap in die ontwikkelingsproses, is die faktore wat wen kan bepaal krygskundig
nagevors. Die sogenaamde honderdgevegte-datastel is saamgestel uit data van 100 gevegte
uit die twintigste eeu en net vroeër, met die klem op gevegte waarin Suid-Afrikaanse magte
betrokke was. Verskeie statistiese tegnieke is ondersoek om 'n geskikte tegniek vir die
ontleding van die data te vind. Die ondersoek het aangetoon dat logistiese regressie die beste
tegniek is vir die data. 'n Ontwikkelingsproses met drie voorspellers is ook saamgestel.
Verskeie modelle is ondersoek, naamlik
1 'n Voorspellingsmodel met eensydige sub-modelle sonder gevegshouding, met
en sonder opponentdata.
2 'n Voorspellingsmodel met eensydige sub-modelle met gevegshouding, met en
sonder opponentdata.
3 'n Voorspellingsmodel met tweesydige sub-modelle met opponentdata.
Die ontwikkelingsproses lewer verskeie modelle wat baie goed presteer (sensitiwiteit > 80%).
'n Finale keuse lewer die volgende resultaat:
1 Vir die geval waar opponentdata nie beskikbaar is nie, is 'n eensydige sub-model
sonder gevegshouding ontwikkel waarvan die resultaat teen 'n
skeidingsgrens gemeet word om die uitslag te bepaal. Die model het 'n
sensitiwiteit van 85%, maar kan net 'n wen of gelykop, of, verloor of gelykop
voorspel.
2 Vir die geval waar opponentdata beskikbaar is, is 'n eensydige sub-model
sonder gevegshouding ontwikkel wat in staat is om, deur die opponente se
uitslag met mekaar te vergelyk, die wenner aan te wys. Hierdie model het 'n
sensitiwiteit van 83,8%.
Verskeie statistiese en krygskundige gevolgtrekkings word gemaak, die belangrikste waarvan
dat die gekose modelle wel daartoe in staat is om gevegsvoorspellings akkuraat te kan
uitvoer. Die modelle kan ook aangewend word om gevegte te ontleed en tendense te
verklaar. Krygskundig bevestig die resultaat die noodsaaklikheid van die
maneuvreringsbenadering en goeie leierskap.
Die resultaat van die studie het wye aanwendingspotensiaal op die gebied van die
krygskunde, krygsfilosofie, krygspele en militêre operasionele navorsing en laat ruimte vir
interessante en noodsaaklike verdere navorsing in operasionele navorsing sowel as in die
krygskunde. / The aim of this study is to develop models for the efficient prediction of the outcome of a land
battle. The study is confined to conventional warfare at the tactical level.
The first step was to identify the variables that may determine victory. Thirty such variables
enjoying the support of various military historians and philosophers were selected. The
hundred-battle data set, consisting of coded data for a hundred twentieth-century battles, was
compiled. The thirty variables were encoded for each combatant. Since the outcome and
most of the prediction variables are binary but a few are continuous, ordinary linear regression
could not be used and several statistical and other techniques were evaluated. Logistic
regression was found to be the best. A formalized development and selection process was
applied to a number of broad model classes.
These were
1 prediction models with one-sided sub-models without combat posture, with and
without opponent data
2 prediction models with one-sided sub-models with combat posture, with and
without opponent data
3 prediction models with two-sided sub-models without combat posture, with
opponent data.
The process provided several very good models and the following were selected.
Without opponent data. A one-sided sub-model without combat posture, utilizing a
discriminator was selected. It determines the outcome with a sensitivity of 85%. However, it
only predicts victory or a draw, or defeat or a draw.
With opponent data. A one-sided sub-model without combat posture was selected. It
predicts the outcome of a battle by comparing the results of the two opponents. This model
showed a sensitivity of 83.8%.
Several statistical and military scientific conclusions followed, the most important being that
the chosen models can accurately predict battle outcome or post facto determine the
outcome. The models can also be used to analyze battles. In this role they confirm the
importance of maneuver warfare and good leadership.
The results of this study can be applied in military science, military philosophy and war
gaming. The work fuses military philosophy with statistical analysis, is a first in the field and
offers the possibility of breaking out of the mind-set of personal views and biases prevalent in
military science. The method as such can be applied to different data bases representing war
at other levels or with other technologies. / Philosophy / D.Phil. (Philosophy)
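The logistic-regression approach of the study can be sketched with a hand-rolled fit on toy data. The two binary predictors below stand in for the study's thirty battle factors, and every number is hypothetical; the sketch only illustrates fitting a win probability from binary factors and measuring sensitivity.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Logistic regression by gradient descent: P(win) = sigmoid(X @ w + b)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy data: two binary "battle factors" where factor 1 strongly favours
# victory and factor 2 is irrelevant (true logit = 2.5*x1 - 1.0).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(400, 2)).astype(float)
logits = 2.5 * X[:, 0] - 1.0
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

w, b = fit_logistic(X, y)
pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
sensitivity = np.mean(pred[y == 1])  # fraction of actual wins predicted as wins
```

Sensitivity, the criterion reported in the abstract, is simply the hit rate on the positive (victory) outcomes, as computed in the last line.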
|
54 |
A comparison of the Philips price earnings multiple model and the actual future price earnings multiple of selected companies listed on the Johannesburg Stock Exchange. Coetzee, G. J. 12 1900 (has links)
Thesis (MBA)--Stellenbosch University, 2000. / ENGLISH ABSTRACT: The price earnings multiple is a valuation ratio and is published widely in the media as a
comparative instrument for investment decisions. It is used to compare company valuation
levels and their future growth/franchise opportunities. There have been numerous research
studies on the price earnings multiple, but no study has been able to design or derive a
model to successfully predict the future price earnings multiple, where the current stock price
and the following year-end earnings per share are used.
The most widely accepted method of share valuation is to discount the future cash flows by an
appropriate discount rate. Popular and widely used stock valuation models are the Dividend
Discount Model and the Gordon Model. Both these models assume that future dividends are
cash flows to the shareholder.
Thomas K. Philips, the chief investment officer at Paradigm Asset Management in New York,
constructed a valuation model at the end of 1999, which he published in The Journal of
Portfolio Management. The model (Philips price earnings multiple model) was derived from
the Dividend Discount Model and calculates an implied future price earnings multiple. The
Philips price earnings multiple model includes the following independent variables: the cost
of equity, the return on equity and the dividend payout ratio. Each variable in the Philips
price earnings multiple model is a calculated present year-end point value, which was used to
calculate the implied future price earnings multiple (present year stock price divided by
following year-end earnings per share). This study used five years (1995-2000) of historical
year-end data to calculate the implied and actual future price earnings multiples.
Of the 225 Johannesburg Stock Exchange-listed companies studied, only 36 met the criteria
of the Philips price earnings multiple model. Correlation and population mean tests were
conducted on the implied and constructed data sets. These tests showed that the Philips
price earnings multiple model was unsuccessful in predicting the future price earnings
multiple at a statistical 0.20 level of significance.
The Philips price earnings multiple model is substantially more complex than the Dividend
Discount Model and includes greater restrictions and more assumptions. The Philips price
earnings multiple model is a theoretical instrument which can be used to analyse hypothetical
companies (with all model assumptions and restrictions having been met). The Philips price
earnings multiple model thus has little to no applicability in the practical valuation of stock
prices of companies listed on the Johannesburg Stock Exchange. / AFRIKAANSE OPSOMMING: Die prysverdiensteverhouding is 'n waardebepalingsverhouding en word geredelik
gepubliseer in die media. Hierdie verhouding is 'n maatstaf om maatskappye se waarde
vlakke te vergelyk en om toekomstige groei geleenthede te evalueer. Daar was al verskeie
navorsingstudies gewy aan die prysverdiensteverhouding, maar nog geen model is ontwikkel
wat die toekomstige prysverdiensteverhouding (die teenswoordige aandeelprys en
toekomstige jaareind verdienste per aandeel) suksesvol kon modelleer nie.
Die mees aanvaarbare metode vir waardebepaling van aandele is om toekomstige
kontantvloeie te verdiskonteer teen 'n toepaslike verdiskonteringskoers. Van die vernaamste
en mees gebruikte waardeberamings modelle is die Dividend Groei Model en die Gordon
Model. Beide modelle gebruik die toekomstige dividendstroom as die toekomstige
kontantvloeie wat uitbetaal word aan die aandeelhouers.
Thomas K. Philips, die hoof beleggingsbeampte by Paradigm Asset Management in New
York, het 'n waardeberamingsmodel ontwerp in 1999. Die model (Philips prysverdienste
verhoudingsmodel) was afgelei vanaf die Dividend Groei Model en word gebruik om 'n
geïmpliseerde toekomstige prysverdiensteverhouding te bereken. Die Philips prysverdienste
verhoudingsmodel sluit die volgende onafhanklike veranderlikes in: die koste van kapitaal,
die opbrengs op aandeelhouding en die uitbetalingsverhouding. Elke veranderlike in hierdie
model is 'n berekende teenswoordige jaareinde puntwaarde, wat gebruik was om die
toekomstige geïmpliseerde prysverdiensteverhouding (teenswoordige jaar aandeelprys gedeel
deur die toekomstige verdienste per aandeel) te bereken. In hierdie studie word vyf jaar
historiese jaareind besonderhede gebruik om die geïmpliseerde en werklike toekomstige
prysverdiensteverhouding te bereken. Van die 225 Johannesburg Effektebeurs genoteerde maatskappye, is slegs 36 gebruik wat aan
die vereistes voldoen om die Philips prysverdienste verhoudingsmodel te toets. Korrelasie en
populasie gemiddelde statistiese toetse is op die berekende en geïmpliseerde data stelle
uitgevoer en gevind dat die Philips prysverdienste verhoudingsmodel, teen 'n statistiese 0,20
vlak van beduidenheid, onsuksesvol was om die toekomstige prysverdiensteverhouding
vooruit te skat.
Die Philips prysverdienste verhoudingsmodel is meer kompleks as die Dividend Groei Model
met meer aannames en beperkings. Die Philips prysverdienste verhoudingsmodel is 'n
teoretiese instrument wat gebruik kan word om hipotetiese (alle model aannames en
voorwaardes is nagekom) maatskappye te ontleed. Dus het die Philips prysverdienste
verhoudingsmodel min tot geen praktiese toepassingsvermoë in die werklike waardasie van
aandele nie.
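The abstract does not quote Philips's formula, but under the Gordon-model assumptions it builds on (cost of equity k, return on equity, dividend payout ratio), an implied forward price earnings multiple can be sketched as payout / (k - g) with growth g = ROE x (1 - payout). The function and inputs below are an illustrative sketch of that relationship, not the model as published.

```python
def implied_forward_pe(cost_of_equity, roe, payout_ratio):
    """Implied forward P/E under Gordon-model assumptions:
    growth g = ROE * (1 - payout); price P = D1 / (k - g);
    hence P / E1 = payout / (k - g). Requires k > g."""
    g = roe * (1.0 - payout_ratio)
    if cost_of_equity <= g:
        raise ValueError("cost of equity must exceed implied growth")
    return payout_ratio / (cost_of_equity - g)

# Example: k = 14%, ROE = 18%, payout = 60%  ->  g = 7.2%, P/E1 = 0.6 / 0.068
pe = implied_forward_pe(0.14, 0.18, 0.60)
```

Note the k > g restriction: it is one of the assumptions the abstract points to when explaining why so few of the 225 listed companies met the model's criteria.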
|
55 |
Evidence of volatility clustering on the FTSE/JSE Top 40 Index. Louw, Jan Paul. 12 1900 (has links)
Thesis (MBA (Business Management))--Stellenbosch University, 2008. / ENGLISH ABSTRACT: This research report investigated whether evidence of volatility clustering exists on the FTSE/JSE Top 40 Index. The presence of volatility clustering has practical implications for market decisions as well as for the accurate measurement and reliable forecasting of volatility. The report presents an in-depth analysis of volatility, measured over five different return interval sizes covering the sample in non-overlapping periods. The volatility of each return interval size was analysed to reveal its distributional characteristics and whether it violated the normality assumption. The volatility was also analysed to identify in which way, if any, subsequent periods are correlated. For each of the interval sizes, one-step-ahead volatility forecasting was conducted using Linear Regression, Exponential Smoothing, GARCH(1,1) and EGARCH(1,1) models.
The results were analysed using appropriate criteria to determine which of the forecasting models was more powerful. The forecasting models range from very simple to very complex; the rationale was to determine whether more complex models outperform simpler ones.
The analysis showed that there was sufficient evidence to conclude that there is volatility clustering on the FTSE/JSE Top 40 Index. It further showed that more complex models such as the GARCH(1,1) and EGARCH(1,1) only marginally outperformed less complex models, and do not offer any real benefit over simpler models such as Linear Regression. This can be ascribed to the mean-reversion effect of volatility and gives further insight into the volatility structure over the sample period. / AFRIKAANSE OPSOMMING: Die navorsingsverslag ondersoek die FTSE/JSE Top 40 Indeks om te bepaal of daar genoegsame bewyse is dat volatiliteitsbondeling teenwoordig is. Die teenwoordigheid van volatiliteitsbondeling het praktiese implikasies vir besluite in finansiële markte en akkurate en betroubare volatiliteitsvooruitskattings. Die verslag doen 'n diepgaande ontleding van volatiliteit, gemeet oor vyf verskillende opbrengs-intervalgroottes wat die steekproef dek in nie-oorvleuelende periodes. Elk van die opbrengs-intervalgroottes se volatiliteitsverdelings word ontleed om te bepaal of dit verskil van die normaalverdeling. Die volatiliteit van die intervalle word ook ondersoek om te bepaal tot watter mate, indien enige, opeenvolgende waarnemings gekorreleer is. Vir elk van die intervalgroottes word 'n een-stap-vooruit vooruitskatting van volatiliteit gedoen. Dit word gedoen deur middel van Lineêre Regressie, Eksponensiële Gladstryking, GARCH(1,1) en EGARCH(1,1) modelle. Die resultate word ontleed deur middel van erkende kriteria om te bepaal watter model die beste vooruitskattings lewer. Die modelle strek van baie eenvoudig tot baie kompleks; die rasionaal is om te bepaal of meer komplekse modelle beter resultate lewer as eenvoudiger modelle. Die ontleding toon dat daar genoegsame bewyse is om tot die gevolgtrekking te kom dat daar volatiliteitsbondeling op die FTSE/JSE Top 40 Indeks is. Dit toon verder dat meer komplekse vooruitskattingsmodelle soos die GARCH(1,1) en die EGARCH(1,1) slegs marginaal beter presteer het as die eenvoudiger vooruitskattingsmodelle en nie enige werklike voordeel bied bo eenvoudiger modelle soos Lineêre Regressie nie. Dit kan toegeskryf word aan die neiging van volatiliteit om terug te keer tot die gemiddelde, wat verdere insig lewer oor volatiliteit gedurende die steekproef.
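The GARCH(1,1) recursion behind the forecasts compared above can be sketched as follows. The parameter values are illustrative, not estimates for the FTSE/JSE Top 40 Index; the simulated series merely exhibits the volatility clustering the report tests for.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Returns in-sample variances plus the one-step-ahead forecast."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = np.var(returns)  # initialise at the sample variance
    for t in range(len(returns)):
        sigma2[t + 1] = omega + alpha * returns[t] ** 2 + beta * sigma2[t]
    return sigma2

# Simulate returns with clustering (unconditional variance = 0.05/0.05 = 1),
# then produce the one-step-ahead variance forecast.
rng = np.random.default_rng(42)
omega, alpha, beta = 0.05, 0.1, 0.85
r = np.empty(1000)
s2 = omega / (1.0 - alpha - beta)
for t in range(1000):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

sig2 = garch11_variance(r, omega, alpha, beta)
forecast = sig2[-1]  # one-step-ahead conditional variance forecast
```

The persistence alpha + beta = 0.95 is what makes large squared returns cluster; as the abstract notes, mean reversion of sigma2 toward omega / (1 - alpha - beta) is also why simple forecasters remain competitive.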
|
56 |
Modelling and forecasting the telephone services application calls. January 1998 (has links)
by Moon-Tong Chan. / Thesis submitted in: December 1997. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1998. / Includes bibliographical references (leaves 123-124). / Abstract also in Chinese. / Contents:
Chapter 1 Introduction
1.1 The Data Set
Chapter 2 The Box-Jenkins Time Series Models
2.1 The White-noise Process
2.2 Stationarity of Time Series
2.3 Differencing
2.4 Seasonal ARIMA Models (SARIMA Models)
2.5 Intervention Models
2.6 The Three Phases of the ARMA Procedure
Chapter 3 Seasonal ARMA Models with Several Mean Levels
3.1 Review of Linear Models
3.1.1 Method of Weighted Least Squares
3.2 The Proposed Model
3.2.1 The Weightings
3.2.2 Selection of Submodels
3.2.3 Estimation of Model (3.4)
3.3 Model Adequacy Checking
3.3.1 Checking of Independence of Residuals
3.3.2 Checking of Normality of Residuals
3.4 Forecasting
Chapter 4 Comparison
4.1 Similarities and Differences Between the Two Models
4.2 Model Comparative Criterion
4.2.1 Model Fitting Comparison
4.2.2 Model Forecasting Comparison
4.3 Conclusion
4.4 Generation of Predicted Hourly Calls
4.5 Extension
Appendix A / Appendix B / Appendix C / References
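The differencing steps of Sections 2.3-2.4 can be sketched for an hourly series with a daily season; the series below is synthetic and purely illustrative of how the (1 - B^s) and (1 - B) operators remove a seasonal cycle and a trend.

```python
import numpy as np

def difference(y, lag=1):
    """Apply the differencing operator (1 - B^lag): y_t - y_{t-lag}."""
    y = np.asarray(y, dtype=float)
    return y[lag:] - y[:-lag]

# An hourly series with a 24-hour seasonal cycle plus a linear trend.
t = np.arange(240)
y = 0.5 * t + 10 * np.sin(2 * np.pi * t / 24)

# Seasonal differencing (lag 24) cancels the daily cycle exactly and turns
# the linear trend into a constant; one further regular difference removes
# that constant, leaving a stationary (here, zero) series.
seasonal = difference(y, lag=24)
stationary = difference(seasonal, lag=1)
```

In SARIMA notation this is the (1 - B)(1 - B^24) transformation; in practice one would difference only as far as needed to achieve stationarity before fitting the ARMA part.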
|
57 |
Modelling the interactions across international stock, bond and foreign exchange markets. Hakim, Abdul. January 2009 (has links)
[Truncated abstract] Given the theoretical and historical evidence supporting the benefits of investing internationally, there is little knowledge available on proper international portfolio construction in terms of how much should be invested in foreign countries, which countries should be targeted, and which types of assets should be included in the portfolio. The prospects of these benefits depend on how market volatilities, cross-country correlations, and currency risks change in the future. Another important issue in international portfolio diversification is the growth of newly emerging markets, which have different characteristics from the developed ones. Addressing these issues, the thesis investigates the nature of volatility, conditional correlations, and the impact of currency risks in international portfolios, in both developed and emerging markets. Chapter 2 provides a literature review on volatility spillovers, conditional correlations, and forecasting both VaR and conditional correlations using GARCH-type models. Attention is paid to the estimated models, types of assets, regions of markets, and tests of forecasts. Chapter 3 investigates the nature of volatility spillovers across international assets, which is important in determining the nature of a portfolio's volatility when most assets seem to be connected. ... The impacts of incorporating volatility spillovers and asymmetric effects on the forecast performance of conditional correlations will also be examined in this thesis. The VARMA-AGARCH model of McAleer, Hoti and Chan (2008) and the VARMA-GARCH model of Ling and McAleer (2003) will be estimated to accommodate volatility spillovers and asymmetric effects. The CCC model of Bollerslev (1990) will also be estimated as a benchmark, as the model does not incorporate volatility spillovers or asymmetric effects.
Given the information about the nature of conditional correlations obtained from the forecasts using a rolling-window technique, Section 2 of Chapter 4 investigates the nature of conditional correlations by estimating two multivariate GARCH models allowing for time-varying conditional correlations, namely the DCC model of Engle (2002) and the GARCC model of McAleer et al. (2008). Chapter 5 conducts VaR forecasts, considering the important role of VaR as a standard tool for risk management. In particular, the chapter investigates whether the volatility spillovers and time-varying conditional correlations discussed in the previous two chapters help provide better VaR forecasts. The BEKK model of Engle and Kroner (1995) and the DCC model of Engle (2002) will be estimated to incorporate volatility spillovers and conditional correlations, respectively. The DVEC model of Bollerslev et al. (1988) and the CCC model of Bollerslev (1990) will be estimated to serve as benchmarks, as neither model incorporates volatility spillovers or time-varying conditional correlations. Chapter 6 concludes the thesis and lists some possible future research.
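The CCC benchmark mentioned above combines univariate conditional variances with a constant correlation matrix, H_t = D_t R D_t, where D_t is the diagonal matrix of conditional standard deviations. A minimal sketch with illustrative inputs (the standard deviations would in practice come from univariate GARCH fits, and R from standardised residuals):

```python
import numpy as np

def ccc_covariance(cond_sd, R):
    """Constant-conditional-correlation covariance: H_t = D_t R D_t,
    with D_t = diag(conditional standard deviations at time t)."""
    D = np.diag(cond_sd)
    return D @ R @ D

# Two assets with conditional sds of 2% and 3% and constant correlation 0.6.
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])
H = ccc_covariance(np.array([0.02, 0.03]), R)
```

Time variation in H_t under CCC comes only through the variances; DCC and GARCC generalise exactly this step by letting R itself evolve over time.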
|
59 |
Essays on real-time econometrics and forecasting. Modugno, Michèle. 14 September 2011 (has links)
The thesis contains four essays covering topics in the field of real-time econometrics and forecasting.

The first Chapter, entitled “An area wide real time data base for the euro area” and coauthored with Domenico Giannone, Jerome Henry and Magda Lalik, describes how we constructed a real-time database for the euro area covering more than 200 series regularly published in the European Central Bank Monthly Bulletin, as made available ahead of publication to the Governing Council members before their first meeting of the month.

Recent research has emphasised that data revisions can be large for certain indicators and can have a bearing on the decisions made, as well as affect the assessment of their relevance. It is therefore key to be in a position to reconstruct the historical environment of economic decisions at the time they were made by private agents and policy-makers, rather than using the data as they become available some years later. For this purpose, it is necessary to have the information in the form of all the different vintages of data as they were published in real time, the so-called "real-time data" that reflect the economic situation at a given point in time when models are estimated or policy decisions made.

We describe the database in detail and study the properties of the euro area real-time data flow and data revisions, also providing comparisons with the United States and Japan. We finally illustrate how such revisions can contribute to the uncertainty surrounding key macroeconomic ratios and the NAIRU.

The second Chapter, entitled “Maximum likelihood estimation of large factor model on datasets with arbitrary pattern of missing data”, is based on joint work with Marta Banbura. It proposes a methodology for the estimation of factor models on large cross-sections with a general pattern of missing data.
In contrast to Giannone et al. (2008), we can handle datasets that are not only characterised by a 'ragged edge', but can also include, for example, mixed-frequency or short-history indicators. The latter is particularly relevant for the euro area or other young economies, for which many series have been compiled only recently. We adopt the maximum likelihood approach, which, apart from its flexibility with regard to the pattern of missing data, is also more efficient and allows imposing restrictions on the parameters. It has been shown by Doz et al. (2006) to be consistent, robust and computationally feasible also in the case of large cross-sections. To circumvent the computational complexity of a direct likelihood maximisation in the case of a large cross-section, Doz et al. (2006) propose to use the iterative Expectation-Maximisation (EM) algorithm. Our contribution is to modify the EM steps for the case of missing data and to show how to augment the model in order to account for the serial correlation of the idiosyncratic component. In addition, we derive the link between the unexpected part of a data release and the forecast revision, and illustrate how this can be used to understand the sources of the latter in the case of simultaneous releases.

We use this methodology for short-term forecasting and backdating of euro area GDP on the basis of a large panel of monthly and quarterly data.

The third Chapter, entitled “Nowcasting Inflation Using High Frequency Data”, proposes a methodology for nowcasting and forecasting inflation using data with sampling frequency higher than monthly.
In particular, this Chapter focuses on the energy component of inflation, given the availability of data such as the Weekly Oil Bulletin Price Statistics for the euro area, the Weekly Retail Gasoline and Diesel Prices for the US, and daily spot and futures prices of crude oil.

Although nowcasting inflation is a novel idea, there is a rather long literature on nowcasting GDP. The use of higher-frequency indicators to nowcast or forecast lower-frequency variables started with monthly data for GDP, a quarterly variable released with a substantial delay (e.g. two months after the end of the reference quarter for euro area GDP).

The estimation adopts the methodology described in Chapter 2, modeling the data as a trading-day-frequency factor model with missing observations in a state space representation. In contrast to other procedures, the proposed methodology models all the data within a single unified framework that allows one to produce forecasts of all the involved variables from a factor model which, by construction, does not suffer from overparametrisation. Moreover, it offers the possibility to disentangle the model-based "news" in each release and to assess its impact on the forecast revision. The Chapter provides an illustrative example of this procedure, focusing on a specific month.

In order to assess the importance of using high frequency data for forecasting inflation, this Chapter compares the forecast performance of univariate models, i.e. a random walk and an autoregressive process, with that of the model using weekly and daily data.
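The two univariate benchmarks can be made concrete on synthetic data. The following is a generic sketch of the out-of-sample comparison, not the Chapter's actual exercise: the series is a simulated persistent process standing in for an inflation component, and the AR(1) coefficient is re-estimated on an expanding window.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated persistent series (illustrative stand-in for an inflation component).
T = 300
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):                      # AR(1) data-generating process
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

split = 200
rw_err, ar_err = [], []
for t in range(split, T - 1):
    # Random-walk forecast: next period equals the current value.
    rw_fc = y[t]
    # AR(1) forecast with the coefficient re-estimated on the expanding sample.
    x, z = y[:t], y[1:t + 1]
    phi = (x @ z) / (x @ x)
    ar_fc = phi * y[t]
    rw_err.append((y[t + 1] - rw_fc) ** 2)
    ar_err.append((y[t + 1] - ar_fc) ** 2)

print("RW  RMSE:", np.sqrt(np.mean(rw_err)))
print("AR1 RMSE:", np.sqrt(np.mean(ar_err)))
```

In the Chapter the interesting question is whether the weekly and daily oil data allow the factor model to beat both of these benchmarks, not just whether the AR beats the random walk.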
The empirical evidence shows that exploiting high frequency oil data not only lets us nowcast and forecast the energy component of inflation about twice as accurately as the proposed benchmarks, but also yields a similar improvement for total inflation.

The fourth Chapter, entitled “The forecasting power of international yield curve linkages” and coauthored with Kleopatra Nikolaou, investigates dependency patterns between the yield curves of Germany and the US by means of an out-of-sample forecast exercise.

The motivation for this Chapter stems from the fact that our knowledge to date of dependency patterns among the yield curves of different countries is limited. The empirical yield curve literature documents strong contemporaneous interdependencies of yield curves across countries, in line with increased globalization and financial integration; it does not, however, investigate non-contemporaneous correlations. Yet clear indication in favour of such dependency patterns is recorded in studies focusing on specific interest rates, which look at the role of certain countries as global players (see Frankel et al. (2004), Chinn and Frankel (2005) and Wang et al. (2007)). Evidence from these studies suggests a leading role for the US. Moreover, dependency patterns recorded in real business cycles between the US and the euro area (Giannone and Reichlin, 2007) can also rationalise such linkages, to the extent that output affects nominal interest rates.

We propose, estimate and forecast (out-of-sample) a novel dynamic factor model for the yield curve, in which dynamic information from foreign yield curves is introduced into domestic yield curve forecasts: the International Dependency Model (IDM). We compare the yield curve forecast under the IDM with that of a purely domestic model and of a model that allows for contemporaneous common global factors.
These models serve as useful comparisons. The domestic model bears direct modeling links with the IDM, as it is nested within the IDM. The global model bears less direct links in terms of modeling but, in line with the IDM, it is also an international model and serves to highlight the advantages of introducing international information into yield curve forecasts. However, the global model aims to identify contemporaneous linkages between the yield curves of the two countries, whereas the IDM also allows for detecting lead-lag dependency patterns.

Our results show that shocks appear to be diffused in a rather asymmetric manner across the two countries. Namely, we find a unidirectional causality effect that runs from the US to Germany. This effect is stronger in the last ten years, where out-of-sample forecasts for Germany using US information are even more accurate than random walk forecasts. Our statistical results demonstrate a more independent role for the US. / Doctorat en Sciences économiques et de gestion / info:eu-repo/semantics/nonPublished
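The flavour of the unidirectional-spillover finding can be conveyed on simulated data in which a "US" series leads a "German" one by one period. This is a stylised stand-in for the IDM exercise, not the model itself: the "international" forecast simply adds the lagged foreign series as a regressor and is evaluated out of sample against the purely domestic one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate two persistent series where "us" leads "de" by one period.
T = 400
us = np.empty(T)
de = np.empty(T)
us[0] = de[0] = 0.0
for t in range(1, T):
    us[t] = 0.8 * us[t - 1] + rng.standard_normal()
    de[t] = 0.4 * de[t - 1] + 0.5 * us[t - 1] + rng.standard_normal()

split = 300
sse_dom, sse_intl = 0.0, 0.0
for t in range(split, T - 1):
    # Domestic model: regress de[s] on de[s-1] over the sample so far.
    Xd = np.column_stack([np.ones(t - 1), de[:t - 1]])
    bd = np.linalg.lstsq(Xd, de[1:t], rcond=None)[0]
    fc_dom = bd[0] + bd[1] * de[t]
    # "International" model: add the lagged foreign series as a regressor.
    Xi = np.column_stack([np.ones(t - 1), de[:t - 1], us[:t - 1]])
    bi = np.linalg.lstsq(Xi, de[1:t], rcond=None)[0]
    fc_intl = bi[0] + bi[1] * de[t] + bi[2] * us[t]
    sse_dom += (de[t + 1] - fc_dom) ** 2
    sse_intl += (de[t + 1] - fc_intl) ** 2

print("domestic MSE:     ", sse_dom / (T - 1 - split))
print("with US info MSE: ", sse_intl / (T - 1 - split))
```

Because the foreign series genuinely leads the domestic one in this simulation, the augmented forecast beats the domestic one, which is the asymmetric pattern the Chapter documents for the US and Germany.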
|
60 |
Effects of HRU Size on PRMS Performance in 30 Western U.S. Basins. Steele, Madeline Olena. 18 April 2013 (has links)
Semi-distributed hydrological models are often used for streamflow forecasting, hydrological climate change impact assessments, and other applications. In such models, basins are broken up into hydrologic response units (HRUs), which are assumed to have a relatively homogeneous response to precipitation. HRUs are delineated in a variety of ways, and the procedure used may affect model performance. HRU delineation procedures have been researched, but it is still not clear how important these subdivision schemes are or which delineation methods are most effective. To start addressing this knowledge gap, this project investigated whether HRU size has a significant effect on streamflow simulation at the mouth of a watershed. To test this, 30 gaged, relatively unimpaired western U.S. basins were each modeled with six HRU sets of different sizes using the Precipitation Runoff Modeling System (PRMS). To isolate size as a variable, HRUs were delineated using stream catchments; for each basin, streams were defined at six different threshold levels, producing HRUs of differing sizes. Nineteen model parameters were derived for each HRU using nationally consistent GIS datasets, and all other model parameters were left at default values. Climate inputs were derived from a national 4-km² gridded daily climate dataset. After calibration, four goodness-of-fit metrics were calculated for daily streamflow for each HRU set. Uncalibrated model performance was generally poor for a variety of reasons, but comparison of the models was still informative. Results for the 30 basins across the six HRU size classes showed that HRU size did not significantly impact model performance. However, in basins with less total precipitation and higher elevation, sensitivity of model performance to HRU subdivision level was slightly, though not significantly, greater.
Findings indicate that, in most basins, little subdivision may be required for good model performance, allowing for desirable simplicity and fewer degrees of freedom without sacrificing runoff simulation accuracy.
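The abstract does not name the four goodness-of-fit metrics it computed; Nash-Sutcliffe efficiency (NSE) and percent bias (PBIAS) are two common choices for daily streamflow evaluation and are sketched here on hypothetical flows as an illustration of what such a comparison involves.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive values indicate the simulation
    underestimates total observed volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Hypothetical observed and simulated daily flows for one HRU set.
obs = [3.0, 5.0, 10.0, 7.0, 4.0]
sim = [2.5, 5.5, 9.0, 7.5, 4.5]
print("NSE:  ", nse(obs, sim))
print("PBIAS:", pbias(obs, sim))
```

Computing such metrics for each of the six HRU sets in each basin, and comparing them across basins, is the kind of evaluation the study describes.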
|