181 |
Portfolio management using structured products - The capital guarantee puzzle. Hveem, Markus January 2011 (has links)
No description available.
|
182 |
Can a simple model for the interaction between value and momentum traders explain how equity futures react to earnings announcements? Dellner, Johan January 2011 (has links)
No description available.
|
183 |
Betting on Volatility: A Delta Hedging Approach. Zhong, Liang January 2011 (has links)
No description available.
|
184 |
On Constructing a Market Consistent Economic Scenario Generator. Baldvindsdottir, Ebba-Kristin January 2011 (has links)
No description available.
|
185 |
Performance and risk analysis of the Hodrick-Prescott filter. Bergroth, Jonas January 2011 (has links)
No description available.
|
186 |
On Prediction and Filtering of Stock Index Returns. Hallgren, Fredrik January 2011 (has links)
No description available.
|
187 |
Markov chain Monte Carlo for rare-event simulation in heavy-tailed settings. Gudmundsson, Thorbjörn January 2013 (has links)
No description available.
|
188 |
Bagged Prediction Accuracy in Linear Regression. Kimby, Daniel January 2022 (has links)
Bootstrap aggregation, or bagging, is a prominent method in statistical inquiry used to improve predictive performance, and it is useful both to confirm the efficacy of such improvements and to extend them. This thesis investigates whether the results of Leo Breiman's (1996) paper "Bagging predictors", in which bagging is shown to lower prediction error, can be replicated. In addition, the predictive performance of weighted bagging is investigated, where the weights are a function of the residual variance. The data are simulated, consisting of a numerical outcome variable and 30 independent variables. Linear regression is run with forward stepwise selection, selecting models with the lowest SSE, and predictions are saved for all 30 models. Separately, forward stepwise selection is run with selection on the significance of the added coefficient's p-value, saving only one final model. Prediction error is measured as mean squared error. The results suggest that, under lowest-SSE selection, both bagged methods lower prediction error, with unweighted bagging performing best; this is congruent with Breiman's (1996) results, with minor differences. Under p-value selection, weighted bagging performs best. Further research should be conducted on real data to verify these results, in particular with regard to weighted bagging.
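The mechanics of bagging a forward-selection predictor can be sketched as follows. This is a minimal illustration only: the data-generating process, the number of selected variables, and the number of bootstrap replicates are all assumptions made for the example, not the thesis's actual simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: a numeric outcome driven by 5 of 30 covariates plus noise.
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, 2.0, -2.0, 1.5, -1.0]
y = X @ beta + rng.normal(scale=2.0, size=n)
X_new = rng.normal(size=(500, p))
y_new = X_new @ beta + rng.normal(scale=2.0, size=500)

def ols_predict(X_tr, y_tr, X_te, cols):
    """Fit OLS with intercept on the given columns; return predictions for X_te."""
    A = np.column_stack([np.ones(len(X_tr)), X_tr[:, cols]])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([np.ones(len(X_te)), X_te[:, cols]]) @ coef

def forward_select(X_tr, y_tr, k=5):
    """Greedy forward selection: repeatedly add the column that lowers SSE most."""
    selected, remaining = [], list(range(X_tr.shape[1]))
    for _ in range(k):
        sse = {j: np.sum((y_tr - ols_predict(X_tr, y_tr, X_tr, selected + [j])) ** 2)
               for j in remaining}
        best = min(sse, key=sse.get)
        selected.append(best)
        remaining.remove(best)
    return selected

# Single (unbagged) predictor: one round of selection on the full training set.
cols = forward_select(X, y)
mse_single = np.mean((y_new - ols_predict(X, y, X_new, cols)) ** 2)

# Bagged predictor: rerun selection + fit on each bootstrap resample,
# then average the test-set predictions.
B = 50
bagged = np.zeros(len(X_new))
for _ in range(B):
    idx = rng.integers(0, n, size=n)  # bootstrap sample, drawn with replacement
    bagged += ols_predict(X[idx], y[idx], X_new, forward_select(X[idx], y[idx]))
bagged /= B
mse_bagged = np.mean((y_new - bagged) ** 2)
```

Bagging pays off for this kind of procedure because subset selection is unstable: small perturbations of the training data change which columns are chosen, and averaging predictions over resamples smooths that variability out.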
|
189 |
Evaluating Risk Factors with Regression: A Review and Simulation Study of Current Practices. Reinhammar, Ragna January 2022 (has links)
The term "risk factor" is used synonymously with both predictor and causal factor, and causal aims of explanatory analyses are rarely stated explicitly. Consequently, the concepts of explaining and predicting are conflated in risk factor research. This thesis reviews the current practice of evaluating risk factors with regression in three medical journals and identifies three common covariate selection strategies: adjusting for a pre-specified set, univariable pre-filtering, and stepwise selection. The implication of "risk factor" varies in the reviewed articles and many authors make implicit causal definitions of the term. In the articles, logistic regression is the most frequently used model, and effect estimates are often reported as conditional odds ratios. The thesis compares current practices to estimating a marginal odds ratio in a simulation study mimicking data from Louapre et al. (2020). The marginal odds ratio is estimated with a regression imputation estimator and an Augmented Inverse Probability Weighting estimator. The simulation study illustrates the difference between conditional and marginal odds ratios and examines the performance of estimators under correctly specified and misspecified models. From the simulation, it is concluded that the estimators of the marginal odds ratio are consistent and robust against certain model misspecifications.
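The distinction between a conditional and a marginal odds ratio, and the regression imputation (g-computation) estimator of the latter, can be sketched as follows. The data-generating model and effect sizes below are assumptions for illustration only, not the design of the thesis's simulation study.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, iters=25):
    """Logistic regression by Newton-Raphson; X must include an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        grad = X.T @ (y - p)
        hess = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Assumed data: covariate Z affects both exposure T and binary outcome Y.
n = 5000
Z = rng.normal(size=n)
T = rng.binomial(1, sigmoid(0.5 * Z))
Y = rng.binomial(1, sigmoid(-1.0 + 1.0 * T + 1.5 * Z))

# Conditional OR: exponentiated T-coefficient from the Z-adjusted logistic model.
X = np.column_stack([np.ones(n), T, Z])
beta = fit_logistic(X, Y)
or_conditional = np.exp(beta[1])

# Marginal OR by regression imputation: predict every subject's outcome
# probability under T=1 and under T=0, average, then form the odds ratio.
p1 = sigmoid(np.column_stack([np.ones(n), np.ones(n), Z]) @ beta).mean()
p0 = sigmoid(np.column_stack([np.ones(n), np.zeros(n), Z]) @ beta).mean()
or_marginal = (p1 / (1 - p1)) / (p0 / (1 - p0))
```

Because the odds ratio is non-collapsible, the marginal estimate typically sits closer to 1 than the conditional one even when the adjustment model is correct, which is exactly the difference the simulation study illustrates.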
|
190 |
Measures of statistical dependence for feature selection: Computational study. Alshalabi, Mohamad January 2022 (has links)
The importance of feature selection for statistical and machine learning models derives from their explainability and the ability to explore new relationships, leading to new discoveries. Straightforward feature selection methods measure the dependencies between the potential features and the response variable. This thesis studies the selection of features according to a maximal statistical dependency criterion based on generalized Pearson's correlation coefficients, e.g., Wijayatunga's coefficient. I present a framework for feature selection based on these coefficients for high-dimensional feature variables. The results are compared to those obtained by applying an elastic net regression (for high-dimensional data). The generalized Pearson's correlation coefficient is a metric-based measure in which the metric is the Hellinger distance, regarded as a distance between probability distributions. Wijayatunga's coefficient was originally proposed for the discrete case; here, we generalize it to continuous variables by discretization and kernelization. It is interesting to see how discretization behaves as the bins are made finer. The study employs both synthetic and real-world data to illustrate the validity and power of this feature selection process. Moreover, a new method of normalization for mutual information is included. The results show that both measures have considerable potential in detecting associations. The feature selection experiment shows that elastic net regression is superior to our proposed method; nevertheless, more investigation could be done on this subject.
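The general recipe behind a Hellinger-distance dependence measure can be sketched as follows. Note this is only the generic idea of discretizing two variables and comparing the empirical joint distribution with the product of its marginals; the exact definition and normalization of Wijayatunga's coefficient, and the kernelized continuous version studied in the thesis, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def hellinger_dependence(x, y, bins=8):
    """Dependence score in [0, 1]: Hellinger distance between the discretized
    empirical joint distribution and the product of its marginals.
    0 indicates empirical independence; larger values mean stronger association."""
    # Quantile-based bin edges so every bin receives roughly equal mass.
    ex = np.quantile(x, np.linspace(0, 1, bins + 1))[1:-1]
    ey = np.quantile(y, np.linspace(0, 1, bins + 1))[1:-1]
    cx, cy = np.digitize(x, ex), np.digitize(y, ey)
    joint = np.zeros((bins, bins))
    np.add.at(joint, (cx, cy), 1.0)       # 2-D histogram of bin pairs
    joint /= joint.sum()
    indep = joint.sum(axis=1, keepdims=True) * joint.sum(axis=0, keepdims=True)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(joint) - np.sqrt(indep)) ** 2)))

# Feature screening: rank candidate features by dependence with the response.
n = 2000
x1 = rng.normal(size=n)                           # informative feature
x2 = rng.normal(size=n)                           # pure-noise feature
response = np.sin(x1) + 0.3 * rng.normal(size=n)  # nonlinear signal in x1 only

scores = {"x1": hellinger_dependence(x1, response),
          "x2": hellinger_dependence(x2, response)}
```

A nonlinear association like the one above is invisible to the ordinary Pearson correlation near the sine's turning points but still separates the informative feature from noise here; the flip side, which motivates the thesis's kernelization, is that the empirical estimate degrades as the bins are made finer relative to the sample size.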
|