81

Risk factor modeling of Hedge Funds' strategies

Radosavčević, Aleksa January 2017 (has links)
This thesis aims to identify the main market risk factors driving the different strategies implemented by hedge funds, by looking at correlation coefficients, implementing Principal Component Analysis and analyzing the loadings of the first three principal components, which explain the largest portion of the variation in hedge funds' returns. In the next step, a stepwise regression iteratively includes and excludes market risk factors for each strategy, searching for the combination of risk factors that yields the best-fitting model according to the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Lastly, to avoid spurious results and overcome model uncertainty issues, a Bayesian Model Averaging (BMA) approach was taken. Key words: Hedge Funds, hedge funds' strategies, market risk, principal component analysis, stepwise regression, Akaike Information Criterion, Bayesian Information Criterion, Bayesian Model Averaging Author's e-mail: aleksaradosavcevic@gmail.com Supervisor's e-mail: mp.princ@seznam.cz
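As a rough illustration of the selection step described in this abstract, the sketch below runs a forward stepwise search ranked by AIC (the thesis also iterates exclusions, repeats the exercise with BIC, and finishes with BMA). Here `y` stands for one strategy's return series and `X` for a DataFrame of candidate market risk factors; both are hypothetical inputs, not the thesis's data.

```python
import numpy as np
import statsmodels.api as sm

def stepwise_aic(y, X):
    """Greedy forward selection: repeatedly add the factor that lowers AIC most."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.OLS(y, np.ones(len(y))).fit().aic   # intercept-only baseline
    while remaining:
        trials = [(sm.OLS(y, sm.add_constant(X[selected + [f]])).fit().aic, f)
                  for f in remaining]
        aic, factor = min(trials)
        if aic >= best_aic:        # no remaining factor improves the fit; stop
            break
        best_aic = aic
        selected.append(factor)
        remaining.remove(factor)
    return selected, best_aic
```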
82

Community College Trustee Orientation and Training Influence on Use of Best Practices

Stine, Cory M. January 2012 (has links)
No description available.
83

Automatic Development of Pharmacokinetic Structural Models

Hamdan, Alzahra January 2022 (has links)
Introduction: The current development strategy for population pharmacokinetic models is a complex and iterative process that is performed manually by modellers. Such a strategy is time-demanding, subjective, and dependent on the modellers' experience. This thesis presents a novel model building tool that automates the development of pharmacokinetic (PK) structural models.
Methods: Modelsearch is a tool in the Pharmpy library, an open-source package for pharmacometrics modelling, that searches for the best structural model using an exhaustive stepwise search algorithm. Given a dataset, a starting model and a pre-specified search space of structural model features, the tool creates and fits a series of candidate models that are then ranked by a selection criterion, leading to the selection of the best model. The Modelsearch tool was used to develop structural models for 10 clinical PK datasets (5 orally and 5 i.v. administered drugs). A starting model for each dataset was generated using the assemblerr package in R; it included first-order (FO) absorption without absorption delay for oral drugs, one-compartment disposition, FO elimination, a proportional residual error model, and inter-individual variability (IIV) on the starting model parameters with a correlation between clearance (CL) and central volume of distribution (VC). The model search space covered absorption and absorption delay (for oral drugs), distribution and elimination. To understand the effects of different IIV structures on structural model selection, five model search approaches, differing in the IIV structure of candidate models, were investigated: 1. naïve pooling, 2. IIV on starting model parameters only, 3. additional IIV on the mean delay time parameter, 4. additional diagonal IIVs on newly added parameters, and 5. full block IIVs. Additionally, the placement of structural model selection within a fully automatic model development workflow was investigated. Three strategies were evaluated: SIR, SRI and RSI, depending on the development order of the structural model (S), IIV model (I) and residual error model (R). Moreover, the NONMEM errors encountered when using the tool were investigated and categorized so that they can be handled in the automatic model building workflow.
Results: The final selected structural models for each drug differed between the five model search approaches. The same distribution components were selected through Approaches 1 and 2 for 6/10 drugs. Approach 2 also identified an absorption delay component in 4/5 oral drugs, whereas the naïve pooling approach did so for only 2 drugs. Compared to Approaches 1 and 2, Approaches 3, 4 and 5 tended to select more complex models and more often resulted in minimization errors during the search. For the SIR, SRI and RSI investigations, the same structural model was selected for 9/10 drugs, with a significantly higher run time for the RSI strategy than for the other strategies. The NONMEM errors were grouped into four categories according to the suggested handling, which is valuable for further improving the tool's automatic error handling.
Conclusions: The Modelsearch tool was able to automatically select a structural model under different strategies for setting the IIV model structure. This novel tool enables the evaluation of numerous combinations of model components, which would not be possible with a traditional manual model building strategy. Furthermore, the tool is flexible and can support multiple research investigations into how best to implement structural model selection in a fully automatic model development workflow.
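A schematic of the exhaustive search idea, reduced to its core: enumerate every combination of structural features, score each candidate, keep the lowest BIC. This is not the actual Pharmpy Modelsearch API; the feature space below and the `fit_and_score` callable (which would wrap a NONMEM run and return its BIC) are placeholders for illustration only.

```python
from itertools import product

# Hypothetical structural feature space, mirroring the search space above.
SEARCH_SPACE = {
    "absorption": ["FO", "ZO"],
    "delay": [None, "lagtime", "transits"],
    "compartments": [1, 2, 3],
    "elimination": ["FO", "MM"],
}

def exhaustive_search(fit_and_score):
    """Fit every feature combination and return the lowest-BIC candidate.
    `fit_and_score` maps a feature dict to a BIC value (e.g. via NONMEM)."""
    keys = list(SEARCH_SPACE)
    best = None
    for values in product(*(SEARCH_SPACE[k] for k in keys)):
        candidate = dict(zip(keys, values))
        bic = fit_and_score(candidate)
        if best is None or bic < best[0]:
            best = (bic, candidate)
    return best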
84

Comparing Assessment Methods As Predictors Of Student Learning In Undergraduate Mathematics

Shorter, Nichole 01 January 2008 (has links)
This experiment was designed to determine which of three assessment methods best predicts student learning, as measured by posttest grades, in an undergraduate mathematics course: continuous assessment (in the form of daily in-class quizzes), cumulative assessment (in the form of online homework), or project-based learning. Participants included 117 university-level undergraduate freshmen enrolled in a course titled "Mathematics for Calculus". Initially, a multiple regression model was formulated to relate the predictor variables (the continuous assessment, cumulative assessment, and project scores) to the outcome variable (the posttest scores). However, because of possible multicollinearity between the cumulative assessment and continuous assessment predictors, a stepwise regression model was implemented, which forced the cumulative assessment predictor out of the resulting model on the basis of statistical significance and hypothesis testing. The final stepwise regression model included continuous assessment scores and project scores as predictors of students' posttest scores at a 99% confidence level. Results indicated that the continuous assessment scores ultimately best predicted students' posttest scores.
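A minimal sketch of the multicollinearity diagnostic behind that decision, on synthetic data in which the homework (cumulative) scores nearly duplicate the quiz (continuous) scores; all variable names and numbers are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
quiz = rng.normal(70, 10, 117)                    # continuous assessment scores
homework = 0.9 * quiz + rng.normal(0, 3, 117)     # cumulative, collinear with quiz
project = rng.normal(75, 8, 117)                  # project-based learning scores
scores = pd.DataFrame({"quiz": quiz, "homework": homework, "project": project})

X = sm.add_constant(scores)
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # VIFs well above ~5-10 flag near-redundant predictors that a
             # stepwise procedure will tend to force out of the model
```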
85

Development of a geovisual analytics environment using parallel coordinates with applications to tropical cyclone trend analysis

Steed, Chad A 13 December 2008 (has links)
A global transformation is being fueled by unprecedented growth in the quality, quantity, and number of different parameters in environmental data through the convergence of several technological advances in data collection and modeling. Although these data hold great potential for helping us understand many complex and, in some cases, life-threatening environmental processes, our ability to generate such data is far outpacing our ability to analyze it. In particular, conventional environmental data analysis tools are inadequate for coping with the size and complexity of these data. As a result, users are forced to reduce the problem in order to adapt to the capabilities of the tools. To overcome these limitations, we must complement the power of computational methods with human knowledge, flexible thinking, imagination, and our capacity for insight by developing visual analysis tools that distill information into the actionable criteria needed for enhanced decision support. In light of said challenges, we have integrated automated statistical analysis capabilities with a highly interactive, multivariate visualization interface to produce a promising approach for visual environmental data analysis. By combining advanced interaction techniques such as dynamic axis scaling, conjunctive parallel coordinates, statistical indicators, and aerial perspective shading, we provide an enhanced variant of the classical parallel coordinates plot. Furthermore, the system facilitates statistical processes such as stepwise linear regression and correlation analysis to assist in the identification and quantification of the most significant predictors for a particular dependent variable. These capabilities are combined into a unique geovisual analytics system that is demonstrated via a pedagogical case study and three North Atlantic tropical cyclone climate studies using a systematic workflow. In addition to revealing several significant associations between environmental observations and tropical cyclone activity, this research corroborates the notion that enhanced parallel coordinates coupled with statistical analysis can be used for more effective knowledge discovery and confirmation in complex, real-world data sets.
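For a sense of the base visualization, the sketch below draws a plain parallel-coordinates plot of synthetic storm observations; the dynamic axis scaling, conjunctive queries, statistical indicators, and aerial perspective shading described above are features of the authors' system, not of this basic library call, and the variable names and values are made up.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(1)
storms = pd.DataFrame({
    "sst": rng.normal(28, 1, 60),          # sea-surface temperature (degC)
    "shear": rng.normal(10, 3, 60),        # vertical wind shear (m/s)
    "pressure": rng.normal(980, 15, 60),   # minimum central pressure (hPa)
    "max_wind": rng.normal(50, 12, 60),    # maximum sustained wind (m/s)
})
storms["class"] = np.where(storms["max_wind"] > 55, "major", "minor")

cols = ["sst", "shear", "pressure", "max_wind"]
storms[cols] = (storms[cols] - storms[cols].mean()) / storms[cols].std()  # shared scale

parallel_coordinates(storms, class_column="class", colormap="coolwarm", alpha=0.5)
plt.ylabel("standardized value")
plt.show()
```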
86

A Study of the Influence Undergraduate Experiences Have on Student Performance on the Graduate Management Admission Test

Plessner, Von Roderick January 2014 (has links)
No description available.
87

The association between working capital measures and the returns of South African industrial firms

Smith, Marolee Beaumont 12 1900 (has links)
This study investigates the association between traditional and alternative working capital measures and the returns of industrial firms listed on the Johannesburg Stock Exchange. Twenty-five variables for all industrial firms listed for the most recent 10 years were derived from standardised annual balance sheet data of the University of Pretoria's Bureau of Financial Analysis. Traditional liquidity ratios measuring working capital position, activity and leverage, and alternative liquidity measures, were calculated for each of the 135 participating firms for the 10 years. These working capital measures were tested for association with five return measures for every firm over the same period. This was done by means of a chi-square test for association, followed by stepwise multiple regression undertaken to quantify the underlying structural relationships between the return measures and the working capital measures. The results of the tests indicated that the traditional working capital leverage measures, in particular total current liabilities divided by funds flow, and to a lesser extent long-term loan capital divided by net working capital, displayed the greatest associations, and explained the majority of the variance in the return measures. A t-test, undertaken to analyse the size effect on the working capital measures employed by the participating firms, compared firms according to total assets. The results revealed significant differences between the means of the top quartile of firms and the bottom quartile, for eight of the 13 working capital measures included in the study. A nonparametric test was applied to evaluate the sector effect on the working capital measures employed by the participating firms. The rank scores indicated significant differences in the means across the sectors for six of the 13 working capital measures. A decrease in the working capital leverage measures of current liabilities divided by funds flow, and long-term loan capital divided by net working capital, should signal an increase in returns, and vice versa. It is recommended that financial managers consider these findings when forecasting firm returns. / Business Management / D. Com. (Business Management)
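A minimal sketch of the chi-square screening step, assuming firms have been cross-classified into high/low working capital leverage and high/low return groups; the 2x2 counts below are invented for illustration (they merely total the study's 135 firms).

```python
import numpy as np
from scipy.stats import chi2_contingency

#                  high return  low return
table = np.array([[48, 19],    # high current liabilities / funds flow
                  [20, 48]])   # low  current liabilities / funds flow
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")
# A small p-value suggests an association worth quantifying with the
# stepwise multiple regression used in the study.
```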
88

Statistical modelling of return on capital employed of individual units

Burombo, Emmanuel Chamunorwa 10 1900 (has links)
Return on Capital Employed (ROCE) is a popular financial instrument and communication tool for the appraisal of companies. Often, companies' management and other practitioners use untested rules and a behavioural approach when investigating the key determinants of ROCE, instead of the scientific statistical paradigm. The aim of this dissertation was to identify and quantify key determinants of ROCE of individual companies listed on the Johannesburg Stock Exchange (JSE), by comparing classical multiple linear regression, principal components regression, generalized least squares regression, and robust maximum likelihood regression approaches in order to improve companies' decision making. Performance indicators used to arrive at the best approach were the coefficient of determination (R²), adjusted R², and Mean Square Residual (MSE). Since the ROCE variable had positive and negative values, two separate analyses were done. The classical multiple linear regression models were constructed using a stepwise directed search for the dependent variable log ROCE for the two data sets. Assumptions were satisfied and the problem of multicollinearity was addressed. For the positive ROCE data set, the classical multiple linear regression model had an R² of 0.928, an adjusted R² of 0.927 and an MSE of 0.013, and the lead key determinant was Return on Equity (ROE), with positive elasticity, followed by Debt to Equity (D/E) and Capital Employed (CE), both with negative elasticities. The model showed good validation performance. For the negative ROCE data set, the classical multiple linear regression model had an R² of 0.666, an adjusted R² of 0.652 and an MSE of 0.149, and the lead key determinant was Assets per Capital Employed (APCE) with a positive effect, followed by Return on Assets (ROA) and Market Capitalization (MC), both with negative effects. The model showed poor validation performance. The results indicated more and less precision than those found by previous studies. This suggested that the key determinants are also important sources of variability in ROCE of individual companies that management need to work with. To handle the problem of multicollinearity in the data, principal components were selected using the Kaiser-Guttman criterion. The principal components regression model was constructed using the dependent variable log ROCE for the two data sets. Assumptions were satisfied. For the positive ROCE data set, the principal components regression model had an R² of 0.929, an adjusted R² of 0.929 and an MSE of 0.069, and the lead key determinant was PC4 (log ROA, log ROE, log Operating Profit Margin (OPM)), followed by PC2 (log Earnings Yield (EY), log Price to Earnings (P/E)), both with positive effects. The model resulted in a satisfactory validation performance. For the negative ROCE data set, the principal components regression model had an R² of 0.544, an adjusted R² of 0.532 and an MSE of 0.167, and the lead key determinant was PC3 (ROA, EY, APCE), followed by PC1 (MC, CE), both with negative effects. The model indicated an accurate validation performance. The results showed that the use of principal components as independent variables did not improve classical multiple linear regression model prediction in our data. This implied that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with. Generalized least squares regression was used to assess heteroscedasticity and dependences in the data. It was constructed using a stepwise directed search for the dependent variable ROCE for the two data sets.
For the positive ROCE data set, the weighted generalized least squares regression model had an R² of 0.920, an adjusted R² of 0.919 and an MSE of 0.044, and the lead key determinant was ROE with a positive effect, followed by D/E with a negative effect, Dividend Yield (DY) with a positive effect and lastly CE with a negative effect. The model indicated an accurate validation performance. For the negative ROCE data set, the weighted generalized least squares regression model had an R² of 0.559, an adjusted R² of 0.548 and an MSE of 57.125, and the lead key determinant was APCE, followed by ROA, both with positive effects. The model showed a weak validation performance. The results suggested that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with. Robust maximum likelihood regression was employed to handle the problem of contamination in the data. It was constructed using a stepwise directed search for the dependent variable ROCE for the two data sets. For the positive ROCE data set, the robust maximum likelihood regression model had an R² of 0.998, an adjusted R² of 0.997 and an MSE of 6.739, and the lead key determinant was ROE with a positive effect, followed by DY and lastly D/E, both with negative effects. The model showed a strong validation performance. For the negative ROCE data set, the robust maximum likelihood regression model had an R² of 0.990, an adjusted R² of 0.984 and an MSE of 98.883, and the lead key determinant was APCE with a positive effect, followed by ROA with a negative effect. The model also showed a strong validation performance. The results reflected that the key determinants are major sources of variability in ROCE of individual companies that management need to work with. Overall, the findings showed that the use of robust maximum likelihood regression provided more precise results than the three competing approaches, because it is more consistent, sufficient and efficient, has a higher breakdown point and imposes fewer conditions. Companies' management can establish and control proper marketing strategies using the key determinants, and the results of these strategies can lead to an improvement in ROCE. / Mathematical Sciences / M. Sc. (Statistics)
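To illustrate why a robust fit wins under contamination, the sketch below compares OLS with an M-estimator (statsmodels' RLM with a Huber norm, one standard robust approach, not necessarily the dissertation's exact estimator) on synthetic data with a few grossly mis-recorded firms; all values are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
roe = rng.normal(0.15, 0.05, 200)                 # hypothetical ROE values
roce = 0.8 * roe + rng.normal(0, 0.01, 200)       # ROCE driven by ROE
roe[:5] = 0.45                                    # five contaminated firms with
roce[:5] = -0.2                                   # extreme, erroneous records

X = sm.add_constant(roe)
ols = sm.OLS(roce, X).fit()
rlm = sm.RLM(roce, X, M=sm.robust.norms.HuberT()).fit()
print("OLS slope:", round(ols.params[1], 3))      # dragged by the outliers
print("RLM slope:", round(rlm.params[1], 3))      # closer to the true 0.8
```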
89

Logistic regression to determine significant factors associated with share price change

Muchabaiwa, Honest 19 February 2014 (has links)
This thesis investigates the factors associated with annual changes in the share price of Johannesburg Stock Exchange (JSE) listed companies. In this study, a share is considered to have increased in value when the company's share price at the end of the financial year is higher than in the previous year. Secondary data sourced from the McGregor BFA website, covering 2004 to 2011, was used. Deciding which share to buy is the biggest challenge faced by both investment companies and individuals when investing on the stock exchange. This thesis uses binary logistic regression to identify the variables associated with a share price increase. The dependent variable was the annual change in share price (ACSP) and the independent variables were the assets per capital employed ratio, debt per assets ratio, debt per equity ratio, dividend yield, earnings per share, earnings yield, operating profit margin, price earnings ratio, return on assets, return on equity and return on capital employed. Different variable selection methods were used and it was established that the backward elimination method produced the best model. It was established that the probability of success of a share is higher if the shareholders are anticipating a higher return on capital employed and high earnings per share. It was, however, noted that the share price is negatively impacted by dividend yield and earnings yield. Since the odds of an increase in share price are higher if there is a higher return on capital employed and high earnings per share, investors and investment companies are encouraged to choose companies with high earnings per share and the best returns on capital employed. The final model had a classification rate of 68.3% and the validation sample produced a classification rate of 65.2%. / Mathematical Sciences / M.Sc. (Statistics)
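A minimal sketch of the backward elimination loop for the binary logistic model, assuming a hypothetical 0/1 series `acsp` (annual change in share price) and a DataFrame `ratios` holding the eleven financial ratios; the variable names and the p-value threshold are illustrative assumptions, not the thesis's exact procedure.

```python
import statsmodels.api as sm

def backward_eliminate(y, X, alpha=0.05):
    """Drop the least significant ratio until every p-value is below alpha."""
    cols = list(X.columns)
    while cols:
        fit = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return fit, cols              # all remaining ratios significant
        cols.remove(worst)
    return None, []

# Usage (hypothetical data): model, kept = backward_eliminate(acsp, ratios)
```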
90

Data Snooping and the Profitability of Trading Strategies: Evidence from the Asian Stock Markets

李榮傑, Lee, Chung Chieh Unknown Date (has links)
In this paper, we examine the profitability of trading strategies using both White's (2000) Reality Check and Romano and Wolf's (2005) stepwise multiple test, which correct for data snooping bias. Unlike previous studies employing the data snooping methodology, our analysis builds the universe of forecasts (trading strategies) from both technical analysis and time series prediction, and the markets our investigation focuses on are six major Asian stock markets. Overall, we find little supportive evidence for the profitability of trading strategies. Our basic analysis shows that only a few profitable trading strategies are detected for two emerging markets once transaction costs are taken into account. Moreover, the performances of the profitable strategies are unstable, and the profitability becomes much weaker in recent years, as we find in the sub-periods. In further analysis, we also find that no trading strategy in our universe can outperform the basic buy-and-hold strategy.
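A minimal sketch of the Reality Check logic on which both tests build: bootstrap the best rule's average outperformance so the p-value accounts for having searched over many strategies. White (2000) uses a stationary bootstrap to respect serial dependence in returns; the plain iid resampling below is a simplification for brevity, and the input array is hypothetical.

```python
import numpy as np

def reality_check(excess, n_boot=2000, seed=0):
    """excess: (T, K) array of strategy returns minus the buy-and-hold benchmark."""
    rng = np.random.default_rng(seed)
    T, _ = excess.shape
    means = excess.mean(axis=0)
    stat = np.sqrt(T) * means.max()            # best rule's performance
    boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, T, size=T)       # iid resample (simplified)
        boot[b] = np.sqrt(T) * (excess[idx].mean(axis=0) - means).max()
    return stat, (boot >= stat).mean()         # statistic and bootstrap p-value
```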
