1 |
Econometric Analysis of Labour Market Interventions. Webb, Matthew Daniel, 08 July 2013.
This thesis involves three essays that explore the theory and application of econometric analysis to labour market interventions. One essay is methodological, and two essays are applications. The first essay contributes to the literature on inference with data sets containing within-cluster correlation. The essay highlights a problem with current practices when the number of clusters is 11 or fewer. Current practices can result in p-values that are not point identified but are instead p-value intervals. The chapter provides Monte Carlo evidence to support a proposed solution to this problem.
The second essay analyzes a labour market intervention within Canada, the Youth Hires program, which aimed to reduce youth unemployment. We find evidence that the program increased employment among the targeted group. However, the impacts are present only for males, and we find evidence of displacement effects amongst the non-targeted group. The third essay examines a set of Graduate Retention Programs offered by several Canadian provinces. These programs aim to mitigate future skill shortages. Once the solution proposed in the first essay is applied, I find little evidence that these programs are effective in attracting or retaining recent graduates. Thesis (Ph.D., Economics), Queen's University, 2013.
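The abstract does not spell out the proposed fix, but the standard tool in this few-clusters literature is the wild cluster bootstrap-t. The numpy sketch below is illustrative only: the function names are my own, and the simple Rademacher weights used here are one common choice (a six-point weight distribution is an alternative sometimes preferred with very few clusters). It also makes the p-value-interval problem concrete: with G clusters and two-point weights, there are only 2^G distinct bootstrap samples, so with 11 or fewer clusters the bootstrap p-value can only take a coarse grid of values.

```python
import numpy as np

def ols(X, y):
    # ordinary least squares coefficients
    return np.linalg.solve(X.T @ X, X.T @ y)

def cluster_se(X, y, beta, cid):
    # Liang-Zeger cluster-robust standard errors (no small-sample correction)
    u = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cid):
        s = X[cid == g].T @ u[cid == g]  # cluster score
        meat += np.outer(s, s)
    bread = np.linalg.inv(X.T @ X)
    return np.sqrt(np.diag(bread @ meat @ bread))

def wild_cluster_boot_p(X, y, cid, j=1, B=999, seed=0):
    """Two-sided wild cluster bootstrap-t p-value for H0: beta_j = 0."""
    rng = np.random.default_rng(seed)
    beta = ols(X, y)
    t_hat = beta[j] / cluster_se(X, y, beta, cid)[j]
    # restricted fit imposing the null beta_j = 0
    Xr = np.delete(X, j, axis=1)
    br = ols(Xr, y)
    ur = y - Xr @ br
    groups = np.unique(cid)
    t_boot = np.empty(B)
    for b in range(B):
        # one Rademacher weight per cluster, applied to all its residuals
        w = rng.choice([-1.0, 1.0], size=groups.size)
        ystar = Xr @ br + ur * w[np.searchsorted(groups, cid)]
        bstar = ols(X, ystar)
        t_boot[b] = bstar[j] / cluster_se(X, ystar, bstar, cid)[j]
    return np.mean(np.abs(t_boot) >= abs(t_hat))
```

A sketch under stated assumptions, not the thesis's implementation.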
|
2 |
An Approach to Improving Test Powers in Cox Proportional Hazards Models. Pal, Subhamoy, 15 September 2021.
No description available.
|
3 |
A Comparative Simulation Study of Robust Estimators of Standard Errors. Johnson, Natalie, 10 July 2007.
The estimation of standard errors is essential to statistical inference. Statistical variability is inherent within data but is usually of secondary interest; still, several options exist to deal with this variability. One approach is to carefully model the covariance structure. Another approach is robust estimation, in which the covariance structure is estimated from the data. White (1980) introduced a biased, but consistent, robust estimator. Long et al. (2000) added an adjustment factor to White's estimator to remove the bias of the original estimator. Through the use of simulations, this project compares restricted maximum likelihood (REML) with four robust estimation techniques: the Standard Robust Estimator (White 1980), the Long estimator (Long 2000), the Long estimator with a quantile adjustment (Kauermann 2001), and the empirical option of the MIXED procedure in SAS. The results of the simulation show the small-sample and asymptotic properties of the five estimators. The REML procedure is modelled under the true covariance structure and is the most consistent of the five estimators, though it shows a slight small-sample bias as the number of repeated measures increases; it may not be the best estimator when the covariance structure is in question. The Standard Robust Estimator is consistent, but it has an extreme downward bias for small sample sizes, and it changes little when complexity is added to the covariance structure. The Long estimator is unstable: as complexity is introduced into the covariance structure, the coverage probability with the Long estimator increases. The Long estimator with the quantile adjustment works as designed, mimicking the Long estimator at an inflated quantile level. The empirical option of the MIXED procedure in SAS works well for homogeneous covariance structures, and it reduces the downward bias of the Standard Robust Estimator when the covariance structure is homogeneous.
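White's (1980) sandwich estimator and a simple degrees-of-freedom adjustment can be sketched in a few lines of numpy. This is a generic illustration: the HC1 rescaling here stands in for "an adjustment factor" in the abstract's sense, and is not necessarily the Long et al. (2000) correction studied in the thesis.

```python
import numpy as np

def sandwich_se(X, y, hc="HC1"):
    """White's heteroskedasticity-robust standard errors for OLS.

    HC0 is White's (1980) original estimator: consistent, but biased
    downward in small samples. HC1 rescales the variance by n/(n-k),
    one simple small-sample bias adjustment."""
    n, k = X.shape
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    # "meat": X' diag(u^2) X, built without forming the n-by-n diagonal
    meat = X.T @ (X * (u ** 2)[:, None])
    V = bread @ meat @ bread
    if hc == "HC1":
        V *= n / (n - k)
    return beta, np.sqrt(np.diag(V))
```

The same point estimates come back either way; only the standard errors differ.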
|
4 |
Implementing the Difference in Differences (DD) Estimator in Observational Education Studies: Evaluating the Effects of Small, Guided Reading Instruction for English Language Learners. Sebastian, Princy, 07 1900.
The present study provides an example of implementing the difference in differences (DD) estimator for a two-group, pretest-posttest design with K-12 educational intervention data. The goal is to explore the basis for causal inference via Rubin's potential outcomes framework. The DD method is introduced to educational researchers, as it is seldom implemented in educational research. DD analytic methods' mathematical formulae and assumptions are explored to understand the opportunity and the challenges of using the DD estimator for causal inference in educational research. For this example, the teacher intervention effect is estimated with multi-cohort student outcome data. First, the DD method is used to detect the average treatment effect (ATE) with linear regression as a baseline model. Second, the analysis is repeated using linear regression with cluster robust standard errors. Finally, a linear mixed effects analysis is provided with a random intercept model. Resulting standard errors, parameter estimates, and inferential statistics are compared among these three analyses to explore the best holistic analytic method for this context.
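The baseline linear-regression DD model described above can be sketched as follows. In the saturated two-group, two-period case, the interaction coefficient equals the familiar difference of mean differences; the function names are illustrative, not the study's code.

```python
import numpy as np

def dd_regression(y, treat, post):
    """Baseline DD via OLS: y = b0 + b1*treat + b2*post + b3*(treat*post).

    Under the parallel-trends assumption, b3 estimates the average
    treatment effect on the treated."""
    X = np.column_stack([np.ones_like(y), treat, post, treat * post])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return b[3]

def dd_means(y, treat, post):
    """Same estimand computed directly as a difference in cell means."""
    m = lambda t, p: y[(treat == t) & (post == p)].mean()
    return (m(1, 1) - m(1, 0)) - (m(0, 1) - m(0, 0))
```

Because the two-by-two design is saturated, the two functions return identical values; the regression form is what generalizes to covariates, clustered standard errors, and mixed effects as in the study.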
|
5 |
The Deterrent Effect of Traffic Enforcement on Ohio Crashes, 1995-2004. Falinski, Giles L., 09 July 2009.
No description available.
|
6 |
The determinants of credit spreads changes in global shipping bonds. Kavussanos, M.G., Tsouknidis, Dimitris A., January 2014.
This paper investigates whether bond, issuer, industry and macro-specific variables account for the observed variation of changes in credit spreads of global shipping bond issues before and after the onset of the subprime financial crisis. Results show that conclusions as to which variables significantly drive spreads depend on whether two-way cluster-adjusted standard errors are utilized, thus rendering results in the extant literature ambiguous. The main determinants of global cargo-carrying companies' shipping bond spreads are found in this paper to be: the liquidity of the bond issue, the stock market's volatility, the bond market's cyclicality, freight earnings and the credit rating of the bond issue.
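Two-way cluster-adjusted standard errors in the sense usually meant here (Cameron, Gelbach and Miller 2011) combine three one-way cluster-robust variance matrices: cluster on the first dimension, on the second, and subtract the variance clustered on their intersection. A minimal numpy sketch, with my own function names and no small-sample corrections:

```python
import numpy as np

def crve_meat(X, u, cid):
    # cluster-robust "meat": sum over clusters of score outer products
    K = X.shape[1]
    meat = np.zeros((K, K))
    for g in np.unique(cid):
        s = X[cid == g].T @ u[cid == g]
        meat += np.outer(s, s)
    return meat

def twoway_cluster_se(X, y, cid_a, cid_b):
    """Two-way cluster-robust SEs: V = V_a + V_b - V_ab."""
    beta = np.linalg.solve(X.T @ X, X.T @ y)
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    # encode the intersection of the two groupings as a single id
    cid_ab = cid_a.astype(np.int64) * (int(cid_b.max()) + 1) + cid_b
    meat = (crve_meat(X, u, cid_a) + crve_meat(X, u, cid_b)
            - crve_meat(X, u, cid_ab))
    V = bread @ meat @ bread
    # the combined matrix is not guaranteed positive semi-definite
    return beta, np.sqrt(np.maximum(np.diag(V), 0.0))
```

When the two cluster dimensions coincide, the formula collapses to ordinary one-way clustering, which is a useful sanity check.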
|
7 |
Contributions to Kernel Equating. Andersson, Björn, January 2014.
The statistical practice of equating is needed when scores on different versions of the same standardized test are to be compared. This thesis constitutes four contributions to the observed-score equating framework kernel equating. Paper I introduces the open source R package kequate which enables the equating of observed scores using the kernel method of test equating in all common equating designs. The package is designed for ease of use and integrates well with other packages. The equating methods non-equivalent groups with covariates and item response theory observed-score kernel equating are currently not available in any other software package. In paper II an alternative bandwidth selection method for the kernel method of test equating is proposed. The new method is designed for usage with non-smooth data such as when using the observed data directly, without pre-smoothing. In previously used bandwidth selection methods, the variability from the bandwidth selection was disregarded when calculating the asymptotic standard errors. Here, the bandwidth selection is accounted for and updated asymptotic standard error derivations are provided. Item response theory observed-score kernel equating for the non-equivalent groups with anchor test design is introduced in paper III. Multivariate observed-score kernel equating functions are defined and their asymptotic covariance matrices are derived. An empirical example in the form of a standardized achievement test is used and the item response theory methods are compared to previously used log-linear methods. In paper IV, Wald tests for equating differences in item response theory observed-score kernel equating are conducted using the results from paper III. Simulations are performed to evaluate the empirical significance level and power under different settings, showing that the Wald test is more powerful than the Hommel multiple hypothesis testing method. 
Data from a psychometric licensure test and a standardized achievement test are used to exemplify the hypothesis-testing procedure. The results show that the Wald test can lead to conclusions that differ from those of the Hommel procedure.
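A Wald test of equating differences, as in paper IV, reduces to a quadratic form in the vector of estimated differences and its covariance matrix. The sketch below assumes the covariance matrix is given; deriving it for item response theory observed-score kernel equating is the content of paper III and is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def wald_equating_test(d, cov):
    """Wald test that a vector of equating differences d (e.g. the
    difference between two equating functions evaluated at selected
    score points) is zero, given its estimated covariance matrix cov.

    Under H0, W = d' cov^{-1} d follows a chi-square distribution
    with len(d) degrees of freedom."""
    d = np.asarray(d, dtype=float)
    W = float(d @ np.linalg.solve(cov, d))
    p = chi2.sf(W, df=d.size)
    return W, p
```

Because the test uses the full joint covariance, it can be more powerful than score-point-by-score-point comparisons with a multiplicity correction such as Hommel's, which is the comparison the abstract reports.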
|
8 |
Correctly Modeling Plant-Insect-Herbivore-Pesticide Interactions as Aggregate Data. Banks, H. T., Banks, John E., Catenacci, Jared, Joyner, Michele, Stark, John, 01 January 2020.
We consider a population dynamics model in investigating data from controlled experiments with aphids in broccoli patches surrounded by different margin types (bare or weedy ground) and three levels of insecticide spray (no, light, or heavy spray). The experimental data is clearly aggregate in nature. In previous efforts [1], the aggregate nature of the data was ignored. In this paper, we embrace this aspect of the experiment and correctly model the data as aggregate data, comparing the results to the previous approach. We discuss cases in which the approach may provide similar results as well as cases in which there is a clear difference in the resulting fit to the data.
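"Aggregate data" in this literature carries a precise estimation-theoretic meaning, and the authors' population dynamics model is not reproduced in the abstract. The toy sketch below only illustrates the simplest related idea, fitting a deterministic growth curve to population totals by least squares, and should not be read as the paper's method; the logistic form and all names are my own choices.

```python
import numpy as np
from scipy.optimize import least_squares

def logistic(t, r, K, n0):
    # closed-form logistic growth curve: a simple illustrative stand-in
    # for a population dynamics model, not the model in the paper
    return K / (1.0 + (K / n0 - 1.0) * np.exp(-r * t))

def fit_aggregate(t, counts):
    """Least-squares fit of the growth curve to aggregate counts,
    i.e. treating each observation as a population total rather than
    tracking individuals. Returns estimated (r, K, n0)."""
    res = least_squares(
        lambda p: logistic(t, *p) - counts,
        x0=[0.5, counts.max() * 1.5, max(counts[0], 1.0)],
        bounds=([1e-6, 1e-6, 1e-6], [np.inf, np.inf, np.inf]),
    )
    return res.x
```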
|
9 |
An analysis of the relationship between economic development and demographic characteristics in the United States. Heyne, Chad M., 01 May 2011.
Over the past several decades, extensive research has attempted to determine which demographic characteristics affect economic growth, measured in GDP per capita. Understanding what influences the growth of a country will greatly help policy makers enact policies to lead the country in a positive direction. This research focuses on isolating a new variable, women in the work force, and on modifying a preexisting variable that was shown to be significant, in order to make it more robust and sensitive to recessions. The intent of this thesis is to explore the relationship between several demographic characteristics and their effect on the growth rate of GDP per capita. The first step is to reproduce the work done by Barlow (1994) to ensure that the United States follows similar rules as the countries in his research. Afterwards, we introduce new variables into the model, comparing goodness of fit through R-squared, AIC and BIC. Several models were developed to answer each of the research questions independently.
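Comparing candidate growth regressions by R-squared, AIC and BIC can be sketched as below. The formulas assume Gaussian errors (the concentrated log-likelihood), and the function name is illustrative; note that R-squared never decreases when regressors are added, which is why the penalized criteria AIC and BIC are also reported.

```python
import numpy as np

def ols_fit_stats(X, y):
    """Fit statistics for an OLS model under a Gaussian likelihood:
    R^2, AIC and BIC, for comparing nested or non-nested candidates."""
    n, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = float(np.sum((y - X @ beta) ** 2))
    tss = float(np.sum((y - y.mean()) ** 2))
    r2 = 1.0 - rss / tss
    # concentrated Gaussian log-likelihood with sigma^2 = rss / n
    ll = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    p = k + 1  # regression coefficients plus the error variance
    return {"R2": r2, "AIC": 2 * p - 2 * ll, "BIC": p * np.log(n) - 2 * ll}
```

Lower AIC or BIC favors a model; BIC's log(n) penalty punishes extra regressors more heavily than AIC's.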
|
10 |
When Infinity is Too Long to Wait: On the Convergence of Markov Chain Monte Carlo Methods. Olsen, Andrew Nolan, 08 October 2015.
No description available.
|