Predicting Success: An Examination of the Predictive Validity of a Measure of Motivational-Developmental Dimensions in College Admissions

Paris, Joseph. January 2018.
Although many colleges and universities use a wide range of criteria to evaluate and select admissions applicants, much of the variance in college student success remains unexplained. Success in college, defined here as academic performance and student retention, may therefore be related to variables, or combinations of variables, beyond those traditionally used in college admissions (high school grade point average and standardized test scores). The current study investigated the predictive validity of a measure of motivational-developmental dimensions as a predictor of college students' academic achievement and persistence, as measured by cumulative undergraduate grade point average and retention. These dimensions are based on social-cognitive (self-concept, self-set goals, causal attributions, and coping strategies) and developmental-constructivist (self-awareness and self-authorship) perspectives. The predictive potential of motivational-developmental constructs for evaluating admission applicants' ability to succeed and persevere despite the academic and social challenges of postsecondary education remains under-explored. The current study therefore aimed to generate new understandings to benefit the participating institution and other institutions of higher education that seek new methodologies for evaluating and selecting college admission applicants.

This dissertation describes two studies conducted at a large, urban public university in the Northeastern United States. Participants were 10,149 undergraduate students who enrolled as first-time freshmen in the Fall 2015 (Study 1) and Fall 2016 (Study 2) semesters. Prior to matriculation, participants applied for admission through one of two routes: standard admissions or test-optional admissions. Standard admission applicants submitted standardized test scores (e.g., SAT), whereas test-optional applicants responded to four short-answer essay questions, each of which measured a subset of the motivational-developmental dimensions examined in the current study. Trained readers evaluated the essays to produce a "test-optional essay rating score," which served as the primary predictor variable. Quantitative analyses investigated the predictive validity of this score and its relationship to cumulative undergraduate grade point average and retention, the outcome variables in the current study.

The results revealed statistically significant group differences between test-optional and standard applicants: test-optional admission applicants were more likely to be female, of lower socioeconomic status, and members of ethnic minority groups. Given these group differences, Pearson product-moment correlation coefficients were computed to determine whether the test-optional essay rating score differentially predicted success across racial and gender subgroups. The evidence on whether the score differentially predicts cumulative undergraduate grade point average and retention across student subgroups was inconclusive.
The results revealed a weak correlation between the test-optional essay rating score and cumulative undergraduate grade point average (Study 1: r = .11, p < .01; Study 2: r = .07, p < .05) and retention (Study 1: r = .08, p < .05; Study 2: r = .10, p < .01), particularly in comparison to the relationships between these outcomes and the criteria most commonly considered in college admissions (high school grade point average, SAT Verbal, SAT Quantitative, and SAT Writing). Despite these weak correlations, the test-optional essay rating score contributed nominal incremental value (R² = .07) in predicting academic achievement and persistence beyond the explanation provided by traditional admissions criteria. Additionally, a ROC analysis determined that the test-optional essay rating score does not predict student retention meaningfully better than chance and is therefore not an accurate binary classifier of retention. Further research should investigate the validity of other motivational-developmental dimensions, and the fidelity of other methods for measuring them, in an attempt to account for a greater proportion of the variance in college student success.

Educational Leadership
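For readers unfamiliar with these analyses, the sketch below shows how the three steps the abstract describes (zero-order Pearson correlations, incremental R² over traditional criteria, and a ROC analysis of retention) are commonly run in Python. This is not the dissertation's code; the data and column names are hypothetical, and only the high school GPA stands in for the full set of traditional criteria.

```python
# Illustrative sketch only; synthetic data, hypothetical column names.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "hs_gpa": rng.normal(3.3, 0.4, n),
    "essay_score": rng.normal(3.0, 0.7, n),  # "test-optional essay rating score"
})
df["college_gpa"] = 0.6 * df["hs_gpa"] + 0.1 * df["essay_score"] + rng.normal(0, 0.5, n)
df["retained"] = (rng.random(n) < 0.85).astype(int)  # unrelated to essay score

# 1. Zero-order Pearson correlation (the r values reported above).
r, p = pearsonr(df["essay_score"], df["college_gpa"])
print(f"essay score vs. cumulative GPA: r = {r:.2f}, p = {p:.3g}")

# 2. Incremental validity: R-squared gained by adding the essay score to a
#    model that already contains the traditional criteria.
base = LinearRegression().fit(df[["hs_gpa"]], df["college_gpa"])
full = LinearRegression().fit(df[["hs_gpa", "essay_score"]], df["college_gpa"])
r2_base = base.score(df[["hs_gpa"]], df["college_gpa"])
r2_full = full.score(df[["hs_gpa", "essay_score"]], df["college_gpa"])
print(f"incremental R^2 from essay score: {r2_full - r2_base:.3f}")

# 3. ROC analysis: AUC near 0.5 means the score classifies retention no
#    better than chance, matching the finding reported above.
auc = roc_auc_score(df["retained"], df["essay_score"])
print(f"ROC AUC for retention: {auc:.2f}")
```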

Interactive Mitigation of Biases in Machine Learning Models

Van Busum, Kelly M. (18863677). 03 September 2024.
Bias and fairness issues in artificial intelligence algorithms are major concerns because people do not want to use AI software they cannot trust. This work uses college admissions data as a case study to develop a methodology for defining and detecting bias, and then introduces a new method for interactive bias mitigation.

Admissions data spanning six years were used to create machine learning-based predictive models of whether a given student would be directly admitted into the School of Science under various scenarios at a large urban research university. During this period, submission of standardized test scores as part of a student's application became optional, which raised interesting questions about the impact of standardized test scores on admission decisions. We developed and analyzed predictive models to understand which variables are important in admissions decisions and how the decision to exclude test scores affects the demographics of the students who are admitted.

Then, using a variety of bias and fairness metrics, we analyzed these predictive models to detect biases they may carry with respect to three variables chosen to represent sensitive populations: gender, race, and whether a student was the first in their family to attend college. We found that high accuracy rates can mask underlying algorithmic bias against these sensitive groups.

Finally, we describe our method for bias mitigation, which combines machine learning with user interaction. Because bias is intrinsically a subjective and context-dependent matter, it requires human input and feedback. Our approach allows the user to iteratively and incrementally adjust bias and fairness metrics to change the training dataset for an AI model, making the model fairer. This interactive bias mitigation approach was then used to successfully decrease the biases in three AI models in the context of undergraduate student admissions.
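The abstract does not specify which fairness metrics or mitigation algorithm were used. As an illustration only, the sketch below computes one common metric, the statistical parity difference, for a trained classifier and applies a naive reweighting step of the kind an interactive mitigation loop might let a user trigger. All data, column names, and the chosen weight are hypothetical.

```python
# Illustrative sketch only: one common fairness metric and a naive,
# user-driven reweighting step; not the dissertation's actual method.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
# Hypothetical admissions data; "first_gen" marks first-generation students.
X = pd.DataFrame({
    "hs_gpa": rng.normal(3.2, 0.5, n),
    "first_gen": rng.integers(0, 2, n),
})
# Synthetic admit labels that leak the sensitive attribute, creating bias.
y = ((X["hs_gpa"] + 0.3 * (1 - X["first_gen"])
      + rng.normal(0, 0.4, n)) > 3.3).astype(int)

def statistical_parity_diff(pred, sensitive):
    """P(admit | group 0) - P(admit | group 1); zero means parity."""
    return pred[sensitive == 0].mean() - pred[sensitive == 1].mean()

model = LogisticRegression().fit(X, y)
pred = model.predict(X)
print("parity gap before:", statistical_parity_diff(pred, X["first_gen"]))

# One mitigation step a user might apply interactively: upweight the
# disadvantaged group's admitted examples and refit the model.
w = np.ones(n)
w[(X["first_gen"] == 1) & (y == 1)] *= 2.0  # weight chosen by the user
model = LogisticRegression().fit(X, y, sample_weight=w)
pred = model.predict(X)
print("parity gap after: ", statistical_parity_diff(pred, X["first_gen"]))
```

In an interactive loop like the one the abstract describes, the user would inspect the metric, adjust the weighting (or another dataset transformation), and refit until the fairness metrics and accuracy reach an acceptable trade-off.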
