1

Community College Institutional Effectiveness: Perspectives of Campus Stakeholders

Skolits, Gary J., Graybeal, Susan 01 April 2007 (has links)
This study addresses a campus institutional effectiveness (IE) process and its influence on faculty and staff. Although a comprehensive, rational IE process appeals to campus leaders, this study found that it creates significant faculty and staff challenges. Campus leaders, faculty, and staff differ in their (a) knowledge and support of IE; (b) participation in IE process activities; and (c) perceptions of IE strengths, weaknesses, and usefulness. Needed IE data are typically available to campus stakeholders except for student learning outcomes data across all academic programs. Administrators, faculty, and staff agree that a lack of time is the major IE impediment. IE expectations may be too challenging for campus participants, and faculty and staff need more institutional support to analyze and use existing data. Future research should focus on faculty and staff aspects of community college effectiveness.
2

Bayesian Networks with Expert Elicitation as Applicable to Student Retention in Institutional Research

Dunn, Jessamine Corey 13 May 2016 (has links)
The application of Bayesian networks within the field of institutional research is explored through the development of a Bayesian network used to predict first- to second-year retention of undergraduates. A hybrid approach to model development is employed, in which formal elicitation of subject-matter expertise is combined with machine learning in designing model structure and specification of model parameters. Subject-matter experts include two academic advisors at a small, private liberal arts college in the southeast, and the data used in machine learning include six years of historical student-related information (i.e., demographic, admissions, academic, and financial) on 1,438 first-year students. Netica 5.12, a software package designed for constructing Bayesian networks, is used for building and validating the model. Evaluation of the resulting model’s predictive capabilities is examined, as well as analyses of sensitivity, internal validity, and model complexity. Additionally, the utility of using Bayesian networks within institutional research and higher education is discussed. The importance of comprehensive evaluation is highlighted, due to the study’s inclusion of an unbalanced data set. Best practices and experiences with expert elicitation are also noted, including recommendations for use of formal elicitation frameworks and careful consideration of operating definitions. Academic preparation and financial need risk profile are identified as key variables related to retention, and the need for enhanced data collection surrounding such variables is also revealed. For example, the experts emphasize study skills as an important predictor of retention while noting the absence of collection of quantitative data related to measuring students’ study skills. Finally, the importance and value of the model development process is stressed, as stakeholders are required to articulate, define, discuss, and evaluate model components, assumptions, and results.
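As a minimal sketch of the kind of discrete Bayesian network described above, the example below builds a tiny retention model with the open-source pgmpy library rather than Netica (which the study used). The structure, variable names, and probabilities are invented for illustration and are not taken from the dissertation.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two invented predictors pointing at a binary retention node (0 = not retained, 1 = retained).
model = BayesianNetwork([("AcademicPrep", "Retained"),
                         ("FinancialNeedRisk", "Retained")])

cpd_prep = TabularCPD("AcademicPrep", 2, [[0.4], [0.6]])        # P(low)=0.4, P(high)=0.6 (made up)
cpd_risk = TabularCPD("FinancialNeedRisk", 2, [[0.7], [0.3]])   # P(low)=0.7, P(high)=0.3 (made up)
cpd_ret = TabularCPD(
    "Retained", 2,
    # Columns follow the evidence order: (prep, risk) = (0,0), (0,1), (1,0), (1,1); values made up.
    values=[[0.45, 0.60, 0.20, 0.35],   # P(not retained | prep, risk)
            [0.55, 0.40, 0.80, 0.65]],  # P(retained | prep, risk)
    evidence=["AcademicPrep", "FinancialNeedRisk"],
    evidence_card=[2, 2],
)
model.add_cpds(cpd_prep, cpd_risk, cpd_ret)
assert model.check_model()

# Query the retention probability for a student with high preparation and high financial-need risk.
infer = VariableElimination(model)
print(infer.query(["Retained"], evidence={"AcademicPrep": 1, "FinancialNeedRisk": 1}))
```

In the hybrid approach the abstract describes, conditional probabilities like those above would come partly from elicited expert judgments and partly from machine learning on historical student records, rather than being hand-set as in this toy example.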
3

An instrumental case study of the phenomenon of collaboration in the process of improving community college developmental reading and writing instruction

Gordin, Patricia C 01 June 2006 (has links)
Focusing upon the intersections between community college faculty and assessment professionals (e.g., institutional researchers) in improving student learning outcomes, the purpose of this study was to describe, analyze, and interpret the experiences of these professionals as they planned for and conducted student learning outcomes assessment in developmental reading, writing, and study skills courses. This instrumental case study at one particular community college in Florida investigated the roles played by these individuals within the larger college effort to develop a Quality Enhancement Plan (QEP), an essential component of a regional accreditation review. The methodology included individual interviews, a focus group interview, a field observation, and analysis of documents related to assessment planning. There were several major findings:
· Assessment professionals and faculty teaching developmental courses had similar professional development interests (e.g., teaching and learning, measurement).
· While some faculty leaders assumed a facilitative role similar to that of an assessment professional, the reporting structure determined the appropriate action taken in response to the results of assessment. That is, assessment professionals interpreted results and recommended targets for improvement, while faculty and instructional administrators implemented and monitored instructional strategies.
· The continuous transformation of the QEP organizational structure through research, strategy formulation, and implementation phases in an inclusive process enabled the college to put its best knowledge and measurement expertise into its five-year plan.
· Developmental goals for students, in addition to Florida-mandated exit exams, included self-direction, affective development such as motivation, and success at the next level.
· Faculty identified discipline-based workshops as promising vehicles for infusing instructional changes into courses, thus using the results of learning outcomes assessments more effectively.
A chronological analysis further contributed to findings of the study. This researcher concluded that the College's eight-year history of developing general education outcomes and striving to improve the college preparatory program through longitudinal tracking of student success had incubated a powerful faculty learning community and an alliance with assessment professionals. This community of practice, when provided the right structure, leadership, and resources, enabled the College to create a Quality Enhancement Plan that faculty and staff members could be proud of.
4

Assessing the Validity of a Measure of the Culture of Evidence at Two-Year Colleges

Wallace-Pascoe, Dawn Marie 03 September 2013 (has links)
No description available.
5

A Retrospective-Longitudinal Examination of the Relationship between Apportionment of Seat Time in Community-College Algebra Courses and Student Academic Performance

Roig-Watnik, Steven M 06 December 2012 (has links)
During the past decade, there has been a dramatic increase by postsecondary institutions in providing academic programs and course offerings in a multitude of formats and venues (Biemiller, 2009; Kucsera & Zimmaro, 2010; Lang, 2009; Mangan, 2008). Strategies pertaining to reapportionment of course-delivery seat time have been a major facet of these institutional initiatives; most notably, within many open-door 2-year colleges. Often, these enrollment-management decisions are driven by the desire to increase market-share, optimize the usage of finite facility capacity, and contain costs, especially during these economically turbulent times. So, while enrollments have surged to the point where nearly one in three 18-to-24 year-old U.S. undergraduates are community college students (Pew Research Center, 2009), graduation rates, on average, still remain distressingly low (Complete College America, 2011). Among the learning-theory constructs related to seat-time reapportionment efforts is the cognitive phenomenon commonly referred to as the spacing effect, the degree to which learning is enhanced by a series of shorter, separated sessions as opposed to fewer, more massed episodes. This ex post facto study explored whether seat time in a postsecondary developmental-level algebra course is significantly related to: course success; course-enrollment persistence; and, longitudinally, the time to successfully complete a general-education-level mathematics course. Hierarchical logistic regression and discrete-time survival analysis were used to perform a multi-level, multivariable analysis of a student cohort (N = 3,284) enrolled at a large, multi-campus, urban community college. The subjects were retrospectively tracked over a 2-year longitudinal period. The study found that students in long seat-time classes tended to withdraw earlier and more often than did their peers in short seat-time classes (p < .05). Additionally, a model comprised of nine statistically significant covariates (all with p-values less than .01) was constructed. However, no longitudinal seat-time group differences were detected nor was there sufficient statistical evidence to conclude that seat time was predictive of developmental-level course success. A principal aim of this study was to demonstrate—to educational leaders, researchers, and institutional-research/business-intelligence professionals—the advantages and computational practicability of survival analysis, an underused but more powerful way to investigate changes in students over time.
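As a rough illustration of the discrete-time survival approach mentioned above, the sketch below fits a logistic regression on simulated person-period data. The variable names, hazard rates, and model form are hypothetical and are not drawn from the study's data or covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, n_periods = 500, 4

# Hypothetical student-level indicator for enrollment in a long seat-time section.
long_seat_time = rng.integers(0, 2, n_students)

# Expand to person-period form: one row per student per term at risk,
# with event = 1 in the term the student withdraws (students leave the risk set after an event).
rows = []
for sid in range(n_students):
    for period in range(1, n_periods + 1):
        hazard = 0.05 + 0.05 * long_seat_time[sid]   # invented hazard, for illustration only
        event = int(rng.random() < hazard)
        rows.append({"student_id": sid, "period": period,
                     "long_seat_time": long_seat_time[sid], "event": event})
        if event:
            break
pp = pd.DataFrame(rows)

# Discrete-time survival analysis as logistic regression on the person-period data:
# period dummies give the baseline hazard; the seat-time indicator is the covariate of interest.
fit = smf.logit("event ~ C(period) + long_seat_time", data=pp).fit(disp=0)
print(fit.params)
```

A positive coefficient on the seat-time indicator in such a model would correspond to the study's finding that students in long seat-time classes withdrew earlier and more often; the study's actual model also included multiple student-level covariates and a hierarchical structure not shown here.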
6

Using Data Mining to Model Student Success

Geltz, Rebecca L. January 2009 (has links)
No description available.
7

Expeditious Causal Inference for Big Observational Data

Yumin Zhang (13163253) 28 July 2022 (has links)
This dissertation addresses two significant challenges in the causal inference workflow for Big Observational Data. The first is designing Big Observational Data with high-dimensional and heterogeneous covariates. The second is performing uncertainty quantification for estimates of causal estimands obtained from applying black-box machine learning algorithms to the designed Big Observational Data. The methodologies developed by addressing these challenges are applied to the design and analysis of Big Observational Data from a large public university in the United States.

Distributed Design. A fundamental issue in causal inference for Big Observational Data is confounding due to covariate imbalances between treatment groups. This can be addressed by designing the study prior to analysis: the design ensures that subjects in different treatment groups with comparable covariates are subclassified or matched together. Analyzing such a designed study helps reduce biases arising from the confounding of covariates with treatment. Existing design methods, developed for traditional observational studies with a single designer, can yield unsatisfactory designs with sub-optimal covariate balance for Big Observational Data because they cannot accommodate the massive dimensionality, heterogeneity, and volume of the Big Data. We propose a new framework for the distributed design of Big Observational Data among collaborative designers. Our framework first assigns subsets of the high-dimensional and heterogeneous covariates to multiple designers. The designers then summarize their covariates into lower-dimensional quantities, share their summaries with the others, and design the study in parallel based on their assigned covariates and the summaries they receive. The final design is selected by comparing balance measures for all covariates across the candidates and identifying the best among them. We perform simulation studies and analyze datasets from the 2016 Atlantic Causal Inference Conference Data Challenge to demonstrate the flexibility and power of our framework for constructing designs with good covariate balance from Big Observational Data.

Designed Bootstrap. The combination of modern machine learning algorithms with the nonparametric bootstrap can enable effective predictions and inferences on Big Observational Data. An increasingly prominent and critical objective in such analyses is to draw causal inferences from the Big Observational Data. A fundamental step in addressing this objective is to design the observational study prior to the application of machine learning algorithms. However, applying the traditional nonparametric bootstrap to Big Observational Data requires excessive computational effort, because every bootstrap sample would need to be re-designed under the traditional approach, which can be prohibitive in practice. We propose a design-based bootstrap for deriving causal inferences with reduced bias from the application of machine learning algorithms on Big Observational Data. Our bootstrap procedure operates by resampling from the original designed observational study, eliminating the additional, costly design steps that the standard nonparametric bootstrap performs on each bootstrap sample. We demonstrate the computational efficiency of this procedure compared to the traditional nonparametric bootstrap, and its equivalence in terms of confidence interval coverage rates for average treatment effects, by means of simulation studies and a real-life case study.

Case Study. We apply the distributed design and designed bootstrap methodologies in a case study involving institutional data from a large public university in the United States. The institutional data contain comprehensive information about the undergraduate students in the university, ranging from their academic records to on-campus activities. We study the causal effects of undergraduate students' attempted course load on their academic performance based on a selection of covariates from these data. Ultimately, our real-life case study demonstrates how our methodologies enable researchers to use straightforward design procedures to obtain valid causal inferences, with reduced computational effort, from the application of machine learning algorithms on Big Observational Data.
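The core designed-bootstrap idea of resampling from an already-designed study, rather than re-designing every bootstrap sample, can be sketched as follows. This is one simple numpy rendering under assumed inputs (a list of outcome arrays per designed subclass); it is not the dissertation's exact procedure.

```python
import numpy as np

def designed_bootstrap_ate(strata, n_boot=1000, seed=0):
    """Rough sketch: resample outcomes within the subclasses of an already-designed
    study; no matching or subclassification is re-run on any bootstrap sample.

    `strata` is assumed to be a list of (treated_outcomes, control_outcomes) array
    pairs, one pair per designed subclass (a hypothetical input format).
    """
    rng = np.random.default_rng(seed)
    sizes = np.array([len(t) + len(c) for t, c in strata], dtype=float)
    weights = sizes / sizes.sum()                 # weight each subclass by its size
    estimates = []
    for _ in range(n_boot):
        ate = 0.0
        for (t, c), w in zip(strata, weights):
            t_b = rng.choice(t, size=len(t), replace=True)
            c_b = rng.choice(c, size=len(c), replace=True)
            ate += w * (t_b.mean() - c_b.mean())  # within-subclass effect, then weighted sum
        estimates.append(ate)
    estimates = np.array(estimates)
    return estimates.mean(), np.percentile(estimates, [2.5, 97.5])

# Toy usage with two invented subclasses of GPA-like outcomes:
strata = [(np.array([3.2, 3.5, 3.1]), np.array([3.0, 2.9, 3.3, 3.1])),
          (np.array([2.8, 3.0, 2.9]), np.array([2.7, 2.6, 2.9]))]
print(designed_bootstrap_ate(strata, n_boot=500))
```

Because the design step (matching or subclassification over high-dimensional covariates) runs only once, each bootstrap replicate costs only a resample and a re-estimate, which is what makes the procedure computationally attractive for Big Observational Data.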
8

Educationally At-risk College Students From Single-parent and Two-parent Households: an Analysis of Differences Employing Cooperative Institutional Research Program Data.

Brown, Peggy Brandt 08 1900 (has links)
Using factors of low income, parents' levels of education, and family composition as determinants of educationally at-risk status, this study investigated differences between first-generation undergraduate college students from families in the lowest quintile of income in the U.S. One group consisted of students from single-parent households and the other of students from two-parent households. Data came from the CIRP 2003 College Student Survey (CSS) and its matched data from the Freshman Survey (Student Information Form, SIF). Differences examined included student inputs, involvements, outcomes, and collegiate environments. The study also includes a portrait of low-income, first-generation college students who successfully navigated U.S. higher education. The number of cases dropped from 15,601 matched SIF/CSS cases to 308 cases of low-income, first-generation college students (175 from single-parent households and 133 from two-parent households). Most of the 308 attended private, 4-year colleges. The data yielded more similarities than differences between the groups. Statistically significant differences (p < .05) existed in 9 of 100 variables, including race/ethnicity, whether or not English was the first language, and concern as a freshman about the ability to finance one's education. The data were not generalizable to all low-income, first-generation college students because of the lack of public, 4-year and 2-year colleges and universities in the dataset. Graduating seniors' average expected debt in June 2003 was $23,824 for students from single-parent households and $19,867 for those from two-parent households; 32% from single-parent households and 22% from two-parent households expected more than $25,000 of debt. Variables used on the SIF proved to be effective tools for developing derived variables to identify low-income, first-generation college students from single-parent and two-parent households within the CIRP database. The methodology used to develop the derived variables is explained.
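As a loose illustration of deriving an at-risk flag from flattened survey records and testing a group difference, the sketch below uses pandas and scipy with invented column names and values; the actual CIRP/SIF item codes and the study's exact statistical tests are not reproduced here.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical flattened survey extract; column names and values are invented,
# not actual CIRP/SIF item codes.
df = pd.DataFrame({
    "family_income_quintile": [1, 1, 3, 1, 1, 1],
    "father_education": ["HS or less", "Some college", "HS or less",
                         "HS or less", "HS or less", "HS or less"],
    "mother_education": ["HS or less", "HS or less", "College degree",
                         "HS or less", "HS or less", "HS or less"],
    "household": ["single-parent", "two-parent", "two-parent",
                  "single-parent", "two-parent", "two-parent"],
    "english_first_language": ["Yes", "No", "Yes", "Yes", "No", "Yes"],
})

# Derived flag: lowest income quintile and first generation (neither parent beyond high school).
first_gen = (df["father_education"] == "HS or less") & (df["mother_education"] == "HS or less")
df["at_risk"] = (df["family_income_quintile"] == 1) & first_gen

# Compare the two household groups on a categorical variable among at-risk students
# (chi-square here, purely as an example of a group-difference test).
sub = df[df["at_risk"]]
table = pd.crosstab(sub["household"], sub["english_first_language"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")
```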
