1

Construct representation of self-report future time perspective for work and retirement scholarship

Kerry, Matthew James, 27 May 2016
The dissertation presents evidence on the measurement properties of self-report items in contemporary organizational contexts (Podsakoff & Organ, 1986). Operationally, the dissertation adopts a construct representation approach to construct validity, defined by the response processes engaged for measurement performance in trait assessment (AERA, 2014; Embretson, 1983). For example, self-report measures are known to be affected by a variety of variables, such as semantic and referent features (Cermak & Craik, 1979; Kelly, 1955) and design factors that impact cognitive context (Stone et al., 2000, The Science of Self-Report). In turn, these response processes impact external correlations (Embretson, 2007). To the extent that semantic-referent features and design factors are construct-irrelevant, reduced external correlations can be expected. This dissertation presents evidence from a qualitative review of self-report future time perspective (FTP) instruments across organizational and retirement contexts. A quantitative review compares external correlates of the two instruments. A retrospective-observational study benchmarks the psychometric properties of Carstensen's self-report instrument using modern latent-variable modeling (item-response theory [IRT]). Structural equation modeling (SEM) is further used to test for moderating effects of subjective life expectancy (SLE) on latent predictors of FTP and retirement plans. Evidence from a 3 x 2 mixed-subjects experimental design is also presented, indicating the effects of SLE on measurement error in personality factors, FTP, and retirement plans. Discussion centers on advancing measurement paradigms in psychological and education research, as well as, more generally, adopting an integrated perspective of construct validity for advancing and evaluating substantive research.
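For readers unfamiliar with the IRT benchmarking step mentioned above, the sketch below shows the kind of latent-variable model involved: a basic Rasch model fitted by joint maximum likelihood to simulated dichotomous responses. It is illustrative only; the item count, the simulated data, and the dichotomization are hypothetical assumptions, not the dissertation's actual analyses of Carstensen's instrument.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal Rasch sketch (hypothetical data): P(X = 1) = logistic(theta - b).
rng = np.random.default_rng(0)
n_persons, n_items = 500, 10
true_theta = rng.normal(0.0, 1.0, n_persons)          # person trait levels
true_b = np.linspace(-1.5, 1.5, n_items)              # item difficulties
p = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
X = (rng.uniform(size=p.shape) < p).astype(float)     # simulated 0/1 responses

def neg_log_lik(params):
    theta, b = params[:n_persons], params[n_persons:]
    b = b - b.mean()                                  # fix the scale location
    logits = theta[:, None] - b[None, :]
    # Bernoulli log-likelihood with a numerically stable log(1 + e^x)
    return -(X * logits - np.logaddexp(0.0, logits)).sum()

fit = minimize(neg_log_lik, np.zeros(n_persons + n_items), method="L-BFGS-B")
b_hat = fit.x[n_persons:] - fit.x[n_persons:].mean()
print("estimated item difficulties:", np.round(b_hat, 2))
```

Joint maximum likelihood is used here purely for brevity; operational IRT software typically estimates item parameters by marginal maximum likelihood.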
2

Construct representation of First Certificate in English (FCE) reading

Corrigan, Michael, January 2015
The current study investigates the construct representation of the reading component of a B2-level general English test: First Certificate in English (FCE). Construct representation is the relationship between the cognitive processes elicited by the test and item difficulty. To facilitate this research, a model of the cognitive processes involved in responding to reading test items was defined, drawing together aspects of different models (Embretson & Wetzel, 1987; Khalifa & Weir, 2009; Rouet, 2012). The resulting composite contained four components: the formation of an understanding of item requirements (OP), the location of relevant text in the reading passage (SEARCH), the retrieval of meaning from the relevant text (READ), and the selection of an option for the response (RD). Following this, contextual features predicted by theory to influence the cognitive processes, and hence the difficulty of items, were determined. Over 50 such variables were identified and mapped to each of the cognitive processes in the model. Examples are word frequency in the item stem and options for OP; word frequency in the reading passage for READ; semantic match between stem/option and relevant text in the passage for SEARCH; and dispersal of relevant information in the reading passage for RD. Response data from approximately 10,000 live test candidates were modelled using the Linear Logistic Test Model (LLTM) within a Generalised Linear Mixed Model framework (De Boeck & Wilson, 2004b). The LLTM is based on the Rasch model, for which the probability of success on an item is a function of item difficulty and candidate ability. The same holds for the LLTM, except that item difficulty is decomposed so that the contribution of each source of difficulty (the contextual features mentioned above) is estimated. The main findings of the study included the identification of 26 contextual features which either increased or decreased item difficulty. Of these features, 20 were retained in a final model which explained 75.79% of the variance accounted for by a Rasch model. Among the components specified by the composite model, OP and READ were found to have the most influence, with RD exhibiting a moderate influence and SEARCH a low influence. Implications for developers of FCE include the need to consider and balance test method effects, and for other developers the additional need to determine whether their tests assess features found to be criterial to the target level (such as non-standard word order at B2 level). Researchers wishing to use Khalifa and Weir's (2009) model of reading should modify the stage termed inferencing and consider adding further stages which define the way in which the goal setter and monitor work, and the way in which item responses are selected. Finally, for those researchers interested in adopting a similar approach to that of the current study, careful consideration should be given to the way in which attributes are selected. The aims and scope of the study are of prime importance here.
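To make the LLTM decomposition concrete, here is a minimal sketch under stated assumptions: item difficulty is not a free parameter but a weighted sum, beta_i = sum_k Q[i, k] * eta_k, where Q codes contextual features (stem word frequency, semantic match, and so on) and eta_k is the difficulty contribution of feature k. The Q-matrix, feature weights, and simulated data below are hypothetical, and joint maximum likelihood stands in for the study's Generalised Linear Mixed Model estimation, which treats persons as random effects.

```python
import numpy as np
from scipy.optimize import minimize

# LLTM sketch (hypothetical features): beta = Q @ eta replaces free item
# difficulties, so difficulty is explained by coded contextual features.
rng = np.random.default_rng(1)
n_persons, n_items, n_feats = 400, 12, 3
Q = rng.integers(0, 2, size=(n_items, n_feats)).astype(float)  # feature codes
true_eta = np.array([0.8, -0.5, 0.3])               # per-feature contributions
true_beta = Q @ true_eta                            # composed item difficulties
theta = rng.normal(0.0, 1.0, n_persons)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - true_beta[None, :])))
X = (rng.uniform(size=p.shape) < p).astype(float)   # simulated 0/1 responses

def neg_log_lik(params):
    th, eta = params[:n_persons], params[n_persons:]
    th = th - th.mean()                             # identification constraint
    beta = Q @ eta                                  # the LLTM decomposition
    logits = th[:, None] - beta[None, :]
    return -(X * logits - np.logaddexp(0.0, logits)).sum()

fit = minimize(neg_log_lik, np.zeros(n_persons + n_feats), method="L-BFGS-B")
print("estimated feature weights eta:", np.round(fit.x[n_persons:], 2))
```

The study's 75.79% figure corresponds to comparing the explained variance of such a feature-constrained model against a Rasch model with freely estimated item difficulties.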
3

Investigating How Equating Guidelines for Screening and Selecting Common Items Apply When Creating Vertically Scaled Elementary Mathematics Tests

Hardy, Maria Assunta, 09 December 2011
Guidelines for screening and selecting common items for vertical scaling have been adopted from equating. Differences between vertical scaling and equating suggest that these guidelines may not apply to vertical scaling in the same way that they apply to equating. For example, in equating the examinee groups are assumed to be randomly equivalent, but in vertical scaling the examinee groups are assumed to possess different levels of proficiency. Equating studies that examined the characteristics of the common-item set stress the importance of careful item selection, particularly when groups differ in ability level. Since cross-level ability differences are expected in vertical scaling, the common items' psychometric characteristics become even more important for obtaining a correct interpretation of students' academic growth. This dissertation applied two screening criteria and two selection approaches to investigate how changes in the composition of the linking sets impacted the nature of students' growth when creating vertical scales for two elementary mathematics tests. The purpose was to observe how well these equating guidelines apply in the context of vertical scaling. Two separate datasets were analyzed to observe the impact of manipulating the common items' content area and targeted curricular grade level. The same Rasch scaling method was applied for all variations of the linking set. Both the robust z procedure and a variant of the 0.3-logit difference procedure were used to screen unstable common items from the linking sets. (In vertical scaling, a directional item-difficulty difference must be computed for the 0.3-logit difference procedure.) Different combinations of stable common items were selected to make up the linking sets. The mean/mean method was used to compute the equating constant and linearly transform the students' test scores onto the base scale. A total of 36 vertical scales were created. The results indicated that, although the robust z procedure was a more conservative approach to flagging unstable items, the robust z and the 0.3-logit difference procedures produced similar interpretations of students' growth. The results also suggested that the choice of grade-level-targeted common items affected the estimates of students' grade-to-grade growth, whereas the results regarding the choice of content-area-specific common items were inconsistent. The findings from the Geometry and Measurement dataset indicated that the choice of content-area-specific common items had an impact on the interpretation of students' growth, while the findings from the Algebra and Data Analysis/Probability dataset indicated that the choice of content-area-specific common items did not appear to significantly affect students' growth. A discussion of the limitations of the study and possible future research is presented.
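For readers who want the screening and linking steps spelled out, the sketch below implements the robust z procedure, a centered variant of the 0.3-logit difference check, and the mean/mean equating constant for Rasch difficulties. The difficulty values, the 1.645 cutoff, and the centering choice are illustrative assumptions, not the dissertation's operational settings.

```python
import numpy as np

# Hypothetical Rasch difficulties of common items estimated in two adjacent
# grade-level calibrations; in practice these come from separate runs.
b_from = np.array([-1.20, -0.40, 0.10, 0.55, 1.30, -0.75, 0.90, 0.05])
b_to   = np.array([-0.45,  0.20, 0.95, 1.10, 2.05, -0.10, 1.60, 1.45])

d = b_to - b_from                       # directional difficulty differences

# Robust z: standardize each difference with the median and IQR, which
# resist distortion from the very outliers being screened (IQR/1.35 ~ SD).
iqr = np.percentile(d, 75) - np.percentile(d, 25)
z = (d - np.median(d)) / (0.74 * iqr)
robust_flags = np.abs(z) > 1.645

# 0.3-logit variant: flag items whose difference departs from the median
# shift by more than 0.3 logits. Centering is needed because in vertical
# scaling the expected cross-grade shift is nonzero (hence "directional").
logit_flags = np.abs(d - np.median(d)) > 0.3

keep = ~(robust_flags | logit_flags)    # retained stable common items

# Mean/mean linking: with Rasch calibration the slope is 1, so the constant
# is the mean difficulty difference over the retained common items.
A = d[keep].mean()
print("flagged items:", np.where(~keep)[0], " linking constant:", round(A, 3))
```

Adding the constant A to the lower-grade calibration places its parameters, and hence the students' scores, on the base scale; repeating this across grade pairs chains the tests into one vertical scale.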
