  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

An Examination of Test-Taking Attitudes and Response Distortion on a Personality Test

Smith, Jeffrey A. 30 April 1997 (has links)
This study examined test-taking attitudes and response distortion on a personality test. Consistent with our hypotheses, applicants had significantly more positive test-taking attitudes and exhibited a greater degree of response distortion than incumbents. In addition, test-taking attitudes were significantly associated with response distortion. However, test-taking attitudes did not affect work performance or test validity in the incumbent sample. Limitations and implications for future research are discussed. / Ph. D.
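The applicant-versus-incumbent comparison described above is the kind of group-mean contrast typically tested with an independent-samples t test. A minimal Python sketch using Welch's t statistic; the impression-management scores below are invented for illustration, not the study's data:

```python
# Sketch of a two-group comparison like the applicant/incumbent contrast
# above: Welch's t statistic for mean response-distortion scores.
# All scores are hypothetical.

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance, group b
    return (ma - mb) / (va / na + vb / nb) ** 0.5

applicants = [18, 22, 20, 25, 19, 23, 21, 24]  # hypothetical distortion scores
incumbents = [14, 16, 12, 15, 17, 13, 15, 14]

t = welch_t(applicants, incumbents)
print(round(t, 2))
```

Welch's variant is used here rather than the pooled-variance t test because it does not assume the two groups have equal variances.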
142

The Evaluation of Methods to Rapidly Assess Beverage Intake and Hydration Status

Kostelnik, Samantha Bond 09 April 2020 (has links)
Dehydration can impact the general population, but it is particularly detrimental for athletes because of their physical performance requirements. Although fluids in general contribute to meeting hydration needs, some beverages aid the rehydration process more than others. The Beverage Intake Questionnaire (BEVQ-15) is a food frequency questionnaire (FFQ) that can rapidly assess habitual beverage intake; this FFQ has been validated in children and adults. However, no beverage consumption questionnaire has been validated in athletes. In addition to monitoring fluid intake, hydration status can be assessed through urinary and blood indices. Urine color (UC) has been utilized as a practical hydration biomarker in several populations; however, it has not been validated among the general population of collegiate athletes. The first study (n=58) formulated a novel whey-permeate-based beverage to promote hydration and assessed its sensory characteristics in the general population. The overall acceptability of the beverage was lower than that of the control beverage on a 9-point Likert scale (x̅ = 4.5 – 4.9 and x̅ = 6.7, respectively). The second study (n=120) evaluated the comparative validity and reliability of the BEVQ-15 and UC within NCAA Division 1 collegiate athletes. Associations were noted between the BEVQ-15 and multiple 24-hr dietary recalls (the reference method) for total beverage fl oz and kcal (r=0.41 and r=0.47, p<0.05, respectively). Athletes' UC was associated with urine specific gravity (USG; a hydration biomarker) whether rated by athletes or by researchers (r=0.67 and 0.88, p<0.05, respectively). Lastly, a systematic review was performed to evaluate original research addressing the validity of UC as a hydration biomarker in the adult population more broadly, including athletes and older adults.
Eleven of 424 articles met inclusion criteria, and the available research generally reported significant correlations between UC and other hydration indices (r=0.35-0.93). However, limitations in the existing research were evident. Although the BEVQ-15 may be a valid beverage intake assessment method in collegiate athletes, modifications were identified that could improve its validity. Future work includes re-evaluating the validity and reliability of a BEVQ-15 specifically modified for athletes, as well as assessing the sensitivity of this FFQ to detect changes in beverage intake. / Doctor of Philosophy / Drinking adequate amounts of fluid is important for maintaining normal bodily functions. When body water losses exceed fluid intake, dehydration may result, which can lead to numerous consequences such as headaches, dizziness, decreased mental focus, and fatigue. An athlete, who faces high physical demands, may experience these consequences as well as muscle cramps, increased strain on the heart, and decreased athletic performance. Some beverages can replenish lost fluids better than others because of their electrolyte (i.e., sodium, potassium, magnesium) content; this may include whey-permeate-based beverages. To prevent dehydration, it is important to monitor fluid consumption and fluid losses. A beverage intake questionnaire (the BEVQ-15) can be used to quickly assess usual beverage intake. Studies have shown that this questionnaire is accurate in children, adolescents, and adults; however, there is currently no validated method for assessing habitual beverage intake in athletes. This dissertation evaluated the taste of a new whey-permeate hydration beverage, as well as the accuracy and test-retest reliability of the BEVQ-15 within NCAA Division 1 collegiate athletes, and found positive results.
Measurements in urine and blood can also be used to assess hydration status, but some of these methods are expensive and impractical for daily use in real-world settings. Urine color (UC) has been studied as a hydration indicator, but this dissertation is the first to evaluate the accuracy and reliability of this method within a diverse group of collegiate athletes in a real-world setting. Our results suggest that UC is a simple and reasonably accurate hydration assessment method when compared with another urinary assessment method. Nonetheless, little research has studied this topic. Future work can address methods to improve these approaches for maintaining and evaluating fluid intake and hydration status in the collegiate athletic population.
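The comparative-validity analyses summarized above rest on correlating FFQ estimates with a reference method (here, averaged 24-hr recalls). A minimal Python sketch of that computation; the intake values are hypothetical, not the study's data:

```python
# Sketch of a comparative-validity check: correlating beverage intake
# estimated by an FFQ (e.g. the BEVQ-15) with a reference method
# (mean of repeated 24-hr dietary recalls). Data are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical total daily beverage intake (fl oz) per athlete:
ffq     = [60, 85, 70, 95, 55, 80, 65, 90]   # FFQ estimate
recalls = [55, 90, 60, 100, 50, 70, 75, 85]  # mean of 24-hr recalls

r = pearson_r(ffq, recalls)
print(round(r, 2))
```

A strong positive r (the study reports r=0.41-0.47 for its real data) is taken as evidence that the quick questionnaire tracks the more burdensome reference method.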
143

Construct Deficiency in Avoidance Motivation: Development and Validation of a Scale Measuring Vigilance

Bateman, Tanner Alan 06 January 2017 (has links)
Two concerns dominate speculation about the lack of progress in motivational disposition research. First, truly unique dispositional constructs have not been identified since wide acceptance of the approach/avoidance distinction. Second, research has largely neglected to account for context in models of motivated behavior. Effective avoidance has gone systematically unassessed in motivation research. Social cognitive theory was used to define an effective-avoidance motivational trait, vigilance, as an antecedent to effective regulatory behaviors that are avoidant in nature and/or strategy. Two studies were conducted. The first developed and psychometrically evaluated a scale measuring vigilance within the existing motivational trait framework (Heggestad and Kanfer, 2000). Exploratory and confirmatory analyses provided initial validity evidence for the vigilance construct, which comprises diligence and error-detection facets. Convergent-discriminant analysis revealed that vigilance is significantly related to both approach and avoidance motivational constructs, identifying two possible sources of contamination in self-report measures of motivational traits: measurement items may be contaminated with implied outcomes, and they may be contaminated with generalized self-efficacy. In the second study, a within-subjects experiment tested the predictive validity of the vigilance scale for task-specific self-efficacy and performance on a task that rewards avoidance-oriented strategies. Vigilance predicted prevention task-specific self-efficacy (β = .29) in one of two experimental conditions. The validation study also offered construct validity evidence for the vigilance construct. Implications and future directions are discussed. / Ph. D.
144

Validity of interpretation: a user validity perspective beyond the test score

MacIver, R., Anderson, Neil, Costa, Ana-Cristina, Evers, A. 2014 (has links)
This paper introduces the concept of user validity and provides a new perspective on the validity of interpretations from tests. Test interpretation is based on outputs such as test scores, profiles, reports, spreadsheets of multiple candidates' scores, etc. The user validity perspective focuses on the interpretations a test user makes given the purpose of the test and the information provided in the test output. This perspective shows how user validity can be extended to content-, criterion-, and, to some extent, construct-related validity. It provides a basis for researching the validity of interpretations and an improved understanding of the appropriateness of different approaches to score interpretation, as well as of how to design test outputs and assessments that are pragmatic and optimal.
145

An evaluation of the construct validity of situational judgment tests

Trippe, David Matthew 10 December 2002 (has links)
Situational judgment tests (SJTs) are analogous to earlier forms of "high fidelity" simulations, and an ostensible paradox emerges from the consistent finding of criterion-related validity alongside an almost complete lack of construct validity evidence. The present study evaluates the extent to which SJTs can demonstrate convergent and discriminant validity by analyzing an SJT from a multitrait-multimethod perspective. A series of hierarchically nested confirmatory factor models was tested. Results indicate that the SJT demonstrates convergent and discriminant validity but also contains non-trivial amounts of construct-irrelevant method variance. Wide variability in the content and validation methods of SJTs is discussed as a reason previous attempts to find construct validity evidence have failed. / Master of Science
146

The Construct Validity of Self-Reported Historical Physical Activity

Bowles, Heather R. 05 1900 (has links)
The purpose of this study was to determine the construct validity of self-reported historical walking, running, and jogging (WRJ) activity. The criterion measure was concurrent performance on a maximal treadmill test. Subjects completed a medical exam and treadmill test between 1976 and 1985, and completed a follow-up questionnaire in 1986. The questionnaire included an item that assessed WRJ for each year from 1976 through 1985. Data analysis included Spearman correlations, partial correlations, ANOVA, and ANCOVA. Results indicated that self-reported historical WRJ can be assessed with reasonable validity when compared with concurrently measured treadmill performance, and that there is no decay in the accuracy of this reporting for up to ten years in the past.
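The Spearman correlations named above are Pearson correlations computed on rank-transformed data, which suits ordinal self-report items. A minimal Python sketch with hypothetical values (not the study's data):

```python
# Sketch of the Spearman rank correlation used in analyses like the one
# above: self-reported activity vs. treadmill performance.
# All data are hypothetical.

def ranks(values):
    """Rank values 1..n, averaging ranks within tie groups."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: self-reported weekly WRJ miles vs. treadmill time (min).
wrj_miles     = [0, 5, 10, 3, 20, 8, 15]
treadmill_min = [12, 16, 20, 15, 26, 18, 22]
print(round(spearman_rho(wrj_miles, treadmill_min), 2))  # → 1.0 (perfectly monotone toy data)
```

Because rho depends only on rank order, it tolerates the skewed, coarsely reported mileage values typical of retrospective questionnaires better than a raw-score correlation would.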
147

A critical analysis of the validity of nursing research studies on the effectiveness of relaxation techniques: an integrative review

Bleau, Huguette January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives, Université de Montréal.
148

A study of the validity of the H.K.C.E. Eng. (B) Paper III.

January 1996 (has links)
by Suen Lai Kuen, Denise. / Publication date from spine. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 103-114).
Contents:
Abstract --- p.i
Chapter 1: The issue and its background --- p.1
  1.1 Languages in Hong Kong society --- p.1
  1.2 Hong Kong Certificate of Education Examination --- p.2
  1.3 The validation of the HKCE Eng. (B) Examination Paper III --- p.3
Chapter 2: Review of relevant literature
  2.1 Reliability and validity --- p.6
    2.1.1 Reliability --- p.6
    2.1.2 Validity --- p.8
      2.1.2.1 Test validation --- p.7
      2.1.2.2 Construct validity --- p.10
      2.1.2.3 Content validity --- p.11
        2.1.2.3.1 Qualitative and quantitative approaches to content validation --- p.12
        2.1.2.3.2 Item difficulty and item discriminability --- p.13
      2.1.2.4 Criterion-related validity --- p.14
    2.1.3 Validity and reliability --- p.16
    2.1.4 Summary --- p.16
  2.2 Communicative paradigm and the study of language --- p.18
    2.2.1 Communicative competence --- p.20
    2.2.2 Framework of communicative competence --- p.21
    2.2.3 Problems in communicative language testing --- p.23
    2.2.4 Characteristics of communicative language tests --- p.24
    2.2.5 HKCE Eng. (B) Examination Paper III in the communicative paradigm --- p.25
    2.2.6 Features of spoken language --- p.26
      2.2.6.1 Speaking against time --- p.26
      2.2.6.2 Spoken against written language --- p.27
      2.2.6.3 Variety of vocabulary --- p.28
      2.2.6.4 Level of vocabulary --- p.29
      2.2.6.5 Intonation unit --- p.30
      2.2.6.6 Clausal construction --- p.32
      2.2.6.7 Sentence construction --- p.33
      2.2.6.8 Involvement with audience, with self, and with concrete reality --- p.33
      2.2.6.9 Features of conversations and lectures --- p.34
    2.2.7 Summary --- p.35
  2.3 Second language listening comprehension --- p.37
    2.3.1 Factors in L2 listening comprehension --- p.37
      2.3.1.1 Speech rate and syntactic structure --- p.37
      2.3.1.2 Sandhi and proficiency level --- p.38
      2.3.1.3 Syntactic simplification, repetition, and proficiency level --- p.39
      2.3.1.4 Discourse markers and proficiency level --- p.40
      2.3.1.5 Background knowledge --- p.41
      2.3.1.6 Text type and question type --- p.42
      2.3.1.7 Note taking, memory, and proficiency level --- p.43
      2.3.1.8 Syntactic simplicity and redundancy --- p.44
    2.3.2 Different types of listening skills --- p.46
    2.3.3 Summary --- p.49
Chapter 3: Content validation of the HKCE Eng. (B) Examination Paper III
  3.1 Test objectives and test specifications --- p.51
  3.2 Question types in the HKCE Eng. (B) Examination Paper III --- p.53
  3.3 Validity of test specifications --- p.58
    3.3.1 Domains of use --- p.58
    3.3.2 Listening comprehension component skills as test specifications --- p.59
  3.4 Validity of test content --- p.61
    3.4.1 Domains of use in test content --- p.62
      3.4.1.1 Domains of use in section A items --- p.62
      3.4.1.2 Domains of use in section B items --- p.66
    3.4.2 Content validity in terms of skills --- p.66
    3.4.3 Content validity in terms of text authenticity --- p.68
  3.5 Quantitative approach to content validation --- p.71
  3.6 Summary --- p.77
Chapter 4: Criterion-related (concurrent) validation of the HKCE Eng. (B) Examination Paper III
  4.1 The choice of the criterion --- p.78
    4.1.1 Test of English as a Foreign Language --- p.78
    4.1.2 Section 1 (Listening comprehension) of the TOEFL --- p.79
    4.1.3 The TOEFL program under the policy council --- p.80
    4.1.4 Development of TOEFL questions --- p.80
    4.1.5 Reliability and validity of the TOEFL --- p.81
    4.1.6 TOEFL as the criterion --- p.86
  4.2 The subjects and the procedure --- p.89
  4.3 Basic assumptions of correlation analysis --- p.90
  4.4 Statistical procedure and findings --- p.94
Chapter 5: Discussion and conclusion
  5.1 Content validity of the HKCE Eng. (B) Examination Paper III --- p.98
    5.1.1 Validity of test objectives and test specifications --- p.99
    5.1.2 Validity of test content in terms of testing specifications --- p.99
    5.1.3 Content validity in terms of skills --- p.99
    5.1.4 Content validity in terms of text authenticity --- p.100
    5.1.5 Content validity based on internal analysis --- p.100
  5.2 Criterion-related validity of the HKCE Eng. (B) Examination Paper III --- p.102
  5.3 Further research as corroboration --- p.103
Bibliography --- p.103
Appendix 1: Materials of the 1991 (session 2) HKCE Eng. (B) Examination Paper III --- p.115
Appendix 2: Teacher A's evaluation form --- p.136
Appendix 3: Teacher B's evaluation form --- p.142
Appendix 4: Materials of the TOEFL Section 1 (Listening Comprehension) used as the criterion --- p.147
Appendix 5: HKCE scores and TOEFL scores in the criterion-related validation study --- p.158
149

The development and initial validation of the cognitive response bias scale for the personality assessment inventory

Gaasedelen, Owen J. 01 August 2018 (has links)
The Personality Assessment Inventory (PAI) is a commonly used instrument in neuropsychological assessment; however, it lacks a symptom validity test (SVT) that is sensitive to cognitive response bias (also referred to as non-credible responding), as defined by performance on cognitive performance validity tests (PVTs). Therefore, the purpose of the present study was to derive from the PAI item pool a new SVT, named the Cognitive Response Bias Scale (CRBS), that is sensitive to non-credible responding, and to provide initial validation evidence supporting the use of the CRBS in a clinical setting. The current study utilized an existing neuropsychological outpatient clinical database consisting of 306 consecutive participants who completed the PAI and PVTs and met inclusion criteria. The CRBS was empirically derived from this database primarily within an Item Response Theory (IRT) framework. Of 40 items initially examined, 10 were ultimately retained on the basis of their empirical properties to form the CRBS. An examination of the internal structure of the CRBS indicated that 8 of its items demonstrated good fit to the graded response IRT model. Overall scale reliability was good (Cronbach's alpha = 0.77) and commensurate with other SVTs. Examination of item content revealed that the CRBS consists of items related to somatic complaints, psychological distress, and denial of fault. Items endorsed by participants exhibiting lower levels of non-credible responding consisted of vague and non-specific complaints, while participants with high levels of non-credible responding endorsed items indicating ongoing active pain and distress. The CRBS displayed expected relationships with other measures, including high positive correlations with negative impression management (r = 0.73), depression (r = 0.78), anxiety (r = 0.78), and schizophrenia (r = 0.71).
Moderate negative correlations were observed with positive impression management (r = -0.31), and treatment rejection (r = -0.42). Two hierarchical logistic regression models showed the CRBS has significant predictive power above and beyond existing PAI SVTs and clinical scales in accurately predicting PVT failure. The overall classification accuracy of the CRBS in detecting failure on multiple PVTs was comparable to other SVTs (area under the curve = 0.72), and it displayed moderate sensitivity (i.e., 0.31) when specificity was high (i.e., 0.96). These operating characteristics suggest that the CRBS is effective at ruling in the possibility of non-credible responding, but not for ruling it out. The conservative recommended cut score was robust to effects of differential prediction due to gender and education. Given the extremely small sample subsets of forensic-only and non-Caucasian participants, future validation is required to establish reliable cut-offs when inferences based on comparisons to similar populations are desired. Results of the current study indicate the CRBS has comparable psychometric properties and clinical utility to analogous SVTs in similar personality inventories to the PAI. Furthermore, item content of the CRBS is consistent with and corroborates existing theory on non-credible responding and cognitive response bias. This study also demonstrated that a graded response IRT model can be useful in deriving and validating SVTs in the PAI, and that the graded response model provides unique and novel information into the nature of non-credible responding.
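The operating characteristics reported above (sensitivity 0.31 at specificity 0.96) come from dichotomizing scale scores at a cut score and tabulating agreement with PVT failure. A minimal Python sketch of that computation; scores, labels, and the cut score below are hypothetical, not the CRBS data:

```python
# Sketch of cut-score classification statistics like those above:
# sensitivity and specificity of a validity scale for predicting PVT
# failure. All data and the cut score are hypothetical.

def sens_spec(scores, failed_pvt, cut):
    """Classify score >= cut as flagged; return (sensitivity, specificity)."""
    tp = sum(1 for s, f in zip(scores, failed_pvt) if f and s >= cut)
    fn = sum(1 for s, f in zip(scores, failed_pvt) if f and s < cut)
    tn = sum(1 for s, f in zip(scores, failed_pvt) if not f and s < cut)
    fp = sum(1 for s, f in zip(scores, failed_pvt) if not f and s >= cut)
    return tp / (tp + fn), tn / (tn + fp)

scores     = [4, 9, 12, 3, 8, 7, 11, 2, 14, 5]          # hypothetical scale scores
failed_pvt = [False, False, True, False, True,
              False, False, False, True, False]          # hypothetical PVT outcomes

sens, spec = sens_spec(scores, failed_pvt, cut=11)
print(round(sens, 2), round(spec, 2))
```

A conservative (high) cut score trades sensitivity for specificity, which is why such a scale can "rule in" non-credible responding while being poor at ruling it out: few credible responders are flagged, but many non-credible ones slip under the cut.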
150

The Construct Validation of an Instrument Based on Students’ University Choice and their Perceptions of Professor Effectiveness and Academic Reputation at the University of Los Andes

Montilla, Josefa Maria 03 December 2004 (has links)
The purpose of this study was to examine the construct validation of an instrument based on students' university choice and their perceptions of professor effectiveness and academic reputation at the University of Los Andes (ULA). Moreover, a comparative analysis was carried out to determine how the selected factors that influence students' decisions and perceptions differ according to student demographic factors such as gender and university campus. The instrument was developed with items based on three domains: university choice process, professor effectiveness, and university academic reputation. To determine the instrument's appropriateness for measuring students' decisions in the university choice process and their perceptions of professor effectiveness and university academic reputation at the ULA, this research examined the reliability of scores by domains and by factors across domains. The participants were undergraduate students registered in the second semester of 2002 and enrolled in courses by college within the ULA's main campus, which consists of ten colleges throughout the city of Mérida, and within the two university branch campuses in Táchira and Trujillo. A stratified probability sample was used to select the participants. The data show that the instrument has adequate internal consistency reliability estimates (all the domains exceeded .70). The confirmatory factor analysis shows that the overall fit indices revealed values at or close to the acceptable value of .90, even though the model has a statistically significant chi-square and significant problems with some of the standardized residuals, which indicates that the fit of the model could be significantly improved. The modified model revealed a relatively small improvement in overall goodness of fit. These results provide supportive evidence of construct validity.
Finally, multivariate analyses of variance using gender and university campus as predictor variables revealed a nonsignificant gender effect and a significant university campus effect. The Tukey multiple comparison test used to determine university campus differences across the domains showed approximately similar results, although the campuses are separate and distinguishable. ULA-Mérida students had the highest mean scores on the factors that influence their decisions in the university choice process and their perceptions of professor effectiveness and university academic reputation, while campus 1 (NURR-Trujillo) showed the lowest mean scores.
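Several entries above judge scale quality by internal consistency exceeding .70. That statistic is Cronbach's alpha; a minimal Python sketch with hypothetical Likert responses (rows = respondents, columns = items):

```python
# Sketch of the internal-consistency check used above: Cronbach's alpha
# for one domain of an instrument, against the conventional .70 threshold.
# Response data are hypothetical.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(rows[0])                                      # number of items
    items = [[row[j] for row in rows] for j in range(k)]  # transpose to columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return k / (k - 1) * (1 - item_var / total_var)

responses = [  # 5 hypothetical respondents x 4 items on a 1-5 scale
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]
alpha = cronbach_alpha(responses)
print(alpha >= 0.70)  # → True
```

Alpha rises when items covary (so total-score variance outstrips the sum of item variances), which is why it serves as a quick screen before the factor-analytic evidence described in the abstract.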
