1

A Comprehensive Review of Effect Size Reporting and Interpreting Practices in Academic Journals in Education and Psychology

Sun, Shuyan 24 September 2008 (has links)
No description available.
2

What’s still wrong with psychology, anyway? Twenty slow years, three old issues, and one new methodology for improving psychological research.

Woods, Bradley Dean January 2011 (has links)
Recent retrospectives of classic psychology articles by Meehl (1978) and Wachtel (1980), concerning problems with psychology’s research paradigm, have been viewed by commentators, on the whole, as being as germane now as when they were first published. However, no similar examination of Lykken’s (1991) classic criticisms of psychology’s dominant research tradition has been undertaken. Twenty years on, this thesis investigates whether Lykken’s criticisms and conclusions are still valid via an exposition of three contentious issues in psychological science: the measurement problem, null hypothesis significance testing, and the granularity of research methods. Though finding that little progress has been made, the thesis advances Observation Oriented Modelling as a promising methodological solution for improving psychological research.
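A toy simulation (illustrative only, not from the thesis) makes the NHST criticism concrete: as the sample grows, a practically negligible true effect reliably crosses the p < .05 threshold, while the effect-size estimate stays tiny throughout.

```python
# Illustrative only: with a large enough sample, a trivially small true
# effect (Cohen's d = 0.05) yields a "significant" p-value -- one of the
# classic criticisms of null hypothesis significance testing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d_true = 0.05  # a practically negligible standardized mean difference

for n in (50, 500, 50_000):
    a = rng.normal(0.0, 1.0, n)       # control group
    b = rng.normal(d_true, 1.0, n)    # treatment group, tiny true shift
    t, p = stats.ttest_ind(a, b)
    # Pooled-SD estimate of Cohen's d stays near 0.05 regardless of n
    sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d_hat = (b.mean() - a.mean()) / sp
    print(f"n={n:>6}  p={p:.4f}  d_hat={d_hat:.3f}")
```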
3

A framework for the use and interpretation of statistics in reading instruction / Jeanette Brits

Brits, Jeanette January 2007 (has links)
There are few instructional tasks more important than teaching children to read. The consequences of low achievement in reading are costly both to individuals and to society: low achievement in literacy correlates with high rates of school drop-out, poverty, and underemployment. The far-reaching effects of literacy achievement have heightened the interest of educators and non-educators alike in the teaching of reading. Successful efforts to improve reading achievement emphasise the identification and implementation of evidence-based practices that promote high rates of achievement when used in classrooms by teachers with diverse instructional styles, with children who have diverse instructional needs and interests. Being able to recognise what characterises rigorous evidence-based reading instruction is essential to choosing the right reading curriculum (i.e., method or approach). It is necessary to ensure that general classroom reading instruction is of universally high quality and that practitioners are prepared to implement validated reading interventions effectively. When educators are not familiar with research methodologies and findings, national and provincial departments of education may find themselves implementing fads or incomplete findings.

The choice of a method of instruction is very often based on empirical research studies, and the selection of statistical procedures is an integral part of the research process. Statistical significance testing is a prominent feature of data analysis in language learning studies generally and in reading instruction studies specifically. For many years, methodologists have debated what statistical significance testing means and how it should be used in the interpretation of substantive results. Researchers have long placed a premium on statistical significance testing; however, criticisms of the procedure are prevalent across many scientific disciplines. Critics of statistical significance tests have made several suggestions, the underlying theme being that researchers should examine and interpret their data carefully and thoroughly, rather than relying solely upon p values to determine which results are important enough to examine further and report in journals. Specific suggestions include the use of effect sizes, confidence intervals, and power.

The purpose of this study was to: determine the state of affairs with regard to statistical significance testing in reading instruction research, with specific reference to post-1999 literature (post-1999 literature was selected because of the specific request, made by Wilkinson and the Task Force on Statistical Inference in 1999, to include the reporting of effect sizes in empirical research studies); determine what criticisms of, and defences for, statistical significance testing have been offered; determine what the alternatives or supplements to statistical significance testing in reading instruction research are; and provide a framework for the most effective and appropriate selection, use and representation of statistical significance testing in the reading instruction research field.

A comprehensive survey of the use of statistical significance testing, as manifested in randomly selected journals, was undertaken. Six journals that regularly include articles related to reading instruction research and publish articles reporting statistical analyses (System, Language Learning and Technology, The Reading Matrix, Scientific Studies of Reading, Teaching English as a Second or Foreign Language (TESL-EJ), and the South African Journal for Language Teaching) were reviewed and analysed. All articles in these journals from 2000 to 2005 that employed statistical analyses were reviewed; the data were analysed by means of descriptive statistics (frequency counts and percentages), and qualitative reporting was also utilised. The review of these six readily accessible (online) journals indicated that researchers rely very heavily on statistical significance testing and very seldom, if ever, report effect size/effect magnitude or confidence interval measures when documenting their results. A review of the literature indicates that null hypothesis significance testing has been, and remains, a controversial method of extracting information from experimental data and of guiding the formation of scientific conclusions. Several alternatives or complements to null hypothesis significance testing, namely effect sizes, confidence intervals and power analysis, have been suggested.

The following central theoretical statement was formulated for this study: statistical significance tests should be supplemented with accurate reports of effect size, power analyses and confidence intervals in reading research studies. In addition, quantitative studies utilising such statistics should be supplemented with qualitative studies in order to obtain a more comprehensive picture of reading instruction research. Research indicates that no single study ever establishes a programme or practice as effective; rather, it is the convergence of evidence from a variety of study designs that is ultimately scientifically convincing. When evaluating studies and claims of evidence, educators should ask not whether a study is quantitative or qualitative in nature, but whether it meets the standards of scientific research. The proposed framework presented in this study consists of three main parts: part one focuses on the study's description of the intervention and the random assignment process, part two on the study's collection of data, and part three on the study's reporting of results, specifically the statistical reporting of the results. / Thesis (Ph.D. (English))--North-West University, Potchefstroom Campus, 2007.
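The thesis's central recommendation — supplementing p values with effect sizes, confidence intervals, and power — can be sketched as follows. The two-group reading-score data and all numbers below are invented for illustration.

```python
# Hedged sketch: report effect size, CI, and power alongside the t-test,
# rather than the p-value alone. Data are synthetic reading scores.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(1)
a = rng.normal(60, 10, 40)   # hypothetical scores, instruction method A
b = rng.normal(66, 10, 40)   # hypothetical scores, instruction method B

t, p = stats.ttest_ind(a, b)

# Effect size: Cohen's d with a pooled standard deviation
sp = np.sqrt(((len(a)-1)*a.var(ddof=1) + (len(b)-1)*b.var(ddof=1))
             / (len(a) + len(b) - 2))
d = (b.mean() - a.mean()) / sp

# 95% confidence interval for the mean difference
se = sp * np.sqrt(1/len(a) + 1/len(b))
df = len(a) + len(b) - 2
tcrit = stats.t.ppf(0.975, df)
diff = b.mean() - a.mean()
ci = (diff - tcrit * se, diff + tcrit * se)

# Power of a two-sample t-test for the observed effect size
power = TTestIndPower().power(effect_size=d, nobs1=len(a), alpha=0.05)

print(f"p={p:.4f}  d={d:.2f}  "
      f"95% CI=({ci[0]:.2f}, {ci[1]:.2f})  power={power:.2f}")
```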
4

Adaptive dim point target detection and tracking in infrared images

DeMars, Thomas V. 12 1900 (has links)
Approved for public release; distribution is unlimited / The thesis deals with the detection and tracking of dim point targets in infrared images. Research topics include image process modeling with adaptive two-dimensional Least Mean Square (LMS) and Recursive Least Squares (RLS) prediction filters. Target detection is performed by significance testing the prediction error residual. A pulse tracker is developed which may be adjusted to discriminate target dynamics. The methods are applicable to detection and tracking in other spectral bands. / http://archive.org/details/adaptivedimpoint00dema / Major, United States Marine Corps
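A minimal sketch of the underlying idea — not the thesis's implementation: a causal LMS filter adaptively predicts the slowly varying infrared background, and pixels whose prediction error residual is improbably large are flagged as point-target candidates. The synthetic frame, step size, and threshold are assumptions for illustration.

```python
# Sketch: 2-D adaptive LMS background prediction; detection = significance
# test on the prediction error residual. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
frame = rng.normal(100.0, 2.0, (64, 64))   # synthetic IR background
frame[40, 25] += 15.0                      # a dim point target

mu = 1e-5                                  # LMS step size
w = np.zeros(4)                            # weights for 4 causal neighbours
resid = np.zeros_like(frame)

for p in range(2):                         # pass 0 adapts, pass 1 measures
    for r in range(1, frame.shape[0]):
        for c in range(1, frame.shape[1] - 1):
            # causal support: left, upper-left, up, upper-right
            x = np.array([frame[r, c-1], frame[r-1, c-1],
                          frame[r-1, c], frame[r-1, c+1]])
            e = frame[r, c] - w @ x        # prediction error residual
            w += mu * e * x                # LMS weight update
            if p == 1:
                resid[r, c] = e

# Significance test on the residual: flag anything beyond ~4 sigma
sigma = resid[1:, 1:-1].std()
hits = np.argwhere(np.abs(resid) > 4 * sigma)
print(hits)                                # should include (40, 25)
```

The background, being spatially correlated, is predictable and produces a small residual; an unresolved point target is not predictable from its neighbours and survives in the residual, which is what the thresholding exploits.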
5

"Jag vill spara etiskt, men vad har jag för alternativ?" : En kvantitativ studie av etiska restriktioners påverkan på svenska fonders prestation

Atterby, Alfred, Ekström Hagevall, Adam, Wikström, Carl January 2019 (has links)
During the last few decades, the Swedish population has shown an increased interest in investment fund savings, and more than 60% of Swedish citizens save through funds today. In addition, awareness of climate change and related risks has increased, which has contributed to a greater focus on corporate sustainability among Swedish companies. As a result of these trends, a growing number of fund companies base their investments on certain ethical restrictions, so that private investors can save ethically. The purpose of this study was to examine how ethical restrictions affect the financial performance of Swedish funds with regard to risk-adjusted return. Previous studies have focused on comparing ethical and traditional funds, but this study chose not to distinguish between the two types of funds. The study's relevance lies in making private investors aware of which ethical restrictions have a negative impact on risk-adjusted return, and of how much each restriction decreases the return. A total of 101 Swedish funds were analyzed. Information about each fund's performance measures was retrieved from Morningstar and is based on three years' development; information about each fund's ethical restrictions was retrieved from Hållbarhetsprofilen and the funds' information pamphlets. With data on the performance measures Sharpe ratio, Alpha, and Treynor ratio, three statistical models were defined and analyzed with multiple linear regression analysis. Each model's reliability was assessed with residual analysis; the models were adjusted and improved where necessary. Hypotheses were evaluated with significance testing to answer the research questions. The results indicate that exclusion of tobacco and gambling companies affects the risk-adjusted return of Swedish funds negatively, while exclusion of alcohol companies affects it positively. This implies that private investors should save their money in Swedish funds that exclude alcohol companies in order to avoid lower risk-adjusted return.
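A hedged sketch of the study's regression setup, with invented data: each fund's risk-adjusted return measure is regressed on dummy variables marking which ethical exclusions the fund applies, and the t-tests on the dummy coefficients play the role of the study's significance tests.

```python
# Sketch only: synthetic fund data with made-up coefficient signs chosen
# to mirror the reported findings (tobacco/gambling negative, alcohol
# positive). Not the study's data or code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 101  # the study analyzed 101 Swedish funds

df = pd.DataFrame({
    "excl_tobacco":  rng.integers(0, 2, n),
    "excl_gambling": rng.integers(0, 2, n),
    "excl_alcohol":  rng.integers(0, 2, n),
})
df["sharpe"] = (0.8
                - 0.10 * df.excl_tobacco
                - 0.08 * df.excl_gambling
                + 0.06 * df.excl_alcohol
                + rng.normal(0, 0.15, n))   # noise

X = sm.add_constant(df[["excl_tobacco", "excl_gambling", "excl_alcohol"]])
model = sm.OLS(df["sharpe"], X).fit()
print(model.summary())  # t-tests on each dummy = the significance tests
```

Analogous models would swap the Sharpe ratio for Alpha or the Treynor ratio as the dependent variable, matching the study's three models.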
6

What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis

Maraun, Douglas January 2006 (has links)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system, and for some decades now powerful computers have been capable of simulating the past and future evolution of climate.

Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific.

This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems.

First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established; this requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. Conversely, when aiming to infer short memory by means of DFA, a local slope larger than α = 0.5 at large scales does not necessarily imply long memory; also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function. It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set.

In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and the bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented.

In the last part, the coupling between El Niño/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results, and periods of coupling so far unknown and invisible to linear methods are detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system. Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
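A compact sketch of the DFA argument from the first part (assumed details: order-1 detrending, an AR(1) toy series): rather than fitting one global power law, one inspects the local slopes of the fluctuation function, which for a short-memory process drift back toward 0.5 at large scales even though a limited scale range can mimic scaling.

```python
# Sketch: DFA-1 with local slopes of log F(n). The AR(1) input is
# short-memory, yet its fluctuation function can look "scaling" over a
# limited range -- the thesis's specificity point.
import numpy as np

rng = np.random.default_rng(4)
x = np.zeros(2**14)
for t in range(1, x.size):                 # AR(1), short-range correlated
    x[t] = 0.7 * x[t-1] + rng.normal()

y = np.cumsum(x - x.mean())                # profile of the series
scales = np.unique(np.logspace(2, 11, 30, base=2).astype(int))
F = []
for n in scales:
    segs = y[:y.size // n * n].reshape(-1, n)
    t = np.arange(n)
    ms = []
    for s in segs:                         # detrend each segment linearly
        coef = np.polyfit(t, s, 1)
        ms.append(np.mean((s - np.polyval(coef, t))**2))
    F.append(np.sqrt(np.mean(ms)))         # fluctuation function F(n)

logn, logF = np.log(scales), np.log(F)
local_slopes = np.diff(logF) / np.diff(logn)
# Long memory would require the slope to settle at a constant alpha > 0.5;
# for AR(1) it decays toward 0.5 at large scales.
for n, a in zip(scales[1:], local_slopes):
    print(f"n={n:>5}  local slope ≈ {a:.2f}")
```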
7

Exploring a meta-theoretical framework for dynamic assessment and intelligence

Murphy, Raegan 30 September 2007 (has links)
Dynamic assessment, as a form of alternative, process-based assessment, is currently at a crossroads, chiefly characterised by vague conceptualisation of terminology, blurred demarcation of its status as model or theory, and an at times ill-defined fundamental philosophy. As a movement in modern psychological assessment within the broader field of intelligence, dynamic assessment does not present a coherent unifying theory as such, and owing to its lack of clarity in a number of key areas, eventual disuse might well be the final outcome of this method, with its unique history and methodology. In pursuit of this study's main goal, dynamic assessment models and theories are critically explored by means of a meta-theory largely inspired by the work of K.B. Madsen, a Danish meta-theorist and pioneer in theoretical psychology. Madsen's meta-theory is attenuated to suit the nature and purposes of this study, so as to better analyse dynamic assessment within intelligence research and assessment. In its primary aim, this study builds on a foundation of epistemological and ontological considerations within science in general, the social sciences, and psychology in particular. In keeping with Madsen's method of meta-theoretical analysis, the author's predilections are stated at the outset in order to situate the analyses of the various models and theories within dynamic assessment. Dynamic assessment and intelligence are discussed, and a brief digression into the history of Soviet psychology is offered, as it is pertinent to the work of Lev Vygotsky and its subsequent influence within process-based assessment. Theory and model development within science and the social sciences are described from a philosophy-of-science vantage point. Psychological assessment's prime considerations are critically explored, and the discussion highlights the role played by the philosophical aspects of mathematics and statistical foundations in leveraging measurement within assessment. Particular attention is paid to the perennial controversy surrounding null hypothesis significance testing and to the possible future directions that can be explored by and within dynamic assessment, which lends itself to approaches less restrictive than those offered by mainstream statistics. The obvious and not so obvious aspects of the mathematical, statistical and measurement foundations are critically explored in terms of how best dynamic assessment can manoeuvre within the current mainstream psychological assessment system, and how new models of item response theory suited to change-based assessment can be explored as a possible manner of handling the gain-score issue, itself a paradoxical state of affairs within classical and modern test theory. Dynamic assessment's past has in large part been dictated by mainstream considerations in the areas mentioned, and these considerations are critically assessed in terms of dynamic assessment's future path. Dynamic assessment and its place within the broader intelligence assessment field is then investigated by means of the meta-theory developed. It is envisaged that the intuitive appeal of dynamic assessment will continue to garner support from practitioners across the globe, specifically those trained in countries outside the traditional stronghold of Western psychological theory.
However, the position taken in this argument is that, in order to ensure its survival, dynamic assessment will need to make a decision about its future progress: either to branch off from mainstream assessment altogether or to become fused with mainstream assessment. The "best of both worlds" scenario has evidently not worked out as originally hoped. The meta-theoretical exploration of dynamic assessment within intelligence is carried out on a small selection of current models. The application of the attenuated Madsenian framework seeks to explore, place and ascertain the nature of each model regarding the ontological and philosophical status of the approach; the nature of the hypothetical terminology, scientific hypotheses and hypothesis system utilised; and, lastly, the nature of the abstract data, concrete data and prime considerations as implicit concerns within the varied approaches. An HQ score is calculated for each such model as a partial indicator of the testability (verifiability or falsifiability) of the model in question. The models are thus couched in meta, hypothetical and data strata and can be positioned on a continuum of sorts, according to which tentative claims can be made regarding the veracity of the approach behind each model. The study concludes with two appendices. The first is a meta-analysis conducted on South African research in the field of dynamic assessment (1961-2002), which culminated in a significant effect size evidencing the overall positive effect that dynamic assessment has had as an alternative intervention technique in comparison to conventional, static assessment models; to encourage replication, all details pertaining to the studies included in the meta-analysis are attached in section 2 of that appendix. The second is an informal content analysis conducted on eleven responses to questionnaires originally delivered to one hundred dynamic assessment practitioners and researchers across the globe. The purpose of the questionnaire was to gather information on core issues within dynamic assessment, as these fundamental issues were considered pivotal to the approach's eventual development or stagnation. The analysis concluded that dynamic assessment is indeed perceived to be at a crossroads of sorts, supporting the initial hypothesis stated above. It is hoped that this theoretical study will aid in aligning dynamic assessment in such a manner that its eventual place in psychological assessment will be solidly grounded, theoretically defensible and viable as an alternative manner of assessment. / Thesis (PhD (Psychology))--University of Pretoria, 2007. / Psychology / PhD / unrestricted
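For readers unfamiliar with the HQ score: one common formulation of Madsen's hypotheses quotient is the ratio of purely theoretical (H-H) hypotheses to partly empirical (H-S and H-R) hypotheses, with lower values suggesting greater testability. The formulation and counts below are assumptions for illustration, not taken from the thesis.

```python
# Hedged sketch of one formulation of Madsen's hypotheses quotient:
#   HQ = (number of theoretical H-H hypotheses)
#        / (number of partly empirical H-S and H-R hypotheses)
# Lower HQ suggests a more testable theory on this index. Counts invented.
def hq(n_hh: int, n_hs: int, n_hr: int) -> float:
    """Hypotheses quotient for a theory's hypothesis system."""
    empirical = n_hs + n_hr
    if empirical == 0:
        raise ValueError("no empirical hypotheses: HQ undefined")
    return n_hh / empirical

# Example: a hypothetical dynamic-assessment model with 6 theoretical
# hypotheses, 4 hypothesis-stimulus links and 5 hypothesis-response links
print(f"HQ = {hq(6, 4, 5):.2f}")  # 0.67 -> moderately testable
```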
