  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
621

Advanced Quantitative Measurement Methodology in Physics Education Research

Wang, Jing 11 September 2009 (has links)
No description available.
622

A Hierarchy of Grammatical Difficulty for Japanese EFL Learners: Multiple-Choice Items and Processability Theory

Nishitani, Atsuko January 2012 (has links)
This study investigated the difficulty order of 38 grammar structures obtained from an analysis of multiple-choice items using a Rasch analysis. The order was compared with the order predicted by processability theory and the order in which the structures appear in junior and senior high school textbooks in Japan. Because processability theory is based on natural speech data, a sentence repetition test was also conducted in order to compare its results with the order obtained from the multiple-choice tests and the order predicted by processability theory. The participants were 872 Japanese university students, whose TOEIC scores ranged from 200 to 875. The difficulty order of the 38 structures was displayed according to their Rasch difficulty estimates: the most difficult structure was the subjunctive and the easiest was the present perfect with since. The order was not in accord with the order predicted by processability theory, and the difficulty order derived from the sentence repetition test was not accounted for by processability theory either. In other words, the results suggest that processability theory accounts only for natural speech data, not for elicited data. Although the order derived from the repetition test differed from the order derived from the written tests, the two correlated strongly when the repetition test used ungrammatical sentences. This study tentatively concluded that the students could have used their implicit knowledge when answering the written tests, but it is also possible that they used their explicit knowledge when correcting ungrammatical sentences in the repetition test. The difficulty order of grammatical structures derived from this study was not in accord with the order in which the structures appear in junior and senior high school textbooks in Japan. The correlation was extremely low, which suggests that there is no empirical basis for textbook writers' policy regarding the ordering of grammar items.
This study also demonstrated the difficulty of writing items that test the same grammar point and show similar Rasch difficulty estimates. Even though the vocabulary and the sentence positions were carefully controlled and the two items looked parallel to teachers, they often displayed very different difficulty estimates. A questionnaire administered concerning such items suggested that students look at the items differently than teachers do, and that what students notice, and how they interpret it, strongly influences item difficulty. Teachers and test writers should be aware that it is difficult to write items that produce similar difficulty estimates, and that their own intuition or experience might not be the best guide for writing effective grammar test items. It is recommended that test items be piloted to obtain statistical information about item functioning and qualitative data from students, using a think-aloud protocol, interviews, or a questionnaire. / CITE/Language Arts
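The Rasch difficulty estimates that order the 38 structures come from a one-parameter logistic model. As a minimal sketch (the difficulty values below are invented for illustration, not the study's estimates), the probability that an examinee of ability theta answers an item of difficulty b correctly can be written as:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))), both in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical Rasch difficulty estimates (logits) for three structures
difficulties = {
    "present perfect with 'since'": -1.8,
    "passive voice": 0.2,
    "subjunctive": 2.1,
}

theta = 0.5  # ability of one examinee, in logits
for structure, b in sorted(difficulties.items(), key=lambda kv: kv[1]):
    print(f"{structure:30s} P(correct) = {rasch_prob(theta, b):.2f}")
```

An item is "harder" in the Rasch sense exactly when its success probability is lower at every ability level, which is what makes a single difficulty ordering possible.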
623

Effect of Unequal Sample Sizes on the Power of DIF Detection: An IRT-Based Monte Carlo Study with SIBTEST and Mantel-Haenszel Procedures

Awuor, Risper Akelo 04 August 2008 (has links)
This simulation study focused on determining the effect of unequal sample sizes on the statistical power of the SIBTEST and Mantel-Haenszel procedures for the detection of DIF of moderate and large magnitudes. Item parameters were generated and estimated with the 2PLM using WinGen2 (Han, 2006). MULTISIM was used to simulate ability estimates and to generate response data that were analyzed by SIBTEST. The SIBTEST procedure with regression correction was used to calculate the DIF statistics, namely the DIF effect size and the statistical significance of the bias. The older SIBTEST was used to calculate the DIF statistics for the M-H procedure. SAS provided the environment in which the ability parameters were simulated, response data were generated, and DIF analyses were conducted. Test items were inspected to determine whether the a priori manipulated items demonstrated DIF. The results indicated that with unequal samples in any ratio, M-H had better Type I error rate control than SIBTEST. They also indicated that not only the ratios but also the sample size and the magnitude of DIF influenced the error rate behavior of SIBTEST and M-H. With small samples and moderate DIF magnitude, Type II errors were committed by both M-H and SIBTEST when the reference-to-focal-group sample size ratio was 1:0.10, due to low observed statistical power and inflated Type I error rates. / Ph. D.
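The Mantel-Haenszel procedure compared here pools, over ability strata (typically matched on total score), 2×2 tables of group (reference/focal) by response (correct/incorrect) into a common odds ratio. A minimal sketch with made-up counts — not the study's MULTISIM/SAS pipeline — looks like this:

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio for one item.

    Each stratum is a tuple (ref_correct, ref_wrong, foc_correct, foc_wrong);
    alpha > 1 suggests the item favors the reference group."""
    num = den = 0.0
    for a, b, c, d in strata:
        t = a + b + c + d
        if t == 0:
            continue  # skip empty strata
        num += a * d / t
        den += b * c / t
    return num / den

def mh_delta(alpha):
    """ETS delta metric: negative values flag DIF against the focal group."""
    return -2.35 * math.log(alpha)

# Hypothetical counts in three ability strata (not simulated study data)
strata = [(40, 10, 30, 20), (35, 15, 25, 25), (30, 20, 20, 30)]
alpha = mh_odds_ratio(strata)
delta = mh_delta(alpha)
```

In a power study like this one, the statistic would be computed for each replication and item, and the rejection rate across replications estimates the observed power or Type I error rate.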
624

Essays zu methodischen Herausforderungen im Large-Scale Assessment / Essays on Methodological Challenges in Large-Scale Assessment

Robitzsch, Alexander 21 January 2016 (has links)
A number of methodological challenges accompany the growing use of empirical large-scale assessments of student achievement. This thesis investigates the consequences of model violations in unidimensional item response models (particularly the Rasch model), focusing on four methodological challenges. First, position and context effects imply that, contrary to a unidimensional IRT model, item difficulties are not independent of an item's position in the test booklet or of the booklet's composition, and student abilities can vary over the course of a test. Second, presenting items within testlets induces local dependencies, and it is unclear whether and how these should be accounted for in scaling. Third, item difficulties can vary between school classes because of different opportunities to learn. Fourth, omitted items occur, especially in low-stakes tests. The thesis argues that model violations do not necessarily entail biased estimates of item difficulties, person abilities, and reliabilities. It further emphasizes that one often cannot, and should not, decide on psychometric grounds alone which IRT model is preferable. The same applies to the question of how omitted items should be scored; only validity considerations can provide guidance here. Model violations in IRT models can be plausibly framed within the domain-sampling approach (item sampling; generalizability theory). This thesis shows that the statistical uncertainty in modeling competencies arises not only from the sampling of persons but also from the sampling of items and the choice of statistical models.
/ Several methodological challenges emerge in large-scale student assessment studies like PISA and TIMSS. Item response models (IRT models) are essential for scaling student abilities within these studies. This thesis investigates the consequences of several model violations in unidimensional IRT models (especially in the Rasch model). In particular, this thesis focuses on the following four methodological challenges of model violations. First, position effects and contextual effects imply (in comparison to unidimensional IRT models) that item difficulties depend on the item position in a test booklet as well as on the composition of a test booklet; furthermore, student abilities are allowed to vary across test positions. Second, the administration of items within testlets causes local dependencies, but it is unclear whether and how these dependencies should be taken into account in the scaling of student abilities. Third, item difficulties can vary among different school classes due to different opportunities to learn. Fourth, the number of omitted items is in general non-negligible in low-stakes tests. In this thesis it is argued that estimates of item difficulties, student abilities, and reliabilities can be unbiased despite model violations. Furthermore, it is argued that the choice of an IRT model cannot and should not be made (solely) from a psychometric perspective. This also holds true for the problem of how to score omitted items; only validity considerations provide reasons for choosing an adequate scoring procedure. Model violations in IRT models can be conceptually classified within the approach of domain sampling (item sampling; generalizability theory), in which the existence of latent variables need not be posited. It is argued that statistical uncertainty in modelling competencies depends not only on the sampling of persons but also on the sampling of items and on the choice of statistical models.
625

Psychometric properties of a Venda version of the Sixteen Personality Factor Questionnaire (16PF)

Mantsha, Tshifhiwa Rebecca 10 1900 (has links)
A Venda version of the South African Sixteen Personality Factor Questionnaire, Fifth Edition (16PF5), was developed using forward and back translation methods. This version was administered to a sample of 85 Venda-speaking subjects, who ranged in age from 18 to 30 years. An item analysis was done, and a qualitative analysis of the reasons why items were not successful was conducted for each scale. The reasons identified included translation errors, problems in understanding the vocabulary and idiomatic language used, the use of the negative form, and possible differences in the manifestation of constructs. Given the large number of items to be excluded, only general trends were indicated, so as to avoid over-interpretation. These trends need to be considered when changing or replacing items. The results of this study can be regarded as a first step in developing a Venda version of the 16PF5. / Psychology / M.A. (Psychology)
626

Lagerstyrning – Förståelse är grunden till förbättring : Utformning av en teoretisk lagerstyrningsmodell för att skapa förståelse för hur lageromsättningshastigheten kan öka samt applicering av denna på Sandviks produktionsavdelning i Svedala för att identifiera möjliga förbättringar. / Inventory Control - Understanding is the basis for improvement : Designing of a theoretical model of inventory control to create an understanding regarding how inventory turnover may increase, and applying this model on Sandvik’s production department in Svedala to identify possible improvements.

Råstrander, Frida, Hejdenberg, Linnea January 2016 (has links)
Background: For companies that hold inventory, working with inventory control is a key factor in increasing efficiency. Inventory control concerns the planning and control of the inventory in order to serve customers and production. Within inventory control, it is important that companies make decisions about which order quantity to order and when the order should be placed so that it is available in stock at the right time. Companies can use safety stock when controlling their inventory to ensure that they can handle uncertainties in demand and production. Purpose: The purpose of the study is, based on an analysis of inventory control theory, to design a theoretical inventory control model that creates an understanding of how inventory turnover can be increased. The model is then applied empirically to the current articles of Sandvik's production department in Svedala in order to identify possible improvements. Method: The study was conducted as a case study at Sandvik's production department in Svedala, based on a theoretically derived inventory control model. The theory for designing the model was collected from professional literature and scientific articles, and the empirical data were collected through interviews and numerical data. Both theory and empirical data were then analyzed using a qualitative approach. Concluding remarks: The theoretical inventory control model first presents criteria that affect inventory turnover, then steps for carrying out an ABC classification, and finally different inventory control methods for deciding how orders should be placed and how large the safety stock should be. The model is pedagogical and clear, so as to help companies understand how they can increase their inventory turnover.
The inventory control designed for Sandvik's production department in Svedala on the basis of the theoretical model consisted of the ordering methods lot-for-lot, estimated order quantity, and cover-time planning, together with safety stock based on manual assessments and on lead-time consumption. With this inventory control, the department can proactively avoid obsolete and slow-moving stock in the future. / Background: In order to increase efficiency, companies that keep inventory need to work with inventory control. Inventory control concerns the planning and control of the inventory to improve customer and production service. Within inventory control, it is important that companies make decisions about the quantity to be ordered and when the order should be placed so that goods are available in the warehouse at the right time. Companies can use safety stock to ensure that they can deal with uncertainties in demand and production. Purpose: The purpose of this study is, based on an analysis of inventory control theory, to design a theoretical model of inventory control that creates an understanding of how inventory turnover may increase. Furthermore, the theoretical model is applied empirically to the current articles of Sandvik's production department in Svedala to identify possible improvements. Method: The study was conducted as a case study at Sandvik's production department in Svedala, based on a theoretically developed model of inventory control. The theory for the design of the model was obtained from professional literature and scientific articles; empirical data were collected through interviews and numerical data. Both theory and empirical data were analyzed using a qualitative approach.
Concluding remarks: The theoretical model of inventory control includes criteria that affect inventory turnover, the steps needed to implement an ABC classification, and various inventory control methods for determining how orders should be placed and how much safety stock should be held. The model is pedagogical and clear, so as to create an understanding of how companies can increase their inventory turnover. The inventory control formed for Sandvik's production department in Svedala on the basis of the theoretical model consisted of the ordering methods lot-for-lot, estimated order quantity, and cover-time planning; the safety stock methods were based on manual assessments and on lead-time consumption. With this control, Sandvik's production department in Svedala can proactively avoid obsolete and slow-moving inventory in the future.
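The ABC classification and safety stock steps of such a model can be sketched roughly as below. The cut-offs, article names, and numbers are hypothetical, not Sandvik data, and the safety stock formula shown (z·σ·√L for demand uncertainty over the lead time) is one standard textbook method; the thesis also relies on manual assessments:

```python
import math

def abc_classify(items, a_cut=0.80, b_cut=0.95):
    """Rank articles by annual usage value; roughly the top 80% of cumulative
    value becomes class A, the next 15% class B, and the rest class C."""
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(items.values())
    classes, cum = {}, 0.0
    for name, value in ranked:
        cum += value
        share = cum / total
        classes[name] = "A" if share <= a_cut else ("B" if share <= b_cut else "C")
    return classes

def safety_stock(z, sigma_demand, lead_time):
    """Safety stock for demand uncertainty over a fixed lead time (z * sigma * sqrt(L))."""
    return z * sigma_demand * math.sqrt(lead_time)

# Hypothetical annual usage values per article
classes = abc_classify({"liner": 700, "mantle": 200, "bolt": 60, "seal": 40})
# z = 1.65 (~95% service level), demand std dev 20 units/period, lead time 4 periods
ss = safety_stock(1.65, 20.0, 4.0)
```

A-class articles would then get the tightest control (e.g. cover-time planning), while C-class articles can tolerate simpler ordering rules.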
627

Evaluation of Post-Deployment PTSD Screening of Marines Returning From a Combat Deployment

Hall, Erika L. 01 January 2015 (has links)
The purpose of this quantitative study was to examine whether the post-deployment screening instrument currently used to assess active-duty Marines for symptoms of PTSD upon their return from a combat deployment can be relied upon by itself to accurately assess for PTSD. Additionally, this study sought to compare the number of Marines who sought trauma-related mental health treatment based on their answers on the Post-Deployment Health Assessment (PDHA) with the number who sought such treatment based on their answers on the PTSD Checklist – Military Version (PCL-M). The participants comprised a sample of active-duty Marines who had recently returned from a combat deployment. A quantitative secondary data analysis used Item Response Theory (IRT) to examine the answers provided by the participants on both the PDHA and the PCL-M. Both instruments proved effective in assessing symptoms of PTSD, and the participants identified as having symptoms of PTSD were referred for mental health services as required. According to the results, more Marines were identified as having symptoms of PTSD using both assessment instruments (PDHA and PCL-M) than using the PDHA alone. The result was a better understanding of predictors for Marines who may later develop PTSD. The results of this study can also assist the Marine Corps with its post-deployment screening for symptoms of PTSD, which in turn can provide appropriate mental health referrals for Marines when deemed appropriate.
628

A comparison of item selection procedures using different ability estimation methods in computerized adaptive testing based on the generalized partial credit model

Ho, Tsung-Han 17 September 2010 (has links)
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most widely used item selection procedure. However, the major challenge with MI is the attenuation paradox, which arises because the MI algorithm may select items that are not well targeted at an examinee's true ability level, resulting in more errors in subsequent ability estimates. The solution is to find an alternative item selection procedure or an appropriate ability estimation method. CAT studies have not investigated the association between these two components of a CAT system based on polytomous IRT models. The present study compared the performance of four item selection procedures (MI, MPWI, MEI, and MEPV) across four ability estimation methods (MLE, WLE, EAP-N, and EAP-PS) under a mixed-format CAT based on the generalized partial credit model (GPCM). The test-unit pool and generated responses were based on test units calibrated from an operational national test that included both independent dichotomous items and testlets. Several test conditions were manipulated: the unconstrained CAT as well as the constrained CAT, in which the CCAT was used for content balancing and the progressive-restricted procedure with a maximum exposure rate of 0.19 (PR19) served as the exposure control. The performance of the various CAT conditions was evaluated in terms of measurement precision, exposure control properties, and the extent of selected-test-unit overlap. Results suggested that all item selection procedures, regardless of ability estimation method, performed equally well on all evaluation indices across the two CAT conditions.
The MEPV procedure, however, was favorable in terms of a slightly lower maximum exposure rate, better pool utilization, and reduced test and selected-test-unit overlap compared with the other three item selection procedures when both the CCAT and PR19 procedures were implemented. It is therefore not necessary to implement the sophisticated and computationally intensive Bayesian item selection procedures across ability estimation methods under the GPCM-based CAT. In terms of the ability estimation methods, MLE, WLE, and the two EAP methods, regardless of item selection procedure, did not produce practical differences in any evaluation index across the two CAT conditions. The WLE method, however, generated significantly fewer non-convergent cases than the MLE method. It was concluded that WLE should be considered instead of MLE, because non-convergence is less of an issue. The EAP estimation method, on the other hand, should be used with caution unless an appropriate prior θ distribution is specified. / text
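The maximum-information (MI) selection rule discussed above can be sketched for the simpler dichotomous 2PL case (the thesis itself works with the polytomous GPCM, testlets, and exposure control; the item parameters here are invented):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(theta, pool, administered):
    """MI selection: among unused items, pick the one with the largest
    Fisher information at the current ability estimate."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: info_2pl(theta, *pool[i]))

# Hypothetical item pool of (discrimination a, difficulty b) pairs
pool = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.4), (1.0, 1.5)]
first = select_item(0.5, pool, administered=set())
second = select_item(0.5, pool, administered={first})
```

The attenuation paradox arises because this greedy rule trusts the current theta estimate: early in the test, when that estimate is poor, the "most informative" item may be badly mistargeted, which is what procedures like MEPV and exposure-controlled selection try to mitigate.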
629

Mappningsstrategi på IKEA:s CDC-lager i Torsvik / Mapping strategy at IKEA's CDC warehouse at Torsvik

Hansson, Alexander, Petersson, Andreas January 2007 (has links)
This study was assigned by Bengt Hellman, who works at the logistics department at IKEA Torsvik, just outside Jönköping. The task was to develop a new picking strategy for the Oversize area in the CDC warehouse. IKEA wants a new strategy in order to minimize the route length for the forklifts when collecting orders. By evaluating the current strategies in other areas of the CDC warehouse, studying the literature on the subject, and examining the restrictions on the storing of goods, we analyzed how the Oversize area works today. We then compared the gathered information with what the literature recommends in order to work out a new, functional strategy. Today, IKEA does not have a working strategy for the Oversize area, because of a lack of time and because effort has been concentrated on other areas where sales rates are currently higher. This has led to a lack of organization in the Oversize area: items are simply put where there is space, without first analyzing where they should be placed. The strategy that we have worked out and presented to IKEA builds on a layout that is easy to understand; the employees should know why an item is placed where it is. We have also analyzed which items are frequently ordered together and put weight on keeping those items as close together as possible to minimize the route length for the forklifts. To help IKEA with future reorganizations, we have made it as easy as possible to move high- and low-frequency items around and to introduce new articles into the warehouse, since sales rates can change drastically when IKEA launches a new sales campaign. / This thesis was carried out at the request of Bengt Hellman, who works at the logistics department at IKEA Torsvik outside Jönköping. The task was to produce a new proposal for how articles should be placed at picking level (what IKEA calls mapping) in the Oversize area of IKEA's CDC warehouse. The reason is that IKEA wants to reduce the picking routes in the warehouse and thereby increase efficiency. By studying the existing mapping strategies in the other areas of the CDC warehouse, the literature, and the restrictions that apply in the warehouse, we analyzed the current situation against theory in order to develop a strategy for a functioning mapping. Today there is no well-developed mapping strategy for the Oversize area, because other areas that currently sell far more have been prioritized. As a result, the original division of the Oversize area by business areas has been set aside for lack of time, and articles have been placed wherever there is space instead of being analyzed for the best placement. The new proposal that we present to IKEA builds on a layout that is easy to understand; the pickers should know why an article is where it is. We have also taken great account of which articles are sold together and tried to place them near each other to reduce the picking routes. We have also kept in mind that it should be easy to relocate high- and low-frequency articles, as well as new products, since sales volumes are strongly affected by IKEA's various campaigns.
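The analysis of which items are frequently ordered together can be sketched as a simple pair count over picking orders; frequently co-ordered pairs become candidates for adjacent storage slots. The article names and orders below are hypothetical, not IKEA data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(orders):
    """Count how often each unordered pair of articles appears on the same picking order."""
    pairs = Counter()
    for order in orders:
        # sort so ("sofa", "table") and ("table", "sofa") count as the same pair
        for a, b in combinations(sorted(set(order)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical picking orders
orders = [["sofa", "table"], ["shelf", "sofa", "table"], ["lamp", "shelf"]]
pairs = cooccurrence(orders)
top_pair, count = pairs.most_common(1)[0]  # the strongest co-order relationship
```

In a real mapping exercise the counts would be weighted by picking frequency and combined with the slot restrictions before assigning locations.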
630

以範例為基礎之英漢TIMSS試題輔助翻譯 / Using Example-based Translation Techniques for Computer Assisted Translation of TIMSS Test Items

張智傑, Chang, Chih Chieh Unknown Date (has links)
This thesis applies example-based machine translation techniques, using aligned English-Chinese structures to assist the translation of single English sentences into Chinese. A translation example is stored in a special structure containing the parse tree of the source sentence, the string of the target sentence, and the word correspondences between the target and source sentences. The examples are stored in a database that guides the reordering of the source sentence; the sentence is then translated with a dictionary, words are selected using statistical Chinese-English word alignment and a language model, and finally missing measure words are filled in to produce a suggested translation. The test items of the 2003 Trends in International Mathematics and Science Study (TIMSS) are the main translation target, with the aim of improving translation consistency and efficiency. NIST and BLEU scores are used to evaluate and compare the translation quality achieved by Google Translate, the Yahoo! online translation system, and our system. After word reordering and measure-word insertion, our system's translation quality is better than that of our previous system, but the overall quality does not surpass that of Google Translate or Yahoo! online translation. / This paper presents an example-based machine translation system based on bilingual structured string tree correspondence (BSSTC). The BSSTC structure includes a parse tree in the source language, a string in the target language, and the correspondence between the source-language tree and the target-language string. / We designed an English-to-Chinese computer assisted translation system for the Trends in International Mathematics and Science Study (TIMSS), through BSSTC-based reordering, dictionary translation, statistical word selection with a translation model, and measure word generation. / We evaluated our system with BLEU and NIST scores and compared it with Google Translate and Yahoo! Translate. By reordering selected word sequences and inserting measure words in the default translations, the current system achieved a higher quality of default translations than the previous implementation of our research group, but the overall quality still lags behind that achieved by Google and Yahoo!.
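The BLEU metric used to compare the systems combines modified (clipped) n-gram precisions in a geometric mean with a brevity penalty. A minimal single-reference, sentence-level sketch (the example sentences are invented, and production evaluations normally use corpus-level BLEU with smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform weights, single reference,
    clipped n-gram counts, and the standard brevity penalty."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(count, ref[g]) for g, count in cand.items())
        total = sum(cand.values())
        if clipped == 0 or total == 0:
            return 0.0  # any zero precision zeroes the geometric mean
        log_prec += math.log(clipped / total) / max_n
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(log_prec)

ref = "the student answered the item correctly".split()
hyp = "the student answered the item wrongly".split()
score = bleu(hyp, ref)
```

NIST differs mainly in weighting n-grams by their information content rather than uniformly, which rewards matching rarer n-grams.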
