11 |
Um modelo inteligente para seleção de itens em testes adaptativos computadorizados / Galvão, Ailton Fonseca 06 October 2013 (has links)
Computerized Adaptive Tests (CAT) are a type of assessment applied through
computers whose main feature is the adaptation of the test questions to the performance of each examinee. The two main elements of a CAT are: (i) the item pool, the set of questions available for testing; and (ii) the selection model, which picks out the questions, called items, administered to the examinees. The item selection model is the core of a CAT: its main task is to identify each examinee's knowledge level as items are administered and to adapt the test, selecting the most suitable items to produce an accurate measure. This thesis proposes a model for item selection based on goals for test precision, using the estimate of the proficiency standard error; a specific control of these goals is developed for each stage of the test. Using simulated tests, the results are compared to those of two traditional item selection models, evaluating the performance of the proposed model in terms of measurement accuracy and the exposure level of the items in the pool. Finally, a specific analysis is performed on the accomplishment of the goals over the tests and their possible influence on the final result, along with considerations on the behavior of the model in relation to the characteristics of the item pool.
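The selection loop the abstract describes can be sketched concretely. The following Python snippet is an illustration only, not the author's model: it simulates a CAT under the Rasch model, at each step administering the unused item with maximum Fisher information at the current proficiency estimate, and stopping once the estimated standard error meets a goal. The item pool, the grid-based estimator, and the SE goal of 0.35 are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical item pool: 200 Rasch difficulties, assumed for illustration.
difficulties = rng.uniform(-3, 3, size=200)

def prob_correct(theta, b):
    """Rasch model: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def fisher_information(theta, b):
    p = prob_correct(theta, b)
    return p * (1.0 - p)

def estimate_theta(responses, items):
    """Grid-search ML estimate of proficiency and its standard error."""
    grid = np.linspace(-4, 4, 401)
    loglik = np.zeros_like(grid)
    for u, b in zip(responses, items):
        p = prob_correct(grid, b)
        loglik += u * np.log(p) + (1 - u) * np.log(1.0 - p)
    theta_hat = grid[np.argmax(loglik)]
    info = sum(fisher_information(theta_hat, b) for b in items)
    return theta_hat, 1.0 / np.sqrt(info)

def run_cat(true_theta, se_goal=0.35, max_items=60):
    """Administer items until the SE goal is met or the length limit is hit."""
    used, responses = [], []
    theta_hat, se = 0.0, np.inf
    available = set(range(len(difficulties)))
    while len(used) < max_items and se > se_goal:
        # Maximum-information selection at the current proficiency estimate.
        best = max(available, key=lambda j: fisher_information(theta_hat, difficulties[j]))
        available.remove(best)
        used.append(difficulties[best])
        responses.append(rng.random() < prob_correct(true_theta, difficulties[best]))
        theta_hat, se = estimate_theta(responses, used)
    return theta_hat, se, len(used)

theta_hat, se, n_items = run_cat(true_theta=1.0)
```

A goal-based variant in the spirit of the thesis would replace the single `se_goal` with a schedule of intermediate SE targets, one per test phase.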
|
12 |
Predicting Glass Sponge (Porifera, Hexactinellida) Distributions in the North Pacific Ocean and Spatially Quantifying Model Uncertainty / Davidson, Fiona 07 January 2020 (has links)
Predictions of species' ranges from distribution modeling are often used to inform marine management and conservation efforts, but few studies justify the model selected or quantify the uncertainty of the model predictions in a spatial manner. This thesis employs a multi-model, multi-area SDM analysis to develop higher certainty in the predictions where similarities exist across models and areas. Partial dependence plots and variable importance rankings were shown to be useful in lending further certainty to the results. The modeling indicated that glass sponges (Hexactinellida) are most likely to occur within the North Pacific Ocean where alkalinity is greater than 2.2 μmol l⁻¹ and dissolved oxygen is lower than 2 ml l⁻¹. Silicate was also found to be an important environmental predictor. All areas except Hecate Strait indicated that high probability of glass sponge presence coincided with silicate values of 150 μmol l⁻¹ and over, although lower values in Hecate Strait confirmed that sponges can exist in areas with silicate values as low as 40 μmol l⁻¹. Three methods of mapping the spatial uncertainty of model predictions were presented: the standard error (SE) of a binomial GLM, the standard deviation of predictions from 200 bootstrapped GLM models, and the standard deviation across eight commonly used SDM algorithms. These methods highlighted certain areas, with few input data points or extreme ranges of predictor variables, as having high uncertainty. Such areas should be treated cautiously regardless of the overall accuracy of the model as indicated by accuracy metrics (AUC, TSS), and could be targeted for future data collection. The uncertainty surface produced by the multi-model standard deviation differed from those of the GLM SE and the bootstrapped GLM.
Uncertainty was lowest where the models predicted a low probability of presence, and highest where they predicted a high probability of presence but differed slightly from one another, indicating high confidence in where the models predicted the sponges would not occur.
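The second of the three uncertainty methods, the standard deviation of bootstrapped GLM predictions, can be sketched on synthetic data. Everything below is assumed for illustration (the predictors, sample sizes, and a plain IRLS logistic fit); the thesis's actual data and modeling stack are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environmental predictors (e.g. silicate, oxygen) and presence/absence.
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.2, -0.8])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def fit_logistic(X, y, n_iter=25):
    """Binomial GLM (logit link) fitted by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p) + 1e-9
        z = X @ beta + (y - p) / W
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

# Bootstrap: refit the GLM on resampled data, collecting predictions at each site.
n_boot = 200
preds = np.empty((n_boot, n))
for b in range(n_boot):
    idx = rng.integers(0, n, size=n)
    beta_b = fit_logistic(X[idx], y[idx])
    preds[b] = 1 / (1 + np.exp(-X @ beta_b))

# Per-site mean prediction and bootstrap SD: the spatial uncertainty surface.
pred_mean = preds.mean(axis=0)
pred_sd = preds.std(axis=0)
```

Mapping `pred_sd` over the prediction grid yields the kind of spatial uncertainty surface the thesis compares against the GLM SE and multi-model approaches.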
|
13 |
Impacts of Ignoring Nested Data Structure in Rasch/IRT Model and Comparison of Different Estimation Methods / Chungbaek, Youngyun 06 June 2011 (has links)
This study investigates the impact of ignoring nested data structure in the Rasch/1PL item response theory (IRT) model via two-level and three-level hierarchical generalized linear models (HGLM). Rasch/IRT models are frequently used in educational and psychometric research on data obtained from multistage cluster sampling, which is likely to violate the assumption of independent observations of examinees that Rasch/IRT models require. This violation, however, is ignored in current standard practice, which applies the standard Rasch/IRT model to large-scale testing data. A simulation study (Study Two) addressed the effects of ignoring nested data structure in Rasch/IRT models under various conditions; it followed a simulation study (Study One) comparing three estimation methods commonly used in HGLM, namely Penalized Quasi-Likelihood (PQL), the Laplace approximation, and Adaptive Gaussian Quadrature (AGQ), in terms of accuracy and efficiency of parameter estimation.
As expected, PQL tended to produce seriously biased item difficulty and ability variance estimates, whereas Laplace and AGQ were almost unbiased in both the 2-level and 3-level analyses. In terms of root mean squared error (RMSE), the three methods performed without substantive differences for item difficulty and ability variance estimates in both analyses, except for the level-2 ability variance estimates in the 3-level analysis. Generally, Laplace and AGQ performed similarly well in terms of bias and RMSE of parameter estimates; however, Laplace exhibited a much lower convergence rate than AGQ in the 3-level analyses.
The results from AGQ, which produced the most accurate and stable results of the three computational methods, demonstrated that the theoretical standard errors (SE), i.e., asymptotic information-based SEs, were underestimated by up to 34% when 2-level analyses were applied to data generated from a 3-level model, implying that the Type I error rate is inflated when nested data structures are ignored in Rasch/IRT models. The underestimation of the theoretical standard errors grew more severe as the true ability variance increased or the number of students within schools increased, regardless of test length or the number of schools. / Ph. D.
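The core issue, clustering inflating the true sampling variance beyond what an independence-based formula reports, can be shown with a few lines of simulation. The school and student counts and the variance components below are assumptions for illustration, not the study's design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-level data: 50 schools, 30 students each, with a school random effect.
n_schools, n_students = 50, 30
school_effect = rng.normal(0.0, 0.5, size=n_schools)      # between-school SD 0.5
scores = school_effect[:, None] + rng.normal(0.0, 1.0, size=(n_schools, n_students))

# Naive SE of the grand mean, treating all 1500 scores as independent.
flat = scores.ravel()
se_naive = flat.std(ddof=1) / np.sqrt(flat.size)

# Cluster-aware SE: schools, not students, are the independent units.
school_means = scores.mean(axis=1)
se_cluster = school_means.std(ddof=1) / np.sqrt(n_schools)
```

With these variance components the cluster-aware SE comes out roughly 2-3 times the naive one; ignoring the nesting therefore understates uncertainty and inflates Type I error, which is the pattern the study quantifies for Rasch/IRT parameters.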
|
14 |
Using Large-Scale Datasets to Teach Abstract Statistical Concepts: Sampling Distribution / Kanyongo, Gibbs Y. 16 March 2012 (has links) (PDF)
No description available.
|
15 |
Design, maintenance and methodology for analysing longitudinal social surveys, including applications / Domrow, Nathan Craig January 2007 (has links)
This thesis describes the design, maintenance and statistical analysis involved in undertaking a longitudinal survey. A longitudinal survey (or study) obtains observations or responses from individuals at several times over a defined period. This enables the direct study of changes in an individual's response over time; in particular, it distinguishes an individual's change over time from the baseline differences among individuals within the initial panel (or cohort), which is not possible in a cross-sectional study. Longitudinal surveys thus yield correlated responses within individuals, and therefore require different considerations for sample design, selection and analysis from standard cross-sectional studies. This thesis examines the methodology for analysing social surveys, most of which consist of variables described as categorical. It outlines the process of sample design and selection, interviewing and analysis for a longitudinal study, with emphasis on the categorical response data typical of a survey. Included are examples relating to the Goodna Longitudinal Survey and the Longitudinal Survey of Immigrants to Australia (LSIA); the analysis also uses data collected from these surveys. The Goodna Longitudinal Survey was conducted by the Queensland Office of Economic and Statistical Research (a portfolio office within Queensland Treasury) and began in 2002. It ran for two years, during which two waves of responses were collected.
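The within-individual correlation that distinguishes panel data from cross-sections can be illustrated with a toy intraclass-correlation computation; the wave count and variance components below are assumptions for illustration, not LSIA or Goodna figures.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical panel: 200 individuals observed over 4 waves.
n_people, n_waves = 200, 4
baseline = rng.normal(0.0, 2.0, size=n_people)     # stable individual differences
trend = 0.5 * np.arange(n_waves)                   # common change over waves
y = baseline[:, None] + trend + rng.normal(0.0, 1.0, size=(n_people, n_waves))

# Remove the common wave trend, then split variance between and within people.
y_c = y - y.mean(axis=0)
person_means = y_c.mean(axis=1)
between_var = person_means.var(ddof=1)
within_var = ((y_c - person_means[:, None]) ** 2).sum() / (n_people * (n_waves - 1))

# Intraclass correlation: the share of variance due to stable differences,
# i.e. how strongly responses from the same individual are correlated.
icc = between_var / (between_var + within_var)
```

A high ICC is exactly why longitudinal analyses cannot treat the person-wave records as independent and must model the within-person correlation.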
|
16 |
Měření horizontálních a vertikálních posunů gabionové zdi / Deformation Surveying of Supporting Wall / Zbránek, Jakub January 2014 (links)
The main subject of this diploma thesis is the monitoring of horizontal and vertical displacements of a supporting wall in the village of Smědčice. The thesis describes the whole production process, from the construction of the reference net and the net of observed points to the final review, and also presents the main theoretical principles. The final outputs of the thesis are charts, graphical sketches, tables and a concluding written summary.
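The displacement computation at the heart of such monitoring reduces to differencing point coordinates between measurement epochs. The coordinates below are invented for illustration; they are not the Smědčice measurements.

```python
import numpy as np

# Hypothetical coordinates (metres) of two observed points in two epochs: x, y, height.
epoch1 = np.array([[100.000, 200.000, 50.000],
                   [105.000, 201.000, 50.500]])
epoch2 = np.array([[100.004, 199.997, 49.995],
                   [105.001, 201.002, 50.498]])

diff = epoch2 - epoch1
horizontal = np.hypot(diff[:, 0], diff[:, 1])   # 2D displacement in the XY plane
vertical = diff[:, 2]                           # signed height change
```

Whether such differences represent real movement or measurement noise is then judged against the accuracy of the reference net.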
|
18 |
Implementing SAE Techniques to Predict Global Spectacles Needs / Zhang, Yuxue January 2023 (has links)
This study delves into the application of Small Area Estimation (SAE) techniques to enhance the accuracy of predicting global needs for assistive spectacles. Leveraging SAE, the research undertakes a comprehensive exploration, employing a range of predictive models including Linear Regression (LR), Empirical Best Linear Unbiased Prediction (EBLUP), hglm (from the R package of the same name) with a Conditional Autoregressive (CAR) structure, and Generalized Linear Mixed Models (GLMM). In the final phase, the prediction of global spectacle needs involves several essential steps, such as simulating random effects, extracting coefficients from the GLMM estimates, and log-linear modeling. The investigation develops a multi-faceted approach, incorporating area-level modeling, spatial correlation analysis, and the relative standard error, to assess their impact on predictive accuracy. The GLMM consistently displays the lowest Relative Standard Error (RSE) values, close to zero, indicating precise but potentially overfit results. The hglm model with CAR presents a narrower RSE range, typically below 25%, reflecting greater accuracy, although it produces a higher number of outliers. LR performs similarly to EBLUP, with RSE values reaching around 50% in certain scenarios and slight variations across contexts. These findings underscore the trade-offs between precision and robustness across these models, especially at finer geographical levels and for countries not included in the initial sample.
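The RSE screening used to compare the models is simple to state; a minimal sketch with invented numbers (not the study's estimates):

```python
import numpy as np

# Hypothetical small-area estimates with their standard errors.
estimates = np.array([1200.0, 850.0, 40.0, 5600.0])
std_errors = np.array([90.0, 60.0, 15.0, 120.0])

# Relative standard error: the SE as a percentage of the estimate.
rse = 100.0 * std_errors / estimates

# A common screening rule: flag estimates with RSE above 25% as unreliable.
unreliable = rse > 25.0   # here, only the 40.0 estimate (RSE 37.5%) is flagged
```

Because RSE is scale-free, it lets estimates of very different magnitudes, such as per-country spectacle counts, be compared on one reliability scale.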
|
19 |
ARIMA forecasts of the number of beneficiaries of social security grants in South Africa / Luruli, Fululedzani Lucy 12 1900 (has links)
The main objective of the thesis was to investigate the feasibility of accurately and precisely forecasting the number of both national and provincial beneficiaries of social security grants in South Africa, using simple autoregressive integrated moving average (ARIMA) models. The series of the monthly number of beneficiaries of the old age, child support, foster care and disability grants from April 2004 to March 2010 were used to achieve the objectives of the thesis. The conclusions from analysing the series were that: (1) ARIMA models for forecasting are province- and grant-type-specific; (2) for some grants, national forecasts obtained by aggregating provincial ARIMA forecasts are more accurate and precise than those obtained by ARIMA modelling of the national series; and (3) for some grants, forecasts obtained by modelling the latest half of the series were more accurate and precise than those obtained from modelling the full series. / Mathematical Sciences / M.Sc. (Statistics)
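Conclusion (2), comparing aggregated provincial forecasts with a directly modelled national series, can be sketched with a deliberately simple stand-in for the ARIMA machinery: an AR(1) fitted by least squares to first differences, roughly ARIMA(1,1,0). The series below are synthetic; the thesis's grant data and fitted model orders are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_ar1_diff(series):
    """Fit an ARIMA(1,1,0)-style model: AR(1) on first differences, by least squares."""
    d = np.diff(series)
    x, y = d[:-1], d[1:]
    phi = (x @ y) / (x @ x)
    return phi, d[-1]

def forecast(series, steps):
    """Iterate the fitted difference equation forward from the last observation."""
    phi, d = fit_ar1_diff(series)
    level, out = series[-1], []
    for _ in range(steps):
        d = phi * d
        level = level + d
        out.append(level)
    return np.array(out)

# Hypothetical monthly beneficiary counts for three provinces over 72 months
# (the April 2004 - March 2010 window spans 72 months).
t = np.arange(72)
provinces = [1000 + 5 * t + rng.normal(0, 20, 72) for _ in range(3)]
national = np.sum(provinces, axis=0)

# National forecasts two ways: aggregate provincial forecasts, or model the national series.
agg_forecast = np.sum([forecast(p, steps=6) for p in provinces], axis=0)
nat_forecast = forecast(national, steps=6)
```

Comparing the two against held-out data, per grant type, is the kind of accuracy-and-precision contest the thesis runs with full ARIMA models.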
|