1 |
Avaliação experimental do grau de confiabilidade dos ensaios à compressão do concreto efetivados em laboratórios / Experimental evaluation of the degree of reliability of concrete compression tests performed in laboratories. Gidrão, Salmen Saleme 01 September 2014 (has links)
Measurements of a physical quantity invariably involve errors and uncertainties, and the results of concrete compressive strength tests are no exception. Measuring is an act of comparison whose degree of accuracy may depend on the instruments, the operators, and the measurement process itself. This work analyzes the factors that affect the quality of concrete compression test results and evaluates the reliability of the tests performed by several laboratories, with a focus on measurement errors. The study begins with a conceptual review of "quality" and its relation to concrete construction; it then organizes an experimental program to verify the reliability of test results along two complementary paths. The first analyzes the dispersion of results obtained by distinct methods, principally through a reference-test method established from a result fixed as the standard; the second characterizes the types of errors produced. The results, sound with respect to the methodology adopted for producing the test specimens and significant with respect to the data-collection strategy, revealed an undesirable state in the conditions that defined the degree of reliability. A considerable number of the laboratories evaluated, across three distinct stages of experimental verification, were classified as inconsistent: they reported inadequate values for the strength of the concrete, falling short of the accuracy expected of this important quality-control procedure for concrete production. / Mestre em Engenharia Civil
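The reference-test idea described above (comparing each laboratory's result against a value fixed as the standard, alongside a dispersion analysis) can be sketched in a few lines. The strength values, the reference, and the 10 % tolerance below are hypothetical illustrations, not data from the thesis:

```python
from statistics import mean, stdev

# Hypothetical compressive-strength results (MPa) reported by several
# laboratories for specimens from the same concrete batch; the reference
# value stands in for a result fixed as the standard.
reference_mpa = 30.0
lab_results_mpa = [29.5, 30.8, 27.1, 30.2, 33.9, 29.9]

avg = mean(lab_results_mpa)
sd = stdev(lab_results_mpa)
cv_percent = 100 * sd / avg  # coefficient of variation, a dispersion measure

# Flag labs whose result deviates from the reference by more than 10 %.
tolerance = 0.10
inconsistent = [
    (i, r) for i, r in enumerate(lab_results_mpa)
    if abs(r - reference_mpa) / reference_mpa > tolerance
]

print(f"mean = {avg:.1f} MPa, CV = {cv_percent:.1f} %")
print("labs outside tolerance:", inconsistent)
```

A real acceptance criterion would come from the applicable test standard rather than a flat percentage; the sketch only shows the shape of the comparison.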
|
2 |
Relationships between Student Attendance and Test Scores on the Virginia Standards of Learning Tests. Cassell, Jeffrey 15 December 2007 (has links) (PDF)
This study examines the relationship between student attendance and student test scores on a criterion-referenced test, using test scores of all 5th graders in Virginia who participated in the 2005-2006 Standards of Learning tests in reading and mathematics. Data collection for this study was performed with the cooperation of the Virginia Department of Education using a state database of student testing information. Pearson correlation coefficients were determined for the overall student population and for the subgroups of economically disadvantaged, students with disabilities, limited English proficient, white, black, and Hispanic. The results of this study indicate that there is a significant positive correlation (p<.01) between student attendance, as measured by the number of days present, and student performance on the Virginia SOL test, a criterion-referenced test.
Positive correlations were found between student attendance and student test scores for all subgroups. The correlation between student attendance and student performance on the SOL mathematics test was higher than the correlation for the same variables on the English test. The correlation for the overall student population on the English SOL test was higher than the correlation for any subgroup on the English SOL test. Only the LEP and Hispanic subgroups had higher correlations on the mathematics test than the overall student population. This study will contribute to a growing body of research resulting from the enactment of the No Child Left Behind legislation and the national attention that this legislation has focused on student attendance and student performance on standardized tests.
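The statistic underlying these findings, the Pearson correlation coefficient, is straightforward to compute. The attendance and score figures below are made-up illustrations, not values from the Virginia dataset:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: days present in a school year and scaled test scores.
days_present = [160, 165, 170, 172, 175, 178, 180]
math_scores = [380, 400, 410, 430, 445, 460, 470]

r = pearson_r(days_present, math_scores)
print(f"r = {r:.3f}")
```

A value of r near +1 indicates a strong positive linear relationship; the study additionally reports significance (p < .01), which requires a test beyond this sketch.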
|
3 |
From Intuition to Evidence: A Data-Driven Approach to Transforming CS Education. Allevato, Anthony James 13 August 2012 (has links)
Educators in many disciplines are too often forced to rely on intuition about how students learn and the effectiveness of teaching to guide changes and improvements to their curricula. In computer science, systems that perform automated collection and assessment of programming assignments are seeing increased adoption, and these systems generate a great deal of meaningful intermediate data and statistics during the grading process. Continuous collection of these data and long-term retention of collected data present educators with a new resource to assess both learning (how well students understand a topic or how they behave on assignments) and teaching (how effective a response, intervention, or assessment instrument was in evaluating knowledge or changing behavior), by basing their decisions on evidence rather than intuition. It is only possible to achieve these goals, however, if such data are easily accessible.
I present an infrastructure that has been added to one such automated grading system, Web-CAT, in order to facilitate routine data collection and access while requiring very little added effort by instructors. Using this infrastructure, I present three case studies that serve as representative examples of educational questions that can be explored thoroughly using pre-existing data from required student work. The first case study examines student time management habits and finds that students perform better when they start earlier but that offering extra credit for finishing earlier did not encourage them to do so. The second case study evaluates a tool used to improve student understanding of manual memory management and finds that students made fewer errors when using the tool. The third case study evaluates the reference tests used to grade student code on a selected assignment and confirms that the tests are a suitable instrument for assessing student ability. In each case study, I use a data-driven, evidence-based approach spanning multiple semesters and students, allowing me to answer each question in greater detail than was possible using previous methods and giving me significantly increased confidence in my conclusions. / Ph. D.
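The first case study's finding, that students who start earlier perform better, illustrates the kind of question such collected submission data can answer. The records below are invented stand-ins for Web-CAT submission data, and the 24-hour cutoff is an arbitrary choice for the sketch:

```python
# Hypothetical submission records: (hours before the deadline of the
# first submission, final assignment score out of 100).
submissions = [
    (72, 95), (60, 88), (48, 91), (30, 80),
    (24, 76), (12, 70), (6, 65), (2, 55),
]

# Split students into early and late starters and compare mean scores.
early = [score for hours, score in submissions if hours >= 24]
late = [score for hours, score in submissions if hours < 24]

mean_early = sum(early) / len(early)
mean_late = sum(late) / len(late)
print(f"early starters: {mean_early:.1f}, late starters: {mean_late:.1f}")
```

A real analysis would span many assignments and semesters and test the difference for significance, but the comparison above captures the basic evidence-based approach the abstract describes.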
|