341 |
Student Learning Heterogeneity in School Mathematics / Cunningham, Malcolm. 11 December 2012 (has links)
The phrase "opportunities to learn" (OTL) is most commonly interpreted in institutional, or inter-individual, terms, but it can also be viewed as a cognitive, or intra-individual, phenomenon. How student learning heterogeneity (LH) - learning differences manifested when children's understanding is later assessed - is understood varies with the OTL interpretation. In this study, I argue that the cognitive underpinning of learning disability, learning difficulty, typical achievement, and gifted achievement in mathematics is not well understood, in part because of the ambiguity of LH assumptions in previous studies. Data from 104,315 Ontario students who had responded to provincially mandated mathematics tests in grades 3, 6, and 9 were analyzed using latent trait modeling (LTM) and latent class analysis (LCA). The tests were constructed to distinguish four achievement levels per grade and either five curriculum strands (grades 3 and 6), three strands (grade 9 applied), or four strands (grade 9 academic). Best-fitting LTM models had 3 factors (grade 9 applied) or 4 factors (grades 3, 6, and 9 academic). Best-fitting LCA solutions had 4 classes (grades 3 and 6) or 5 classes (grade 9 applied and academic). There were differences in the relative proportions of students distributed across levels and classes. Moreover, the grade 9 models were more complex than the reported four achievement levels. To explore the modeled results further, latent factors were plotted against latent classes. Implications of institutional versus cognitive interpretations are discussed.
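Choosing among competing latent-class solutions, as described above, is typically done with an information criterion such as the BIC. A minimal sketch of that selection step, using invented log-likelihoods and parameter counts that are not taken from the study:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: lower is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical fit results for candidate latent-class solutions:
# (log-likelihood, number of free parameters). Values are illustrative.
candidates = {
    "3-class": (-151200.0, 29),
    "4-class": (-150450.0, 39),
    "5-class": (-150430.0, 49),
}
n = 104315  # sample size, as in the study

scores = {k: bic(ll, p, n) for k, (ll, p) in candidates.items()}
best = min(scores, key=scores.get)
print(best)
```

Here the 5-class model improves the likelihood only slightly over the 4-class model, so the BIC penalty for its extra parameters makes the 4-class solution the winner.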
|
342 |
Impact of cold climate on boreal ecosystem processes : exploring data and model uncertainties / Wu, Sihong. January 2011 (has links)
The impact of cold climate on physical and biological processes, especially the role of air and soil temperature in the recovery of photosynthesis and transpiration in boreal forests, was investigated in a series of studies. A process-based ecosystem model (CoupModel) covering atmospheric, soil and plant components was evaluated and developed using Generalized Likelihood Uncertainty Estimation (GLUE) and detailed measurements from three different sites. The model accurately described the variability in the measurements within days, within years and between years. The forcing environmental conditions were shown to govern both aboveground and belowground processes and to regulate carbon, water and heat fluxes. However, the various feedback mechanisms between vegetation and environmental conditions remain unclear, since simulations under one model assumption could not be rejected in favour of another. Strong interactions between soil temperature and moisture processes were indicated by the few behavioural models that remained when constrained by combined temperature and moisture criteria. Model performance on sensible and latent heat fluxes and net ecosystem exchange (NEE) also pointed to coupled processes within the system. Diurnal and seasonal courses of eddy-flux data in boreal conifer ecosystems were reproduced successfully within defined ranges of parameter values. Air temperature was the major factor limiting photosynthesis in early spring, autumn and winter, whereas soil temperature became the more important limiting factor in late spring. Soil moisture and nitrogen appeared more important for regulating photosynthesis in the summer period. The need for systematic monitoring of the entire system, covering both soil and plant components, was identified as a subject for future studies.
The results from this modelling work could be applied to suggest improvements in the management of forest and agricultural ecosystems, in order to reduce greenhouse gas emissions and to find adaptations to future climate conditions. / QC 20110921 / the Nitro-Europe project
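The GLUE procedure mentioned above can be sketched as Monte Carlo sampling of parameter sets from prior ranges, scoring each simulation against observations with a likelihood measure, and retaining only the "behavioural" sets that exceed a threshold. A toy illustration with a made-up linear model, not CoupModel itself; the threshold of 0.9 is an arbitrary design choice for this sketch:

```python
import random

random.seed(42)

# Toy "observations" from a true process y = 2.0*x + 1.0 plus noise
xs = [0.5 * i for i in range(20)]
true_a, true_b = 2.0, 1.0
obs = [true_a * x + true_b + random.gauss(0, 0.3) for x in xs]

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is poor."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((s - o) ** 2 for s, o in zip(sim, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Sample parameter sets uniformly from prior ranges, keep behavioural ones
behavioural = []
for _ in range(5000):
    a = random.uniform(0.0, 4.0)
    b = random.uniform(-2.0, 4.0)
    sim = [a * x + b for x in xs]
    ne = nash_sutcliffe(sim, obs)
    if ne > 0.9:  # behavioural threshold (a GLUE design choice)
        behavioural.append((a, b, ne))

print(len(behavioural), "behavioural parameter sets retained")
```

The spread of the retained parameter sets is GLUE's expression of parameter uncertainty; when several distinct parameter combinations all pass the criterion, the model assumptions they represent cannot be rejected against one another, as the abstract notes.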
|
343 |
Deep Web Collection Selection / King, John Douglas. January 2004 (has links)
The deep Web contains a massive number of collections that are mostly invisible to search engines. These collections often contain high-quality, structured information that cannot be crawled using traditional methods. An important problem is selecting which of these collections to search. Automatic collection selection methods try to solve this problem by suggesting the best subset of deep Web collections to search for a given query. A few methods for deep Web collection selection have been proposed, such as the Collection Retrieval Inference Network system and the Glossary of Servers Server system. The drawback of these methods is that they require communication between the search broker and the collections, as well as metadata about each collection. This thesis compares three different sampling methods that require neither such communication nor per-collection metadata. It also adapts some traditional information-retrieval techniques to this area. In addition, the thesis tests these techniques on the INEX collection, comprising 18 collections (12,232 XML documents) and 36 queries. The experiments show that the performance of the sample-based techniques is satisfactory on average.
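The sampling idea above can be sketched as: build a term-frequency description of each collection from a small sample of its documents, then rank collections by how well their samples match the query. A minimal illustration with invented collections and documents, not the INEX setup or the thesis's actual scoring functions:

```python
from collections import Counter

# Tiny document samples standing in for query-based samples of each collection
samples = {
    "movies":  ["actor film director scene", "film review cinema actor"],
    "biology": ["gene protein cell enzyme", "cell membrane protein dna"],
    "sports":  ["match goal team player", "team season player coach"],
}

# Build a term-frequency "collection description" from each sample
index = {name: Counter(" ".join(docs).split()) for name, docs in samples.items()}

def score(query, tf, smoothing=0.5):
    """Sum of smoothed sample term frequencies for the query terms."""
    return sum(tf[t] + smoothing for t in query.split())

def select(query, k=1):
    """Return the k collections whose samples best match the query."""
    ranked = sorted(index, key=lambda c: score(query, index[c]), reverse=True)
    return ranked[:k]

print(select("protein cell"))
```

Because the broker scores queries against its own samples, no cooperation from the collections and no collection-supplied metadata are needed, which is the property the thesis exploits.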
|
344 |
Effective web service discovery using a combination of a semantic model and a data mining technique / Bose, Aishwarya. January 2008 (has links)
With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Because a large number of Web services are available, finding an appropriate Web service that meets a user's requirements is a challenge. This warrants an effective and reliable process of Web service discovery. A considerable body of research has emerged on methods to improve the accuracy of Web service discovery in matching the best service. The process of Web service discovery results in many individual services that partially satisfy the user's interest. Considering the semantic relationships of the words used to describe the services, as well as their input and output parameters, can lead to more accurate Web service discovery, and appropriately linking the individually matched services can then fully satisfy the user's requirements. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology is proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis of the content in the Web service description language document, a support-based latent semantic kernel is constructed, using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed from a large number of terms helps to find hidden meanings of the query terms that could not otherwise be found. Sometimes a single Web service is unable to fully satisfy the user's requirement; in such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the minimum-cost path for traversal. The third phase, system integration, combines the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. To evaluate the performance of the proposed method, extensive experimentation was performed. Results of the proposed support-based semantic-kernel method of Web service discovery are compared with those of a standard keyword-based information-retrieval method and a clustering-based machine-learning method. The proposed method outperforms both the information-retrieval and the machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 of the Web services found in phase I for linking. Empirical results also show that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from the semantic analysis (phase I) and the link analysis (phase II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
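The link-analysis phase described above rests on an all-pairs shortest-path computation over the service graph. A minimal Floyd-Warshall sketch on a hypothetical composition graph; the service names and edge costs are illustrative, not taken from the thesis:

```python
INF = float("inf")

def floyd_warshall(nodes, edges):
    """All-pairs shortest-path costs; edges maps (u, v) -> cost."""
    dist = {(u, v): (0 if u == v else edges.get((u, v), INF))
            for u in nodes for v in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                via_k = dist[(i, k)] + dist[(k, j)]
                if via_k < dist[(i, j)]:
                    dist[(i, j)] = via_k
    return dist

# Hypothetical composition graph: an edge cost is the cost of chaining
# one service's output into another's input.
services = ["search", "geocode", "route", "book"]
edges = {("search", "geocode"): 1, ("geocode", "route"): 2,
         ("search", "route"): 5, ("route", "book"): 1}

dist = floyd_warshall(services, edges)
print(dist[("search", "book")])
```

In this toy graph the chain search -> geocode -> route -> book (cost 4) beats the direct search -> route -> book chain (cost 6), which is the kind of minimum-cost composition the link-analysis phase is after.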
|
345 |
Analysis of the latency associated transcripts of Herpes simplex virus type 1 / Jane Louise Arthur.Arthur, Jane Louise January 1994 (has links)
Bibliography: leaves 92-118. / xii, 118, [20] leaves, [12] leaves of plates : ill. (some col.) ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / Reports a method for the study of HSV-1 transcripts during latency. High resolution non-isotopic in situ hybridization (ISH) is used to study the intracellular location of HSV-1 latency associated transcripts (LATs) in primary sensory neurons of latently infected mice and humans. / Thesis (Ph.D.)--University of Adelaide, Dept. of Microbiology and Immunology, 1995?
|
346 |
Educational Technology: A Comparison of Ten Academic Journals and the New Media Consortium Horizon Reports for the Period of 2000-2017 / Morel, Gwendolyn. 12 1900 (has links)
This exploratory and descriptive study provides an increased understanding of the topics being explored in both published research and industry reporting in the field of educational technology. Although literature in the field is plentiful, the task of synthesizing the information for practical use is a massive undertaking. Latent semantic analysis was used to review journal abstracts from ten highly respected journals and the New Media Consortium Horizon Reports to identify trends within the publications. As part of the analysis, 25 topics and technologies were identified in the combined corpus of academic journals and Horizon Reports. The journals tended to focus on pedagogical issues whereas the Horizon Reports tended to focus on technological aspects in education. In addition to differences between publication types, trends over time are also described. Findings may assist researchers, practitioners, administrators, and policy makers with decision-making in their respective educational areas.
|
347 |
The treatment of missing measurements in PCA and PLS models / Nelson, Philip R. C.; MacGregor, John F.; Taylor, Paul A. January 2002 (has links)
Thesis (Ph.D.)--McMaster University, 2002. / Adviser: P.A. Taylor and John F. MacGregor. Includes bibliographical references. Also available via World Wide Web.
|
349 |
Estimating parameters in Markov models for longitudinal studies with missing data or surrogate outcomes / Yeh, Hung-Wen; Chan, Wenyaw. January 2007 (has links)
Thesis (Ph. D.)--University of Texas Health Science Center at Houston, School of Public Health, 2007. / Includes bibliographical references (leaves 58-59).
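The maximum-likelihood estimate of a discrete-time Markov transition matrix from longitudinal sequences counts observed transitions and normalizes each row. One naive treatment of missed visits is to count only transitions between consecutively observed states, which is unbiased only when data are missing completely at random; the thesis itself develops more sophisticated estimators for missing data and surrogate outcomes. A toy sketch with invented states and sequences:

```python
from collections import defaultdict

def estimate_transitions(sequences, states, missing=None):
    """Row-normalized counts of consecutive observed transitions.

    Pairs involving a missing value are simply skipped, which is
    unbiased only if observations are missing completely at random.
    """
    counts = defaultdict(float)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            if a is not missing and b is not missing:
                counts[(a, b)] += 1.0
    P = {}
    for s in states:
        row_total = sum(counts[(s, t)] for t in states)
        for t in states:
            P[(s, t)] = counts[(s, t)] / row_total if row_total else 0.0
    return P

# Longitudinal sequences over states 0/1, with None marking missed visits
sequences = [
    [0, 0, 1, 1, None, 1],
    [0, 1, None, 0, 0],
    [1, 1, 0, 0, 0],
]
P = estimate_transitions(sequences, states=[0, 1])
print(P[(0, 0)], P[(0, 1)])
```

Each row of the estimated matrix sums to one, so the result is a proper transition matrix over the observed transitions.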
|
350 |
Improving integration quality for heterogeneous data sources / Altareva, Evgeniya. Unknown Date (has links)
Doctoral dissertation, University of Düsseldorf, 2005.
|