71

Optimization and Refinement of XML Schema Inference Approaches / Optimization and Refinement of XML Schema Inference Approaches

Klempa, Michal January 2011 (has links)
Although XML is a widely used technology, the majority of real-world XML documents do not conform to any particular schema. To fill this gap, the research area of automatic schema inference from XML documents has emerged. This work refines and extends recent approaches to automatic schema inference, mainly by exploiting an obsolete schema in the inference process, designing new MDL measures, and heuristically excluding eccentric data inputs. The work delivers a ready-to-use and easy-to-extend implementation integrated into the jInfer framework (developed as a software project). Experimental results are part of the work.
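The flavour of the schema inference problem can be sketched with a toy content-model inferrer (an illustration of the problem only, not jInfer's algorithm; all function names and the sample documents are my own):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def infer_child_counts(samples):
    """For each element name, record the child-name sequences seen under it."""
    model = defaultdict(list)
    for doc in samples:
        for elem in ET.fromstring(doc).iter():
            model[elem.tag].append([child.tag for child in elem])
    return model

def summarize(model):
    """Mark each child as required ('1'), optional ('?') or repeated ('*')."""
    schema = {}
    for tag, occurrences in model.items():
        children = {c for occ in occurrences for c in occ}
        schema[tag] = {
            child: ('*' if any(occ.count(child) > 1 for occ in occurrences)
                    else '1' if all(child in occ for occ in occurrences)
                    else '?')
            for child in children
        }
    return schema

samples = [
    "<person><name>A</name><phone>1</phone><phone>2</phone></person>",
    "<person><name>B</name></person>",
]
print(summarize(infer_child_counts(samples)))
# 'name' is required under 'person'; 'phone' is repeated
```

A real inferrer, as in the thesis, would instead induce a regular expression per element and use an MDL measure to trade schema size against precision; this sketch only shows the input/output shape of the task.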
72

Probabilistic Methods In Information Theory

Pachas, Erik W 01 September 2016 (has links)
Given a probability space, we analyze the uncertainty, that is, the amount of information, of a finite system by studying the entropy of the system. We also extend the concept of entropy to a dynamical system by introducing a measure-preserving transformation on a probability space. After presenting some theorems and applications of entropy theory, we study the concept of ergodicity, which helps us to further analyze the information of the system.
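The entropy of a finite system can be made concrete with a short computation (a minimal sketch; the function name is my own):

```python
import math

def entropy(probs):
    """Shannon entropy H(p) = -sum p_i * log2(p_i) of a finite distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin → 1.0 bit of uncertainty
print(entropy([0.25] * 4))   # uniform over 4 outcomes → 2.0 bits
```

A skewed distribution has strictly lower entropy than the uniform one on the same outcomes, matching the intuition that a predictable system carries less information.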
73

An investigation of appropriate instructional design to match the ability of the learner

Maxwell, Elizabeth Anne, Education, Faculty of Arts & Social Sciences, UNSW January 2008 (has links)
Content analyses of research in the literature of gifted education (Coleman, 2006; Rogers, 1999, 2006) have shown a consistent absence of research investigating methodology for instructing gifted students and for the development of expertise using new technologies. In this study, utilising electronic instructional delivery, an investigation was undertaken of the differential effects and appropriateness of matching the prior knowledge of the learner to the instructional method. Underpinned by a theoretical understanding of gifted education and cognitive load theory, a series of three experiments was designed and implemented to determine whether gifted students learn more effectively under guided discovery design than with example-based instruction, while students not identified as gifted perform significantly better under direct example-based instruction than with guided discovery. Data were collected and analysed in three stages. Experiment 1 was conducted in the novel domain of Boolean switching equations. Experiments 2 and 3 used identical test instruments with novel tasks in the semi-familiar domain of geometry. A total of 155 Year 7, 8 and 9 students at three metropolitan secondary schools participated. The study explored whether the presence of schemas that facilitate greater problem-solving ability in gifted students would generate clear evidence of instructional efficiency and preference for either mode of instruction. As students advanced from novice to expert in particular domains of learning, it was anticipated that gifted students would progress from benefiting from worked-example instruction to more efficient learning in guided discovery mode. This hypothesis was rejected, as the results from each of the experiments did not confirm the hypothesised outcomes. No expertise-reversal effect was manifested.
The absence of any clear delineation of a mode of instruction that enhances learning proficiency for gifted students does, however, contribute to the advancement and understanding of cognitive load theory and of the complexity of learning strategies necessary for gifted learners.
74

Data Editing and Logic: The covering set method from the perspective of logic

Boskovitz, Agnes, abvi@webone.com.au January 2008 (has links)
Errors in collections of data can cause significant problems when those data are used. Therefore the owners of data find themselves spending much time on data cleaning. This thesis is a theoretical work about one part of the broad subject of data cleaning - to be called the covering set method. More specifically, the covering set method deals with data records that have been assessed by the use of edits, which are rules that the data records are supposed to obey. The problem solved by the covering set method is the error localisation problem, which is the problem of determining the erroneous fields within data records that fail the edits. In this thesis I analyse the covering set method from the perspective of propositional logic. I demonstrate that the covering set method has strong parallels with well-known parts of propositional logic. The first aspect of the covering set method that I analyse is the edit generation function, which is the main function used in the covering set method. I demonstrate that the edit generation function can be formalised as a logical deduction function in propositional logic. I also demonstrate that the best-known edit generation function, written here as FH (standing for Fellegi-Holt), is essentially the same as propositional resolution deduction. Since there are many automated implementations of propositional resolution, the equivalence of FH with propositional resolution gives some hope that the covering set method might be implementable with automated logic tools. However, before any implementation, the other main aspect of the covering set method must also be formalised in terms of logic. This other aspect, to be called covering set correctibility, is the property that must be obeyed by the edit generation function if the covering set method is to successfully solve the error localisation problem. 
In this thesis I demonstrate that covering set correctibility is a strengthening of the well-known logical properties of soundness and refutation completeness. What is more, the proofs of the covering set correctibility of FH and of the soundness / completeness of resolution deduction have strong parallels: while the proof of soundness / completeness depends on the reduction property for counter-examples, the proof of covering set correctibility depends on the related lifting property. In this thesis I also use the lifting property to prove the covering set correctibility of the function defined by the Field Code Forest Algorithm. In so doing, I prove that the Field Code Forest Algorithm, whose correctness has been questioned, is indeed correct. The results about edit generation functions and covering set correctibility apply to both categorical edits (edits about discrete data) and arithmetic edits (edits expressible as linear inequalities). Thus this thesis gives the beginnings of a theoretical logical framework for error localisation, which might give new insights into the problem. In addition, these insights may help in developing new error-localisation tools built on automated reasoning systems. Finally, the strong parallels between the covering set method and aspects of logic are of aesthetic appeal.
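The error localisation problem the covering set method addresses can be illustrated with a toy brute-force solver (this is not the Fellegi-Holt procedure, only a sketch of the problem it solves; the edits, domains, and record are invented):

```python
from itertools import combinations, product

# Toy categorical domains and edits (rules a record must obey).
DOMAINS = {"age_group": ["child", "adult"],
           "marital": ["single", "married"],
           "employed": ["yes", "no"]}

def edits_ok(rec):
    """Two example edits: a child cannot be married, and a child cannot be employed."""
    if rec["age_group"] == "child" and rec["marital"] == "married":
        return False
    if rec["age_group"] == "child" and rec["employed"] == "yes":
        return False
    return True

def localise(record):
    """Smallest set of fields whose values can be changed so every edit passes."""
    fields = list(record)
    for k in range(len(fields) + 1):
        for subset in combinations(fields, k):
            frozen = {f: v for f, v in record.items() if f not in subset}
            for values in product(*(DOMAINS[f] for f in subset)):
                if edits_ok({**frozen, **dict(zip(subset, values))}):
                    return set(subset)

record = {"age_group": "child", "marital": "married", "employed": "yes"}
print(localise(record))   # → {'age_group'}: changing one field satisfies both edits
```

The exhaustive search is exponential in the number of fields; the point of an edit generation function such as FH is precisely to derive implied edits so that a minimal correctable field set can be found without this brute force.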
75

Examining the Generality of Self-Explanation

Wylie, Ruth 01 September 2011 (has links)
Prompting students to self-explain during problem solving has proven to be an effective instructional strategy across many domains. However, despite being called "domain general", very little work has been done in areas outside of math and science. In this dissertation, I investigate whether the self-explanation effect holds when applied in an inherently different type of domain, second language grammar learning. Through a series of in vivo experiments, I tested the effects of using prompted self-explanation to help adult English language learners acquire the English article system (e.g., teaching students the difference between "I saw a dog" versus "I saw the dog"). In the pilot study, I explored different modalities of self-explanation (free-form versus menu-based), and in Study 1, I looked at transfer effects between practice and self-explanation. In the studies that followed, I added an additional deep processing manipulation (Study 2: analogical comparisons) and a strategy designed to increase the rate of practice and information processing (Study 3: worked example study). Finally, in Study 4, I built and evaluated an adaptive self-explanation tutor that prompted students to self-explain only when estimates of prior knowledge were low. Across all studies, results show that self-explanation is an effective instructional strategy in that it leads to significant pre- to post-test learning gains, but it is inefficient compared to tutored practice. In addition to learning gains, I compared learning process data and found that both self-explanation and practice lead to similar patterns of learning, and there was no evidence in support of individual differences. This work makes contributions to the learning sciences, second language acquisition (SLA), and tutoring systems communities. 
It contributes to the learning sciences by demonstrating boundary conditions of the self-explanation effect and cautioning against broad generalizations for instructional strategies, suggesting instead that strategies should be aligned to target knowledge. This work contributes to second language acquisition theory by demonstrating the effectiveness of computer-based tutoring systems for second language grammar learning and providing data that supports the benefits of explicit instruction. Furthermore, this work demonstrates the relative effectiveness of a broad spectrum of explicit learning conditions. Finally, this work makes contributions to tutoring systems research by demonstrating a process for data-driven and experiment-driven tutor design that has led to significant learning gains and consistent adoption in real classrooms.
76

A Good Instruction in Mathematics Education should be Open but Structured

Graumann, Olga 15 March 2012 (has links) (PDF)
No description available.
78

SCUT-DS: Methodologies for Learning in Imbalanced Data Streams

Olaitan, Olubukola January 2018 (has links)
The automation of most of our activities has led to the continuous production of data that arrive in the form of fast-arriving streams. In a supervised learning setting, instances in these streams are labeled as belonging to a particular class. When the number of classes in the data stream is more than two, such a data stream is referred to as a multi-class data stream. A multi-class imbalanced data stream describes the situation where the instance distribution of the classes is skewed, such that instances of some classes occur more frequently than others. Classes with frequently occurring instances are referred to as the majority classes, while the classes with instances that occur less frequently are denoted as the minority classes. Classification algorithms, or supervised learning techniques, use historic instances to build models, which are then used to predict the classes of unseen instances. Multi-class imbalanced data stream classification poses a great challenge to classical classification algorithms. This is due to the fact that traditional algorithms are usually biased towards the majority classes, since they have more examples of the majority classes when building the model. These traditional algorithms yield low predictive accuracy rates for the minority instances and need to be augmented, often with some form of sampling, in order to improve their overall performance. In the literature, in both static and streaming environments, most studies focus on the binary class imbalance problem. Furthermore, research in multi-class imbalance in the data stream environment is limited. A number of researchers have proceeded by transforming a multi-class imbalanced setting into multiple binary class problems. However, such a transformation does not allow the stream to be studied in the original form and may introduce bias. 
The research conducted in this thesis aims to address this research gap by proposing a novel online learning methodology that combines oversampling of the minority classes with cluster-based majority class under-sampling, without decomposing the data stream into multiple binary sets. Rather, sampling involves continuously selecting a balanced number of instances across all classes for model building. Our focus is on improving the rate of correctly predicting instances of the minority classes in multi-class imbalanced data streams, through the introduction of the Synthetic Minority Over-sampling Technique (SMOTE) and Cluster-based Under-sampling - Data Streams (SCUT-DS) methodologies. In this work, we dynamically balance the classes by utilizing a windowing mechanism during the incremental sampling process. Our SCUT-DS algorithms are evaluated using six different types of classification techniques, followed by comparing their results against a state-of-the-art algorithm. Our contributions are tested using both synthetic and real data sets. The experimental results show that the approaches developed in this thesis yield high prediction rates of minority instances as contained in the multiple minority classes within a non-evolving stream.
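The windowed rebalancing idea can be sketched as follows (a simplified illustration, not the SCUT-DS implementation: the SMOTE-style interpolation here is naive, and a random draw stands in for cluster-based under-sampling; all names are my own):

```python
import random
from collections import defaultdict

random.seed(0)  # reproducible sketch

def smote_like(points, target):
    """Grow a minority class to `target` points by interpolating random pairs."""
    out = list(points)
    while len(out) < target:
        a, b = random.sample(points, 2)
        t = random.random()
        out.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return out

def undersample(points, target):
    """Shrink a majority class to `target` points (random draw here; SCUT-DS
    would select representatives via clustering instead)."""
    return random.sample(points, target)

def rebalance(window):
    """Balance every class in a window of (features, label) pairs."""
    by_class = defaultdict(list)
    for x, y in window:
        by_class[y].append(x)
    target = sum(len(v) for v in by_class.values()) // len(by_class)
    balanced = []
    for label, pts in by_class.items():
        pts = smote_like(pts, target) if len(pts) < target else undersample(pts, target)
        balanced.extend((x, label) for x in pts)
    return balanced

# A window with 8 majority-class and 2 minority-class instances
# comes out with 5 of each.
window = ([((float(i), float(i)), "a") for i in range(8)]
          + [((0.0, 1.0), "b"), ((1.0, 0.0), "b")])
balanced = rebalance(window)
```

An incremental classifier would then be trained on each rebalanced window as the stream advances, which is the windowing mechanism the abstract describes.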
79

Cvičebnice Mongeova promítání / Workbook of Monge projection

Pajerová, Nikola January 2016 (has links)
This thesis presents various examples from Monge projection. The theory, which is important for understanding the projection and for solving the examples, is summarized at the beginning. There are also examples of solving axial affinity and central collineation. Then follows a chapter on the projection of all types of angular and rotational solids treated at secondary schools, and a chapter in which the sections of these solids are constructed. In the last chapter, intersections of solids of each type are solved.
80

Zadání a statistické řešení výzkumné úlohy / Assignment and Statistical Solution of a Research Task

Novák, Marek January 2008 (has links)
This thesis introduces the problems of the statistical approach to research tasks. It focuses on research assignments, the roles of the researcher and the statistician during analysis, ways of gathering data files and the problems connected with them, the main types of multivariate statistical methods, and possible views of their classification. Moreover, this work includes an overview of example research assignments, possibilities for their solutions, and related data files. The first chapter describes the statistical approach to research assignments, and the second shows concrete examples of these assignments. The enclosed CD includes data files for most of the statistical examples.
