1

Dynamic Constructing Decision Rules from Learning Portfolio to Support Adaptive Instruction

Chen, Yun-pei 14 July 2004 (has links)
With the rapid development of the Internet, various network protocols and applications have gradually matured. The network offers clear advantages, such as overcoming the limits of time and space and changing the traditional teaching model. In addition, the learning portfolios recorded by online learning websites help teachers keep track of students' learning processes. With this educational information, teachers are better able to observe students' learning in real time and can be given decision rules over various time frames, helping them understand students' learning behaviors and progress immediately. Technology-mediated learning (TML) refers to an environment in which the learner interacts with learning materials, peers, and/or instructors through advanced information technology, and there has recently been increasing interest in investigating whether TML can yield positive learning outcomes. However, the rapid growth of information technology for analyzing learning tracks has produced a variety of analytic approaches, and the lack of an integrative analysis of these diverse frameworks prevents teachers from picking the most appropriate one for their own teaching. Accordingly, this research compares and contrasts the most prevalent data-analysis technique, data mining, with traditional statistical analysis approaches, with the hope of matching analysis tools to various kinds of courses as well as providing teachers with immediate decision rules as a basis for predicting students' possible learning behaviors.
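
As a rough illustration of the kind of decision-rule construction this abstract describes (not the thesis's actual method or data), the sketch below trains a shallow decision tree on hypothetical learning-portfolio features and prints the induced rules; the feature names, values, and pass/at-risk labels are all assumptions.

    # Hypothetical sketch: induce decision rules from learning-portfolio features.
    # Feature names, data, and the outcome labels are invented for illustration only.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each row: [logins_per_week, forum_posts, avg_quiz_score, minutes_online]
    X = np.array([
        [5, 12, 82, 340],
        [1,  0, 45,  60],
        [3,  4, 70, 200],
        [0,  1, 38,  30],
        [4,  9, 76, 280],
        [2,  2, 55, 110],
    ])
    y = np.array([1, 0, 1, 0, 1, 0])  # 1 = satisfactory outcome, 0 = at risk

    # A shallow tree keeps the extracted rules short enough to read as teaching advice.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(
        tree,
        feature_names=["logins_per_week", "forum_posts", "avg_quiz_score", "minutes_online"],
    ))

The printed rules (e.g., a threshold on quiz score or login frequency) are the kind of immediate, human-readable decision rules the abstract contrasts with traditional statistical summaries.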
2

Objectively Defining Scenario Complexity: Towards Automated, Adaptive Scenario-Based Training

Dunn, Robert 01 January 2014 (has links)
Effective Scenario-Based Training (SBT) is sequenced in an efficient trajectory from novice to mastery and is well grounded in pedagogically sound instructional strategies and learning theory. Adaptive, automated SBT attempts to sequence scenarios according to the performance of the student and to implement that sequence without human agency. The source of these scenarios may take the form of a matrix constructed by Instructional Systems Designers (ISD), software engineers, or trainers. The domain being instructed may contain procedures or concepts that are easily differentiated, allowing quick and accurate determination of difficulty; in that case, sequencing the SBT is relatively simple. However, in complex, domain-integrated instructional environments, accurate and efficient sequencing may be extremely difficult because ISDs, software engineers, and trainers, without an objective means to calculate a scenario's complexity, must rely on subjectivity. In the military, where time, fiscal, and manpower constraints may lead to ineffective, inefficient, and perhaps negative training, SBT is a growing alternative to live training due to the significant cost avoidance demonstrated by systems such as the United States Marine Corps' (USMC) Abrams Main Battle Tank (M1A1) Advanced Gunnery Training System (AGTS). Even as the practice of simulation training grows, oversight bodies such as the Government Accountability Office assert that little has been done to demonstrate simulator impact on trainee proficiency. The M1A1 AGTS instructional subsystem, the Improved Crew Training Program (ICTP), employs an automated matrix intended to increase Tank Commander (TC) and Gunner (GNR) team proficiency by guiding the team along a trajectory of ever-increasing scenario difficulty. As designed, however, the sequencing of the matrix is based on subjective evaluation of difficulty, not on empirical or objective calculations of complexity. Without effective, automated SBT that adapts to the performance of the trainee, gaps in combat readiness and fiscal responsibility could grow large. In 2010, the author developed an algorithm intended to computationally define scenario complexity (Dunne, Schatz, Fiore, Martin & Nicholson, 2010) and conducted a proof-of-concept study to determine the algorithm's effectiveness (Dunne, Schatz, Fiore, Nicholson & Fowlkes, 2010). Based on the results of that study and follow-on analysis, revisions were made to the Scenario Complexity (SC) algorithm. The purpose of this research was to examine the efficacy of the revised SC algorithm to enable educators and trainers, ISDs, and software engineers to objectively and computationally define SC. The research process included a period of instruction in which Subject Matter Experts (SME) learned to identify the base variables that comprise SC. Using this knowledge, the SMEs then determined the values of the scenarios' base variables. Once calculated, these values were ranked and compared to the ICTP matrix sequence. Results indicate that the SMEs were very consistent in their ratings of the items across scenario base variables; given the highly proceduralized process underlying advanced gunnery skills, this high degree of agreement was expected. However, the significant lack of correlation with the matrix sequencing is alarming, and although a recent study has shown the AGTS to increase TC and GNR team proficiency (PM TRASYS, 2014a), this research's findings suggest that a redesign of the ICTP matrix is in order.
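
As a hedged sketch of the workflow this abstract outlines (SME ratings of base variables aggregated into a complexity score, scenarios ranked by that score, and the ranking compared against the existing matrix order), the example below uses an assumed unweighted sum as the complexity function and a Spearman rank correlation for the comparison; the actual SC algorithm, base variables, weights, and data are not reproduced here.

    # Hypothetical sketch: aggregate SME-rated base variables into a scenario
    # complexity (SC) score, rank scenarios, and compare to a training-matrix order.
    # The variables, ratings, and matrix sequence are invented for illustration.
    from scipy.stats import spearmanr

    # SME ratings per scenario for assumed base variables (e.g., targets, terrain, visibility).
    scenario_ratings = {
        "scenario_A": [2, 1, 3],
        "scenario_B": [4, 3, 2],
        "scenario_C": [1, 1, 1],
        "scenario_D": [5, 4, 4],
    }

    # Assumed complexity function: unweighted sum of base-variable ratings.
    sc_scores = {name: sum(vals) for name, vals in scenario_ratings.items()}

    # Rank scenarios by computed complexity (1 = least complex).
    by_complexity = sorted(sc_scores, key=sc_scores.get)
    computed_rank = {name: i + 1 for i, name in enumerate(by_complexity)}

    # Position of each scenario in the existing (subjectively sequenced) matrix.
    matrix_rank = {"scenario_A": 1, "scenario_B": 2, "scenario_C": 3, "scenario_D": 4}

    names = sorted(scenario_ratings)
    rho, p_value = spearmanr([computed_rank[n] for n in names],
                             [matrix_rank[n] for n in names])
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

A low or non-significant rho in this kind of check is what the abstract refers to as a lack of correlation between objectively computed complexity and the subjectively sequenced matrix.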
3

Adaptive Feedback In Simulation-based Training

Billings, Deborah 01 January 2010 (has links)
Feedback is essential to guide performance in simulation-based training (SBT) and to refine learning. Generally, outcomes improve when feedback is delivered with personalized tutoring that tailors specific guidance and adapts feedback to the learner in a one-to-one environment. Therefore, automating these adaptive aspects of human tutoring in SBT systems should be an effective way to train individuals. This study investigates the efficacy of automating different types of feedback in an SBT system. These include adaptive bottom-up feedback (i.e., detailed feedback that becomes general as proficiency develops) and adaptive top-down feedback (i.e., general feedback that becomes detailed if performance fails to improve). Other types of non-adaptive feedback were included for performance comparisons as well as to examine overall cognitive load. To test the hypotheses, 130 participants were randomly assigned to five conditions: two feedback conditions employed adaptive approaches (bottom-up and top-down), two used non-adaptive approaches (constant detailed and constant general), and one functioned as a control group (i.e., only a performance score was given). After preliminary training on the simulator system, participants completed four simulated search and rescue missions (three training missions and one transfer mission). After each training mission, participants received feedback according to the condition to which they were assigned. Overall performance on missions, knowledge post-test scores, and subjective cognitive load were measured and analyzed to determine the effectiveness of each type of feedback. Results indicate that: (1) feedback generally improves performance, confirming prior research; (2) performance for the two adaptive approaches (bottom-up vs. top-down) did not differ significantly at the end of training, but the bottom-up group reached higher performance levels significantly sooner; and (3) performance for the bottom-up and constant detailed groups did not differ significantly, although the trend suggests that adaptive bottom-up feedback may yield significant results in further studies. Overall, these results have implications for the implementation of feedback in SBT and, beyond that, for other computer-based training systems.
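
As a rough sketch of the two adaptive feedback policies contrasted above (not the study's implementation), the function below selects a feedback level from recent mission scores: bottom-up starts detailed and generalizes once proficiency is shown, while top-down starts general and becomes detailed when performance fails to improve. The proficiency threshold and score scale are assumptions.

    # Hypothetical sketch of adaptive feedback selection in simulation-based training.
    # The 0-100 score range and the 80-point proficiency threshold are invented.

    def select_feedback(policy: str, scores: list[float]) -> str:
        """Return 'detailed' or 'general' feedback given recent mission scores (0-100)."""
        latest = scores[-1]
        if policy == "bottom-up":
            # Start detailed; shift to general once proficiency is demonstrated.
            return "general" if latest >= 80 else "detailed"
        if policy == "top-down":
            # Start general; shift to detailed if performance fails to improve.
            improving = len(scores) < 2 or scores[-1] > scores[-2]
            return "general" if improving else "detailed"
        raise ValueError(f"unknown policy: {policy}")

    # Example: a trainee whose scores stall triggers detailed feedback under top-down.
    print(select_feedback("bottom-up", [55.0, 72.0, 84.0]))  # -> general
    print(select_feedback("top-down", [60.0, 58.0]))         # -> detailed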
4

Arthur: An Intelligent Tutoring System with Adaptive Instruction

Gilbert, Juan Eugene January 2000 (has links)
No description available.
5

The Model-Based Systematic Development of LOGIS Online Graphing Instructional Simulator

Davis, Darrel R 22 August 2007 (has links)
This Developmental Research study described the development of an interactive online graphing instructional application and the impact of the Analysis, Design, Development, Implementation, Evaluation (ADDIE) model on the development process. An optimal learning environment was produced by combining Programmed Instruction and Adaptive Instruction principles with a graphing simulator that implemented guided contingent practice. The development process entailed the creation and validation of three instruments measuring knowledge, skills, and attitudes, which were components of the instruction. The research questions focused on the influence of the ADDIE model on the development process and the value of the LOGIS instructional application. The model had a significant effect on the development process, and the effects fell into three categories: Organization, Time, and Perspective. In terms of Organization, the model forced a high level of planning to occur and dictated the task sequence, thereby reducing frustration. The model facilitated the definition of terminal states, made it easier to transition from completed tasks to new tasks, and forced the simultaneous consideration of global and local views of the development process. With respect to Time, using the model resulted in increased development time. Perspectives were influenced because previously held assumptions about instructional design were exposed for critique; the model also facilitated post-project reflection and problem diagnosis. LOGIS was more valuable in terms of the knowledge assessment than the skills and attitudes assessments. There was a statistically and educationally significant increase from the pretest to the posttest on the knowledge assessment, but overall posttest performance was below average. Overall performance on the skills assessment was also below average. Participants reported positive dispositions toward LOGIS and toward graphing, but no significant difference was found between the pre-instruction survey and the post-instruction survey. The value of LOGIS must be considered within the context that this study was the first iteration in the refinement of the LOGIS instructional application.
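
The pretest-to-posttest gain mentioned above is typically evaluated with a paired-samples comparison; the sketch below runs one such test on invented knowledge scores, and is not the study's data or necessarily its exact analysis.

    # Hypothetical sketch: paired pretest/posttest comparison on invented knowledge scores.
    from scipy.stats import ttest_rel

    pretest  = [42, 55, 38, 60, 47, 51, 44, 58]
    posttest = [61, 70, 52, 74, 63, 68, 59, 72]

    # Paired t-test on the same participants measured before and after instruction.
    t_stat, p_value = ttest_rel(posttest, pretest)
    mean_gain = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
    print(f"mean gain = {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.4f}")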
