131

Exploring the Development and Transfer of Case Use Skills in Middle-School Project-Based Inquiry Classrooms

Owensby, Jakita Nicole 11 April 2006 (has links)
The ability to interpret and apply experiences, or cases (Kolodner, 1993; 1997), is a skill (Anderson et al., 1981; Anderson, 2000) that is key to successful learning and that can be transferred (Bransford, Brown and Cocking, 1999) to new learning situations. For middle-schoolers in a project-based inquiry science classroom, interpreting and applying the experiences of experts to inform their design solutions is not always easy (Owensby and Kolodner, 2002). Interpreting and applying an expert case and then assessing the solution that results from that application are the components of a process I call case use. This work seeks to answer three questions: 1. How do small-group case use capabilities develop over time? 2. How well are students able to apply case use skills in new situations over time? 3. What difficulties do learners have as they learn case use skills and as they apply them in new situations, and what do these difficulties suggest about how software might further support cognitive skill development using a cognitive apprenticeship (Collins, Brown and Newman, 1989) framework? I argue that if learners in project-based inquiry classrooms are able to understand, engage in, and carry out the processes involved in interpreting and applying expert cases effectively, then they will be able to do several things. They will learn those processes and be able to read an expert case for understanding, glean the lessons they can learn from it, and apply those lessons to their question or challenge. Furthermore, I argue that they may also be able to transfer interpretation, application, and assessment skills to other learning situations where application of cases is appropriate.
132

(Meta)Knowledge modeling for inventive design / Modélisation des (méta)connaissances pour la conception inventive

Yan, Wei 07 February 2014 (has links)
An increasing number of industries feel the need to formalize their innovation processes. In this context, quality-domain tools and the creativity-support approaches derived from brainstorming have already shown their limits. To meet these needs, TRIZ (a Russian acronym for the Theory of Inventive Problem Solving), developed by the Russian engineer G. S. Altshuller in the middle of the 20th century, proposes a systematic method for solving inventive problems across domains. According to TRIZ, solving an inventive problem consists of building a model and drawing on the TRIZ knowledge sources. Several models and knowledge sources support the resolution of different types of inventive problems, such as the forty Inventive Principles for eliminating technical contradictions. All of these sources sit at relatively high levels of abstraction and are therefore independent of any particular domain, yet they require in-depth knowledge of different engineering fields. To facilitate the inventive problem-solving process, an "Intelligent Knowledge Management System" is developed in this thesis. On the one hand, by integrating ontologies of the TRIZ knowledge bases, the manager proposes to users the knowledge sources relevant to the model they are building; on the other hand, the manager is able to fill in "automatically" the models associated with the other knowledge bases. This research aims to facilitate and automate the inventive problem-solving process. It is based on semantic similarity computation and uses various technologies from knowledge engineering (ontology-based modeling and reasoning, in particular). First, semantic similarity measures are proposed to search for and define the missing links between the TRIZ knowledge bases. Then, the TRIZ knowledge sources are formalized as ontologies so that heuristic inference mechanisms can be used to search for specific solutions. To solve an inventive problem, TRIZ users first choose a knowledge base and obtain an abstract solution. Next, the elements of the other knowledge bases that are similar to the elements selected in the first base are proposed on the basis of the semantic similarity computed beforehand. With the help of these elements and of heuristic physical effects, further conceptual solutions are obtained by inference over the ontologies. Finally, a software prototype is developed; it relies on this semantic similarity, with the ontologies supporting the automatic generation of conceptual solutions. / An increasing number of industries feel the need to formalize their innovation processes. In this context, quality domain tools show their limits as well as the creativity assistance approaches derived from brainstorming. TRIZ (Theory of Inventive Problem Solving) appears to be a pertinent answer to these needs. Developed in the middle of the 20th century by G. S. Altshuller, this methodology's goal was initially to improve and facilitate the resolution of technological problems.
According to TRIZ, the resolution of inventive problems consists of the construction of models and the use of the corresponding knowledge sources. Different models and knowledge sources were established in order to solve different types of inventive problems, such as the forty inventive principles for eliminating technical contradictions. These knowledge sources, with their different levels of abstraction, are all built independently of any specific application field, and they require extensive knowledge about different engineering domains. In order to facilitate the inventive problem-solving process, the development of an "intelligent knowledge manager" is explored in this thesis. On the one hand, based on the ontologies of the TRIZ knowledge sources, the manager offers users the relevant knowledge sources associated with the model they are building. On the other hand, the manager has the ability to fill "automatically" the models of the other knowledge sources. This research aims at facilitating and automating the process of solving inventive problems based on semantic similarity and ontology techniques. At first, the TRIZ knowledge sources are formalized as ontologies, such that heuristic inference can be executed to search for specific solutions. Then, methods for calculating semantic similarity are explored to search for and define the missing links among the TRIZ knowledge sources. In order to solve inventive problems, the TRIZ user first chooses a TRIZ knowledge source and works toward an abstract solution. Then, the items of other knowledge sources that are similar to the selected items of the first knowledge source are obtained, based on semantic similarity calculated in advance. With the help of these similar items and heuristic physical effects, other specific solutions are returned through ontology inference. Finally, a software prototype is developed based on semantic similarity and ontology inference to support this automatic process of solving inventive problems.
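As a rough, self-contained illustration of the kind of pre-computed cross-source similarity described above, the sketch below scores the lexical overlap between short descriptions of items from two knowledge sources using cosine similarity over bags of words. The item names, the wording of their descriptions, and the bag-of-words measure itself are illustrative assumptions, not the ontologies or the similarity measures developed in the thesis.

```python
import math
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Tokenize a short description into a lowercase bag of words."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative items from two knowledge sources (hypothetical wording).
inventive_principles = {
    "Segmentation": "divide an object into independent parts",
    "Local quality": "make each part of an object fulfil a different useful function",
}
separation_principles = {
    "Separation in space": "divide the system so conflicting requirements hold in different parts",
    "Separation in time": "satisfy conflicting requirements at different moments",
}

# Pre-compute cross-source links offline, as the knowledge manager would.
for p_name, p_text in inventive_principles.items():
    best_name, best_text = max(
        separation_principles.items(),
        key=lambda kv: cosine_similarity(bag_of_words(p_text), bag_of_words(kv[1])))
    score = cosine_similarity(bag_of_words(p_text), bag_of_words(best_text))
    print(f"{p_name} -> {best_name} (similarity {score:.2f})")
```

In a real system these scores would be computed once over the whole set of knowledge sources and stored, so that when a user selects an item in one source the most similar items in the others can be proposed immediately.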
133

The Study of Project-Based Learning in Preservice Teachers

Anderson, Ashley Ann January 2016 (has links)
Project-based learning (PBL) is a teaching approach in which students engage in the investigation of real-world problems through their own inquiries. Studies have found considerable support for the effect of PBL on student performance and improvement in grades K-12 and at the collegiate level. However, fewer studies have examined the effects of PBL at the collegiate level in comparison to K-12 education, and no studies have examined the effects of PBL with preservice teachers taking educational psychology courses. The purpose of this study was to provide an analysis of PBL with preservice teachers taking educational psychology courses. An experiment was conducted across two semesters to evaluate student achievement and satisfaction in an undergraduate educational psychology child development course and in an undergraduate educational psychology assessments course, which included the same students from the first semester. Student achievement was determined using quantitative and qualitative analyses in each semester and longitudinally. Results in semester one indicated that the comparison group outperformed the PBL group. Results in semester two suggested there were no differences between the instructional styles. Longitudinal analyses showed that the comparison group declined in performance over time, whereas the PBL group improved over time, although the comparison group still outperformed the PBL group. Results of this study indicate that PBL was not an influential teaching method for preservice teachers taking educational psychology courses.
134

A case-based reasoning methodology to formulating polyurethanes

Segura-Velandia, Diana M. January 2006 (has links)
Formulation of polyurethanes is a complex and poorly understood problem, as it has developed more as an art than a science. Only a few experts have mastered polyurethane (PU) formulation after years of experience, and such expertise is largely held by the major raw material manufacturers. Understanding of PU formulation is at present insufficient for it to be developed from first principles. The first-principles approach requires time, a detailed understanding of the underlying principles that govern the formulation process (e.g. PU chemistry, kinetics), and a number of measurements of process conditions. Even in the simplest formulations, there are more than 20 variables, often interacting with each other in very intricate ways. In this doctoral thesis the use of the Case-Based Reasoning and Artificial Neural Network paradigms is proposed to support PU formulation tasks by providing a framework for the collection, structuring, and representation of real formulating knowledge. The framework is also aimed at facilitating the sharing and deployment of solutions in a consistent and referable way, when appropriate, for future problem solving. Two basic problems in the development of a Case-Based Reasoning tool that uses past flexible PU foam formulation recipes, or cases, to solve new problems were studied. A PU case was divided into a problem description (i.e. the measured mechanical properties of the PU) and a solution description (i.e. the ingredients and their quantities needed to produce it). The problems investigated relate to the retrieval of former PU cases that are similar to a new problem description, and to the adaptation of the retrieved case to meet the problem constraints. For retrieval, an alternative similarity measure based on the moment description of a case, when the case is represented as a two-dimensional image, was studied. Retrieval using geometric, central, and Legendre moments was also studied and compared with a standard nearest-neighbour algorithm using nine different distance functions (e.g. Euclidean, Canberra, and City Block, among others). It was concluded that when cases are represented as 2D images and matching is performed using moment functions, in a similar fashion to the approaches studied in image analysis and pattern recognition, low-order geometric and Legendre moments and central moments of any order retrieve the same case as the Euclidean distance does when used in a nearest-neighbour algorithm. This means that the Euclidean distance acts as a low-order moment function that represents gross-level case features. Higher-order (order > 3) geometric and Legendre moments, while enabling finer details of an image to be represented, had no standard distance-function counterpart. For the adaptation of retrieved cases, a feed-forward back-propagation artificial neural network was proposed to reduce the adaptation knowledge acquisition effort that has prevented building complete CBR systems, and to generate a mapping between changes in mechanical properties and changes in formulation ingredients. The proposed network was trained with the differences between problem descriptions (i.e. the mechanical properties of a pair of foams) as input patterns and the differences between solution descriptions (i.e. the formulation ingredients) as output patterns. A complete data set based on 34 initial formulations was used, and a network trained for 16,950 epochs on 1,102 training exemplars produced from the case differences gave only 4% error.
However, further work with a data set consisting of a training set and a small validation set failed to generalise, returning a high percentage of errors. Further tests on different training/test splits of the data also failed to generalise. The conclusion reached is that the data as such has insufficient common structure to support any general conclusions. Other evidence suggesting that the data does not contain generalisable structure includes the large number of hidden nodes necessary to achieve convergence on the complete data set.
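As a concrete, simplified picture of the retrieval step described above, the sketch below performs nearest-neighbour retrieval of a stored PU case from a new set of target mechanical properties, comparing Euclidean and city-block distances. The property names, values, and the tiny case base are hypothetical and are not taken from the thesis data.

```python
import math

# Hypothetical case base: a problem description (mechanical properties)
# paired with a solution description (formulation ingredients, parts by weight).
case_base = [
    {"properties": {"density": 28.0, "hardness": 140.0, "tensile": 95.0},
     "formulation": {"polyol": 100.0, "isocyanate": 52.0, "water": 3.8, "catalyst": 0.4}},
    {"properties": {"density": 35.0, "hardness": 210.0, "tensile": 120.0},
     "formulation": {"polyol": 100.0, "isocyanate": 58.0, "water": 3.0, "catalyst": 0.5}},
    {"properties": {"density": 22.0, "hardness": 95.0, "tensile": 70.0},
     "formulation": {"polyol": 100.0, "isocyanate": 47.0, "water": 4.5, "catalyst": 0.3}},
]

def euclidean(a: dict, b: dict) -> float:
    """Euclidean distance over the shared property keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def city_block(a: dict, b: dict) -> float:
    """City-block (Manhattan) distance over the shared property keys."""
    return sum(abs(a[k] - b[k]) for k in a)

def retrieve(target: dict, distance) -> dict:
    """Return the stored case whose properties are closest to the target."""
    return min(case_base, key=lambda case: distance(target, case["properties"]))

target_properties = {"density": 30.0, "hardness": 180.0, "tensile": 110.0}
for name, fn in (("Euclidean", euclidean), ("City block", city_block)):
    best = retrieve(target_properties, fn)
    print(name, "->", best["formulation"])
```

In practice the properties would be normalised before computing distances, and the retrieved formulation would then be adapted (here, by the neural network mapping described above) rather than reused as-is.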
135

Improved regulatory oversight using real-time data monitoring technologies in the wake of Macondo

Carter, Kyle Michael 10 October 2014 (has links)
As shown by the Macondo blowout, a deepwater well control event can result in loss of life, harm to the environment, and significant damage to company and industry reputation. Consistent adherence to safety regulations is a recurring issue in deepwater well construction. The two federal entities responsible for offshore U.S. safety regulation are the Department of the Interior’s Bureau of Safety and Environmental Enforcement (BSEE) and the U.S. Coast Guard (USCG), with regulatory authorities that span well planning, drilling, completions, emergency evacuation, environmental response, etc. Given the wide range of rules these agencies are responsible for, safety compliance cannot be comprehensively verified with the current infrequency of on-site inspections. Offshore regulation and operational safety could be greatly improved through continuous remote real-time data monitoring. Many government agencies have adopted monitoring regimes dependent on real-time data for improved oversight (e.g. NASA Mission Control, USGS Earthquake Early Warning System, USCG Vessel Traffic Services, etc.). Appropriately, real-time data monitoring was either re-developed or introduced in the wake of catastrophic events within those sectors (e.g. Challenger, tsunamis, Exxon Valdez, etc.). Over recent decades, oil and gas operators have developed Real-Time Operations Centers (RTOCs) for continuous, pro-active operations oversight and remote interaction with on-site personnel. Commonly seen as collaborative hubs, RTOCs provide a central conduit for shared knowledge, experience, and improved decision-making, thus optimizing performance, reducing operational risk, and improving safety. In particular, RTOCs have been useful in identifying and mitigating potential well construction incidents that could have resulted in significant non-productive time and trouble cost. In this thesis, a comprehensive set of recommendations is made to BSEE and USCG to expand and improve their regulatory oversight activities through remote real-time data monitoring and application of emerging real-time technologies that aid in data acquisition and performance optimization for improved safety. Data sets and tools necessary for regulators to effectively monitor and regulate deepwater operations (Gulf of Mexico, Arctic, etc.) on a continuous basis are identified. Data from actual GOM field cases are used to support the recommendations. In addition, the case is made for the regulator to build a collaborative foundation with deepwater operators, academia and other stakeholders, through the employment of state-of-the-art knowledge management tools and techniques. This will allow the regulator to do “more with less”, in order to address the fast pace of activity expansion and technology adoption in deepwater well construction, while maximizing corporate knowledge and retention. Knowledge management provides a connection that can foster a truly collaborative relationship between regulators, industry, and non-governmental organizations with a common goal of safety assurance and without confusing lines of authority or responsibility. This solves several key issues for regulators with respect to having access to experience and technical know-how, by leveraging industry experts who would not normally have been accessible.
For implementation of the proposed real-time and knowledge management technologies and workflows, a phased approach is advocated, to be carried out under the auspices of the Center for Offshore Safety (COS) and/or the Offshore Energy Safety Institute (OESI). Academia can play an important role, particularly in the early phases of the program, as a neutral playing ground where tools, techniques, and workflows can be tried and tested before wider adoption takes place.
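To make the idea of continuous remote monitoring concrete, here is a minimal sketch of a threshold-based check over a stream of drilling parameters. The parameter names (pit gain, flow delta), alarm limits, and readings are invented for illustration; they do not reflect BSEE or USCG requirements or any actual field data.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One timestamped sample from a rig data feed (illustrative fields)."""
    timestamp: str
    pit_gain_bbl: float      # cumulative mud pit gain, barrels
    flow_delta_gpm: float    # flow out minus flow in, gallons per minute

# Hypothetical alarm limits a monitoring centre might configure.
LIMITS = {"pit_gain_bbl": 10.0, "flow_delta_gpm": 25.0}

def check(reading: Reading) -> list:
    """Return alarm messages for any configured limit exceeded by this reading."""
    alarms = []
    for field, limit in LIMITS.items():
        value = getattr(reading, field)
        if value > limit:
            alarms.append(f"{reading.timestamp}: {field}={value} exceeds limit {limit}")
    return alarms

stream = [
    Reading("2014-10-10T08:00Z", 2.0, 5.0),
    Reading("2014-10-10T08:05Z", 12.5, 30.0),   # simulated anomalous sample
]
for r in stream:
    for alarm in check(r):
        print("ALERT:", alarm)
```

A regulator-facing system would of course add many more channels, statistical trend detection rather than fixed thresholds, and an audit trail, but the basic loop of ingesting readings and flagging exceedances is the same.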
136

Adaptive Image Quality Improvement with Bayesian Classification for In-line Monitoring

Yan, Shuo 01 August 2008 (has links)
Development of an automated method for classifying digital images using a combination of image quality modification and Bayesian classification is the subject of this thesis. The specific example is classification of images obtained by monitoring molten plastic in an extruder. These images were to be classified into two groups: the “with particle” (WP) group, which showed contaminant particles, and the “without particle” (WO) group, which did not. Previous work effected the classification using only an adaptive Bayesian model. This work combines adaptive image quality modification with the adaptive Bayesian model. The first objective was to develop an off-line automated method for determining how to modify each individual raw image to obtain the quality required for improved classification results. This was done in a novel way by defining image quality in terms of probability using a Bayesian classification model. The Nelder-Mead simplex method was then used to optimize the quality. The result was a “Reference Image Database” which was used as a basis for accomplishing the second objective. The second objective was to develop an in-line method for modifying the quality of new images to improve classification over that which could be obtained previously. Case-based reasoning used the Reference Image Database to locate reference images similar to each new image. The database supplied instructions on how to modify the new image to obtain a better quality image. Experimental verification of the method used a variety of images from the extruder monitor, including images purposefully produced to be of wide diversity. Image quality modification was made adaptive by adding new images to the Reference Image Database. When combined with the adaptive classification previously employed, error rates decreased from about 10% to less than 1% for most images. For one unusually difficult set of images, which exhibited very low local contrast of particles against their background, it was necessary to split the Reference Image Database into two parts on the basis of a critical value for local contrast. The end result of this work is a very powerful, flexible and general method for improving classification of digital images that utilizes both image quality modification and classification modeling.
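A toy version of the two ideas combined here — a Bayesian two-class decision on an image feature, and a Nelder-Mead search over a quality-modification parameter scored by that decision — might look like the following. The Gaussian class models, the single "contrast" feature, the gain-based enhancement, and the decisiveness objective are all assumptions for illustration, not the thesis's actual models or quality definition.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Assumed Gaussian class-conditional models for one image feature ("contrast"),
# with equal priors for the "with particle" (WP) and "without particle" (WO) classes.
wp_model = norm(loc=0.7, scale=0.1)
wo_model = norm(loc=0.4, scale=0.1)

def posterior_wp(contrast: float) -> float:
    """Posterior probability of the WP class given the contrast feature."""
    wp, wo = wp_model.pdf(contrast), wo_model.pdf(contrast)
    return wp / (wp + wo)

def enhance_contrast(raw_contrast: float, gain: float) -> float:
    """Toy 'image quality modification': rescale the contrast feature."""
    return float(np.clip(raw_contrast * gain, 0.0, 1.0))

def quality_objective(gain: np.ndarray, raw_contrast: float) -> float:
    """Quality defined through the classifier: prefer gains that make the
    posterior decisive (close to 0 or 1) rather than ambiguous."""
    p = posterior_wp(enhance_contrast(raw_contrast, float(gain[0])))
    return -max(p, 1.0 - p)  # minimize => maximize classification confidence

raw = 0.45  # a borderline raw image feature
result = minimize(quality_objective, x0=[1.0], args=(raw,), method="Nelder-Mead")
best_gain = float(result.x[0])
enhanced = enhance_contrast(raw, best_gain)
print(f"best gain {best_gain:.2f}, posterior P(WP) = {posterior_wp(enhanced):.2f}")
```

The point of the sketch is only the coupling: the optimizer never looks at the image directly, it looks at how confidently the Bayesian model can classify the modified feature.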
137

Modélisation d’un système exploratoire d’aide à l’insulinothérapie chez le sujet âgé diabétique de type 2 / Modeling an exploratory system to develop support tools of insulin therapy in the elderly type 2 diabetic

Sedaghati, Afshan 30 September 2014 (has links)
Putting elderly diabetic patients on a multi-injection insulin regimen is risky and complex. The possible loss of autonomy induced by the treatment and the variability of insulin needs must be taken into account. In therapeutic protocols, insulin therapy is started by prescribing a low dose that is then adjusted according to blood glucose results. A period of adjustment is therefore needed before the optimal dose can be prescribed. For these reasons, this thesis proposes an original methodology for modeling insulin therapy in elderly type 2 diabetic subjects from certain endogenous (clinical and biological) variables, with two objectives: 1. help physicians in the phase of determining the insulin prescription; 2. individualize the treatment according to the patient's physiological profile. The methodology, based on fuzzy logic, allowed the analysis of a sample of 71 diabetic patients over 65 years of age. Three typical profiles of insulin needs were identified, along with the discriminating variables underlying this typology. A methodology inspired by case-based reasoning individualizes the treatment by proposing the dose to prescribe from the nearest cases selected in the sample. This work makes two main contributions: on insulin therapy for elderly type 2 diabetic patients, and on an original methodology for the numerical processing of medical data during therapeutic follow-up. / Starting insulin therapy with daily multi-injections in elderly type 2 diabetic patients is a difficult challenge. The possible loss of autonomy induced by the treatment and the variability of insulin needs must be taken into account. In therapeutic protocols, starting insulin therapy is based on the prescription of a low dosage adjusted according to blood glucose results. A period of adjustment is therefore necessary to prescribe the optimal dose. For these reasons, this thesis proposes an original methodology for modeling insulin therapy in elderly subjects with type 2 diabetes from certain biological and clinical endogenous variables, with two objectives: 1. assist physicians at the stage of determining the insulin prescription; 2. individualize insulin treatment depending on the physiological profile of the patient. The methodology, based on fuzzy logic, allowed the analysis of a sample of 71 diabetic patients aged over 65 years. Three typical profiles of insulin requirements, and the discriminating variables underlying this typology, were identified. A methodology based on case-based reasoning individualizes the treatment by proposing the insulin dosage to prescribe from the nearest cases selected in the sample. This work makes two main contributions: on insulin therapy in elderly type 2 diabetic patients, and on an original methodology for the numerical processing of medical data during therapeutic follow-up.
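A highly simplified sketch of the two mechanisms described above — fuzzy membership of a patient in a small number of insulin-need profiles, and a case-based dose proposal from the nearest stored patients — is shown below. The patient descriptors, profile centres, and doses are invented for illustration and have no clinical validity; they are not the variables or values identified in the thesis.

```python
import numpy as np

# Hypothetical patient descriptors: [age, BMI, HbA1c (%), creatinine clearance]
cases = np.array([
    [72, 24.0, 7.8, 60.0],
    [81, 29.5, 9.1, 45.0],
    [68, 31.0, 8.4, 75.0],
    [77, 22.5, 8.9, 50.0],
])
daily_doses = np.array([18.0, 34.0, 28.0, 24.0])  # prescribed units/day (illustrative)

# Hypothetical centres of three insulin-need profiles (low / medium / high).
profile_centres = np.array([
    [70, 23.0, 7.5, 70.0],
    [75, 27.0, 8.5, 55.0],
    [80, 31.0, 9.5, 40.0],
])

def fuzzy_memberships(x: np.ndarray, centres: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Fuzzy c-means style membership degrees of patient x in each profile."""
    d = np.linalg.norm(centres - x, axis=1) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum()

def propose_dose(x: np.ndarray, k: int = 2) -> float:
    """Case-based proposal: average the doses of the k nearest stored patients."""
    d = np.linalg.norm(cases - x, axis=1)
    nearest = np.argsort(d)[:k]
    return float(daily_doses[nearest].mean())

new_patient = np.array([74, 28.0, 8.7, 52.0])
print("profile memberships:", np.round(fuzzy_memberships(new_patient, profile_centres), 2))
print("proposed starting dose:", propose_dose(new_patient), "units/day")
```

In a real decision-support setting the descriptors would be standardised, the memberships would come from a fitted fuzzy clustering of the cohort rather than hand-picked centres, and any proposed dose would remain subject to the physician's judgement.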
138

Raciocínio baseado em casos aplicado ao gerenciamento de falhas em redes de computadores / Case-based reasoning applied to fault management in computer networks

Melchiors, Cristina January 1999 (has links)
With the growth in the number and heterogeneity of the equipment found in today's computer networks, the effective management of these resources becomes critical. This activity requires network managers to have a large amount of information about their equipment, the technologies involved, and the problems associated with them. Trouble ticket systems have been used to store the incidents that occur, serving as a historical memory of the network and accumulating the knowledge derived from the problem diagnosis and resolution process. However, the growing number of stored records makes the manual search of these systems for similar situations that occurred previously very slow and imprecise. Thus, an appropriate solution for consolidating the network's historical memory is the development of an expert system that uses the knowledge stored in trouble ticket systems to propose solutions for a current problem. An Artificial Intelligence approach that has attracted enormous attention in recent years and that can be used for this purpose is case-based reasoning. This reasoning paradigm proposes solutions to new problems by retrieving a similar case that occurred in the past, whose solution can be reused in the new situation. In addition, the benefits of this paradigm include the capacity to learn from experience, allowing new problems to be incorporated and made available for use in future situations, thereby increasing the knowledge present in the system. This work presents a system that applies the case-based reasoning paradigm to a trouble ticket system in order to propose solutions for a new problem. The system was developed with the purpose of assisting in the diagnosis and resolution of network problems. The typical problems of this domain, the adopted approach, and the results obtained with the prototype built are described. / With the increasing number and heterogeneity of network equipment, the efficient management of these resources has become a hard job. This activity demands from the network manager a great deal of expertise on network equipment, the technologies involved, and the problems that may arise. So far, trouble ticket systems (TTS) have been used to store network problems, working as a network historical memory and accumulating the knowledge derived from the diagnosis and troubleshooting of such problems. However, the increasing number of stored tickets makes the manual search for similar situations in these systems very slow and inaccurate. So, an adequate approach to consolidate the network's historical memory is the development of an expert system that uses the knowledge stored in the trouble ticket systems to propose a solution for a current problem. Case-based reasoning (CBR), an approach borrowed from Artificial Intelligence that has recently attracted many researchers' attention, can be applied to help diagnose and troubleshoot network management problems. This reasoning paradigm proposes solutions to new problems by retrieving a similar case that occurred in the past, whose solution can be reused in the new situation. Furthermore, the benefits of this paradigm include the capability to learn from experience, allowing new problems to be added and become available for use in future situations, expanding the knowledge of the system.
This work presents a system that applies case-based reasoning to a trouble ticket system to propose solutions for new network problems. The system was developed with the aim of helping the diagnosis and troubleshooting of network problems. The typical problems of this domain, the adopted approach, and the results obtained with the prototype built are described.
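As an illustration of the retrieval idea, the sketch below matches a new fault description against stored trouble tickets using Jaccard similarity over symptom keywords. The tickets, their wording, and their solutions are invented examples, not data or the similarity scheme from the thesis.

```python
import re

# Hypothetical closed trouble tickets: symptom description and recorded solution.
tickets = [
    {"symptoms": "users on floor 3 report intermittent packet loss to file server",
     "solution": "replaced faulty switch uplink module"},
    {"symptoms": "DNS resolution slow for all internal hosts",
     "solution": "restarted primary DNS service and cleared cache"},
    {"symptoms": "single host cannot reach gateway, link light off",
     "solution": "re-terminated damaged patch cable"},
]

def tokens(text: str) -> set:
    """Lowercase keyword set for a free-text symptom description."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two keyword sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve(new_symptoms: str, top_n: int = 2) -> list:
    """Rank stored tickets by symptom similarity to the new problem."""
    query = tokens(new_symptoms)
    ranked = sorted(tickets, key=lambda t: jaccard(query, tokens(t["symptoms"])),
                    reverse=True)
    return ranked[:top_n]

for t in retrieve("intermittent packet loss reported by users connecting to the server"):
    print("candidate solution:", t["solution"])
```

The retrieved solutions are only candidates: in a CBR cycle the operator would adapt the closest one to the new situation and, once the incident is closed, store the new ticket so it becomes available for future retrievals.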
139

Introductory and Organizing Principles

Byrd, Rebekah J., Bradley, T. B. 19 January 2013 (has links)
Book Summary: Applying Techniques to Common Encounters in School Counseling: A Case-Based Approach helps counselors in training bridge the gap between theory and practice by showing them how to theoretically frame or understand the problems and issues they encounter, how to proceed, and what action steps to take when they enter the field as school counselors. It answers the questions new counselors have in real school settings, such as What is it really like to live the life of a professional school counselor? How does the theory presented in the classroom apply to the myriad of situations encountered in the real life, everyday school setting? Case studies and scenarios give readers examples of many commonly encountered presenting issues. For each scenario the case is introduced, background information is supplied, and initial processing questions are posed. The authors include a discussion of the theoretical models or frameworks used to address the issue, along with a table segmented by theoretical paradigm and grade level that includes other techniques that could be used in the presenting case. With these tools at their disposal, readers gain a firm understanding of the issues from several frames of reference, along with interventions meant to create movement toward a successful resolution.
140

A study of the impact of cooperative small group facilitated case studies on student learning outcomes

Malin, Gregory Ryan 06 December 2007
A cooperative small-group facilitated case-based learning method has been used in the medical college at the researcher's educational institution since the 2003-2004 academic year. The cases were designed to be a supplement to a primarily lecture-based curriculum, where it was believed that they helped students to develop a better understanding of the material taught in the lectures, although no rigorous investigations had been completed. The purpose of this study was to investigate the impact of these cooperative facilitated small-group cases on five specific outcomes: 1) achievement, 2) knowledge confidence, 3) student satisfaction, 4) students' perceived time on task, and 5) students' perceptions of the degree to which they believed a facilitator helped them to learn the material. These outcomes for cooperative learning (CL) were compared with individual learning (IL) outcomes. Quantitative data on student achievement and knowledge confidence were collected using a 10-question multiple-choice pre-test/post-test quiz. A brief questionnaire was also distributed to students to collect data regarding student satisfaction, time on task, and perceived helpfulness of the facilitator.

Fifty-nine medical students were randomly assigned to either the CL or IL cohort (cooperative cohort, n = 32; individual cohort, n = 27). All students were blinded to the purpose of the study until all data were collected at the end of the investigation. Students completed the 10-question multiple-choice pre-test. After each question they rated their level of confidence (on a scale from 1 to 10) that they had chosen the correct answer. Immediately after completion of the pre-test, they worked on the case, either cooperatively or individually. One week after the pre-test and case, the students completed the post-test quiz with the same questions, as well as the questionnaire.

A repeated-measures MANOVA was used to compare achievement and confidence in the CL (n = 19) and IL (n = 13) cohorts. An alpha level of .05 was used for all statistical tests. Effect sizes (d) were calculated for within-group and between-groups comparisons for achievement and confidence. Descriptive data on student satisfaction, time on task, and facilitator helpfulness were gathered from the questionnaire and compared between groups.

Within-group results from the study showed that CL had a greater impact on student achievement and confidence than IL (achievement, d = 0.57 vs. 0.16; confidence, d = 0.52 vs. 0.14). The results of the statistical analysis did not reach significance for achievement or confidence. Between-groups effect sizes were calculated for the average pre- to post-test change in achievement and confidence (achievement, d = 0.35; confidence, d = 0.40). Students in the CL cohort reported spending more time on task before and during the case session and less after the session. They also reported greater levels of satisfaction with the learning experience than the IL group. The majority of students (90.5%) in the CL cohort felt that the facilitator helped them to learn.

The findings from this study showed that this CL method had a greater impact on the five outcomes outlined above compared to the IL method. Students made greater gains in achievement and confidence. They also spent more time on task, and had higher levels of satisfaction with the learning experience. Students in the CL cohort also believed that the facilitator helped them to learn.
Implications of the study include possible expanded use of the cases within the curriculum of this medical college although the demands of resources and curriculum content would have to be carefully considered.
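For readers unfamiliar with the effect sizes reported above, the sketch below shows one common way to compute Cohen's d for within-group pre/post change and for a between-groups comparison of change scores. The score arrays are fabricated for illustration and do not reproduce the study's data, and the study's exact effect-size formulas may differ.

```python
import numpy as np

def cohens_d_paired(pre: np.ndarray, post: np.ndarray) -> float:
    """Within-group effect size: mean pre-to-post change over the SD of the change."""
    change = post - pre
    return change.mean() / change.std(ddof=1)

def cohens_d_independent(a: np.ndarray, b: np.ndarray) -> float:
    """Between-groups effect size on change scores, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

rng = np.random.default_rng(0)
# Fabricated quiz scores out of 10 for two small cohorts (pre and post).
cl_pre, cl_post = rng.normal(6.0, 1.5, 19), rng.normal(7.0, 1.5, 19)
il_pre, il_post = rng.normal(6.0, 1.5, 13), rng.normal(6.3, 1.5, 13)

print("CL within-group d:", round(cohens_d_paired(cl_pre, cl_post), 2))
print("IL within-group d:", round(cohens_d_paired(il_pre, il_post), 2))
print("Between-groups d on change:",
      round(cohens_d_independent(cl_post - cl_pre, il_post - il_pre), 2))
```

With samples this small (n = 19 and n = 13), moderate effect sizes like those reported can easily fail to reach statistical significance, which is consistent with the pattern of results described in the abstract.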
