About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Desenvolvimento de técnica para recomendar atividades em workflows científicos: uma abordagem baseada em ontologias / Development of a strategy to scientific workflow activities recommendation: An ontology-based approach

Adilson Lopes Khouri 16 March 2016 (has links)
The number of activities provided by scientific workflow management systems is large, which requires scientists to know many of them in order to take advantage of the reusability of these systems. To minimize this problem, the literature presents techniques to recommend activities during scientific workflow construction. This project specified and developed a hybrid activity recommendation system that considers information on the frequency, inputs and outputs of activities, together with ontological annotations, to make recommendations. Additionally, this project models activity recommendation as a classification and regression problem, tested using five classifiers; five regressors; a composite SVM classifier, which uses the results of the other classifiers and regressors to recommend; and a Rotation Forest ensemble of classifiers. The proposed technique was compared with other techniques from the literature and with the individual classifiers and regressors using 10-fold cross-validation, yielding more precise recommendations, with an MRR at least 70% greater than those obtained by the other techniques.
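As a rough illustration of the evaluation protocol described in this abstract (not the thesis's own code), the following sketch scores a recommendation model with Mean Reciprocal Rank under 10-fold cross-validation using scikit-learn. The synthetic dataset and the random-forest ranker are assumptions made for the example; the thesis's hybrid recommender and its features are not reproduced here.

# Hypothetical sketch: scoring activity recommendations with Mean Reciprocal Rank (MRR)
# under 10-fold cross-validation. The synthetic data and the RandomForest ranker are
# placeholders, not the hybrid recommender developed in the thesis.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=500, n_features=20, n_classes=5,
                           n_informative=10, random_state=0)

def mean_reciprocal_rank(proba, y_true, classes):
    # Rank candidate activities by predicted probability; score 1/rank of the true one.
    order = np.argsort(-proba, axis=1)           # best-ranked class first
    ranked = classes[order]                      # class labels in ranked order
    ranks = np.array([np.where(row == t)[0][0] + 1 for row, t in zip(ranked, y_true)])
    return float(np.mean(1.0 / ranks))

scores = []
for train, test in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X, y):
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train], y[train])
    proba = model.predict_proba(X[test])
    scores.append(mean_reciprocal_rank(proba, y[test], model.classes_))

print(f"MRR (10-fold): {np.mean(scores):.3f}")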
42

An Algebraic Approach for Data-Oriented Scientific Workflows

Ogasawara, Eduardo 19 December 2011 (has links) (PDF)
Scientific workflows have emerged as a basic abstraction for structuring scientific experiments based on computational simulations. In many situations, these workflows are computationally intensive or data intensive, requiring execution in high-performance computing environments. However, parallelizing the execution of scientific workflows requires laborious, ad hoc, low-level programming, which makes it difficult to exploit optimization opportunities. To address the problem of optimizing the parallel execution of scientific workflows, this thesis proposes an algebraic approach for specifying the workflow, together with an execution model that, combined, enable the automatic optimization of the parallel execution of scientific workflows. The thesis presents a broad evaluation of the approach using both real experiments and synthetic data. The experiments were run on Chiron, a workflow execution engine developed to support the parallel execution of scientific workflows. The experiments showed excellent parallel performance in workflow execution and revealed, with the algebraic approach, several performance optimization possibilities compared with ad hoc parallel workflow executions.
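To make the idea of an activity algebra more concrete, the sketch below shows workflow activities expressed as operators over relations of tuples, so that an engine can treat them uniformly and distribute independent tuples across workers. The operator names, the data layout, and the use of Python multiprocessing are simplifying assumptions for illustration only; this is not the thesis's algebra or the Chiron engine.

# Simplified sketch of an activity algebra for data-centric workflows: activities consume
# and produce relations (lists of dict tuples), so an engine can reason about them
# uniformly and parallelize the independent ones. Illustrative assumptions only.
from multiprocessing import Pool

def map_activity(fn, relation, workers=4):
    """Apply an activity to each input tuple independently (embarrassingly parallel)."""
    with Pool(workers) as pool:
        return pool.map(fn, relation)

def filter_activity(pred, relation):
    """Keep only the tuples that satisfy the predicate."""
    return [t for t in relation if pred(t)]

def reduce_activity(fn, relation, initial):
    """Aggregate all tuples of the relation into a single result."""
    acc = initial
    for t in relation:
        acc = fn(acc, t)
    return acc

def simulate(t):  # placeholder "simulation" activity
    return {"id": t["id"], "energy": t["x"] ** 2}

if __name__ == "__main__":
    inputs = [{"id": i, "x": float(i)} for i in range(8)]
    results = map_activity(simulate, inputs)                          # Map
    good = filter_activity(lambda t: t["energy"] > 4.0, results)      # Filter
    total = reduce_activity(lambda a, t: a + t["energy"], good, 0.0)  # Reduce
    print(len(good), total)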
43

Análise preditiva de desempenho de workflows usando teoria do campo médio / Predictive performance analysis of workflows using mean field theory

Waldir Edison Farfán Caro 17 April 2017 (has links)
Business processes play a very important role in industry, especially with the evolution of information technologies. Cloud computing platforms, for example, with the allocation of computing resources on demand, enable the execution of highly demanded processes. To that end, it is necessary to define the execution environment of the processes in such a way that resources are used optimally and the correct functionality of the process is guaranteed. In this context, different methods have already been proposed to model business processes and analyze their quantitative and qualitative properties. There are, however, several challenges that may restrict the application of these methods, especially for high-demand processes (such as workflows with numerous instances) that rely on limited resources. The performance analysis of workflows with numerous instances through analytical modeling is the object of study of this work. Generally, this type of analysis uses mathematical models based on Markovian techniques (stochastic systems), which suffer from the state-space explosion problem. Mean field theory, however, indicates that the behavior of a stochastic system can, under certain conditions, be approximated by that of a deterministic system, avoiding the state-space explosion. In this work we use this strategy: based on the formal definition of the deterministic approximation and its conditions of existence, we develop a method to represent workflows, and their resources, as ordinary differential equations describing a deterministic system. Once the deterministic approximation has been defined, we perform the performance analysis on the deterministic model, verifying that the results obtained are a good approximation of the stochastic solution.
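The sketch below illustrates the general idea of a mean-field (fluid) approximation: instead of tracking every state of a stochastic system, one integrates ordinary differential equations for the expected populations. The two-stage workflow, the rates, and the server counts are invented for this example and are not the models analyzed in the thesis.

# Illustrative sketch (not the thesis's model): a fluid/mean-field approximation of a
# two-stage workflow. We integrate ODEs for the expected number of instances at each
# stage instead of enumerating the stochastic state space. Rates are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

LAMBDA = 5.0   # arrival rate of new workflow instances
MU1 = 2.0      # per-server service rate at stage 1
MU2 = 3.0      # per-server service rate at stage 2
C1, C2 = 3, 2  # number of servers (limited resources) at each stage

def fluid_model(t, x):
    q1, q2 = x
    # Throughput of each stage is limited by the number of busy servers.
    rate1 = MU1 * min(q1, C1)
    rate2 = MU2 * min(q2, C2)
    dq1 = LAMBDA - rate1   # instances entering minus leaving stage 1
    dq2 = rate1 - rate2    # stage 1 output feeds stage 2
    return [dq1, dq2]

sol = solve_ivp(fluid_model, t_span=(0.0, 20.0), y0=[0.0, 0.0],
                t_eval=np.linspace(0.0, 20.0, 201))
print("expected queue lengths at t=20:", sol.y[:, -1])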
44

RAfEG: Reference System Architecture and Prototype Implementation -- Excerpt from the Final Report of the Project "Referenzarchitektur für E-Government" (RAfEG) --

Kunis, Raphael, Rünger, Gudula 07 December 2007 (has links) (PDF)
The goal of the RAfEG project was to develop an "E-Government" reference architecture providing the components needed to realize information and communication technology systems (ICT systems) for typical processes in authorities subordinate to the state ministries of the interior. The RAfEG architecture represents a holistic approach covering many essential aspects, from the formal description of the domain-level relationships to the development of distributed software components for administrative business processes. Taking hardware prerequisites into account, the architecture provides the structure of software components for the automation of administrative work. The RAfEG architecture was designed as a spatially distributed, component-based software system. This required developing concepts for the efficient use of heterogeneous systems for interactive e-government applications. The architecture was prototypically implemented for plan approval procedures (Planfeststellungsverfahren/Plangenehmigungsprozesse), using the Regierungspräsidium Leipzig as an example. The project was characterized by the development of an end-to-end concept for optimal ICT support of administrative processes, ranging from the modeling of the domain-level relationships (functional concept), through the development-oriented, methodical mapping of the matters to be implemented (data-processing concept), to component-based software development (implementation concept). This concept resulted in a reference architecture for typical e-government processes. In addition to the purely functional, task-related aspects, security aspects as well as technical and organizational interfaces were examined in detail. The consistent use of open source software results in a cost-efficient, flexible reference solution which, thanks to its component-based structure, can also be adapted very well to specific requirements.
45

Sharing and Usage Control of Personal Information / Partage et Contrôle d'Usage de Données Personnelles

Katsouraki, Athanasia 28 September 2016 (has links)
We are currently experiencing an unprecedented explosion of personal data generated every day (e.g., from sensors, the web, social networks), and people feel exposed when they share and publish their data. There is a clear need for tools and methods to control how their data is collected, managed and shared. The challenges stem mainly from the lack of applications or technical solutions that secure the collection, management and sharing of personal data. The main challenge is to provide a secure and adaptable tool that can be used by any user, without a technical background. This thesis makes three important contributions to the field of privacy: (i) a prototype implementation of the UCONABC model, a usage control model, applied to an online social network scenario; (ii) an algebraic extension of UCON to control complex data sharing (by transforming personal data into sharable and/or publishable data); and (iii) the design, implementation and field testing of a secure platform to manage sensitive data collected through online survey forms.
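As a minimal sketch of the usage-control idea behind UCONABC, the example below combines the three decision factors (Authorizations, oBligations, Conditions) and shows attribute mutability, where using a resource updates its attributes. All names, rules and thresholds are invented for illustration and do not reflect the thesis's prototype.

# Minimal UCON_ABC-style usage decision (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    trust_level: int
    accepted_terms: bool = False    # obligation state

@dataclass
class Resource:
    owner: str
    min_trust: int
    remaining_views: int = 3        # mutable attribute

def system_conditions_ok(now_hour: int) -> bool:
    # Condition: environment-based, independent of subject and object.
    return 8 <= now_hour <= 20

def try_use(subject: Subject, res: Resource, now_hour: int) -> bool:
    authorized = subject.trust_level >= res.min_trust   # Authorization
    obligation_met = subject.accepted_terms              # oBligation
    condition_met = system_conditions_ok(now_hour)       # Condition
    if authorized and obligation_met and condition_met and res.remaining_views > 0:
        res.remaining_views -= 1                          # attribute mutation on use
        return True
    return False

alice = Subject("alice", trust_level=5, accepted_terms=True)
photo = Resource(owner="bob", min_trust=3)
print(try_use(alice, photo, now_hour=10), photo.remaining_views)  # True 2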
46

WorkflowDSL: Scalable Workflow Execution with Provenance

Fernando, Tharidu January 2017 (has links)
Scientific workflow systems enable scientists to perform large-scale, data-intensive scientific experiments using distributed computing resources. Due to the diversity of domains and the complexity of the technology, delivering a successful outcome efficiently requires collaboration between domain experts and technical experts. However, existing scientific workflow systems require a large investment of time to become familiar with them and to adapt existing workflows. Thus, many scientific workflows are still implemented in script-based languages (such as Python and R) because of familiarity and extensive third-party library support. In this thesis, we implement a framework that uses a domain-specific language enabling domain experts to collaborate on fine-tuning workflows, while technical experts use Python for task implementations. Moreover, the framework supports parallel execution without any specialized code. It also provides a provenance-capturing framework that enables users to analyse past executions and retrieve the complete lineage of any data item generated. Experiments performed with a real-world scientific workflow from the bioinformatics domain show that users were able to execute workflows efficiently while using our DSL for workflow composition and Python for task implementations. Moreover, we show that captured provenance can be useful for analysing past workflow executions.
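The pattern this abstract describes -- Python task implementations, transparent parallel execution, and per-task provenance records -- can be sketched roughly as follows. This is not the WorkflowDSL framework itself; the task names, the provenance record fields, and the use of a process pool are assumptions made for the example.

# Illustrative sketch: Python tasks, a parallel map step without task-specific parallel
# code, and provenance records captured for each task invocation in the sequential step.
import time
from concurrent.futures import ProcessPoolExecutor

PROVENANCE = []   # one record per recorded task execution

def run_task(task_fn, inputs):
    start = time.time()
    output = task_fn(*inputs)
    PROVENANCE.append({
        "task": task_fn.__name__,
        "inputs": inputs,
        "output": output,
        "started": start,
        "duration_s": time.time() - start,
    })
    return output

def align(seq):      # domain task written by a technical expert
    return seq.upper()

def count_gc(seq):   # another domain task
    return sum(seq.count(b) for b in "GC")

if __name__ == "__main__":
    sequences = ["acgt", "ggcc", "attc"]
    # Parallel "map" step; the tasks themselves contain no parallel code.
    with ProcessPoolExecutor() as pool:
        aligned = list(pool.map(align, sequences))
    # Sequential step with a provenance record per invocation.
    counts = [run_task(count_gc, (s,)) for s in aligned]
    print(counts)
    print(f"{len(PROVENANCE)} provenance records captured")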
47

Efficient in-situ workflows for time-critical applications on heterogeneous ecosystems

Feng Li (16627272) 21 July 2023 (has links)
In-situ workflows are a special class of scientific workflows in which different component applications (such as simulation, visualization, and analysis) run concurrently and data flows continuously between components during the whole workflow lifetime. Traditionally, simulations write large amounts of output data to persistent storage, which are later read for analysis/visualization. In comparison, in-situ workflows allow analysis/visualization components to consume simulation data while the simulations are still running, and thus reduce the I/O overhead. Recent research has focused on providing data transport libraries that help compose a group of applications into an integral in-situ workflow. However, only a few "performance-oriented" studies exist for in-situ workflows, and most of these focus on workflows with simple structures (e.g., a single producer and a single consumer), without considering heterogeneous environments. Being able to efficiently utilize heterogeneous computing resources such as multiple clouds and HPC systems can significantly accelerate real-world in-situ workflows and benefit applications that require both significant computation power and real-time outputs (e.g., identifying abnormal patterns in fluid dynamics). The goal of this dissertation is to provide resource planning algorithms and runtime support to improve in-situ workflow performance on heterogeneous environments.

This dissertation first investigates emerging applications of in-situ workflows, which usually include parallel simulation, visualization, and analysis components. Two representative real-world in-situ workflows are studied in detail: a real-time CFD machine learning/visualization workflow and a wildfire spreading workflow. These workflows showcase the capabilities of in-situ workflows, e.g., decoupled and accelerated computation and fast, near-real-time response times; however, there is a lack of resource planning and runtime support for general in-situ workflows. For resource planning, I first formulate the optimization problem and then design and implement a heuristic algorithm called "SNL" (Scheduled-Neighbor-Lookup). SNL considers the pipelined execution pattern of in-situ workflows and guides the resource planning of complex in-situ workflows to achieve higher workflow throughput. For runtime support, I design and implement "INSTANT", a runtime framework to configure, plan, launch, and monitor in-situ workflows in distributed computing environments. INSTANT provides intuitive interfaces to compose abstract in-situ workflows, manages in-site and cross-site data transfers with ADIOS2, and supports resource planning using profiled performance data. Experiments with the two use cases show that INSTANT can efficiently streamline the orchestration of complex in-situ workflows, and its resource planning capability allows INSTANT to plan and carry out fast workflow execution under different computing resource availabilities.
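A minimal sketch of the in-situ pattern itself (a simulation and an analysis component running concurrently and exchanging data in memory rather than through files) is shown below. It uses Python multiprocessing purely for illustration; it is not INSTANT and does not use ADIOS2, and the "abnormal pattern" check is an invented placeholder.

# Sketch of the in-situ pattern: producer and consumer run concurrently and stream data
# through a bounded in-memory queue instead of persistent storage. Illustrative only.
import multiprocessing as mp
import random

def simulation(queue, steps=20):
    for step in range(steps):
        field = [random.gauss(0.0, 1.0) for _ in range(1000)]  # one timestep of output
        queue.put((step, field))                                # stream it to the analysis
    queue.put(None)                                             # signal end of simulation

def analysis(queue):
    while True:
        item = queue.get()
        if item is None:
            break
        step, field = item
        mean = sum(field) / len(field)
        if abs(mean) > 0.1:                                     # placeholder anomaly check
            print(f"step {step}: unusual mean {mean:.3f}")

if __name__ == "__main__":
    q = mp.Queue(maxsize=4)   # bounded buffer applies back-pressure to the producer
    producer = mp.Process(target=simulation, args=(q,))
    consumer = mp.Process(target=analysis, args=(q,))
    producer.start(); consumer.start()
    producer.join(); consumer.join()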
48

Flexible Coordination Based on the Chemical Metaphor in Service Infrastructures

Fernandez, Héctor 20 June 2012 (has links) (PDF)
With the development of the Internet of services, dynamically composing loosely coupled distributed services has become the new challenge of large-scale computing. While service composition has become a key element of service-oriented platforms, most service composition systems follow a centralized approach, with full knowledge of the workflow's control and data flow, which raises a number of problems, notably regarding scalability and reliability. In a world where platforms are increasingly dynamic, new dynamic coordination mechanisms are required. In this context, natural metaphors, and in particular the chemical metaphor, have recently gained particular attention because they provide abstractions for the flexible coordination of entities. In this thesis, we present a workflow management system based on the chemical metaphor, which provides a high-level execution model for the centralized and decentralized execution of compositions (or workflows). According to this model, services are seen as molecules floating in a chemical solution. The coordination of these services is carried out by a set of reactions between these molecules, expressing the decentralized execution of a workflow. Furthermore, although the chemical paradigm is now considered a promising coordination model, it lacks experimental results; we have therefore developed a software prototype. Experiments were conducted with workflows from real applications to show the viability of our model.
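To give a feel for the chemical metaphor mentioned above, the toy sketch below implements a Gamma-style "solution": data items are molecules in a multiset, and a reaction rule repeatedly consumes molecules and produces new ones until the solution becomes inert. The rule (fusing two numbers into their sum) and the code are purely illustrative and unrelated to the thesis's workflow engine.

# Toy chemical-machine sketch (illustrative only): reactions fire on randomly chosen
# molecules until no pair reacts, leaving an "inert" solution.
import random

def react(solution, rule):
    """Apply the reaction rule to random pairs of molecules until no reaction fires."""
    solution = list(solution)
    while True:
        if len(solution) < 2:
            return solution
        i, j = random.sample(range(len(solution)), 2)
        products = rule(solution[i], solution[j])
        if products is None:   # this pair does not react; check whether any pair still can
            if not any(rule(a, b) is not None
                       for x, a in enumerate(solution)
                       for y, b in enumerate(solution) if x != y):
                return solution
            continue
        # Remove the reactants (higher index first) and add the products back.
        for k in sorted((i, j), reverse=True):
            solution.pop(k)
        solution.extend(products)

add_rule = lambda a, b: [a + b]   # any two numbers react into their sum

print(react([3, 1, 4, 1, 5, 9, 2, 6], add_rule))   # -> [31]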
49

MDAPSP - Uma arquitetura modular distribuída para auxílio à predição de estruturas de proteínas / MDAPSP - A modular distributed architecture to support the protein structure prediction

Oliveira, Edvard Martins de 09 May 2018 (has links)
Protein structure prediction (PSP) is a scientific process that simulates the folding of amino acid chains in order to discover the function of a protein in living organisms, a process that is highly expensive to carry out by in vivo methods. PSP is one of the most computationally demanding and challenging efforts in the current state of the art of bioinformatics. Many works use scientific gateways to provide tools for the execution and analysis of such experiments, along with scientific workflows to organize tasks and share information. However, these gateways can suffer from performance bottlenecks and structural failures, producing low-quality results. With the goal of offering alternatives to some of these limitations, and considering the complexity of the topics involved, this thesis proposes a modular architecture based on SOA concepts to provide computing resources to scientific gateways, with a focus on PSP experiments. The Modular Distributed Architecture to support Protein Structure Prediction (MDAPSP) is described conceptually and validated in a computer simulation model that explains its capabilities, details the operation of its modules and highlights its potential. The performance evaluation demonstrates the quality of the proposed algorithms, a reduction in the response time of PSP experiments, and the benefits of the novel algorithms, establishing the basis for a prototype of a functional architecture. The new modules can optimize PSP experiments in distributed environments and constitute an innovation in the resource provisioning model for scientific gateways.
50

Adaptation à la volée de situations d'apprentissage modélisées conformément à un langage de modélisation pédagogique / On-the-fly adaptation of learning situations modeled according to a pedagogical modeling language

Ouari, Salim 25 November 2011 (has links)
The work presented in this dissertation falls within the field of Technology Enhanced Learning (TEL), and more precisely the engineering of TEL systems within a "Learning Design" approach. This approach proposes building a TEL environment from the formal description of a learning activity. It assumes the existence of a modeling language, commonly called an EML (Educational Modelling Language), and of an engine capable of interpreting that language. LDL is the language on which we worked, in relation with the LDI infrastructure, which integrates an interpretation engine for LDL. The EML is used to produce a scenario, a formal model of a learning activity. The environment supporting the execution of the activity described in the scenario is then built semi-automatically within the infrastructure associated with the language, following this process: the scenario is created during a design phase; it is instantiated and deployed on a service platform during an operationalization phase (choice of the participants in the activity, assignment of roles, choice of resources and services); the instantiated and deployed scenario is then taken over by the engine, which interprets it to carry out its execution. In this setting, the activity unfolds in accordance with what was specified in the scenario. However, it is impossible to foresee in advance everything that may happen in an activity, since activities are by nature unpredictable. Unforeseen situations may arise and lead to disturbances in the activity, or even to deadlocks. It then becomes essential to provide the means to unblock the situation. The teacher may also want to exploit a situation or an opportunity by modifying the activity while it is running. This is the problem addressed in this thesis: providing the means to adapt an activity "on the fly", that is, during its execution, so as to handle an unforeseen situation and continue the activity. The proposal we formulate is based on the differentiation between the data involved in each of the three phases of the construction process: design, operationalization and execution. We exhibit a model for each of these phases, which organizes the data and positions them with respect to one another. Adapting an activity "on the fly" then amounts to modifying these models according to the situations to be handled. Some situations require modifying only one of the models, while others lead to propagating modifications from one model to another. We consider "on-the-fly" adaptation as a full-fledged activity carried out, in parallel with the learning activity, by a human supervisor who has an adequate environment to observe the activity, detect possible problems and remedy them by intervening in the learning activity through modifications of the models that specify it. To develop the tools supporting these modifications and integrate them into the LDI infrastructure, we used Model-Driven Engineering techniques. The models manipulated by these tools are thus first-class data, which gives the resulting tools greater flexibility and abstraction. The models are then exploited as levers to reach and modify the data targeted by the adaptation.
