71

AdaptFlow: Protocol-based Medical Treatment Using Adaptive Workflows

Greiner, U., Müller, R., Rahm, E., Ramsch, J., Heller, B., Löffler, M. 25 January 2019
Objectives: In many medical domains investigator-initiated clinical trials are used to introduce new treatments and hence act as implementations of guideline-based therapies. Trial protocols contain detailed instructions for conducting the therapy and additionally specify reactions to exceptional situations (for instance an infection or a toxicity). To increase quality in health care and raise the number of patients treated according to trial protocols, a consultation system is needed that efficiently supports the handling of these complex trial therapy processes. Our objective was to design and evaluate a consultation system that should 1) observe the status of the therapies currently being applied, 2) offer automatic recognition of exceptional situations and appropriate decision support, and 3) provide an automatic adaptation of affected therapy processes to handle exceptional situations.

Methods: We applied a hybrid approach that combines the process support for timely and efficient execution of therapy processes offered by workflow management systems with a knowledge and rule base and a mechanism for dynamic workflow adaptation that changes running therapy processes when the patient's condition changes.

Results and Conclusions: This approach has been implemented in the AdaptFlow prototype. We performed several evaluation studies on the practicability of the approach and the usefulness of the system. These studies show that the AdaptFlow prototype offers adequate support for the execution of real-world investigator-initiated trial protocols and is able to handle a large number of exceptions.
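As a rough illustration of the approach (not AdaptFlow's actual interface — the step names, events and rules below are invented), an exception rule can map a recognized situation to an automatic adaptation of the running therapy process:

```python
from dataclasses import dataclass, field

@dataclass
class TherapyWorkflow:
    """A running trial therapy, reduced to an ordered list of pending steps."""
    patient_id: str
    steps: list = field(default_factory=list)

    def insert_before(self, anchor: str, new_step: str):
        """Adapt the process by inserting a handling step before `anchor`."""
        self.steps.insert(self.steps.index(anchor), new_step)

# Invented exception rules: recognized situation -> adaptation of the process.
RULES = {
    "infection":  lambda wf: wf.insert_before("chemo_cycle_2", "antibiotic_therapy"),
    "leukopenia": lambda wf: wf.insert_before("chemo_cycle_2", "dose_reduction_check"),
}

def on_patient_event(wf: TherapyWorkflow, event: str) -> str:
    """Recognize an exceptional situation and adapt the affected therapy."""
    if event in RULES:
        RULES[event](wf)               # automatic workflow adaptation
        return f"workflow of {wf.patient_id} adapted for {event}"
    return "no exception rule matched; therapy continues unchanged"

wf = TherapyWorkflow("pat-042", ["chemo_cycle_1", "chemo_cycle_2", "follow_up"])
print(on_patient_event(wf, "infection"))
print(wf.steps)  # ['chemo_cycle_1', 'antibiotic_therapy', 'chemo_cycle_2', 'follow_up']
```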
72

Event-Oriented Dynamic Adaptation of Workflows: Model, Architecture and Implementation

Müller, Robert 28 November 2004
Workflow management is widely accepted as a core technology to support long-term business processes in heterogeneous and distributed environments. However, conventional workflow management systems do not provide sufficient flexibility to cope with the broad range of failure situations that may occur during workflow execution. In particular, most systems do not allow a workflow to be dynamically adapted in response to a failure situation, e.g., by dynamically dropping or inserting execution steps. As a contribution to overcoming these limitations, this dissertation introduces the agent-based workflow management system AgentWork. AgentWork supports the definition, the execution and, as its main contribution, the event-oriented and semi-automated dynamic adaptation of workflows. Two strategies for automatic workflow adaptation are provided. Predictive adaptation adapts workflow parts affected by a failure in advance (predictively), typically as soon as the failure is detected. This is advantageous in many situations and leaves enough time to meet organizational constraints for the adapted workflow parts. Reactive adaptation is typically performed when predictive adaptation is not possible; in this case, adaptation is performed when the affected workflow part is about to be executed, e.g., before an activity is executed it is checked whether it is subject to a workflow adaptation such as dropping, postponement or replacement. In particular, AgentWork provides the following contributions:

A Formal Model for Workflow Definition, Execution, and Estimation: In this context, AgentWork first provides an object-oriented workflow definition language. This language allows for the definition of a workflow's control and data flow. Furthermore, a workflow's cooperation with other workflows or workflow systems can be specified. Second, AgentWork provides a precise workflow execution model. This is necessary, as a running workflow is usually a complex collection of concurrent activities and data flow processes, and as failure situations and dynamic adaptations affect running workflows. Furthermore, mechanisms for estimating a workflow's future execution behavior are provided. These mechanisms are of particular importance for predictive adaptation.

Mechanisms for Determining and Processing Failure Events and Failure Actions: AgentWork provides mechanisms to decide whether an event constitutes a failure situation and what has to be done to cope with it. This is formally achieved by evaluating event-condition-action rules, where the event-condition part describes under which condition an event has to be viewed as a failure event, and the action part represents the actions needed to cope with the failure. To support the temporal dimension of events and actions, this dissertation provides a novel event-condition-action model based on a temporal object-oriented logic.

Mechanisms for the Adaptation of Affected Workflows: In case of failure situations it has to be decided how an affected workflow is to be dynamically adapted on the node and edge level. AgentWork provides a novel approach that combines the two principal strategies, reactive adaptation and predictive adaptation; depending on the context of the failure, the appropriate strategy is selected. Furthermore, control flow adaptation operators are provided which translate failure actions into structural control flow adaptations, and data flow operators adapt the data flow after a control flow adaptation, if necessary.

Mechanisms for the Handling of Inter-Workflow Implications of Failure Situations: AgentWork provides novel mechanisms to decide whether a failure situation occurring in a workflow affects other workflows that communicate and cooperate with it. In particular, AgentWork derives the temporal implications of a dynamic adaptation by estimating the duration that will be needed to process the changed workflow definition (in comparison with the original definition). Furthermore, qualitative implications of the dynamic change are determined; for this purpose, so-called quality measuring objects are introduced. All mechanisms provided by AgentWork allow users to interact during the failure handling process; in particular, the user may reject or modify suggested workflow adaptations.

A Prototypical Implementation: Finally, a prototypical CORBA-based implementation of AgentWork is described. This implementation supports the integration of AgentWork into the distributed and heterogeneous environments of real-world organizations such as hospitals or insurance enterprises.
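To make the ECA mechanism concrete, here is a hedged sketch of a failure rule with a (much simplified) temporal condition and the choice between predictive and reactive adaptation. All names and the crude time model are assumptions for illustration, not AgentWork's actual formalism, which rests on a temporal object-oriented logic:

```python
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    event: str                          # event type this rule reacts to
    condition: Callable[[dict], bool]   # is the event a failure in this context?
    action: str                         # adaptation to apply, e.g. "drop:<step>"

rules = [
    ECARule(
        event="lab_result",
        # Failure only if the value is out of range *and* recent enough to act on.
        condition=lambda ctx: ctx["value"] > ctx["threshold"]
                              and time.time() - ctx["observed_at"] < 3600,
        action="drop:next_medication_step",
    ),
]

def handle_event(event_type: str, ctx: dict, affected_step_started: bool):
    for rule in rules:
        if rule.event == event_type and rule.condition(ctx):
            # Predictive adaptation: the affected part has not started yet, so it
            # can be rewritten in advance. Otherwise fall back to reactive
            # adaptation at the moment the affected step is about to run.
            strategy = "reactive" if affected_step_started else "predictive"
            return strategy, rule.action
    return None, None

print(handle_event(
    "lab_result",
    {"value": 9.1, "threshold": 7.0, "observed_at": time.time()},
    affected_step_started=False,
))  # ('predictive', 'drop:next_medication_step')
```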
73

Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science

Carrión Collado, Abel Antonio 01 September 2017
Scientific Workflows (SWFs) are widely used to model applications in e-Science. In this programming model, scientific applications are described as a set of tasks with dependencies among them. During the last decades, the execution of scientific workflows has been successfully performed on the available computing infrastructures (supercomputers, clusters and grids) using software programs called Workflow Management Systems (WMSs), which orchestrate the workload on top of these computing infrastructures. However, because each computing infrastructure has its own architecture and each scientific application efficiently exploits one of these infrastructures, it is necessary to organize the way in which they are executed: WMSs need to get the most out of all the available computing and storage resources. Traditionally, scientific workflow applications have been extensively deployed on high-performance computing infrastructures (such as supercomputers and clusters) and grids. In recent years, however, the advent of cloud computing has opened the door to using on-demand infrastructures to complement or even replace local ones. This raises new issues, such as the integration of hybrid resources and the compromise between infrastructure reutilization and elasticity, all on the basis of cost-efficiency. The main contribution of this thesis is an ad-hoc solution for managing workflows that exploits the capabilities of cloud computing orchestrators to deploy resources on demand according to the workload, and that combines heterogeneous cloud providers (such as on-premise and public clouds) and traditional infrastructures (supercomputers and clusters) to minimize cost and response time. The thesis does not propose yet another WMS, but demonstrates the benefits of integrating cloud orchestration when running complex workflows. It reports experiments with several configurations and multiple heterogeneous backends, using a realistic comparative genomics workflow called Orthosearch, to migrate memory-intensive workload to public infrastructures while keeping other blocks of the experiment running locally. The running time and cost of the experiments are computed, and best practices are suggested. / Carrión Collado, AA. (2017). Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86179
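As a sketch of the kind of placement decision this targets (not the scheduler actually developed in the thesis), the following picks a backend per task by weighing estimated cost against response time; the backend names and all figures are made up:

```python
# Candidate execution backends with illustrative price/performance figures.
BACKENDS = [
    {"name": "local_cluster", "cost_per_hour": 0.0, "speed": 1.0, "max_mem_gb": 64},
    {"name": "public_cloud",  "cost_per_hour": 0.9, "speed": 1.4, "max_mem_gb": 512},
]

def place_task(est_hours: float, mem_gb: float, cost_weight: float = 0.5) -> str:
    """Choose the backend minimizing a weighted cost/runtime score.

    Memory-hungry tasks that exceed local capacity are forced to the cloud,
    mirroring the Orthosearch experiments that offload memory-intensive
    blocks while the rest of the workflow keeps running locally.
    """
    feasible = [b for b in BACKENDS if b["max_mem_gb"] >= mem_gb]

    def score(b):
        runtime = est_hours / b["speed"]
        cost = runtime * b["cost_per_hour"]
        return cost_weight * cost + (1 - cost_weight) * runtime

    return min(feasible, key=score)["name"]

print(place_task(est_hours=2.0, mem_gb=32))    # local_cluster (free and feasible)
print(place_task(est_hours=2.0, mem_gb=256))   # public_cloud (only feasible option)
```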
74

Upravljanje tokovima aktivnosti u distributivnom menadžment sistemu / Workflow management system for DMS

Nedić, Nemanja 24 February 2016
The work presents an approach to improving the performance of large-scale distributed utility management systems such as DMS. This goal is accomplished through intelligent workflow management: workflows are divided into atomic tasks, which are scheduled onto computing resources for execution. For this purpose various scheduling algorithms were developed and thoroughly tested. This approach has provided greater utilization of computing resources, which in turn has resulted in better performance.
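A minimal sketch of the underlying scheduling idea — atomic tasks greedily assigned to the resource that frees up earliest. The DMS algorithms developed in the thesis are more elaborate, and the task names here are invented:

```python
import heapq

def schedule(tasks, n_workers):
    """Greedy min-load scheduling: each (name, duration) task goes to the
    worker that becomes free earliest — one simple way to raise utilization
    across a pool of computing resources."""
    workers = [(0.0, w) for w in range(n_workers)]   # (time_when_free, worker_id)
    heapq.heapify(workers)
    assignment = {}
    for name, duration in sorted(tasks, key=lambda t: -t[1]):  # longest first
        free_at, w = heapq.heappop(workers)
        assignment[name] = w
        heapq.heappush(workers, (free_at + duration, w))
    makespan = max(t for t, _ in workers)
    return assignment, makespan

tasks = [("topology_check", 4.0), ("load_flow", 3.0),
         ("state_estimation", 2.0), ("report", 1.0)]
assignment, makespan = schedule(tasks, n_workers=2)
print(assignment, makespan)
# {'topology_check': 0, 'load_flow': 1, 'state_estimation': 1, 'report': 0} 5.0
```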
75

Die Einführung von Vorgangsbearbeitungssystemen in der öffentlichen Verwaltung als IT-organisatorischer Gestaltungsprozeß / The introduction of document and workflow management systems in public administration as an IT-organizational design process

Knaack, Ildiko 08 December 1999
The work develops a conceptual framework for the IT-organizational design process involved in introducing document and workflow management systems, taking into account the distinctive features of public planning administration (ministerial administration). Like no IT system before, the introduction of a document and workflow management system intervenes in the processes and structures of public administration. Such systems open up fundamentally new ways of handling administrative processes, and at the same time they require an organization of case handling that is optimized for, and adapted to, the IT support. The introduction process is characterized by a large number of IT-related and organizational dependencies and interactions. Information technology and organization are considered two determinants of design that must both be taken into account when carrying out IT-organizational design measures. The need for design arises from the way the document and workflow management system is used; the goal is a use of the system that is optimized step by step and adapted to the specifics of the agency. Starting from a systematization of conventional and IT-supported case handling, and from a study of the extent to which methods and concepts of software engineering, public-sector modernization and organization science take IT-organizational interactions into account, a three-level model of IT-organizational design is developed as the conceptual framework. Since the concrete implementation of IT-organizational design measures depends on the specifics of the respective agency and of the system to be introduced, the IT-organizational design process is illustrated by different levels of use of document and workflow management systems and by examples of IT-organizational design from the DOMEA® project.
76

Semantics, verification, and implementation of workflows with cancellation regions and OR-joins

Wynn, Moe Thandar January 2006
Workflow systems aim to provide automated support for the conduct of certain business processes. Workflow systems are driven by workflow specifications which, among other things, capture the execution interdependencies between various activities. These interdependencies are modelled by means of different control flow constructors, e.g., sequence, choice, parallelism and synchronisation. It has been shown in the research on workflow patterns that the support for and the interpretation of various control flow constructs varies substantially across workflow systems. Two of the most problematic patterns relate to the OR-join and to cancellation.

An OR-join is used in situations where we need to model "wait and see" behaviour for synchronisation. Different approaches assign a different (often only intuitive) semantics to this type of join, though they do share the common theme that synchronisation is only to be performed for active paths. Depending on context assumptions this behaviour may be relatively easy to deal with, though in general its semantics is complicated, both from a definition point of view (in terms of formally capturing a desired intuitive semantics) and from a computational point of view (how does one determine whether an OR-join is enabled?). Many systems and languages struggle with the semantics and implementation of the OR-join because its non-local semantics requires synchronisation that depends on an analysis of future execution paths, which may require some non-trivial reasoning. The presence of cancellation features and other OR-joins in a workflow further complicates the formal semantics of the OR-join.

The cancellation feature is commonly used to model external events that can change the behaviour of a running workflow. It can be used either to disable activities in certain parts of a workflow or to stop currently running activities. Even though it is possible to cancel activities in workflow systems using some sort of abort function, many workflow systems do not provide direct support for this feature in the workflow language. Sometimes, cancellation affects only a selected part of a workflow and other activities can continue after the cancellation action. As cancellation occurs naturally in business scenarios, comprehensive support in a workflow language is desirable.

We take on the challenge of providing formal semantics, verification techniques and an implementation for workflows with those features. This thesis addresses three interrelated issues for workflows with cancellation regions and OR-joins. The concept of the OR-join is examined in detail in the context of the workflow language YAWL, a powerful workflow language designed to support a collection of workflow patterns and inspired by Petri nets. The OR-join semantics has been redesigned to represent a general, formal, and decidable approach for workflows in the presence of cancellation regions and other OR-joins. This approach exploits a proposed link between YAWL and reset nets, a variant of Petri nets with a special type of arc that can remove all tokens from a place. Next, we explore verification techniques for workflows with cancellation regions and OR-joins. Four structural properties have been identified and a verification approach that exploits coverability and reachability notions from reset nets has been proposed. The work on verification techniques has highlighted potential problems with calculating state spaces for large workflows.
Applying reduction rules before carrying out verification can decrease the size of the problem by cutting down the size of the workflow that needs to be examined while preserving some essential properties. Therefore, we have extended the work on verification by proposing reduction rules for reset nets and for YAWL nets with and without OR-joins. The proposed OR-join semantics as well as the proposed verification approach have been implemented in the YAWL environment.
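To illustrate the link to reset nets, the sketch below implements a toy reset-net firing rule (reset arcs empty whole places, modelling cancellation regions) and a naive OR-join check that explores reachable markings to decide whether more synchronization input may still arrive. The net is invented, and real YAWL OR-join analysis uses backward coverability techniques on reset nets rather than this brute-force search:

```python
from collections import Counter

# A tiny reset net: a transition consumes from `pre`, produces into `post`,
# and empties every place listed in `reset` (all tokens removed at once).
TRANSITIONS = {
    "t_split":  {"pre": {"start": 1}, "post": {"a": 1, "b": 1}, "reset": []},
    "t_work_a": {"pre": {"a": 1},     "post": {"join_in1": 1},  "reset": []},
    # Cancellation region: completing branch b wipes whatever is left in a.
    "t_cancel": {"pre": {"b": 1},     "post": {"join_in2": 1},  "reset": ["a"]},
}

def enabled(m, t):
    return all(m[p] >= n for p, n in TRANSITIONS[t]["pre"].items())

def fire(m, t):
    m = Counter(m)
    for p, n in TRANSITIONS[t]["pre"].items():
        m[p] -= n
    for p in TRANSITIONS[t]["reset"]:
        m[p] = 0                        # reset arc: remove *all* tokens
    for p, n in TRANSITIONS[t]["post"].items():
        m[p] += n
    return +m                           # drop zero entries

def or_join_enabled(marking, inputs):
    """Naive 'wait and see': the join may fire only if some input place is
    marked and no reachable marking can add a token to any input place."""
    if not any(marking[p] for p in inputs):
        return False
    seen, frontier = set(), [marking]
    while frontier:
        m = frontier.pop()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        if any(m[p] > marking[p] for p in inputs):
            return False                # more synchronization input can arrive
        frontier += [fire(m, t) for t in TRANSITIONS if enabled(m, t)]
    return True

m = fire(fire(Counter({"start": 1}), "t_split"), "t_work_a")
print(or_join_enabled(m, ["join_in1", "join_in2"]))   # False: t_cancel can still run
print(or_join_enabled(fire(m, "t_cancel"), ["join_in1", "join_in2"]))  # True
```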
77

Foundations of process-aware information systems

Russell, Nicholas Charles January 2007
Over the past decade, the ubiquity of business processes and their need for ongoing management in the same manner as other corporate assets has been recognized through the establishment of a dedicated research area: Business Process Management (or BPM). There is a wide range of potential software technologies on which a BPM offering can be founded. Although there is significant variation between these alternatives, they all share one common factor – their execution occurs on the basis of a business process model – and consequently, this field of technologies can be termed Process-Aware Information Systems (or PAIS). This thesis develops a conceptual foundation for PAIS based on the results of a detailed examination of contemporary offerings including workflow and case handling systems, business process modelling languages and web service composition languages. This foundation is based on 126 patterns that identify recurrent core constructs in the control-flow, data and resource perspectives of PAIS. These patterns have been used to evaluate some of the leading systems and business process modelling languages. The thesis also proposes a generic graphical language for defining exception handling strategies that span these perspectives. On the basis of these insights, a comprehensive reference language – newYAWL – is developed for business process modelling and enactment. This language is formally defined and an abstract syntax and operational semantics are provided for it. An assessment of its capabilities is provided through a comprehensive patterns-based analysis which allows direct comparison of its functionality with other PAIS. newYAWL serves as a reference language and many of the ideas embodied within it are also applicable to existing languages and systems. The ultimate goal of both the patterns and newYAWL is to improve the support and applicability of PAIS.
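A toy rendering of patterns-based evaluation — each offering is scored by which recurrent constructs it supports, enabling direct comparison. The five-pattern catalogue and the product names other than newYAWL are invented, not the thesis's actual 126-pattern assessment:

```python
# Invented mini-catalogue: a few control-flow patterns and which
# (hypothetical) offerings support them.
SUPPORT = {
    "sequence":        {"SystemA", "SystemB", "newYAWL"},
    "parallel_split":  {"SystemA", "SystemB", "newYAWL"},
    "synchronization": {"SystemA", "newYAWL"},
    "multi_choice":    {"newYAWL"},            # OR-split
    "cancel_region":   {"newYAWL"},
}

def evaluate(offering: str) -> float:
    """Fraction of the pattern catalogue an offering supports."""
    supported = [p for p, systems in SUPPORT.items() if offering in systems]
    return len(supported) / len(SUPPORT)

for system in ("SystemA", "SystemB", "newYAWL"):
    print(f"{system}: {evaluate(system):.0%}")
# SystemA: 60%   SystemB: 40%   newYAWL: 100%
```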
78

Entwicklung einer Methode für die Integration von Standardsoftware am Beispiel der Integration von Prüfsystemen in die Leistungsabrechnung von Krankenversicherungen / Development of a method for the integration of standard software, using the integration of auditing systems into health insurance claims settlement as an example

Hutter, Michael January 2009
Also published as doctoral dissertation, University of St. Gallen, 2009.
79

Um estudo sobre a aplicação do padrão BPMN (Business Process Model and Notation) para a modelagem do processo de desenvolvimento de produtos numa empresa de pequeno porte do segmento metal-mecânico / A study on applying the BPMN (Business Process Model and Notation) standard to model the product development process in a small metal-mechanic company

Mocrosky, Jeferson Ferreira 03 October 2012
Business process modeling is an approach from the 1990s for improving organizational performance that is now returning as a strong contributor to that goal. With this approach, this research models the Product Development Process (PDP) of a mechanical manufacturing company that builds machines and equipment to support production in meat-packing plants in the western region of Santa Catarina. The PDP modeling uses the Business Process Model and Notation (BPMN) standard supported by the Intalio BPMS application. The objective of the research is to evaluate BPMN modeling for formalizing the Product Development Process and to determine how to handle the complexities and interactions intrinsic to this process in small mechanical manufacturing companies. The BPMN modeling is structured around an assessment of the PDP of a selected company and on-site observation of the process execution. The methodology adopted for developing the company's PDP model considered the following aspects: i) study of a company; ii) informational modeling; iii) model automation and execution; iv) implementation of the PDP model in the company. The characteristics of the Unified Model of Rozenfeld et al. (2006), used as a reference to systematize the modeling of the company's Product Development Process through an evaluation of that process, are also presented. A brief description covers the main standards used in Business Process Modeling, including the main software applications that support them. The results were divided into two parts: static and dynamic abstract models. The static abstract model is informational in character, rich in detail, in the form of a detailed process map. For automation, this static model was unfolded into two other abstract models, configured to become dynamic, so that their implementation and execution satisfactorily reflect how the process actually runs in the company. The first dynamic abstract model implemented and executed defines the product and ends with the customer's decision on the quotation requested from the company's sales department. The second dynamic abstract model starts with the customer's approval of the quotation, triggers the informational design activities, and ends with the release to production. This approach aims to minimize the complexity of modeling the process and the company's specific peculiarities. Modeling the PDP with the reference model and applying the BPMN standard supported by Intalio BPMS made it possible to report best practices, lessons learned, and the difficulties and facilities encountered. In addition, the PDP formalized through modeling with BPMN and Intalio BPMS brought significant changes to the current execution of the process, contributing to greater integration among the participants.
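Reduced to plain control flow, the two-stage split described above might look as follows (step names are illustrative; in the thesis these are executable BPMN models running on Intalio BPMS):

```python
def quotation_process(request):
    """First dynamic model: define the product and end with the customer's
    decision on the quotation prepared by the sales department."""
    spec = {"item": request["item"], "specs": request["specs"]}
    return {"spec": spec, "price_estimate": 12_500.00}   # illustrative figure

def development_process(quotation, approved: bool):
    """Second dynamic model: starts with the customer's approval of the
    quotation and ends with the release to production."""
    if not approved:
        return ["process_ends_quotation_rejected"]
    steps = ["informational_design", "conceptual_design", "detailing"]
    # In the executed model each step is a BPMN task handled by the engine.
    return steps + ["release_to_production"]

q = quotation_process({"item": "conveyor", "specs": {"length_m": 12}})
print(development_process(q, approved=True))
# ['informational_design', 'conceptual_design', 'detailing', 'release_to_production']
```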
80

Conception d’une architecture de services d’intelligence ambiante pour l’optimisation de la qualité de service de transmission de messages en e-santé / Design of an ambient intelligence services architecture for optimizing quality of service of message transmission in eHealth

Guizani, Nachoua 30 September 2016
Routing policy management for eHealth messages in ubiquitous environments raises several key issues, such as taking into account the diversity and specificity of the different use cases and actors, as well as the dynamicity of the medical, social, logistic and environmental contexts. We propose an original, autonomous and adaptive service orchestration methodology aiming at optimizing message flow and personalizing transmission quality by timely sending the messages to the appropriate recipients. Our solution consists in a generic, model-driven architecture in which domain information and context models were designed according to user needs and requirements. Our approach composes, in real time, services for the dynamic fusion and management of heterogeneous information from the source, target and message ecosystems, driven by artificial intelligence methods for routing decision support. The aim is to ensure reliable, personalized and dynamic context-aware communication, whatever the scenario and the message type (alarm, technical, etc.). Our architecture is applicable to various domains and has been strengthened by business process modeling (BPM) to make the operation of its constituent services explicit. The proposed framework is based on ontologies and is compatible with the HL7 V3 standard. Self-adaptation of the routing decision process is performed by means of a dynamic Bayesian network, and supervision of message status relies on a mathematical model using timed Petri nets.
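As a hedged sketch of the routing decision support: the thesis uses a full dynamic Bayesian network over HL7 V3 messages, whereas the hand-set probabilities and recipient names below are invented stand-ins:

```python
# P(recipient is appropriate | message type), approximated with hand-set
# weights; in the thesis this is a dynamic Bayesian network updated as the
# medical, social and logistic context evolves.
CANDIDATES = {
    "on_call_cardiologist": {"alarm": 0.90, "technical": 0.10},
    "ward_nurse":           {"alarm": 0.70, "technical": 0.30},
    "it_support":           {"alarm": 0.05, "technical": 0.95},
}

AVAILABILITY = {"on_call_cardiologist": 0.4, "ward_nurse": 0.9, "it_support": 0.8}

def route(message_type: str, deadline_minutes: float):
    """Pick the recipient maximizing P(appropriate) * P(available) — a
    stand-in for the decision-support step of the architecture — and select
    a channel according to the required delay."""
    def score(recipient):
        return CANDIDATES[recipient][message_type] * AVAILABILITY[recipient]
    best = max(CANDIDATES, key=score)
    channel = "priority channel" if deadline_minutes < 10 else "normal channel"
    return best, channel

print(route("alarm", deadline_minutes=5))        # ('ward_nurse', 'priority channel')
print(route("technical", deadline_minutes=120))  # ('it_support', 'normal channel')
```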
