541

A science-gateway for workflow executions: Online and non-clairvoyant self-healing of workflow executions on grids

Da Silva, Rafael Ferreira 29 November 2013
Science gateways, such as the Virtual Imaging Platform (VIP), enable transparent access to distributed computing and storage resources for scientific computations. However, their large scale and the number of middleware systems involved lead to many errors and faults. In practice, science gateways are often backed by substantial support staff who monitor running experiments, performing simple yet crucial actions such as rescheduling tasks, restarting services, killing misbehaving runs, or replicating data files to reliable storage facilities. Fair quality of service (QoS) can then be delivered, yet at the cost of significant human intervention. Automating such operations is challenging for two reasons. First, the problem is online by nature: no reliable prediction of user activity can be assumed, and new workloads may arrive at any time. The considered metrics, decisions, and actions therefore have to remain simple and yield results while the application is still executing. Second, it is non-clairvoyant, owing to the lack of information about applications and resources in production conditions. Computing resources are usually provisioned dynamically from heterogeneous clusters, clouds, or desktop grids without any reliable estimate of their availability and characteristics. Models of application execution times are hardly available either, in particular on heterogeneous computing resources. In this thesis, we propose a general self-healing process for the autonomous detection and handling of operational incidents in workflow executions. Instances are modeled as Fuzzy Finite State Machines (FuSM), where state degrees of membership are determined by an external healing process. Degrees of membership are computed from metrics under the assumption that incidents have outlier performance, e.g. a site or a particular invocation behaves differently from the others. Based on incident degrees, the healing process identifies incident levels using thresholds determined from the platform history. A specific set of actions is then selected by association rules among incident levels.
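As a rough illustration of the mechanism this abstract describes, the following Python sketch computes an incident degree as an outlier score relative to peers and maps it to a discrete level using thresholds derived from history. The metric, the quartile thresholds, and all numbers are invented for illustration and are not taken from VIP.

```python
from statistics import median

def incident_degree(value: float, peer_values: list[float]) -> float:
    """Degree in [0, 1]: relative deviation from the median of the peers."""
    m = median(peer_values)
    if m == 0:
        return 1.0 if value > 0 else 0.0
    return min(1.0, abs(value - m) / m)

def incident_level(degree: float, history: list[float]) -> int:
    """Map a degree to a discrete level using thresholds derived from
    the platform history (here simply its quartiles)."""
    hist = sorted(history)
    q = [hist[len(hist) // 4], hist[len(hist) // 2], hist[3 * len(hist) // 4]]
    return sum(degree > t for t in q)  # 0 (benign) .. 3 (severe)

# A site whose task error rate is far above those of its peers:
d = incident_degree(0.40, [0.05, 0.04, 0.06, 0.05])
print(d, incident_level(d, history=[0.1, 0.2, 0.3, 0.5, 0.8]))  # 1.0 3
```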
542

Using a workflow management system to support software development based on an extended Rational Unified Process to reach maturity model levels 2 and 3

Manzoni, Lisandra Vielmo January 2001
This master's dissertation describes an assessment of the Rational Unified Process (RUP) based on the Capability Maturity Model for Software (SW-CMM or CMM), and the implementation of a prototype tool to support this process based on an off-the-shelf workflow management system, Exchange 2000 Server. The prototype developed is called Project Management Environment (PME). RUP was assessed against the key practices described by the Capability Maturity Model (CMM) of the Carnegie Mellon Software Engineering Institute. The assessment identified the support that RUP offers to organizations aiming at CMM levels 2 and 3, and resulted in proposals to complement RUP in order to satisfy the key process areas of CMM. CMM presents a process assessment model aimed at achieving organizational process maturity; it is specific to software development, strongly emphasizes continuous improvement, and is already used successfully by several organizations. RUP describes how to apply best practices of software engineering. The use of a workflow management system, in fact a collaboration server, to support the software development process was then experimented with. The resulting environment was assessed against the requirements identified by various researchers for an environment that effectively supports a software development process. The prototype is a web-based process support system that assists the management of software development projects and helps the interaction and exchange of information between dispersed members of a development team. RUP presents a well-defined approach to software project management and software engineering processes, but it is not centered on systems management concerns; it therefore lacks activities covering cost management, human resource management, communications management, and procurement management. PME is a flexible tool that can be accessed through the Internet, supports collaboration between team members, and offers the benefits of the Web, such as intuitive navigation through links and pages. It supports management control, providing options to plan and monitor the project, and it supports process events, such as state changes, notifying users of their assigned tasks.
543

Sinfonia: A collaborative and flexible approach for modeling and executing business processes

Loja, Luiz Fernando Batista 10 February 2011
To offer products and services, organizations need to execute business processes, and the effectiveness of these processes is critical to the success of any organization. For this reason, there have been major efforts to develop techniques and tools aimed at the improvement of business processes. One result of these efforts is BPMS (Business Process Management Systems): software that helps define, analyze, and manage business processes. Although many BPMS are available, current systems only support running a limited set of processes. Notably, BPMS do not allow the execution of flexible processes, restricting their operation to defined processes. In addition, they do not provide a collaborative environment for process modeling. This work presents a software architecture for the management of business processes that overcomes these limitations of current BPMS. The architecture was used to implement a software tool called Sinfonia, which includes a metamodel for business processes, a process execution engine, and a graphical process modeler. The innovative aspects of this work include support for defining and executing flexible processes, such as empirical and ad hoc processes, and support for collaborative process modeling. Sinfonia has enough expressive power to define and execute the key business process patterns described in the literature. The tool's ability to express flexible processes and to promote collaboration in the modeling of business processes was evaluated in an experiment involving fourteen participants. The results of this experiment provide evidence that Sinfonia contributes to the evolution of BPMS.
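A minimal sketch of what "flexible" execution means in this context: a running process instance can be extended with ad hoc tasks instead of being fixed at definition time. The class and task names below are invented and do not reflect Sinfonia's actual metamodel.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessInstance:
    name: str
    pending: list[str] = field(default_factory=list)
    done: list[str] = field(default_factory=list)

    def add_task(self, task: str) -> None:
        """Ad hoc change: extend a *running* instance with a new task."""
        self.pending.append(task)

    def complete_next(self) -> None:
        self.done.append(self.pending.pop(0))

p = ProcessInstance("hiring", pending=["screen CV", "interview"])
p.complete_next()
p.add_task("extra technical test")    # allowed mid-execution
print(p.pending)                       # ['interview', 'extra technical test']
```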
544

An assessment method for the EKD business process model

Silvia Inês Dallavalle de Pádua 03 December 2004
Nowadays companies need systems that adapt quickly to constant changes in the business environment, and to guarantee that systems fulfill their purpose, developers must have a deeper understanding of the organization, its objectives, goals, and market strategies. The main problem in the development of software systems has been the difficulty of obtaining information about the application domain. This difficulty led to the emergence of enterprise modeling techniques, a valuable activity for understanding the business environment. EKD (Enterprise Knowledge Development) is a methodology that provides a systematic and controlled way to analyze, understand, develop, and document an organization. Unfortunately, it lacks a well-defined syntax and semantics, which hinders more complex analyses of its models. As a result, the EKD business process model can be ambiguous and hard to analyze, especially for more complex systems, and it is not possible to verify the consistency and completeness of the model. In this work, these problems are studied through an approach based on Petri nets. The formalism of Petri nets makes them an important modeling technique for representing processes. Furthermore, Petri nets allow each step of an operation to be traced without ambiguity, and they have efficient analysis methods that guarantee a model is free of errors. This work therefore develops an assessment method for the EKD business process model (MPN-EKD). With this method it is possible to verify whether a model contains construction errors or deadlocks. The method can be applied to models aimed at information system development or workflow control, and it can also be used to study work strategies and to simulate workflows.
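The kind of check Petri nets enable can be sketched in a few lines: enumerate the reachable markings of a small net and report markings where nothing can fire before the final place is reached. This is a generic illustration of deadlock detection by reachability, not the thesis's MPN-EKD method; the example net is invented.

```python
# A marking maps place names to token counts; a transition is a pair
# (inputs, outputs) of such maps.
def enabled(m, t):
    return all(m.get(p, 0) >= n for p, n in t[0].items())

def fire(m, t):
    m = dict(m)
    for p, n in t[0].items():
        m[p] -= n
    for p, n in t[1].items():
        m[p] = m.get(p, 0) + n
    return m

def deadlocks(m0, transitions, final_place="end"):
    """Explore all reachable markings; report those where no transition
    can fire and the final place was not reached (the model is stuck)."""
    seen, stack, dead = set(), [m0], []
    while stack:
        m = stack.pop()
        key = tuple(sorted((p, n) for p, n in m.items() if n))
        if key in seen:
            continue
        seen.add(key)
        succ = [fire(m, t) for t in transitions if enabled(m, t)]
        if not succ and m.get(final_place, 0) == 0:
            dead.append(m)
        stack.extend(succ)
    return dead

# Two sequential tasks sharing one resource token:
net = [({"start": 1, "res": 1}, {"busy": 1}),
       ({"busy": 1}, {"end": 1, "res": 1})]
print(deadlocks({"start": 1, "res": 1}, net))  # [] -> no stuck marking
```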
545

Electronic contract formation and fulfillment in the Intelligent Commerce System (ICS)

Oliveira, Nathália Ribeiro Schalcher de 27 February 2004
This work is part of the ICS (Intelligent Commerce System) project, developed in the Intelligent Systems Laboratory (LSI) at the Federal University of Maranhão (UFMA) under the coordination of Prof. Dr. Sofiane Labidi. Its main objective is to develop an intelligent environment to handle the last two phases of the ICS life cycle: contract formation and fulfillment. The ICS architecture and environment are presented, along with the aspects of their development. The implemented system uses intelligent agents to automate both the contracting of deals closed among ICS users and the monitoring of the obligations they assume. We propose the use of Semantic Web standards and tools to manage the information contained in the contracts. For monitoring, we propose a model that combines temporal workflow with active rules based on the ECAA paradigm of active database systems.
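A hedged sketch of an ECAA rule (read here as Event-Condition-Action-Alternative action, the active-database paradigm the abstract mentions): when an event arrives, a condition is tested; the action runs if it holds, the alternative action otherwise. The contract scenario, field names, and dates below are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class EcaaRule:
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]
    alternative: Callable[[dict], None]

    def on(self, event: str, ctx: dict) -> None:
        if event != self.event:
            return
        (self.action if self.condition(ctx) else self.alternative)(ctx)

# Monitoring a delivery obligation assumed in a contract:
rule = EcaaRule(
    event="delivery_confirmed",
    condition=lambda ctx: ctx["delivered_on"] <= ctx["deadline"],
    action=lambda ctx: print("obligation fulfilled"),
    alternative=lambda ctx: print("obligation violated: notify parties"),
)
rule.on("delivery_confirmed",
        {"delivered_on": date(2004, 3, 1), "deadline": date(2004, 2, 27)})
```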
546

A computational strategy for evaluating the mechanical properties of lightweight aggregate concrete

Bonifácio, Aldemon Lage 16 March 2017
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Concrete made from lightweight aggregates, or structural lightweight concrete, is considered a versatile construction material, widely used throughout the world in many areas of civil construction, such as prefabricated buildings, offshore platforms, and bridges. However, modeling the mechanical properties of this type of concrete, such as the modulus of elasticity and the compressive strength, is complex, mainly due to the intrinsic heterogeneity of the material's components. A predictive model of the mechanical properties of lightweight aggregate concrete can help reduce project time and cost by providing essential data for structural calculations. To this end, this work develops a computational strategy for evaluating the mechanical properties of lightweight concrete by combining computational modeling of the concrete via the Finite Element Method (FEM) with the computational intelligence methods Support Vector Regression (SVR) and Artificial Neural Networks (ANN). In addition, based on the scientific workflow and many-task computing approaches, a computational tool was developed to facilitate and automate the execution of the numerical scientific experiments that predict the mechanical properties.
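To make the SVR step concrete, here is a minimal sketch using scikit-learn. The feature names, mix values, and strength figures are invented; in the thesis, training data of this kind would come from FEM simulations of the concrete.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical features: [cement (kg/m3), water/cement ratio, aggregate density]
X = [[400, 0.45, 1.2], [350, 0.50, 1.1], [450, 0.40, 1.4], [380, 0.48, 1.0]]
y = [38.0, 31.5, 44.2, 29.8]  # compressive strength (MPa), illustrative values

# Scale features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict([[420, 0.42, 1.3]]))  # predicted strength for a new mix
```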
547

Dataflow in colored Petri nets and in actor-oriented workflow graphs

Grace Anne Pontes Borges 11 September 2008
Three decades ago, corporate information systems were designed to support the execution of individual tasks. Today's information systems also need to manage an organization's workflows and business processes. In scientific communities of physicists, astronomers, biologists, geologists, and others, information systems differ from those of corporate environments in several ways: repetitive tasks (such as re-executing the same experiment), the processing of raw data into results suitable for publication, and the control of experiments across different hardware and software environments. The differing characteristics of the corporate and scientific environments have led to tools and formalisms that prioritize either control flow or dataflow. However, there are situations in which data transfer and control flow must be handled simultaneously. This work aims to characterize and delimit the representation and control of dataflow in business processes and scientific workflows. To this end, two tools are compared: CPN Tools and KEPLER, based respectively on the formalisms of colored Petri nets and actor-oriented workflow graphs. The comparison is carried out through implementations of practical cases, using dataflow patterns as the basis of comparison.
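What distinguishes colored Petri nets, namely tokens that carry data values inspected by transition guards, can be sketched briefly. The order-routing example below is invented and unrelated to the thesis's actual case studies; it only shows why colored tokens let one net model dataflow as well as control flow.

```python
# Tokens in place "orders" carry data (their "color"): an id and an amount.
orders = [("o1", 120.0), ("o2", 800.0)]
approved, review = [], []                  # output places

def fire_check(token):
    """Transition whose guard inspects the token's data."""
    oid, amount = token
    if amount <= 500.0:                    # guard: small orders pass
        approved.append(token)
    else:                                  # large orders need review
        review.append(token)

while orders:                              # fire while tokens remain
    fire_check(orders.pop(0))
print(approved, review)                    # [('o1', 120.0)] [('o2', 800.0)]
```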
548

Predictive Resource Management for Scientific Workflows

Witt, Carl Philipp 21 July 2020
Scientific experiments produce data at unprecedented volumes and resolutions, and extracting insights from large sets of raw data requires complex analysis workflows. Scientific workflows enable such data analyses at scale. To achieve scalability, most workflow management systems are designed as an additional layer on top of distributed resource managers, such as batch schedulers or distributed data processing frameworks. However, like distributed resource managers, they do not automatically determine the amount of resources required for executing the individual tasks in a workflow; the status quo is that workflow management systems delegate the challenge of estimating resource usage to the user. This limits the performance and ease of use of scientific workflow management systems, as users often lack the time, expertise, or incentives to estimate resource usage accurately. This thesis investigates how to learn and predict resource usage during workflow execution. In contrast to prior work, it takes an integrated perspective on prediction and scheduling, which introduces various challenges, such as quantifying the effects of prediction errors on system performance. The main contributions are: 1. A survey of peak memory usage prediction in batch processing environments, providing an overview of prior machine learning approaches, commonly used features, evaluation metrics, and data sets. 2. A static workflow scheduling method that uses statistical methods to predict which scheduling decisions can be improved. 3. A feedback-based approach to scheduling and predictive resource allocation, extensively evaluated using simulation; the results provide insights into the desirable characteristics of scheduling heuristics and prediction models. 4. A prediction model that reduces memory wastage by taking into account the asymmetric costs of overestimation and underestimation, as well as the follow-up costs of prediction errors.
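The asymmetry behind contribution 4 can be made concrete with a toy computation: if underestimating peak memory fails the task (the attempt is wasted and it must rerun), while overestimating merely wastes the surplus, then the cost-minimizing allocation sits well above the typical past peak. The cost function, penalty factor, and usage history below are invented, not the thesis's model.

```python
def cost(alloc_gb, peak_gb):
    """Asymmetric: a failed attempt is wasted and the task reruns with
    more memory (illustrative 2x penalty); otherwise only surplus is lost."""
    if alloc_gb < peak_gb:
        return alloc_gb + 2 * peak_gb
    return alloc_gb - peak_gb

history = [3.1, 3.4, 3.3, 7.9, 3.2]          # observed peak usage (GB)
candidates = [0.5 * i for i in range(2, 21)]  # allocations 1.0 .. 10.0 GB
total, alloc = min((sum(cost(a, p) for p in history), a) for a in candidates)
print(alloc, round(total, 1))  # the cheapest allocation covers the outlier
```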
549

Design of an IT cooperation platform for the export of services within the IDEE research project

Rößner, Susanne; Löffler, Felicitas; Engelien, Heike January 2006
No description available.
550

Scheduling and dynamic orchestration methods for care tasks to optimize patient management in hospital emergency departments

Ajmi, Faten 11 July 2019
The emergency department is an important care service and often the hospital's bottleneck, and emergency departments face overcrowding problems in many countries worldwide. One cause of emergency department overcrowding is the permanent interference between three types of arriving patients: already-programmed patients, non-programmed patients, and urgent non-programmed patients. The aim of this thesis is to contribute to the study and development of a decision support system that improves patient management in both normal and overcrowded conditions. Two main processes were developed. The first is a rolling-horizon scheduling process that uses a memetic algorithm with controlled genetic operators to determine an optimal schedule for patients. The second is a dynamic orchestration process, based on communicating agents, that accounts for the dynamic and uncertain nature of the emergency environment by continually updating this schedule. The orchestration monitors the patient-pathway workflow in real time and improves the performance indicators step by step during execution. Through agent behaviors and communication protocols, the proposed system establishes a direct real-time link between the performance required on the ground and effective actions to reduce the impact of overcrowding. Experimental results obtained at the Regional University Hospital Center (RUHC) of Lille show that our approaches improve the performance indicators thanks to agent-driven execution of the patient-pathway workflows.
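A compact sketch of the memetic idea, a genetic algorithm whose individuals are refined by local search, applied here to a toy patient-sequencing objective. The durations, urgency weights, and operators are illustrative assumptions, not the thesis's algorithm or CHRU de Lille data.

```python
import random

durations = [20, 5, 45, 10, 30]   # care-task durations (minutes), invented
urgency   = [1, 5, 2, 4, 1]       # higher = more urgent, invented

def weighted_wait(order):
    """Total urgency-weighted waiting time of a patient sequence."""
    t = cost = 0
    for i in order:
        cost += urgency[i] * t
        t += durations[i]
    return cost

def local_search(order):
    """The 'memetic' ingredient: improve an individual by adjacent swaps."""
    best = list(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(best) - 1):
            cand = best[:i] + [best[i + 1], best[i]] + best[i + 2:]
            if weighted_wait(cand) < weighted_wait(best):
                best, improved = cand, True
    return best

def crossover(a, b):
    """Order crossover: keep a prefix of a, fill the rest in b's order."""
    head = a[: len(a) // 2]
    return head + [g for g in b if g not in head]

random.seed(0)
pop = [local_search(random.sample(range(5), 5)) for _ in range(8)]
for _ in range(20):               # evolve: recombine, refine, replace worst
    child = local_search(crossover(*random.sample(pop, 2)))
    pop.sort(key=weighted_wait)
    pop[-1] = child
best = min(pop, key=weighted_wait)
print(best, weighted_wait(best))
```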
