301

A utilização de sistema de informação para gestão das demandas dos beneficiários de operadoras de saúde suplementar, como estratégia frente à regulação do setor e a Notificação de Intermediação Preliminar (NIP) / The information system as strategical tool for a health care company to avoid Intermediation Preliminary Notification (NIP)

Miraldo, Claudio de Oliveira 29 November 2016 (has links)
Active Conflict Mediation is defined as a concept and method of conflict resolution that seeks consensus and facilitates dialogue between the parties. With this in view, the Brazilian National Supplementary Health Agency (ANS) established a procedure called the Preliminary Intermediation Notification (Notificação de Intermediação Preliminar, NIP), which allows the regulatory agency to intermediate conflicts between beneficiaries and health care providers more quickly. From the perspective of health care companies, these notifications can represent a high cost if not answered promptly. The aim of this work is to present a solution, through the implementation of a computerized request-management and workflow system, that gives health care companies the means to guarantee the request-handling process with fast information retrieval, allowing timely responses to the ANS, to other regulatory bodies and to the media whenever necessary. The work demonstrates that the implementation of such a system contributed to the efficiency of service and to the quality of the services provided by the health care company, yielding significant quantitative results as well as indicators that let managers perform real-time monitoring, historical reporting and quick retrieval of process documents. The study used the action-research methodology (Martins & Theóphilo, 2009), and its results can help health care companies improve their services and reduce the risks arising from preliminary intermediation notifications.
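To make the kind of request-management workflow described above concrete, the following is a minimal, illustrative Python sketch, not the system built in the thesis: a request record with a regulatory response deadline and an indicator counting overdue items. The field names and the five-day response window are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class BeneficiaryRequest:
    request_id: str
    beneficiary: str
    description: str
    received_on: date
    response_days: int = 5          # assumed regulatory response window
    answered_on: Optional[date] = None

    @property
    def deadline(self) -> date:
        return self.received_on + timedelta(days=self.response_days)

    def is_overdue(self, today: date) -> bool:
        return self.answered_on is None and today > self.deadline

def overdue_count(requests: list[BeneficiaryRequest], today: date) -> int:
    """Real-time indicator: open requests that have passed their deadline."""
    return sum(1 for r in requests if r.is_overdue(today))

if __name__ == "__main__":
    requests = [
        BeneficiaryRequest("NIP-001", "Beneficiary A", "Denied exam authorization",
                           received_on=date(2016, 11, 1)),
        BeneficiaryRequest("NIP-002", "Beneficiary B", "Reimbursement delay",
                           received_on=date(2016, 11, 25)),
    ]
    print(overdue_count(requests, today=date(2016, 11, 29)))  # prints 1
```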
302

E-SECO ProVersion: uma arquitetura para manutenção e evolução de workflows científicos / E-SECO ProVersion: an architecture for the maintenance and evolution of scientific workflows

Sirqueira, Tássio Ferenzini Martins 12 July 2016 (has links)
A scientific software ecosystem, among other capabilities, seeks to integrate all stages of an experiment and commonly uses scientific workflows to solve complex problems. Any change made to an experiment must be propagated to the associated workflows, which must be maintained and evolved for the research to proceed successfully. One way to ensure this control is through configuration management, which requires storing the modelling and execution data of the experiment and its associated workflows. In this work, we use concepts and models related to data provenance to store and query these data; as the dissertation shows, data provenance brings several benefits to this storage and querying. We therefore propose an architecture, named E-SECO ProVersion, to manage the evolution and maintenance of scientific experiments and workflows. The motivation for specifying and implementing the architecture came from a systematic review and from a study of maintenance and evolution characteristics in existing workflow repositories. Based on these analyses, the main features of the architecture were defined and detailed. In addition, a set of usage guidelines and proofs of concept using workflows extracted from the myExperiment repository are presented in order to evaluate the applicability of the architecture.
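As a purely illustrative sketch of the provenance-based version control the abstract describes, and not the E-SECO ProVersion implementation, the following Python snippet records workflow versions, the version each was derived from and a change note, so that an experiment's evolution can be queried later; all names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class WorkflowVersion:
    workflow: str                 # workflow name
    version: int                  # monotonically increasing version number
    derived_from: Optional[int]   # previous version, None for the first one
    change_note: str              # what changed in the experiment

class ProvenanceStore:
    """Toy provenance store keeping the version history of each workflow."""

    def __init__(self) -> None:
        self._versions: dict[str, list[WorkflowVersion]] = {}

    def record(self, v: WorkflowVersion) -> None:
        self._versions.setdefault(v.workflow, []).append(v)

    def history(self, workflow: str) -> list[WorkflowVersion]:
        """Return the recorded evolution of a workflow, oldest first."""
        return sorted(self._versions.get(workflow, []), key=lambda v: v.version)

store = ProvenanceStore()
store.record(WorkflowVersion("sequence-alignment", 1, None, "initial model"))
store.record(WorkflowVersion("sequence-alignment", 2, 1, "replaced the alignment activity"))
for v in store.history("sequence-alignment"):
    print(v.version, v.derived_from, v.change_note)
```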
303

Oběh dokladů a vnitřní kontrolní systém / The workflow of the accounting documents and the system internal control

Polcrová, Lucie January 2009 (has links)
The thesis deals with the accounting document workflow of the company Siemens Enterprise Communications, s.r.o. The first, theoretical part focuses on the general characteristics of an accounting system, including the company's internal rules and the need for such a system. The second part deals with the workflow of accounting documents (invoices, credit notes, travel orders and other accounting documents). The thesis closes with a comparison of two workflow systems.
304

Standardy modelování podnikových procesů / Business Process Modeling Standards

Mezera, Jiří January 2010 (has links)
Today, business process modelling is an important part of the analysis and design of information systems. A number of standards deal with process modelling, and each of them represents a slightly different approach. The Opensoul project created a general metamodel that defines the basic elements, and the relationships among them, that every standard should support; in other words, it defines the basic rules of process modelling. The goal of this thesis is to compare selected process modelling standards against the Business Process Metamodel of the Opensoul initiative. The results of this comparison provide a basis for answering which standard best supports the modelling rules defined by the metamodel through its system of elements and their relationships. Reaching this goal required specifying the method on which the comparison is based: the metamodel is extended with new elements and their relations to other elements, and some elements are extended with workflow patterns defined by the Workflow Patterns initiative. That is the main contribution of this thesis.
305

La vérification de patrons de workflow métier basés sur les flux de contrôle : une approche utilisant les systèmes à base de connaissances / Control flow-based business workflow templates checking : an approach using the knowledge-based systems

Nguyen, Thi Hoa Hue 23 June 2015 (has links)
This thesis tackles the problem of modelling semantically rich business workflow templates and proposes a process for developing them. The objective is to transform a business process into a control flow-based business workflow template that guarantees syntactic and semantic validity. The main challenges are: (i) to define a formalism for representing business processes; (ii) to establish automatic control mechanisms to ensure the correctness of a business workflow template based on a formal model and a set of semantic constraints; and (iii) to organize the knowledge base of workflow templates for a workflow development process. We propose a formalism which combines control flow (based on Coloured Petri Nets (CPNs)) with semantic constraints to represent business processes. The advantage of this formalism is that it allows not only syntactic checks based on the CPN model, but also semantic checks based on Semantic Web technologies. We start by designing an OWL ontology, called the CPN ontology, to represent the concepts of CPN-based business workflow templates. The design phase is followed by a thorough study of the properties of these templates in order to transform them into a set of axioms for the CPN ontology. In this formalism, a business process is syntactically transformed into an instance of the ontology; syntactic checking of a business process therefore becomes verification by inference, over the concepts and axioms of the CPN ontology, on the corresponding instance.
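The following is a minimal, hypothetical Python sketch of the kind of CPN-style syntactic check such a formalism enables; it is not the CPN ontology or the inference-based verification from the thesis. It models a workflow template as places, transitions and arcs, and flags transitions that lack input or output places; the element names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowTemplate:
    places: set[str] = field(default_factory=set)
    transitions: set[str] = field(default_factory=set)
    # Arcs are (source, target) pairs: place -> transition or transition -> place.
    arcs: set[tuple[str, str]] = field(default_factory=set)

    def syntactic_errors(self) -> list[str]:
        """One example well-formedness rule: every transition needs input and output places."""
        errors = []
        for t in sorted(self.transitions):
            has_input = any(src in self.places and dst == t for src, dst in self.arcs)
            has_output = any(src == t and dst in self.places for src, dst in self.arcs)
            if not (has_input and has_output):
                errors.append(f"transition '{t}' lacks an input or output place")
        return errors

template = WorkflowTemplate(
    places={"order_received", "order_approved"},
    transitions={"approve_order", "ship_order"},
    arcs={("order_received", "approve_order"), ("approve_order", "order_approved")},
)
print(template.syntactic_errors())  # reports the disconnected 'ship_order' transition
```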
306

ECOS PL-Science: Uma Arquitetura para Ecossistemas de Software Científico Apoiada por uma Rede Ponto a Ponto / ECOS PL-Science: an architecture for scientific software ecosystems supported by a peer-to-peer network

Souza, Vitor Freitas e 27 February 2015 (has links)
The design of scientific workflows is a widely used approach in the context of e-Science and scientific experimentation, and much research addresses the management and execution of workflow-based experiments. Complex experiments, however, involve interactions between geographically distributed researchers and require large volumes of data, services and distributed computing resources. This scenario characterizes a scientific experimentation ecosystem. To carry out experiments in this context, scientists need a flexible, extensible and scalable architecture; if the e-Science ecosystem architecture does not take these aspects into account, valuable information can be lost during the experimentation process and opportunities to reuse resources and services can be wasted. To address the flexibility, extensibility and scalability of ecosystem platforms, this work presents a service-oriented architecture supported by a peer-to-peer network, designed to cover the stages of the scientific experiment life cycle. Its contributions are an architecture for scientific software ecosystems, the implementation of this architecture, and its evaluation.
307

Abordagem algébrica para seleção de clones ótimos em projetos genomas e metagenomas / Algebraic approach to optimal clone selection in genomics and metagenomics projects.

Mauricio Egidio Cantão 01 December 2009 (has links)
Because of the wide diversity of unknown microorganisms in the environment, 99% of them cannot be cultivated in the traditional culture media used in laboratories. Metagenomics projects are therefore proposed to study the microbial communities present in the environment using molecular techniques, especially sequencing, and an accumulation of sequences produced by these projects is expected in the coming years. The sequences produced by genomics and metagenomics projects present several challenges for processing, storage and analysis, such as searching for clones that contain genes of interest. This work presents an algebraic approach, based on process algebra, that dynamically defines and manages the rules for selecting clones from genomic and metagenomic libraries. In addition, a web interface was developed to allow researchers to easily create and execute their own clone selection rules over genomic and metagenomic sequence databases. The software was tested on genomic and metagenomic libraries and was able to select clones containing genes of interest.
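Purely as an illustration of composable selection rules in the spirit of the process-algebra approach described above, and not the thesis software, the sketch below builds clone-selection rules as predicates that can be combined with AND/OR operators; the gene names, library names and record fields are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Clone:
    clone_id: str
    genes: set[str]
    library: str

Rule = Callable[[Clone], bool]

def has_gene(gene: str) -> Rule:
    return lambda clone: gene in clone.genes

def from_library(name: str) -> Rule:
    return lambda clone: clone.library == name

def both(a: Rule, b: Rule) -> Rule:      # conjunction of two rules
    return lambda clone: a(clone) and b(clone)

def either(a: Rule, b: Rule) -> Rule:    # choice between two rules
    return lambda clone: a(clone) or b(clone)

clones = [
    Clone("C1", {"cellulase", "lipase"}, "soil-metagenome"),
    Clone("C2", {"amylase"}, "soil-metagenome"),
    Clone("C3", {"lipase"}, "rumen-metagenome"),
]
rule = both(from_library("soil-metagenome"),
            either(has_gene("cellulase"), has_gene("amylase")))
print([c.clone_id for c in clones if rule(c)])  # ['C1', 'C2']
```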
308

Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science

Carrión Collado, Abel Antonio 01 September 2017 (has links)
Scientific Workflows (SWFs) are widely used to model applications in e-Science. In this programming model, scientific applications are described as a set of tasks with dependencies among them. During the last decades, scientific workflows have been executed successfully on the available computing infrastructures (supercomputers, clusters and grids) using software programs called Workflow Management Systems (WMSs), which orchestrate the workload on top of these infrastructures. However, because each computing infrastructure has its own architecture and each scientific application exploits one of these infrastructures most efficiently, it is necessary to organize the way in which they are executed, and WMSs need to get the most out of all the available computing and storage resources. Traditionally, scientific workflow applications have been deployed extensively on high-performance computing infrastructures (such as supercomputers and clusters) and grids. In recent years, however, the advent of cloud computing has opened the door to using on-demand infrastructures to complement or even replace local ones. New issues have arisen in turn, such as the integration of hybrid resources and the trade-off between infrastructure reutilization and elasticity, all on the basis of cost-efficiency. The main contribution of this thesis is an ad-hoc solution for managing workflows that exploits the capabilities of cloud orchestrators to deploy resources on demand according to the workload and to combine heterogeneous cloud providers (on-premise and public clouds) with traditional infrastructures (supercomputers and clusters) to minimize cost and response time. The thesis does not propose yet another WMS, but demonstrates the benefits of integrating cloud orchestration when running complex workflows. It reports experiments with several configurations and multiple heterogeneous backends, using a realistic comparative genomics workflow called Orthosearch, in which memory-intensive workload is migrated to public infrastructures while other blocks of the experiment keep running locally. The running time and cost of the experiments are computed and best practices are suggested. / Carrión Collado, A. A. (2017). Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86179
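As a hypothetical illustration of the hybrid placement idea described above, and not the scheduler developed in the thesis, the sketch below sends memory-intensive workflow tasks to a public cloud backend and keeps the rest on a local cluster; the task names and the 16 GiB local memory threshold are assumptions.

```python
from dataclasses import dataclass

LOCAL_MEMORY_LIMIT_GIB = 16.0  # assumed capacity of a local cluster node

@dataclass
class Task:
    name: str
    memory_gib: float

def place(task: Task) -> str:
    """Pick a backend: memory-hungry tasks go to the cloud, the rest stay local."""
    return "public-cloud" if task.memory_gib > LOCAL_MEMORY_LIMIT_GIB else "local-cluster"

workflow = [Task("format-db", 2.0), Task("all-vs-all-search", 48.0), Task("parse-hits", 4.0)]
for task in workflow:
    print(f"{task.name:>18} -> {place(task)}")
```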
309

Analýza a návrh workflow vybraných procesů podniku / Analysis and Proposal of Selected Business Processes Workflow

Novotný, Tomáš January 2010 (has links)
The master's thesis concerns process management and workflow in the company 2Tom s.r.o. The theoretical part presents current trends in management and methods for increasing process effectiveness; the practical part analyses the current state of the company and presents proposals for implementing and using workflow oriented towards the company's key processes.
310

Model workflow a jeho grafické rozhraní / Workflow Model and Its Graphic Interface

Jadrný, Miroslav January 2010 (has links)
Business process management is an important topic in business information systems. Workflow systems are taking a leading place in company information system architectures, driven by the ambition to make business processes ever more optimized. This project deals with parallel processing and with implementing parallel processing of business processes in complex information systems. Its result is a function and object library for modelling business processes in the Vema, a. s. workflow system. An important part of the project is the parallel processing solution and its implementation.
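As a rough illustration of what parallel processing of independent workflow branches can look like (this is not the Vema, a. s. library), the sketch below runs two branches of a document workflow concurrently with a thread pool; the activity names are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def approve_invoice(doc_id: str) -> str:
    return f"{doc_id}: approved"

def archive_copy(doc_id: str) -> str:
    return f"{doc_id}: archived"

def run_parallel_branches(doc_id: str) -> list[str]:
    # Both branches depend only on the incoming document, so they can run
    # concurrently; the workflow joins once both futures have completed.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(approve_invoice, doc_id), pool.submit(archive_copy, doc_id)]
        return [f.result() for f in futures]

print(run_parallel_branches("INV-2010-042"))
```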
