1

A novel workflow management system for handling dynamic process adaptation and compliance

Haji-Omar, Mohamad S. January 2014
Modern enterprise organisations rely on dynamic processes. Generally these processes cannot be modelled once and executed repeatedly without change: enterprise processes may evolve unpredictably according to situations that cannot always be prescribed, yet no mechanism exists to ensure that an updated process does not violate any compliance requirements. Typical workflow processes may follow a process definition and execute several thousand instances using a workflow engine without any changes, which is suitable for routine business processes. When business processes need flexibility, however, adaptive features are required; and since updating processes may violate compliance requirements, automatic compliance verification becomes necessary. The research presented in this thesis investigates the shortcomings of current workflow technology in defining, managing and executing business processes that are dynamic in nature while enforcing policy standards throughout the process lifecycle. The findings from the literature review and the system requirements are used to design the proposed system architecture. Since a two-tier reference process model is not a sufficient basis for an adaptive and compliance-aware workflow management system, a three-tier process model is proposed. The major components of the architecture are process models, business rules and plugin modules. The architecture supports user adaptation with structural checks and dynamic adaptation with data-driven checks. A research prototype - the Adaptive and Compliance Workflow Management System (ACWfMS) - was developed based on the proposed architecture to implement the core services of the system for testing and evaluation purposes. The ACWfMS provides a workflow management tool to create or update process models.
It automatically validates compliance requirements and, in the case of violations, visual feedback is presented to the user. In addition, the architecture facilitates process migration to manage specific instances with modified definitions. A case study based on the postgraduate research process domain is discussed.
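The adapt-then-verify cycle described above can be illustrated with a hypothetical sketch; the activity names, the pair-based ordering rules and the insertion-only change operation are assumptions for the example, not the thesis's actual model.

```python
# Illustrative only: a process model as an ordered activity list, with
# compliance rules re-checked automatically after every adaptation.

def violates(order_rule, activities):
    """A rule (a, b) means activity a must occur before activity b."""
    a, b = order_rule
    if a in activities and b in activities:
        return activities.index(a) > activities.index(b)
    return False

def adapt_process(activities, change, rules):
    """Apply an insertion change, then verify every compliance rule."""
    activity, position = change
    updated = activities[:position] + [activity] + activities[position:]
    violations = [r for r in rules if violates(r, updated)]
    return updated, violations

rules = [("ethics_approval", "pilot_study")]
process = ["proposal", "ethics_approval", "data_collection", "thesis_submission"]
# Inserting the pilot study before ethics approval is flagged as a violation;
# inserting it after approval passes the check.
updated, found = adapt_process(process, ("pilot_study", 1), rules)
```

In a real system the violations list would drive the visual feedback to the user rather than silently accepting the change.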
2

Analýza využití workflow produktů / Analysis of the Use of Workflow Products

Šich, Jan January 2012
The thesis focuses on workflow processes and on the systems used for their design and management. It aims to familiarise readers with the nature of these systems and the features they should support, and it pursues three goals. The first goal is to establish criteria by which the quality of a workflow management system can be judged; these criteria must be ranked by their importance in specific situations, for which criterion weights and a technique for calculating them are proposed. The second goal is the selection of appropriate tools for testing, followed by a practical evaluation against the established criteria and the metrics chosen for their assessment. The third goal is a practical example: redesigning a real process so that a workflow system can be applied to increase its effectiveness. The selected process is the e-mail transfer of information about students' studies between the school and the students' legal guardians, prepared for the secondary school SOS and SOU Dubno. The thesis is split into two parts. The first two chapters give a theoretical introduction to the benefits that workflow systems can bring and to the steps required to deploy them effectively and meaningfully; this material is drawn from literature focused primarily on workflow and process management. The third chapter covers the practical part: selected studies on evaluation criteria for workflow systems are reviewed, a suitable combined set of criteria is built from the current ones, and it is applied to selected workflow management systems. The last chapter processes information obtained from employees of SOS and SOU Dubno, describes the current state of the process in question, and proposes how to make it more effective using a workflow system.
The contribution of this thesis is, first, an assessment of the current state of the workflow market and, second, a proposal for using a workflow system to increase the efficiency of an administrative process at SOS and SOU Dubno.
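The weighted-criteria technique the first goal describes can be sketched as follows; the criterion names, weights and ratings are invented for illustration and are not the thesis's actual evaluation data.

```python
# Illustrative only: score candidate workflow systems against weighted
# criteria, where the weights reflect the importance of each criterion
# in a specific deployment situation.

def weighted_score(weights, ratings):
    """Normalise weights to sum to 1, then compute the weighted sum of ratings."""
    total = sum(weights.values())
    return sum(weights[c] / total * ratings[c] for c in weights)

# Hypothetical situation: modelling support matters most.
weights = {"modelling": 5, "monitoring": 3, "integration": 2}
system_a = {"modelling": 4, "monitoring": 3, "integration": 5}
system_b = {"modelling": 3, "monitoring": 5, "integration": 3}

best = max(("A", system_a), ("B", system_b),
           key=lambda s: weighted_score(weights, s[1]))
```

Changing the weights to match another situation can change which system wins, which is exactly why the thesis ties the weighting technique to specific situations.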
3

Web-based Thesis Workflow Management System

Garip, Omer 15 May 2020
No description available.
4

A Logic-Based Methodology for Business Process Analysis and Design: Linking Business Policies to Workflow Models

Wang, Jiannan January 2006
Today, organizations often need to modify their business processes to cope with changes in the environment, such as mergers/acquisitions, new government regulations, and new customer demands. Most organizations also have a set of business policies defining the way they conduct their business. Although there has been extensive research on process analysis and design, how to systematically extract workflow models from business policies has not been studied, resulting in a missing link between the specification of business policies and the modeling of business processes. Given that process changes are often determined by executives and managers at the policy level, this missing link often leads to inefficient and inaccurate implementation of process changes by business analysts and process designers. We refer to this problem as the policy mismatch problem in business process management. For organizations with large-scale business processes and a large number of business policies, solving the policy mismatch problem is very difficult and challenging. In this dissertation, we attempt to provide a formal link between business policies and workflow models by proposing a logic-based methodology for process analysis and design. In particular, we first propose a Policy-driven Process Design (PPD) methodology to formalize the procedure of extracting workflow models from business policies. In PPD, narrative process policies are parsed into precise information on various workflow components, and a set of process design rules and algorithms is applied to generate workflow models from that information. We also develop a logic-based process modeling language named Unified Predicate Language (UPL). UPL is able to represent all workflow components in a single logic format and provides analytical capability via logic inference and query. We demonstrate UPL's expressive power and analytical ability by applying it to process design and process change analysis.
In particular, we use UPL to define and classify process change anomalies and develop algorithms to verify and enforce process consistency. The Policy-driven Process Design methodology, the Unified Predicate Language, and the process change analysis approach presented in this dissertation contribute to business process management research by providing a formal methodology for resolving the policy mismatch problem.
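The idea of representing every workflow component in one logic format that supports querying can be sketched in a Datalog-like style; UPL's actual syntax is not reproduced here, and the predicate and task names below are invented for the example.

```python
# Illustrative only: workflow components stored as predicate facts,
# queried with '?'-marked wildcard positions.

facts = {
    ("task", "review_order"),
    ("task", "approve_order"),
    ("precedes", "review_order", "approve_order"),
    ("performs", "manager", "approve_order"),
}

def query(predicate, *pattern):
    """Return bindings for the '?'-marked positions of matching facts."""
    results = []
    for fact in facts:
        if fact[0] == predicate and len(fact) == len(pattern) + 1:
            if all(p == "?" or p == v for p, v in zip(pattern, fact[1:])):
                results.append(tuple(v for p, v in zip(pattern, fact[1:])
                                     if p == "?"))
    return results

# Which tasks does the manager perform?
tasks = query("performs", "manager", "?")
```

Because control flow, tasks and resources all live in the same fact base, a single query mechanism can serve both process design questions and change analysis.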
5

An investigation into the relevance of flexibility and interoperability requirements for implementation processes for workflow management applications

Kühl, Lukas W. H. January 2009
Flexibility and interoperability have become important characteristics of organisations and their business processes. The need to control flexible business processes within an organisation's boundaries and between organisations imposes major requirements on a company's process control capabilities. Workflow Management Systems (WFMS) try to fulfil these requirements by offering respective product features. Evidence suggests that the achievement of flexible business processes and of inter-organisational process control is also influenced by the implementation processes for Workflow Management Applications (WFMA). [A WFMA comprises the WFMS and "all WFMS specific data with regard to one or more business processes" [VER01].] The impact of a WFMA implementation methodology on the fulfilment of these requirements is the scope of the research project. The thesis provides knowledge in the following areas: 1. Review of the relationship between workflow management and the demands for process flexibility and interoperability. 2. Definition of a research and evaluation framework for workflow projects, composed of all research variables identified as relevant for the thesis. 3. Empirical survey of relevant workflow-project objectives and their priority in the context of process flexibility and interoperability. 4. Empirical survey of the objectives' achievement. 5. Empirical survey of the methodologies and activities applied within workflow projects. 6. Derivation of the project methodologies' effectiveness in terms of the impact that the applied activities had on project objectives. 7. Evaluation of existing workflow life-cycle models in accordance with the research framework. 8. Identification of basic improvements for workflow implementation processes with respect to the achievement of flexible and interoperable business processes. The first part of the thesis argues the relevance of the subject.
Afterwards, the research variables that constitute the evaluation framework for WFMA implementation processes are identified and defined step by step. An empirical study then establishes the variables' effectiveness for the achievement of process flexibility and interoperability within the WFMA implementation process. After this, the framework is applied to evaluate chosen WFMA implementation methodologies. Identified weaknesses and effective methodological aspects are utilised to develop generic methodological improvements. These improvements are later validated by means of a case study and interviews with workflow experts.
6

[en] WORK-FLOW EXECUTION IN DISCONNECTED ENVIRONMENTS / [pt] EXECUÇÃO DE WORKFLOW EM AMBIENTES COM DESCONEXÃO

FABIO MEIRA DE OLIVEIRA DIAS 15 September 2003
Workflow management systems are frequently used for modeling, monitoring and controlling the coordinated execution of activities performed by workgroups in a variety of contexts. With the widespread use of portable computers and their growing computational power, conventional systems have often proved to be overly restrictive, effectively limiting the level of autonomy of the users involved. The primary goal of this work is to identify and analyze different flexibilization techniques and mechanisms that can be employed in a workflow management system aimed at supporting disconnected operation. The main challenge is to provide a satisfactory degree of independence among individuals in cooperating teams who share a common goal and work in disconnected environments.
In order to test the viability of the ideas discussed in this dissertation, a system was built whose design met the requirements presented in the text and which allows the exploration of specific features of different kinds of workflow so as to enhance execution flexibility, without compromising the predefined structure.
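One flexibilization idea compatible with the abstract (letting each disconnected group advance its own activities and reconciling progress on reconnect) can be sketched as follows; the data model, merge rule and activity names are assumptions for illustration, not the system actually built.

```python
# Illustrative only: offline groups each keep a local log of completed
# activities; on reconnect the logs are merged and newly enabled
# activities are computed from the shared dependency structure.

def merge_progress(*local_logs):
    """Union of the completed activities reported by each group while offline."""
    done = set()
    for log in local_logs:
        done |= set(log)
    return done

def enabled(dependencies, done):
    """Activities whose prerequisites are all complete after reconciliation."""
    return {t for t, pre in dependencies.items()
            if t not in done and set(pre) <= done}

deps = {"draft": [], "review": ["draft"],
        "approve": ["draft"], "publish": ["review", "approve"]}
# Two groups worked disconnected on different branches of the workflow:
done = merge_progress(["draft", "review"], ["draft", "approve"])
ready = enabled(deps, done)
```

The preestablished structure (the dependency graph) is never changed; only the degree of independence during execution is increased.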
7

Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science

Carrión Collado, Abel Antonio 01 September 2017
Scientific Workflows (SWFs) are widely used to model applications in e-Science. In this programming model, scientific applications are described as a set of tasks with dependencies among them. During the last decades, the execution of scientific workflows has been successfully performed on the available computing infrastructures (supercomputers, clusters and grids) using software programs called Workflow Management Systems (WMSs), which orchestrate the workload on top of these computing infrastructures. However, because each computing infrastructure has its own architecture and each scientific application exploits one of these infrastructures most efficiently, it is necessary to organize the way in which executions are distributed: WMSs need to get the most out of all the available computing and storage resources. Traditionally, scientific workflow applications have been deployed extensively on high-performance computing infrastructures (such as supercomputers and clusters) and grids. In the last years, however, the advent of cloud computing has opened the door to using on-demand infrastructures to complement or even replace local ones, raising new issues such as the integration of hybrid resources and the compromise between infrastructure reutilization and elasticity, all on the basis of cost-efficiency. The main contribution of this thesis is an ad-hoc solution for managing workflows that exploits the capabilities of cloud computing orchestrators to deploy resources on demand according to the workload, and that combines heterogeneous cloud providers (such as on-premise and public clouds) and traditional infrastructures (supercomputers and clusters) to minimize costs and response time. The thesis does not propose yet another WMS, but demonstrates the benefits of integrating cloud orchestration when running complex workflows.
The thesis reports experiments with several configurations and multiple heterogeneous backends, using a realistic comparative genomics workflow called Orthosearch to migrate memory-intensive workload to public infrastructures while keeping other blocks of the experiment running locally. The running time and cost of the experiments are computed and best practices are suggested. / Carrión Collado, AA. (2017). Management of generic and multi-platform workflows for exploiting heterogeneous environments on e-Science [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86179
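The placement policy outlined in the abstract (memory-intensive blocks offloaded to on-demand public resources, the rest kept on the local infrastructure) can be illustrated with a sketch; the threshold, task names and slot counts are invented assumptions, not figures from the thesis.

```python
# Illustrative only: assign each workflow block to a backend, preferring
# the local cluster until its free slots run out, and always sending
# memory-hungry blocks to an on-demand public backend.

def place(tasks, mem_threshold_gb, local_free_slots):
    """Map each (name, memory) task to 'local-cluster' or 'public-cloud'."""
    placement = {}
    slots = local_free_slots
    for name, mem_gb in tasks:
        if mem_gb > mem_threshold_gb or slots == 0:
            placement[name] = "public-cloud"
        else:
            placement[name] = "local-cluster"
            slots -= 1
    return placement

# Hypothetical blocks of a comparative genomics run:
tasks = [("align", 4), ("orthology_search", 64), ("report", 2)]
placement = place(tasks, mem_threshold_gb=32, local_free_slots=8)
```

A fuller policy would also weigh per-hour cloud cost against expected runtime, which is the cost/response-time trade-off the thesis studies.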
8

[en] TEAM: AN ARCHITECTURE FOR E-WORKFLOW MANAGEMENT / [pt] TEAM: UMA ARQUITETURA PARA GERÊNCIA DE E-WORKFLOWS

LUIZ ANTONIO DE MORAES PEREIRA 30 August 2004
In distributed collaborative applications, the use of centralized repositories for storing shared data and programs compromises some important characteristics of this type of application, such as fault tolerance, scalability and local autonomy. Applications like Kazaa, Gnutella and Edutella exemplify the use of peer-to-peer (P2P) computing, which is considered an interesting alternative for solving the problems mentioned above without imposing the typical restrictions of centralized or even distributed systems such as mediators and HDBMSs. In this work we present the TEAM (Teamwork-support Environment Architectural Model) architecture for managing workflows on the Web.
Besides describing the components and connectors of the architecture, which is based on P2P computing, we address the modelling of processes and the management of data, metadata and execution control information. We also discuss the strategy adopted for disseminating queries and messages to peers in environments based on the architecture. We illustrate the application of TEAM in a case study in e-learning.
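A Gnutella-style dissemination strategy, of the kind the cited P2P applications use, can be sketched as TTL-limited flooding; the topology and peer names below are invented for the example, and the abstract does not specify that TEAM uses exactly this scheme.

```python
from collections import deque

# Illustrative only: forward a query to neighbours with a hop budget (TTL),
# never re-visiting a peer that has already seen it.

def disseminate(topology, origin, ttl):
    """Breadth-first flood: return the set of peers the query reaches."""
    reached = {origin}
    frontier = deque([(origin, ttl)])
    while frontier:
        peer, hops = frontier.popleft()
        if hops == 0:
            continue
        for neighbour in topology.get(peer, []):
            if neighbour not in reached:
                reached.add(neighbour)
                frontier.append((neighbour, hops - 1))
    return reached

topology = {"a": ["b", "c"], "b": ["d"], "c": [], "d": ["e"]}
reached = disseminate(topology, "a", ttl=2)
```

The TTL bounds network traffic at the price of query reach, which is the central tuning knob of flooding-based P2P search.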
9

Context driven workflow adaptation applied to healthcare planning / Adaptação de workflows dirigida por contexto aplicada ao planejamento de saúde

Vilar, Bruno Siqueira Campos Mendonça, 1982- 25 August 2018
Orientadores: Claudia Maria Bauzer Medeiros, André Santanchè / Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Workflow Management Systems (WfMS) are used to manage the execution of processes, improving the efficiency and efficacy of the procedures in use. The driving forces behind the adoption and development of WfMSs are business and scientific applications. Associated research efforts resulted in consolidated mechanisms, consensual protocols and standards. In particular, a scientific WfMS helps scientists to specify and run distributed experiments. It provides several features that support activities within an experimental environment, such as providing flexibility to change workflow design and keeping provenance (and thus reproducibility) of experiments. On the other hand, barring a few research initiatives, WfMSs do not provide appropriate support for dynamic, context-based customization during run time; on-the-fly adaptations usually require user intervention.
This thesis is concerned with mending this gap, providing WfMSs with a context-aware mechanism to dynamically customize workflow execution. As a result, we designed and developed DynFlow, a software architecture that allows such customization, applied to a specific domain: healthcare planning. This application domain was chosen because it is a very good example of context-sensitive customization. Indeed, healthcare procedures constantly undergo unexpected changes that may occur during a treatment, such as a patient's reaction to a medicine. To meet dynamic customization demands, healthcare planning research has developed semi-automated techniques to support fast changes of the careflow steps according to a patient's state and evolution. One such technique is Computer-Interpretable Guidelines (CIG), whose most prominent member is the Task-Network Model (TNM), a rule-based approach able to build a plan on the fly according to the context. Our research led us to conclude that CIGs do not support features required by health professionals, such as distributed execution, provenance and extensibility, which are available from WfMSs. In other words, CIGs and WfMSs have complementary characteristics, and both are directed towards the execution of activities. Given the above facts, the main contributions of this thesis are the following: (a) the design and development of DynFlow, whose underlying model blends TNM characteristics with WfMS; (b) the characterization of the main advantages and disadvantages of CIG models and workflow models; and (c) the implementation of a prototype, based on ontologies, applied to nursing care. Ontologies are used as a solution to enable interoperability across distinct SWfMS internal representations, as well as to support distinct healthcare vocabularies and procedures / Doutorado / Ciência da Computação / Doutor em Ciência da Computação
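The rule-based, TNM-like idea of extending a careflow at run time from rules over the patient context can be sketched minimally; the rules, task names and context fields below are invented for illustration and do not come from DynFlow.

```python
# Illustrative only: the next careflow tasks are selected at run time by
# evaluating condition rules against the current patient context.

rules = [
    (lambda ctx: ctx["allergic_to"] == ctx["prescribed"],
     ["substitute_medicine", "notify_physician"]),
    (lambda ctx: ctx["fever_c"] >= 39.0,
     ["order_blood_test"]),
]

def next_tasks(context, base_plan):
    """Start from the static plan and extend it with rule-triggered tasks."""
    plan = list(base_plan)
    for condition, tasks in rules:
        if condition(context):
            plan.extend(t for t in tasks if t not in plan)
    return plan

ctx = {"allergic_to": "penicillin", "prescribed": "penicillin", "fever_c": 38.2}
plan = next_tasks(ctx, ["administer_medicine"])
```

Blending this run-time rule evaluation with WfMS features such as provenance tracking is, in spirit, what the abstract describes as combining TNM characteristics with a WfMS.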
10

On the construction of decentralised service-oriented orchestration systems

Jaradat, Ward January 2016
Modern science relies on workflow technology to capture, process, and analyse data obtained from scientific instruments. Scientific workflows are precise descriptions of experiments in which multiple computational tasks are coordinated based on the dataflows between them. Orchestrating scientific workflows presents a significant research challenge: they are typically executed in a manner such that all data pass through a centralised computer server known as the engine, which causes unnecessary network traffic that leads to a performance bottleneck. These workflows are commonly composed of services that perform computation over geographically distributed resources, and involve the management of dataflows between them. Centralised orchestration is clearly not a scalable approach for coordinating services dispersed across distant geographical locations. This thesis presents a scalable decentralised service-oriented orchestration system that relies on a high-level data coordination language for the specification and execution of workflows. This system's architecture consists of distributed engines, each of which is responsible for executing part of the overall workflow. It exploits parallelism in the workflow by decomposing it into smaller sub-workflows, and determines the most appropriate engines to execute them using computation placement analysis. This permits the workflow logic to be distributed closer to the services providing the data for execution, which reduces the overall data transfer in the workflow and improves its execution time. This thesis provides an evaluation of the presented system which concludes that decentralised orchestration provides scalability benefits over centralised orchestration, and improves the overall performance of executing a service-oriented workflow.
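The computation placement idea in the abstract (assigning each sub-workflow to the engine nearest the services that feed it, to reduce data transfer) can be sketched as a cost minimisation; the engines, services and transfer costs below are invented assumptions, not the thesis's actual analysis.

```python
# Illustrative only: pick, per sub-workflow, the engine that minimises the
# total transfer cost from the services the sub-workflow consumes.

def place_subworkflows(subworkflows, transfer_cost):
    """Map each sub-workflow to its cheapest engine by summed transfer cost."""
    engines = {e for costs in transfer_cost.values() for e in costs}
    placement = {}
    for sw, services in subworkflows.items():
        placement[sw] = min(
            engines,
            key=lambda e: sum(transfer_cost[s][e] for s in services))
    return placement

subworkflows = {"sw1": ["svc_eu"], "sw2": ["svc_us", "svc_eu"]}
transfer_cost = {
    "svc_eu": {"engine_eu": 1, "engine_us": 10},
    "svc_us": {"engine_eu": 20, "engine_us": 1},
}
placement = place_subworkflows(subworkflows, transfer_cost)
```

Because each sub-workflow lands near its data sources, bulk data no longer funnels through a single central engine, which is the scalability benefit the evaluation measures.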
