  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Méthodologie d'évaluation pour les types de données répliqués / Evaluation methodology for replicated data types

Ahmed-Nacer, Mehdi 05 May 2015 (has links)
To provide high data availability and low network latency, data-sharing systems rely on optimistic replication. In this model, several copies of the shared object, called replicas, are stored on different sites and can be modified freely at any time: updates are applied locally and later propagated to the other sites, so every replica eventually applies every update, possibly in a different order. Optimistic replication algorithms are responsible for managing these concurrent modifications and for ensuring the consistency of the shared object. This thesis presents an evaluation methodology for optimistic replication algorithms, in the context of collaborative editing. We designed a tool that implements this methodology; it integrates a corpus-generation mechanism and a simulator of collaborative-editing sessions. Using this tool, we ran experiments on two kinds of corpora: synchronous and asynchronous. For synchronous collaboration, we evaluate the performance of the replication algorithms against several criteria such as execution time, memory occupation, and message size, and then propose some improvements. For asynchronous collaboration, when replicas synchronize their modifications, more conflicts can appear in the document, and the system may block the merge until a user resolves them. To reduce the number of conflicts and the users' effort, we propose an evaluation metric, evaluate the different algorithms against it, and analyze the results to understand users' behavior and the collaboration patterns that create conflicts. We then propose algorithms that resolve the most important conflicts and thus reduce the users' effort. 
Finally, we propose a new hybrid architecture for cloud-based collaborative editing, based on two kinds of optimistic replication algorithms. Unlike current architectures, the proposed one is simple, limits resource usage on client devices, and requires no consensus between data centers.
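The optimistic replication model the abstract describes can be sketched minimally with a state-based grow-only set, one of the simplest replicated data types (an illustrative example, not one of the algorithms evaluated in the thesis): each replica applies updates locally with no coordination, and a later merge makes all replicas converge regardless of message order.

```python
class GSetReplica:
    """State-based grow-only set: a minimal optimistic-replication sketch.
    Each replica applies updates locally and later merges with others;
    merge is set union, so replicas converge whatever the sync order."""
    def __init__(self):
        self.elements = set()

    def add(self, item):
        # Local update: applied immediately, no coordination required.
        self.elements.add(item)

    def merge(self, other):
        # Propagation step: fold in another replica's state.
        self.elements |= other.elements

# Two replicas edit concurrently, then synchronize in both directions.
a, b = GSetReplica(), GSetReplica()
a.add("x")
b.add("y")
a.merge(b)
b.merge(a)
assert a.elements == b.elements == {"x", "y"}
```

Because merge is commutative, associative, and idempotent, the order and number of synchronizations never affects the final state, which is exactly the property that lets optimistic replication accept edits at any time.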
32

A entrega de notícias em aconselhamentos genéticos: uma investigação interacional sobre como acontece na prática / News delivery in genetic counseling: an interactional investigation of how it happens in practice

Frezza, Minéia 27 February 2015 (has links)
This master's thesis is a subproject of a larger study (Uma mulher, um feto, e uma má notícia: a entrega de diagnósticos de síndromes e de malformações fetais – em busca de uma melhor compreensão do que está por vir e do que pode ser feito, OSTERMANN, 2013). It describes the news deliveries made by a geneticist to pregnant or puerperal women and their companions during audio-recorded genetic counseling sessions held at a mother-and-child hospital of the Sistema Único de Saúde (SUS) in southern Brazil. After transcribing the 54 recorded sessions, we selected the 21 that deal with the communication of test results and analyzed them from a conversation-analytic perspective (SACKS; SCHEGLOFF; JEFFERSON, 1974) to describe the actions involved in the phase of the consultations in which the sequence of good- and bad-news delivery occurs. 
The analysis reveals that the geneticist delivers the news following a didactic sequence composed of the following elements: (1) a news pre-announcement; (2) a review of the previous test results that motivated the more specialized tests; (3) one or more perspective-display series (MAYNARD, 1992) in the case of bad news; (4) the news announcement itself; and, when the news consists of a "bad" diagnosis, (5) the presentation of something potentially "positive" within each case. Another tendency in the data concerns the allocation of the category of news bearer: when the news is good, it is assigned to the doctor, to the fetal-medicine team, or to the institution itself. When the news to be delivered is bad, the person delivering it distances themselves, and the category is allocated to the exam, which, placed as the agent of the verbs that announce the news, ends up presented as "the one responsible" for bearing and delivering it. This "agentivization of the exam" is linked to a "depersonalization of the disease", which arises from the lack of nominal and/or pronominal referents categorizing the fetus as the bearer of the disease and of the symptoms mentioned during the news-delivery sequence. The linguistic-interactional analysis of news delivery in this research setting reveals recurrent practices in the genetic-counseling event. Through these recurrences, the geneticist displays ways of dealing with the distress of patients and their companions that can be disseminated in the training of health professionals in areas where delivering diagnoses is a daily practice. / CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
33

過度自信與過度樂觀經理人對公司價值影響 / How overconfident and optimistic manager will affect firm value

施維筑, Shih, Wei Chu Unknown Date (has links)
In the real world, people do not make fully rational decisions, as the traditional school of finance assumes; since the 1980s, many phenomena have appeared that the traditional school cannot explain, which is why behavioral finance arose. This thesis uses behavioral-finance models to examine how managers who are overconfident or overly optimistic affect firm value. We find that a risk-neutral manager can maximize firm value, reaching what we call the "first-best value." A risk-averse manager, by contrast, incurs a utility cost that prevents reaching this goal; however, if that manager is also overconfident, this trait can offset the value loss caused by risk aversion and achieve the value maximization that shareholders desire. In addition, whereas Heaton's (2003) model holds that an optimistic manager cannot maximize firm value, this thesis modifies his model and shows that an overly optimistic manager may nevertheless act in shareholders' interests and reach the goal of firm-value maximization.
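The offsetting effect described above can be illustrated with a toy mean-variance calculation (a hypothetical stand-in, not Heaton's model or the thesis's formal model, and all numbers are made up): scaling down the manager's perceived variance plays the role of overconfidence and pushes the chosen exposure back toward the risk-neutral optimum.

```python
def chosen_exposure(mu, sigma2, risk_aversion, confidence=1.0):
    """Toy mean-variance choice: pick exposure a maximizing
    mu*a - 0.5*risk_aversion*perceived_sigma2*a**2, where an
    overconfident manager scales down perceived variance by `confidence`.
    The closed-form optimum is mu / (risk_aversion * perceived_sigma2)."""
    perceived_sigma2 = sigma2 / confidence
    return mu / (risk_aversion * perceived_sigma2)

# Illustrative target: the exposure a risk-neutral manager would pick.
first_best = 2.0

# A rational risk-averse manager under-invests relative to first-best...
rational = chosen_exposure(mu=0.1, sigma2=0.05, risk_aversion=2.0)

# ...but doubling perceived precision (overconfidence) offsets risk aversion.
confident = chosen_exposure(mu=0.1, sigma2=0.05, risk_aversion=2.0, confidence=2.0)

assert rational < confident == first_best
```

The point of the sketch is only the mechanism: two biases with opposite signs can cancel, so a behavioral trait that is individually harmful may be value-neutral or value-restoring in combination.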
34

Cognitive bias and welfare of egg-laying chicks: Impacts of commercial hatchery procedures on cognition.

Palazon, Tiphaine January 2020 (has links)
Egg-laying hen chicks from commercial hatcheries go through hatchery procedures that are considered stressful and that trigger a prolonged stress response in adult chickens. The aim of our study was to evaluate the impact of commercial hatchery procedures on the affective state of chicks, on their short- and long-term memory, and on their need for social reinstatement. To assess the chicks' affective state we used a cognitive-bias protocol built on the ecological response of a chick to a picture of another chick, to a picture of an owl, and to an ambiguous cue mixing features of both pictures. Short-term memory was evaluated with a delayed matching-to-sample experiment (with delays of 10, 30, 60 and 120 s), using conspecifics as sample stimuli. We assessed long-term memory with an arena containing multiple doors leading to conspecifics, in which a chick had to remember which door was open after a delay of one hour or three hours. Finally, we observed the need for social reinstatement with a sociality-test arena that allowed a chick to position itself closer to or farther from conspecifics. We found that chicks from the commercial hatchery were in a depressive affective state compared to the control group; these chicks also showed a higher need for social reinstatement and weight loss. No differences were found between the two groups in short- or long-term memory, but the methods used in these experiments are discussed. Studying how commercial procedures affect the cognition of chickens, and more specifically their emotions and state of mind, is a necessary step toward understanding farm-animal welfare.
35

Návrh podnikového finančního plánu / A Draft of a Corporate Financial Plan

Hajdová, Veronika January 2019 (has links)
The diploma thesis focuses on financial planning. The introductory chapter explains the basic theoretical terms needed to understand the topic. The aim of the thesis is to create a financial plan for a selected company for the next three years, which is also the subject of the main part of the thesis. The financial plan is compiled in two variants, optimistic and pessimistic, and both variants incorporate the consequences of the company's planned investment. The thesis concludes with an evaluation and control of the results of both variants of the financial plan.
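A two-variant plan of this kind boils down to projecting the same base figures under different growth assumptions. A minimal sketch, with purely hypothetical numbers (the thesis's actual figures and growth rates are not given in the abstract):

```python
def project_revenue(base, growth, years=3):
    """Compound a base revenue over `years` at annual rate `growth`,
    returning the year-by-year figures rounded to 2 decimals."""
    out = []
    for _ in range(years):
        base *= (1 + growth)
        out.append(round(base, 2))
    return out

# Hypothetical assumptions: +10% p.a. in the optimistic variant,
# -10% p.a. in the pessimistic one, from a base of 1000.
optimistic = project_revenue(1000.0, 0.10)
pessimistic = project_revenue(1000.0, -0.10)

assert optimistic == [1100.0, 1210.0, 1331.0]
assert pessimistic == [900.0, 810.0, 729.0]
```

In a real plan the same pattern is applied line by line (revenues, costs, capex of the planned investment), and the two variants differ only in the assumption set fed into the projection.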
36

The Effects of the Planning Fallacy and Organizational Error Management Culture on Occupational Self-Efficacy

Kuczmanski, Jacob John 21 March 2016 (has links)
No description available.
37

Extracting Parallelism from Legacy Sequential Code Using Transactional Memory

Saad Ibrahim, Mohamed Mohamed 26 July 2016 (has links)
Increasing the number of processors has become the mainstream approach in modern chip design. Most applications, however, were designed or written for single-core processors, so they do not benefit from the numerous underlying computing resources. Moreover, there is a large base of legacy software whose parallelization would require an immense rewriting and re-engineering effort. The past decades have therefore seen a growing interest in automatic parallelization, both to relieve programmers of the painful and error-prone manual parallelization process and to cope with the architectural trend toward multi-core and many-core CPUs. Automatic parallelization techniques vary in properties such as the level of parallelism (e.g., instructions, loops, traces, tasks); the need for custom hardware support; the use of optimistic execution versus conservative decisions; whether they operate online, offline, or both; and the level of source-code exposure. Transactional Memory (TM) has emerged as a powerful concurrency-control abstraction: it simplifies parallel programming to the level of coarse-grained locking while achieving fine-grained locking performance. This dissertation exploits TM as an optimistic execution approach for transforming sequential applications into parallel ones. It proposes the design and implementation of two frameworks that support automatic parallelization, Lerna and HydraVM, along with a number of algorithmic optimizations that make the parallelization effective. HydraVM is a virtual machine that automatically extracts parallelism from legacy sequential code (at the bytecode level) through a set of techniques including code profiling, data-dependency analysis, and execution analysis. HydraVM is built by extending the Jikes RVM and modifying its baseline compiler; program correctness is preserved by exploiting Software Transactional Memory (STM) to manage concurrent and out-of-order memory accesses. 
Our experiments show that HydraVM achieves speedups between 2× and 5× on a set of benchmark applications. Lerna is a compiler framework that automatically and transparently detects and extracts parallelism from sequential code through a set of techniques including code profiling, instrumentation, and adaptive execution. Lerna is cross-platform and independent of the programming language; its parallel execution exploits memory transactions to manage concurrent and out-of-order memory accesses, which makes it very effective for sequential applications with data sharing. This thesis introduces the general conditions for embedding any transactional memory algorithm into Lerna. In addition, ordered versions of four state-of-the-art algorithms have been integrated and evaluated on multiple benchmarks, including the RSTM micro-benchmarks, STAMP, and PARSEC; Lerna achieved an average 2.7× (and up to 18×) speedup over the original sequential code. While prior research shows that transactions must commit in order to preserve program semantics, enforcing that order constrains scalability at large core counts. In this dissertation, we eliminate the need to commit transactions sequentially, without affecting program consistency, by building a cooperation mechanism in which transactions can safely forward some of their changes. This approach eliminates some false conflicts and increases the concurrency of the parallel application. The thesis proposes a set of commit-order algorithms that follow this approach; using them, the peak gain over the sequential non-instrumented execution is 10× on the RSTM micro-benchmarks and 16.5× on STAMP. Another main contribution is to enhance the concurrency and performance of TM in general, and of its use for parallelization in particular, by extending the TM primitives. 
The extended TM primitives extract the embedded low-level application semantics without affecting the TM abstraction. Furthermore, since the proposed extensions capture common code patterns, they can be handled automatically during compilation; in this work, that was done by modifying the GCC compiler to support our TM extensions. Results showed speedups of up to 4× on applications including micro-benchmarks and STAMP. Our final contribution is supporting the commit order under Hardware Transactional Memory (HTM). Because the HTM contention manager is implemented in hardware and cannot be modified, we exploit HTM to reduce the transactional execution overhead by proposing two novel commit-order algorithms and a hybrid reduced-hardware algorithm. The use of HTM improves performance by up to a 20% speedup. / Ph. D.
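The in-order commit requirement discussed above can be sketched as follows (an illustration of the general technique, not the dissertation's implementation): speculative loop iterations run in any order, but each one publishes its buffered writes only when all earlier iterations have committed, preserving sequential program semantics.

```python
import threading

class OrderedCommitter:
    """Minimal sketch of in-order transaction commit for loop
    parallelization: iterations execute speculatively in any order,
    but their side effects are applied strictly in program order."""
    def __init__(self):
        self.next_to_commit = 0
        self.cv = threading.Condition()

    def commit(self, index, apply_writes):
        with self.cv:
            while index != self.next_to_commit:   # wait for our turn
                self.cv.wait()
            apply_writes()                        # publish buffered writes
            self.next_to_commit += 1
            self.cv.notify_all()                  # wake the next iteration

results = []
committer = OrderedCommitter()

def iteration(i):
    local = i * i                                 # speculative work in a private buffer
    committer.commit(i, lambda: results.append(local))

threads = [threading.Thread(target=iteration, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert results == [i * i for i in range(8)]       # program order preserved
```

The cooperation mechanism the dissertation describes relaxes exactly this bottleneck: instead of making every commit wait its turn, transactions forward some changes safely so that independent commits need not serialize.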
38

Designing, Modeling, and Optimizing Transactional Data Structures

Hassan, Ahmed Mohamed Elsayed 25 September 2015 (has links)
Transactional memory (TM) has emerged as a promising synchronization abstraction for multi-core architectures. Unlike traditional lock-based approaches, TM shifts the burden of implementing thread synchronization from the programmer to an underlying framework using hardware (HTM) and/or software (STM) components. Although TM can be leveraged to implement transactional data structures (i.e., those where multiple operations are allowed to execute atomically, all-or-nothing, according to the transaction paradigm), its intensive speculation may result in significantly lower performance than optimized concurrent data structures. This poor performance motivates the search for other, more effective alternatives for designing transactional data structures without losing the simple programming abstraction that TM proposes. To do so, we identified three major challenges that must be addressed to design efficient transactional data structures. The first challenge is composability: allowing an atomic execution of two or more data structure operations in the same way TM provides, but without its high overheads. The second challenge is integration: enabling the execution of data structure operations within generic transactions that may contain other memory-based operations. The last challenge is modeling: the need for a unified formal methodology to reason about the correctness of transactional data structures. In this dissertation, we propose different approaches to address these challenges. First, we address the composability challenge by introducing an optimistic methodology to efficiently convert concurrent data structures into transactional ones. Second, we address the integration challenge by injecting the semantic operations of those transactional data structures into TM frameworks, and by presenting two novel STM algorithms that enhance the overall performance of those frameworks. 
Finally, we address the modeling challenge by presenting two models for concurrent and transactional data structure designs.
• Our first main contribution in this dissertation is Optimistic Transactional Boosting (OTB), a methodology for designing transactional versions of the highly concurrent optimistic (i.e., lazy) data structures. An earlier (pessimistic) boosting proposal added a layer of abstract locks on top of existing concurrent data structures. Instead, we propose an optimistic boosting methodology, which allows greater data-structure-specific optimizations, easier integration with TM frameworks, and fewer restrictions on the operations than the original (more pessimistic) boosting methodology. Based on the proposed OTB methodology, we implement transactional versions of two list-based data structures (a set and a priority queue). We then present TxCF-Tree, a balanced tree whose design is optimized to support transactional accesses. The core optimizations of TxCF-Tree's operations are: a traversal phase that uses no locks or speculation, with lock acquisition and physical modification deferred to the transaction's commit phase; isolation of structural operations (such as re-balancing) in an interference-free housekeeping thread; and minimization of the interference between structural operations and the critical path of semantic operations (i.e., additions and removals on the tree).
• Our second main contribution is to integrate OTB with both STM and HTM algorithms. On the STM side, we extend the designs of DEUCE, a Java STM framework, and RSTM, a C++ STM framework, to support integration with OTB. Using our extension, programmers can include both OTB data structure operations and traditional memory reads/writes in the same transaction. Results show that OTB's performance is closer to that of the optimal lazy (non-transactional) data structures than the original boosting algorithm's. On the HTM side, we introduce a methodology for injecting semantic operations into the well-known hybrid transactional memory algorithms (e.g., HTM-GL, HyNOrec, and NOrecRH). In addition, we enhance the proposed semantically enabled HTM algorithms with a lightweight adaptation mechanism that bypasses the HTM paths if the overhead of the semantic operations causes repeated HTM aborts. Experiments on micro- and macro-benchmarks confirm that our proposals outperform the other TM solutions in almost all tested workloads.
• Our third main contribution is to enhance the performance of TM frameworks in general by introducing two novel STM algorithms. Remote Transaction Commit (RTC) is a mechanism for executing the commit phases of STM transactions on dedicated server cores. RTC shows significant improvements over its corresponding validation-based STM algorithm (up to 4× better), as it decreases the overhead of spin locking during commit in terms of cache misses, blocking of lock holders, and CAS operations. Remote Invalidation (RInval) applies the same idea to invalidation-based STM algorithms; furthermore, it allows more concurrency by executing commit and invalidation routines concurrently on different servers. RInval performs up to 10× better than its corresponding invalidation-based STM algorithm (InvalSTM), and up to 2× better than its corresponding validation-based algorithm (NOrec).
• Our fourth and final main contribution is a theoretical model for concurrent and transactional data structures. We exploit the similarities of the OTB-based data structures and provide a unified model to reason about the correctness of those designs. Specifically, we extend a recent approach that models data structures with concurrent readers and a single writer (SWMR), and we propose two novel models that additionally allow multiple writers and transactional execution. These models are more practical because they cover a wider set of data structures than the original SWMR model. / Ph. D.
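The deferred locking at the heart of optimistic boosting can be sketched as follows (an illustrative toy, not the dissertation's OTB implementation): a transaction buffers semantic operations on a set and acquires the per-key locks only at commit time, in a fixed order to avoid deadlock.

```python
import threading

class BoostedSet:
    """Toy set with per-key semantic locks, in the spirit of boosting:
    the structure itself stays a plain set; transactions coordinate
    through the key-level locks, acquired only at commit time."""
    def __init__(self):
        self.items = set()
        self.locks = {}                       # per-key locks, created on demand
        self.table_lock = threading.Lock()

    def lock_for(self, key):
        with self.table_lock:
            return self.locks.setdefault(key, threading.Lock())

class Transaction:
    """Buffers add/remove operations; nothing touches the shared set
    (and no lock is held) until commit."""
    def __init__(self, target):
        self.target = target
        self.write_set = []                   # buffered semantic operations

    def add(self, key):
        self.write_set.append(("add", key))

    def remove(self, key):
        self.write_set.append(("remove", key))

    def commit(self):
        keys = sorted({k for _, k in self.write_set})   # global order avoids deadlock
        held = [self.target.lock_for(k) for k in keys]
        for lock in held:
            lock.acquire()
        try:
            for op, key in self.write_set:    # replay buffered ops atomically
                if op == "add":
                    self.target.items.add(key)
                else:
                    self.target.items.discard(key)
        finally:
            for lock in held:
                lock.release()

s = BoostedSet()
tx = Transaction(s)
tx.add(1)
tx.add(2)
tx.remove(2)
tx.commit()
assert s.items == {1}
```

Keeping the traversal and buffering phase lock-free and pushing all locking to commit is what distinguishes this optimistic style from the earlier pessimistic boosting, which held abstract locks for the whole transaction.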
39

Des algorithmes presque optimaux pour les problèmes de décision séquentielle à des fins de collecte d'information / Near-Optimal Algorithms for Sequential Information-Gathering Decision Problems

Araya-López, Mauricio 04 February 2013 (has links)
This dissertation studies sequential decision problems in which acquiring information is an end in itself. More precisely, it first examines how to modify the POMDP formalism to model information-gathering problems, and proposes algorithms for solving them. This approach is then extended to reinforcement learning tasks in which the objective is to actively learn the model of a system. 
The dissertation also proposes a novel Bayesian reinforcement learning algorithm that uses optimistic local transitions to gather information efficiently while optimizing the expected return. Through a review of the literature, theoretical results, and empirical studies, it shows that these information-gathering problems are optimally solvable in theory, that the proposed methods are near-optimal, and that they offer comparable or better results than reference approaches. Beyond these specific results, this dissertation paves the way (1) for a better understanding of the relationship between information gathering and optimal policies in sequential decision processes, and (2) for extending the large body of work on controlling a system's state to information-gathering problems.
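The information-gathering objective can be made concrete with a small sketch (an illustration of the general idea, not the thesis's algorithms): a discrete belief over hidden states is updated by Bayes' rule after each observation, and the drop in its entropy measures the information gained, which is the kind of quantity an information-gathering reward can be built on.

```python
import math

def update_belief(belief, likelihoods):
    """Bayes update of a discrete belief over hidden states, given the
    likelihood of the received observation under each state."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(posterior)                         # normalization constant
    return [p / z for p in posterior]

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief: lower = more certain."""
    return -sum(p * math.log2(p) for p in belief if p > 0)

# Uniform belief over two states; an informative observation (likelihood
# 0.9 under state 0, 0.1 under state 1) sharpens the belief.
prior = [0.5, 0.5]
posterior = update_belief(prior, [0.9, 0.1])

assert all(abs(p - e) < 1e-12 for p, e in zip(posterior, [0.9, 0.1]))
assert entropy(posterior) < entropy(prior)     # uncertainty decreased
```

In a POMDP framed for information gathering, the agent chooses actions whose expected observations shrink this entropy fastest, rather than actions that drive the hidden state somewhere.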
40

Master/worker parallel discrete event simulation

Park, Alfred John 16 December 2008 (has links)
The execution of parallel discrete event simulation across metacomputing infrastructures is examined. A master/worker architecture for parallel discrete event simulation is proposed that provides robust execution under a dynamic set of services, with system-level support for fault tolerance, semi-automated client-directed load balancing, portability across heterogeneous machines, and the ability to run codes on idle or time-sharing clients without significant user interaction. Research questions and challenges associated with the work-distribution paradigm, the targeted computational domain, performance metrics, and the intended class of applications are analyzed and discussed. A portable web-services approach to master/worker parallel discrete event simulation is proposed and evaluated, with subsequent optimizations that increase the efficiency of large-scale simulation execution through a distributed master-service design and the reduction of intrinsic overhead. New techniques are proposed and examined for the challenges that optimistic parallel discrete event simulation raises in a metacomputing setting, such as rollbacks and message unsending, using an inherently different computation paradigm built on master services and time windows. Results indicate that a master/worker approach utilizing loosely coupled resources is a viable means of high-throughput parallel discrete event simulation, enhancing existing computational capacity or providing alternate execution capability for less time-critical codes.
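The work-distribution side of such an architecture can be sketched minimally (hypothetical code, not the thesis's system): a master pops timestamped events from a priority queue and hands them to interchangeable workers through a task queue, so workers can join or leave between pulls.

```python
import heapq
import queue
import threading

def run_master_worker(events, n_workers=3):
    """Minimal master/worker sketch for event processing: the master
    dispatches (timestamp, payload) events in timestamp order; stateless
    workers process them and return results via a shared queue."""
    heapq.heapify(events)                      # min-heap on timestamp
    tasks, results = queue.Queue(), queue.Queue()

    def worker():
        while True:
            item = tasks.get()
            if item is None:                   # poison pill: shut down
                return
            ts, payload = item
            results.put((ts, payload * 2))     # stand-in for real event handling

    pool = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in pool:
        t.start()

    n = len(events)
    while events:
        tasks.put(heapq.heappop(events))       # master dispatches in order
    for _ in pool:
        tasks.put(None)

    out = sorted(results.get() for _ in range(n))
    for t in pool:
        t.join()
    return out

print(run_master_worker([(3, 30), (1, 10), (2, 20)]))
# [(1, 20), (2, 40), (3, 60)]
```

A real optimistic PDES master would additionally track each worker's virtual time and a time window, so that a straggler event triggers rollback or message unsending; the sketch shows only the loosely coupled dispatch pattern that makes fault tolerance and elastic worker pools possible.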
