261

Programming High-Performance Clusters with Heterogeneous Computing Devices

Aji, Ashwin M. 19 May 2015 (has links)
Today's high-performance computing (HPC) clusters are seeing an increase in the adoption of accelerators like GPUs, FPGAs and co-processors, leading to heterogeneity in the computation and memory subsystems. To program such systems, application developers typically employ a hybrid programming model of MPI across the compute nodes in the cluster and an accelerator-specific library (e.g., CUDA, OpenCL, OpenMP, OpenACC) across the accelerator devices within each compute node. Such explicit management of disjoint computation and memory resources leads to reduced productivity and performance. This dissertation focuses on designing, implementing and evaluating a runtime system for HPC clusters with heterogeneous computing devices. This work also explores extending existing programming models to make use of our runtime system for easier code modernization of existing applications. Specifically, we present MPI-ACC, an extension to the popular MPI programming model and runtime system for efficient data movement and automatic task mapping across the CPUs and accelerators within a cluster, and discuss the lessons learned. MPI-ACC's task-mapping runtime subsystem performs fast and automatic device selection for a given task. MPI-ACC's data-movement subsystem includes careful optimizations for end-to-end communication among CPUs and accelerators, which are seamlessly leveraged by the application developers. MPI-ACC provides a familiar, flexible and natural interface for programmers to choose the right computation or communication targets, while its runtime system achieves efficient cluster utilization. / Ph. D.
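The abstract contrasts MPI-ACC with the conventional hybrid model in which device data must be explicitly staged through host memory before MPI can move it between nodes. The sketch below illustrates only that baseline pattern, not MPI-ACC's own API; the function, buffer names and ring-neighbor ranks are hypothetical.

```cuda
// Baseline hybrid MPI + CUDA exchange that MPI-ACC aims to subsume: GPU data is
// explicitly staged through host buffers before plain MPI can move it between
// nodes. Illustrative sketch only; error checking omitted, names hypothetical.
#include <mpi.h>
#include <cuda_runtime.h>
#include <vector>

static void exchange_halo(double* d_field, int n, int left, int right, MPI_Comm comm)
{
    std::vector<double> h_send(n), h_recv(n);

    // 1. Stage device data into a host buffer.
    cudaMemcpy(h_send.data(), d_field, n * sizeof(double), cudaMemcpyDeviceToHost);

    // 2. Move the host buffers across nodes with ordinary MPI.
    MPI_Sendrecv(h_send.data(), n, MPI_DOUBLE, right, 0,
                 h_recv.data(), n, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);

    // 3. Copy the received data back to the device.
    cudaMemcpy(d_field, h_recv.data(), n * sizeof(double), cudaMemcpyHostToDevice);
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1 << 20;
    double* d_field;
    cudaMalloc(&d_field, n * sizeof(double));

    // Ring exchange with the neighboring ranks.
    exchange_halo(d_field, n, (rank - 1 + size) % size, (rank + 1) % size, MPI_COMM_WORLD);

    cudaFree(d_field);
    MPI_Finalize();
    return 0;
}
```

Folding this staging into the MPI runtime, as the abstract describes, is what removes the explicit copies from application code and lets the runtime apply its end-to-end communication optimizations.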
262

Improving Bio-Inspired Frameworks

Varadarajan, Aravind Krishnan 05 October 2018 (has links)
In this thesis, we provide solutions to two different bio-inspired algorithms. The first is enhancing the performance of bio-inspired test generation for circuits described in RTL Verilog, specifically for branch coverage. We seek to improve upon an existing framework, BEACON, in terms of performance. BEACON is an Ant Colony Optimization (ACO) based test generation framework. Like other ACO frameworks, BEACON has good scope for improving performance through parallel computing. We try to exploit the available parallelism using both multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Using our new multithreaded approach, we can reduce test generation time by a factor of 25 compared to the original implementation for a wide variety of circuits. We also provide a 2-dimensional factoring method for BEACON to improve the available parallelism and yield some additional speedup. The second bio-inspired algorithm we address is for Deep Neural Networks (DNNs). With the increasing prevalence of neural nets in artificial intelligence and mission-critical applications such as self-driving cars, questions arise about their reliability and robustness. We have developed a test-generation-based technique and metric to evaluate the robustness of a neural net's outputs based on its sensitivity to its inputs. This is done by generating inputs which the neural net finds difficult to classify but which remain relatively apparent to human perception. We measure the degree of difficulty of generating such inputs to calculate our metric. / MS / High-level Hardware Design Languages (HDLs) have allowed designers to implement complicated hardware designs with considerably less effort. Unfortunately, design verification for the same circuits has failed to scale gracefully in terms of time and effort. Not only has it become more difficult for formal methods, due to the exponential complexity of increasing path explosion, but concrete test generation frameworks also face new issues such as the increased volume of simulations required. The advent of parallel computing using General Purpose Graphics Processing Units (GPGPUs) has led to improved performance for various applications. We propose to leverage both the multi-core CPU and the GPGPU for RTL test generation. This is achieved by implementing a test generation framework that can utilize the SIMD-type parallelism available on GPGPUs and the task-level parallelism available on CPUs (a sketch of this GPU-side vector evaluation follows this record). The speedup achieved comes both from the test generation framework itself and from refactoring the hardware model for multi-threaded test generation. For this purpose, we translate the RTL Verilog into a C++ and a CUDA compilable program. Experimental results show that considerable speedup can be achieved for test generation without loss of coverage. In recent years, machine learning and artificial intelligence have taken a substantial leap forward with the emergence of Deep Neural Networks (DNNs). Unfortunately, apart from Accuracy and FTest numbers, there exist very few metrics to qualify a DNN. This becomes a reliability issue, as DNNs are quite frequently used in safety-critical applications. It is difficult to interpret how the parameters of a trained DNN store the knowledge from the training inputs. It is therefore also difficult to infer whether a DNN has learned parameters that might cause an output neuron to misfire, which constitutes a bug. An exhaustive search of the input space of the DNN is not only infeasible but also misleading.
Thus, in our work, we try to apply test generation techniques to generate new test inputs, based on the existing training and testing sets, to qualify the underlying robustness. Attempts to generate these inputs are guided only by the prediction probability values at the final output layer. We observe that, depending on the amount of perturbation and time needed to generate these inputs, we can differentiate between DNNs of varying quality.
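As a rough illustration of the SIMD-type parallelism the abstract exploits for RTL test generation, the hedged sketch below evaluates many candidate test vectors against a toy combinational model in parallel, one thread per vector. The circuit, fitness measure and sizes are invented for illustration and are not BEACON's actual translated model.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Toy combinational "circuit": counts which of a few hard-coded branch
// conditions a 32-bit input vector activates. Purely illustrative; the real
// models are translated from RTL Verilog and are far richer.
__device__ int branches_hit(unsigned v)
{
    int hits = 0;
    if ((v & 0xFF) == 0xA5)      ++hits;   // branch 1
    if (((v >> 8) & 0xFF) > 200) ++hits;   // branch 2
    if (__popc(v) == 16)         ++hits;   // branch 3
    return hits;
}

// One thread per candidate test vector: SIMD-style parallel evaluation.
__global__ void evaluate_vectors(const unsigned* vecs, int* fitness, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) fitness[i] = branches_hit(vecs[i]);
}

int main()
{
    const int n = 1 << 20;
    unsigned* d_vecs;  int* d_fit;
    cudaMalloc(&d_vecs, n * sizeof(unsigned));
    cudaMalloc(&d_fit,  n * sizeof(int));
    // (fill d_vecs with candidate vectors, e.g. sampled from the ACO pheromone model)
    evaluate_vectors<<<(n + 255) / 256, 256>>>(d_vecs, d_fit, n);
    cudaDeviceSynchronize();
    printf("evaluated %d candidate vectors\n", n);
    cudaFree(d_vecs); cudaFree(d_fit);
    return 0;
}
```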
263

Accelerating Hardware Simulation on Multi-cores

Nanjundappa, Mahesh 04 June 2010 (has links)
Electronic design automation (EDA) tools play a central role in bridging the productivity gap for designing complex hardware systems. However, with the increase in the size and complexity of today's design requirements, current methodologies and EDA tools are unable to effectively mitigate the further widening of this gap. It is estimated that testing and verification take two-thirds of the total development time of complex hardware systems. Functional simulation forms the mainstay of the testing and verification process and is its most widely used technique. Most simulation algorithms and their implementations are designed for uniprocessor systems and cannot easily leverage the parallelism of multi-core and GPU platforms. For example, logic simulation often uses levelized sequential algorithms, whereas the discrete-event simulation frameworks for Verilog, VHDL and SystemC employ concurrency in the form of multi-threading to give an illusion of the inherent parallelism present in circuits. However, the discrete-event model of computation requires a global notion of an event queue, which makes improving its simulation performance via parallelization even more challenging. This work investigates automatic parallelization of the simulation algorithms used to simulate hardware models. In particular, we focus on parallelizing the simulation of hardware designs described at the RTL using SystemC/HDL, with examples to clearly describe the parallelization. Even though multi-cores and GPUs offer parallelism, efficiently exploiting it through their programming models is not straightforward. To overcome this, we also focus our research on building intelligent translators that map simulation applications onto multi-cores and GPUs so that the complexity of the low-level programming models is hidden from the designers. / Master of Science
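The abstract notes that logic simulation traditionally evaluates a levelized netlist sequentially. One way to expose the circuit's inherent parallelism, sketched below under an assumed flattened netlist layout, is to evaluate all gates in the same topological level concurrently, launching one kernel per level; the gate encoding and data structures are hypothetical, not those of the thesis.

```cuda
#include <cuda_runtime.h>

// Hypothetical flattened netlist: each gate has an opcode and two fanin indices
// into a signal-value array. Gates are grouped by topological level, so every
// gate in a level depends only on signals computed in earlier levels.
enum Op { AND = 0, OR = 1, XOR = 2, NOT = 3 };

struct Gate { int op; int in0; int in1; int out; };

__global__ void eval_level(const Gate* gates, int first, int count, int* value)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;
    Gate g = gates[first + i];
    int a = value[g.in0], b = value[g.in1];
    int r;
    switch (g.op) {
        case AND: r = a & b; break;
        case OR:  r = a | b; break;
        case XOR: r = a ^ b; break;
        default:  r = !a;    break;   // NOT uses only in0
    }
    value[g.out] = r;   // safe: no two gates in one level share an output
}

// Host side: one launch per level, in topological order.
void simulate(const Gate* d_gates, const int* level_start, int num_levels, int* d_value)
{
    for (int l = 0; l < num_levels; ++l) {
        int count = level_start[l + 1] - level_start[l];
        eval_level<<<(count + 255) / 256, 256>>>(d_gates, level_start[l], count, d_value);
    }
    cudaDeviceSynchronize();
}
```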
264

Construção de mosaico de imagens aéreas em plataformas heterogêneas para aplicações agrícolas / Construction of an aerial imagery mosaic on heterogeneous platforms for agricultural applications

Candido, Leandro Rosendo 29 March 2019 (has links)
Precision agriculture has added great value for farmers because of the technologies associated with it. Systems that extract information from digital images are widely used to help farmers make decisions that increase their productivity. One monitoring technique is the construction of a mosaic of aerial images captured by aircraft flying at low altitude. Depending on the configuration of the computer that runs it, this construction can take tens of hours to complete. In order to reduce this construction time and make it feasible to embed the application on board, this work presents a simplified way of building the aerial image mosaic based on direct georeferencing, using heterogeneous computing to accelerate performance. The approach is composed of only three of the techniques that also make up the classical mosaicking pipeline (warping, feature extraction and feature matching), and additionally feeds the data provided by the GPS and IMU sensors into the calculations in order to orient and position each image belonging to the set that forms the mosaic. The heterogeneous computing platform used in this work is the NVIDIA Jetson TK1, chosen because it provides a GPU that supports the CUDA programming language. With this approach, the lack of perspective correction of the image content (geometry) produces an unexpected result, because the data provided by the IMU, contrary to what one might expect, only serve to correct the position of the GPS coordinates recorded at the moment each image composing the mosaic is captured. The execution time of the developed application is satisfactory, making the adoption of this approach feasible.
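Of the three techniques the abstract names, warping maps most directly onto the GPU. The hedged sketch below places one grayscale frame into a mosaic canvas using a yaw angle from the IMU and a GPS-derived translation, one thread per source pixel; the geometry, parameter names and nearest-neighbor forward mapping are illustrative simplifications, not the thesis's actual implementation.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hedged sketch of direct-georeferencing placement: each frame is rotated by the
// IMU yaw angle and translated to its GPS-derived offset in the mosaic canvas.
// Forward nearest-neighbor mapping is used for brevity (real warping would use
// an inverse mapping with interpolation); all parameters are illustrative.
__global__ void place_frame(const unsigned char* frame, int fw, int fh,
                            unsigned char* mosaic, int mw, int mh,
                            float yaw, float tx, float ty, float scale)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= fw || y >= fh) return;

    // Rotate around the frame centre, scale to mosaic resolution, then translate.
    float cx = x - fw * 0.5f, cy = y - fh * 0.5f;
    float mx = scale * ( cosf(yaw) * cx - sinf(yaw) * cy) + tx;
    float my = scale * ( sinf(yaw) * cx + cosf(yaw) * cy) + ty;

    int ix = (int)lroundf(mx), iy = (int)lroundf(my);
    if (ix >= 0 && ix < mw && iy >= 0 && iy < mh)
        mosaic[iy * mw + ix] = frame[y * fw + x];   // grayscale, last write wins
}
```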
265

Soluções aproximadas para algoritmos escaláveis de mineração de dados em domínios de dados complexos usando GPGPU / On approximate solutions to scalable data mining algorithms for complex data problems using GPGPU

Mamani, Alexander Victor Ocsa 22 September 2011 (has links)
The increasing availability of data in diverse domains has motivated the development of techniques to discover knowledge from huge volumes of complex data, giving rise to many research works in the databases, data mining and information retrieval communities. Recent studies suggest that searching in complex data is an important research field, because many data mining tasks such as classification, clustering and motif discovery depend on nearest-neighbor search algorithms. Many deterministic approaches have been proposed to solve the nearest-neighbor search problem in complex domains, aiming to reduce the effects of the well-known curse of dimensionality, while probabilistic algorithms have been only slightly explored. Recent techniques relax the quality of the query results in order to reduce the computational cost of the search. Moreover, in large-scale problems, an approximate solution with a solid theoretical analysis is often more appropriate than an exact solution with a weak theoretical model.
On the other hand, even though several exact and approximate search and mining solutions have been proposed, single-CPU architectures impose limits on the performance these kinds of solution can deliver. One approach to improving the runtime of data mining and information retrieval techniques by orders of magnitude is to employ emerging many-core parallel architectures such as CUDA-enabled GPUs. In this context, this work presents a high-performance kNN query algorithm based on hashing and massively parallel CUDA implementations. The proposed technique is based on the LSH scheme, i.e., it uses projections onto subspaces (a minimal sketch of this hashing step follows this record); LSH is an approximate method and has the advantage of allowing sublinear-cost queries over high-dimensional data. The massively parallel implementations are used to improve data mining tasks: specifically, high-performance solutions were developed for (soft) real-time time-series motif discovery algorithms based on parallel kNN queries. The massively parallel CUDA implementations made it possible to carry out experimental studies on large real and synthetic datasets. The performance evaluation carried out in this work on a GeForce GTX 470 GPU resulted in average speedups of up to 7x over the state of the art in similarity search and motif discovery.
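As a rough illustration of the LSH scheme the abstract builds on, the sketch below computes p-stable random-projection hash codes, h(x) = floor((a . x + b) / w), for many points in parallel; the data layout, parameters and grid configuration are assumptions, not the thesis's actual code.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hedged sketch of p-stable LSH hashing: one thread hashes one point for one of
// the k hash functions. Dimensions, w and the bucket layout are illustrative.
__global__ void lsh_hash(const float* points,   // n x dim, row-major
                         const float* a,        // k x dim random projections
                         const float* b,        // k offsets in [0, w)
                         int* codes,            // n x k output hash codes
                         int n, int dim, int k, float w)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // point index
    int j = blockIdx.y;                              // hash function index
    if (i >= n || j >= k) return;

    float dot = 0.0f;
    for (int d = 0; d < dim; ++d)
        dot += points[i * dim + d] * a[j * dim + d];

    codes[i * k + j] = (int)floorf((dot + b[j]) / w);
}
```

Query candidates are then restricted to points whose codes collide with the query's, and only those candidates are ranked by exact distance, which is where the sublinear query cost comes from.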
266

Resolução numérica de escoamentos compressíveis empregando um método de partículas livre de malhas e o processamento em paralelo (CUDA) / Numerical resolution of compressible flows employing a meshfree particle method and CUDA

Josecley Fialho Góes 25 August 2011 (has links)
Conventional mesh-based numerical methods have been widely applied to solving problems in Computational Fluid Dynamics. However, in fluid flow problems involving free surfaces, large explosions, large deformations, discontinuities, shock waves, etc., these methods suffer from practical difficulties that limit their application. Meshfree particle methods have emerged as a viable alternative to conventional grid-based methods. This work introduces Smoothed Particle Hydrodynamics (SPH), a meshfree Lagrangian particle method, aimed at the numerical simulation of compressible and quasi-incompressible Newtonian fluid flows. Two numerical codes were developed, a serial and a parallel version, using the C/C++ programming language and the Compute Unified Device Architecture (CUDA), which enables parallel processing on the cores of the Graphics Processing Units (GPUs) of NVIDIA Corporation graphics cards. The numerical results were validated and the computational efficiency (speedup) evaluated by solving the one-dimensional Shock Tube and Blast Wave problems and the two-dimensional Shear Driven Cavity problem.
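A core SPH building block implied by the abstract is the kernel-weighted density summation, rho_i = sum_j m_j W(|x_i - x_j|, h). The hedged sketch below computes it with a standard 1D cubic-spline kernel and a brute-force neighbor loop for clarity; the simulator's actual kernel choice, dimensionality and neighbor search may differ.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Standard 1D cubic-spline smoothing kernel (normalization 2/(3h)).
__device__ float cubic_spline_1d(float r, float h)
{
    float q = fabsf(r) / h, sigma = 2.0f / (3.0f * h);
    if (q < 1.0f) return sigma * (1.0f - 1.5f * q * q + 0.75f * q * q * q);
    if (q < 2.0f) { float t = 2.0f - q; return sigma * 0.25f * t * t * t; }
    return 0.0f;
}

// One thread per particle; brute-force O(N^2) neighbor loop for illustration
// (production SPH codes use cell lists or similar neighbor structures).
__global__ void density(const float* x, const float* m, float* rho, int n, float h)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float sum = 0.0f;
    for (int j = 0; j < n; ++j)
        sum += m[j] * cubic_spline_1d(x[i] - x[j], h);
    rho[i] = sum;
}
```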
267

Paralelização do algoritmo FDK para reconstrução 3D de imagens tomográficas usando unidades gráficas de processamento e CUDA-C / Parallelization of the FDK algorithm for 3D reconstruction of tomographic images using graphics processing units and CUDA-C

Joel Sánchez Domínguez 12 January 2012 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Imaging using computed tomography has revolutionized the diagnosis of diseases in medicine and is widely used in different areas of scientific research. As part of the process of obtaining three-dimensional tomographic images, a set of radiographs is processed by a computational algorithm; the most widely used today is the Feldkamp, Davis and Kress (FDK) algorithm. The use of parallel processing to speed up the calculations in computational algorithms, with the different technologies available on the market, has proven useful for reducing processing times. This work presents the parallelization of the FDK three-dimensional image reconstruction algorithm using graphics processing units (GPUs) and the CUDA-C language. GPUs are presented as a viable option for parallel computing, and the introductory concepts associated with computed tomography, GPUs, CUDA-C and parallel processing are covered. The parallel version of the FDK algorithm executed on the GPU is compared with a serial version of the same algorithm, showing higher processing speed. Performance tests were carried out on two GPUs of different capacities: the NVIDIA GeForce 9400GT (16 cores) and the NVIDIA Quadro 2000 (192 cores).
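The compute-heavy step of FDK is the voxel-driven backprojection of the filtered projections. The sketch below shows one common CUDA mapping, one thread per voxel of a slice, looping over projection angles; it assumes an idealized circular cone-beam orbit and pre-filtered, pre-weighted projections, and every symbol (dso, dsd, du, dv, ...) is an illustrative assumption rather than the thesis's actual geometry handling.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Hedged sketch of FDK voxel-driven backprojection. proj holds filtered
// projections laid out [na][nv][nu]; vol is the output volume [nz][ny][nx].
__global__ void fdk_backproject(const float* proj, float* vol,
                                int nx, int ny, int nz, int nu, int nv, int na,
                                float dx, float du, float dv, float dso, float dsd)
{
    const float PI = 3.14159265358979f;
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    int iz = blockIdx.z;
    if (ix >= nx || iy >= ny || iz >= nz) return;

    float x = (ix - 0.5f * nx) * dx, y = (iy - 0.5f * ny) * dx, z = (iz - 0.5f * nz) * dx;
    float acc = 0.0f;

    for (int a = 0; a < na; ++a) {
        float beta = 2.0f * PI * a / na;                 // source angle
        float s = x * cosf(beta) + y * sinf(beta);       // parallel to the detector
        float t = -x * sinf(beta) + y * cosf(beta);      // toward/away from the source
        float U = dso + t;                               // source-to-voxel-plane distance
        float u = dsd * s / U, v = dsd * z / U;          // detector coordinates
        int iu = (int)lroundf(u / du + 0.5f * nu);
        int iv = (int)lroundf(v / dv + 0.5f * nv);
        if (iu < 0 || iu >= nu || iv < 0 || iv >= nv) continue;
        float w = (dso * dso) / (U * U);                 // FDK distance weighting
        acc += w * proj[(a * nv + iv) * nu + iu];
    }
    vol[(iz * ny + iy) * nx + ix] = acc * (2.0f * PI / na);
}
```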
268

Desenvolvimento de um simulador numérico empregando o método Smoothed Particle Hydrodynamics para a resolução de escoamentos incompressíveis. Implementação computacional em paralelo (CUDA) / Numerical modelling of incompressible flows with the smoothed particle hydrodynamics method. Implementation of parallel numerical algorithms using CUDA

Marciana Lima Góes 30 August 2012 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / In this work, a numerical simulator was developed based on the meshfree Smoothed Particle Hydrodynamics (SPH) method to solve incompressible Newtonian fluid flows. Unlike most existing versions of the method, the numerical code uses an iterative technique to determine the pressure field. This procedure employs the differential form of an equation of state for a compressible fluid together with the continuity equation in order to determine the pressure correction. A parallel version of the numerical simulator was implemented using the C/C++ programming language and NVIDIA Corporation's Compute Unified Device Architecture (CUDA). Three problems were simulated: the one-dimensional Couette flow and the two-dimensional Shear Driven Cavity and Dambreak problems; the numerical results were validated and the speed-up evaluated.
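The distinguishing element the abstract describes is the iterative pressure correction driven by the differential equation of state, dp = c^2 drho, with the density deviation supplied by the continuity equation. The hedged sketch below shows only that correction step applied to dummy data; the sound speed c, reference density rho0, relaxation factor and the convergence loop around it are illustrative assumptions, not the thesis's code.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// One pressure-correction step: the density deviation (obtained from the
// continuity equation in the full method) is converted into a pressure
// increment via dp = c^2 * drho, scaled by a relaxation factor.
__global__ void pressure_correction(const float* rho, float* p, int n,
                                    float rho0, float c2, float relax)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i] += relax * c2 * (rho[i] - rho0);
}

int main()
{
    const int n = 4;
    std::vector<float> h_rho = {1000.f, 1004.f, 998.f, 1001.f};   // dummy densities
    std::vector<float> h_p(n, 0.f);
    float *d_rho, *d_p;
    cudaMalloc(&d_rho, n * sizeof(float));
    cudaMalloc(&d_p,   n * sizeof(float));
    cudaMemcpy(d_rho, h_rho.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_p,   h_p.data(),   n * sizeof(float), cudaMemcpyHostToDevice);

    // In the full method this launch sits inside a loop that re-evaluates the
    // density via the continuity equation and stops when the deviation is small.
    pressure_correction<<<1, 32>>>(d_rho, d_p, n, 1000.f, 100.f * 100.f, 0.5f);

    cudaMemcpy(h_p.data(), d_p, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("p[%d] = %f\n", i, h_p[i]);
    cudaFree(d_rho); cudaFree(d_p);
    return 0;
}
```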
