31

Um método baseado em inteligência computacional para a geração automática de casos de teste de caixa preta. / A method based on computational intelligence for automatic Black Box test cases generation.

Hindenburgo Elvas Gonçalves de Sá 09 September 2010 (has links)
This dissertation presents a method based on computational intelligence techniques, such as rule-set learning, artificial neural networks and fuzzy logic, and proposes the development of tools capable of generating and classifying black-box test cases. The goals are to assist in test preparation, to detect defects in features or functionalities, and to reduce the time needed to detect and correct software defects, thereby achieving test coverage qualitatively superior to manual test creation. The generation of new test cases and the classification of the generated cases use rule-set learning techniques based on sequential covering algorithms, together with a fuzzy inference engine. The choice of methods, both for generating and for classifying test cases, was grounded in experiments comparing the fuzzy, artificial-neural-network and rule-set-learning approaches. Finally, a proof-of-concept tool was developed to apply the methods that obtained the best results in the experiments. The criteria adopted to select the methods were the cyclomatic complexity and total lines of code (LOC) metrics.
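The abstract describes classifying generated test cases with a fuzzy inference engine. A minimal sketch of that idea follows, assuming hypothetical inputs ("coverage" and "defect history" scores in [0, 1]), made-up membership functions and a made-up rule base; it illustrates fuzzy rule evaluation in general, not the dissertation's tool.

```python
# Minimal sketch of fuzzy classification of generated test cases (not the
# dissertation's tool).  Each test case is scored on two hypothetical inputs,
# "coverage" and "defect history", both in [0, 1]; a small made-up rule base
# assigns a priority via weighted-average (Sugeno-style) defuzzification.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def low(x):
    return tri(x, -0.5, 0.0, 0.5)

def high(x):
    return tri(x, 0.5, 1.0, 1.5)

def classify(coverage, defect_history):
    # (rule strength, priority output) pairs -- hypothetical rule base
    rules = [
        (min(high(coverage), high(defect_history)), 0.9),  # run early
        (min(high(coverage), low(defect_history)), 0.6),
        (min(low(coverage), high(defect_history)), 0.5),
        (min(low(coverage), low(defect_history)), 0.1),    # low priority
    ]
    num = sum(strength * out for strength, out in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den else 0.0

print(classify(0.8, 0.7))  # high-priority test case
print(classify(0.2, 0.1))  # low-priority test case
```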
32

Generative fixation: a unified explanation for the adaptive capacity of simple recombinative genetic algorithms

Burjorjee, Keki M. January 2009 (has links)
Thesis (Ph. D.)--Brandeis University, 2009. / "UMI:3369218." Microfilm copy also available in the University Archives. Includes bibliographical references.
33

Intractability Results for some Computational Problems

Ponnuswami, Ashok Kumar 08 July 2008 (has links)
In this thesis, we show results for some well-studied problems from learning theory and combinatorial optimization.

Learning Parities under the Uniform Distribution: We study the learnability of parities in the agnostic learning framework of Haussler and Kearns et al. We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem. Together with the parity learning algorithm of Blum et al., this gives the first nontrivial algorithm for agnostic learning of parities. We use similar techniques to reduce learning of two other fundamental concept classes under the uniform distribution to learning of noisy parities: learning DNF expressions reduces to learning noisy parities of just a logarithmic number of variables, and learning k-juntas reduces to learning noisy parities of k variables.

Agnostic Learning of Halfspaces: We give an essentially optimal hardness result for agnostic learning of halfspaces over the rationals. We show that for any constant ε, finding a halfspace that agrees with an unknown function on a 1/2 + ε fraction of examples is NP-hard even when there exists a halfspace that agrees with the unknown function on a 1 - ε fraction of examples. This significantly improves on a number of previous hardness results for this problem. We extend the result to ε = 2^(-Ω(√(log n))) assuming NP is not contained in DTIME(2^((log n)^O(1))).

Majorities of Halfspaces: We show that majorities of halfspaces are hard to PAC-learn using any representation, based on the cryptographic assumption underlying the Ajtai-Dwork cryptosystem. This also implies a hardness result for learning halfspaces with a high rate of adversarial noise, even if the learning algorithm can output any efficiently computable hypothesis.

Max-Clique, Chromatic Number and Min-3Lin-Deletion: We prove an improved hardness of approximation result for two problems: finding the size of the largest clique in a graph (the Max-Clique problem) and finding the chromatic number of a graph. We show that for any constant γ > 0, there is no polynomial-time algorithm that approximates these problems within a factor of n / 2^((log n)^(3/4 + γ)) in an n-vertex graph, assuming NP is not contained in BPTIME(2^((log n)^O(1))). This improves the hardness factor of n / 2^((log n)^(1 - γ')) for some small (unspecified) constant γ' > 0 shown by Khot. Our main idea is to show an improved hardness result for the Min-3Lin-Deletion problem. An instance of Min-3Lin-Deletion is a system of linear equations modulo 2, where each equation is over three variables. The objective is to find the minimum number of equations that need to be deleted so that the remaining system of equations has a satisfying assignment. We show a hardness factor of 2^(√(log n)) for this problem, improving upon the hardness factor of (log n)^β shown by Håstad, for some small (unspecified) constant β > 0. The hardness results for Max-Clique and chromatic number are then obtained using the reduction from Min-3Lin-Deletion given by Khot.

Monotone Multilinear Boolean Circuits for Bipartite Perfect Matching: A monotone Boolean circuit is said to be multilinear if, for any AND gate in the circuit, the minimal representations of the two input functions to the gate have no variable in common. We show that monotone multilinear Boolean circuits for computing bipartite perfect matching require exponential size. In fact, we prove a stronger result by characterizing the structure of the smallest monotone multilinear Boolean circuits for the problem.
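To make the Min-3Lin-Deletion objective concrete, here is a small brute-force sketch; the four-equation instance is hypothetical and the exhaustive search is only viable for tiny systems, whereas the thesis is concerned with the problem's hardness of approximation.

```python
from itertools import product

# Brute-force illustration of Min-3Lin-Deletion: each equation (i, j, k, b)
# stands for x_i + x_j + x_k = b (mod 2).  We try every assignment and count
# the violated equations; the minimum over assignments is the number of
# equations that must be deleted for the rest to be simultaneously satisfiable.
def min_3lin_deletion(n_vars, equations):
    best = len(equations)
    for assignment in product((0, 1), repeat=n_vars):
        violated = sum(
            (assignment[i] ^ assignment[j] ^ assignment[k]) != b
            for i, j, k, b in equations
        )
        best = min(best, violated)
    return best

# Hypothetical 4-variable, 4-equation instance, for illustration only.
eqs = [(0, 1, 2, 0), (1, 2, 3, 1), (0, 2, 3, 1), (0, 1, 3, 0)]
print(min_3lin_deletion(4, eqs))
```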
34

Mathematical foundations of graded knowledge spaces

Bartl, Eduard. January 2009 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009. / Includes bibliographical references.
35

On learning assumptions for compositional verification of probabilistic systems

Feng, Lu January 2014 (has links)
Probabilistic model checking is a powerful formal verification method that can ensure the correctness of real-life systems that exhibit stochastic behaviour. The work presented in this thesis aims to solve the scalability challenge of probabilistic model checking, by developing, for the first time, fully-automated compositional verification techniques for probabilistic systems. The contributions are novel approaches for automatically learning probabilistic assumptions for three different compositional verification frameworks.

The first framework considers systems that are modelled as Segala probabilistic automata, with assumptions captured by probabilistic safety properties. A fully-automated approach is developed to learn assumptions for various assume-guarantee rules, including an asymmetric rule Asym for two-component systems, an asymmetric rule Asym-N for n-component systems, and a circular rule Circ. This approach uses the L* and NL* algorithms for automata learning.

The second framework considers systems where the components are modelled as probabilistic I/O systems (PIOSs), with assumptions represented by Rabin probabilistic automata (RPAs). A new (complete) assume-guarantee rule Asym-Pios is proposed for this framework. In order to develop a fully-automated approach for learning assumptions and performing compositional verification based on the rule Asym-Pios, a (semi-)algorithm to check language inclusion of RPAs and an L*-style learning method for RPAs are also proposed.

The third framework considers the compositional verification of discrete-time Markov chains (DTMCs) encoded in Boolean formulae, with assumptions represented as Interval DTMCs (IDTMCs). A new parallel operator for composing an IDTMC and a DTMC is defined, and a new (complete) assume-guarantee rule Asym-Idtmc that uses this operator is proposed. A fully-automated approach is formulated to learn assumptions for rule Asym-Idtmc, using the CDNF learning algorithm and a new symbolic reachability analysis algorithm for IDTMCs.

All approaches proposed in this thesis have been implemented as prototype tools and applied to a range of benchmark case studies. Experimental results show that these approaches are helpful for automating the compositional verification of probabilistic systems through learning small assumptions, but may suffer from high computational complexity or even undecidability. The techniques developed in this thesis can assist in developing scalable verification frameworks for probabilistic models.
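As background for the quantitative checks these frameworks decompose, the sketch below computes a reachability probability in a small DTMC by fixed-point iteration; the four-state chain is a made-up example, and the code is not drawn from the thesis or its prototype tools.

```python
import numpy as np

# Fixed-point computation of the probability of reaching a target state in a
# small discrete-time Markov chain -- the kind of quantitative query that
# probabilistic model checking answers.  The 4-state chain is hypothetical.
P = np.array([
    [0.0, 0.5, 0.5, 0.0],   # state 0
    [0.0, 0.0, 0.3, 0.7],   # state 1
    [0.0, 0.0, 1.0, 0.0],   # state 2: absorbing target
    [0.0, 0.0, 0.0, 1.0],   # state 3: absorbing non-target
])
is_target = np.array([False, False, True, False])

x = is_target.astype(float)            # initial guess for Pr(reach target)
for _ in range(1000):
    new = np.where(is_target, 1.0, P @ x)
    if np.max(np.abs(new - x)) < 1e-12:
        break
    x = new
print(x[0])                            # probability from state 0 (here 0.65)
```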
36

Modelos de tópicos na classificação automática de resenhas de usuários. / Topic models in user review automatic classification.

Mauá, Denis Deratani 14 August 2009 (has links)
There is a large number of user reviews on the internet containing valuable information on services, products, politics and trends. The automatic understanding of such opinions is both scientifically interesting and potentially profitable. Sentiment classification is concerned with the automatic extraction of the opinions expressed in text documents. Unlike the more traditional task of text categorization, in which documents are classified into subjects such as sports, economics and tourism, sentiment classification tags documents with the feelings expressed in the text. Compared to standard classifiers, sentiment classifiers have shown poor performance. One possible cause is the lack of adequate representations that allow the expressed opinions to be discriminated in a concise, machine-readable form. Topic models are statistical models that extract semantic information hidden in the large amounts of data found in text collections. They represent a document as a mixture of topics, where each topic is a probability distribution over words and each distribution represents a semantic concept implicit in the data. Under a topic-model representation, words are replaced by topics that summarize their meaning. Indeed, topic models perform a dimensionality reduction of the data that can improve the performance of text categorization and information retrieval techniques. In sentiment classification, they can provide the necessary representation by extracting topics that represent the feelings expressed in the text. This work studies the application of topic models to the representation and sentiment classification of user reviews. In particular, the Latent Dirichlet Allocation (LDA) model and four extensions (two of them developed by the author) are evaluated on the task of aspect-based sentiment classification. The extensions to the LDA model allow us to investigate the effects of incorporating additional information, such as context, aspect ratings and multiple-aspect ratings, into the original model.
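As an illustration of topics as a concise document representation, the following sketch (not the author's models) maps a made-up corpus of reviews to LDA topic proportions with scikit-learn and feeds them to a simple sentiment classifier; the corpus, labels and parameters are all hypothetical.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: reviews become LDA topic mixtures, and those
# low-dimensional features are used for sentiment classification.
reviews = [
    "great food friendly staff",
    "terrible service cold food",
    "lovely ambience great wine list",
    "rude waiter slow cold service",
]
labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative (made up)

counts = CountVectorizer().fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_mix = lda.fit_transform(counts)       # one topic mixture per review

clf = LogisticRegression().fit(topic_mix, labels)
print(clf.predict(topic_mix))               # sentiment predicted from topics
```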
37

"Projeto multirresolução de operadores morfológicos a partir de exemplos" / "Multiresolution design of morphological operators from examples"

Vaquero, Daniel André 19 April 2006 (has links)
Solving an image processing problem can be a very complex task. It usually depends on several factors, such as the knowledge, experience and intuition of a specialist and knowledge of the application domain. Motivated by this complexity, some research groups have worked on techniques for automatically designing image operators from a collection of input and output examples of the desired operator. The multiresolution approach has been successfully applied to the statistical design of W-operators over large windows. This methodology uses a pyramidal window structure to aid in estimating the conditional probability distributions of patterns not observed in the training set. However, the quality of the designed operator depends directly on the chosen pyramid, and that choice is made by the designer based on intuition and prior knowledge of the problem. In this work, we investigate the use of conditional entropy as a criterion for automatically determining a good pyramid for W-operator design. To compute the entropy, we developed a technique that uses the multiresolution pyramidal framework as a model for estimating the joint probability distribution. The performance of the method was evaluated on the problem of handwritten digit recognition, using two different databases, with good results. Another contribution of this work is the experimentation with resolution mappings from image pyramid theory in the context of multiresolution W-operator design.
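A minimal sketch of the conditional-entropy criterion follows, assuming a made-up 2x2 table of joint counts between window patterns and operator outputs; it only illustrates how H(Y|X) is estimated from counts, not the thesis's pyramid-selection procedure.

```python
import numpy as np

# Estimating the conditional entropy H(Y | X) from joint counts -- the kind
# of criterion used to compare candidate window pyramids.  The 2x2 table is
# a made-up example: rows index observed window patterns X, columns index
# operator outputs Y.
counts = np.array([[30.0, 10.0],
                   [5.0, 55.0]])

joint = counts / counts.sum()               # p(x, y)
p_x = joint.sum(axis=1, keepdims=True)      # p(x)
cond = joint / p_x                          # p(y | x)

# H(Y|X) = -sum_{x,y} p(x, y) * log2 p(y | x), with 0 * log 0 taken as 0.
terms = np.where(joint > 0, joint * np.log2(np.where(cond > 0, cond, 1.0)), 0.0)
print(round(float(-terms.sum()), 4))        # lower = Y more predictable from X
```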
40

Unraveling the neural circuitry of sequence-based navigation using a combined Fos imaging and computational approach / Caractérisation des circuits neuronaux sous-tendant la navigation de type séquence : imagerie Fos, connectivité fonctionnelle et approche computationnelle

Babayan, Bénédicte 27 June 2014 (has links)
Spatial navigation is a complex function requiring the combination of external and self-motion cues to build a coherent representation of the external world and drive optimal behaviour directed towards a goal. This multimodal integration suggests that a large network of cortical and subcortical structures interacts with the hippocampus, a key structure in navigation. I studied navigation in mice with this network-level approach, focusing on one particular type of navigation that consists in remembering a sequence of turns, named sequence-based navigation or the sequential egocentric strategy. This navigation specifically relies on the temporal organization of movements at spatially distinct choice points. We first showed that learning sequence-based navigation requires the hippocampus and the dorsomedial striatum. We then characterized the functional network underlying sequence-based navigation by combining Fos imaging, functional connectivity analysis and a computational approach. The functional networks changed across early and late learning stages. The early-stage network was dominated by a highly interconnected cortico-striatal cluster. The hippocampus was activated alongside structures known to be involved in self-motion processing (cerebellar cortices), in manipulating mental representations of space (retrosplenial, parietal and entorhinal cortices) and in goal-directed path planning (the prefrontal cortex-basal ganglia loop). The late stage was characterized by the emergence of correlated activity between the hippocampus, the cerebellum and the cortico-striatal structures. In parallel, we tested whether path integration, model-based or model-free reinforcement learning algorithms could reproduce the mice's learning dynamics. Only a model-free reinforcement learning system, once a retrospective memory component was added to it, reproduced both the group learning dynamics and the individual variability observed in the mice. These results suggest that a single model-free reinforcement learning algorithm is sufficient to learn sequence-based navigation and that the structures this learning requires adapt their functional interactions over the course of learning.
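As an illustration of the modelling idea, the sketch below implements tabular Q-learning whose state is augmented with the previous action, a simple stand-in for the retrospective memory component described above; the maze, rewards and parameters are hypothetical and the code is not the thesis's model.

```python
import random

# Tabular model-free Q-learning in which the state includes the previous
# action -- a toy version of "retrospective memory".  The 3-choice-point
# maze, rewards and parameters are all made up.
N_CHOICE_POINTS, N_ACTIONS = 3, 2
CORRECT = [0, 1, 0]                          # rewarded turn at each choice point
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {}                                       # Q[(choice_point, prev_action)] -> values

def q_values(state):
    return Q.setdefault(state, [0.0] * N_ACTIONS)

random.seed(0)
for episode in range(2000):
    prev_action = -1                         # no previous action at the maze entry
    for cp in range(N_CHOICE_POINTS):
        state = (cp, prev_action)
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q_values(state)[a])
        reward = 1.0 if action == CORRECT[cp] else 0.0
        future = 0.0 if cp + 1 == N_CHOICE_POINTS else max(q_values((cp + 1, action)))
        q_values(state)[action] += ALPHA * (reward + GAMMA * future - q_values(state)[action])
        prev_action = action

# Greedy sequence of turns after learning (should match CORRECT).
prev, path = -1, []
for cp in range(N_CHOICE_POINTS):
    a = max(range(N_ACTIONS), key=lambda x: q_values((cp, prev))[x])
    path.append(a)
    prev = a
print(path)
```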
