  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
211

Uma infraestrutura baseada em serviço para evolução do teste de mutação utilizando o tamanho semântico do mutante / A service-based infrastructure for evolution of mutation testing

Sousa, Leonardo da Silva 08 August 2014 (has links)
Funding: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Software testing is indispensable for achieving and guaranteeing the quality of developed software. Among the available testing techniques is fault-based testing, which includes the Mutation Testing criterion. Mutation Testing uses mutation operators to generate a set of alternative programs, called mutants, each differing from the original program at a particular point in the code. Test cases are applied to the original program and to the mutants in order to verify whether they are able to expose the difference in behavior between the original program and each mutant. This criterion stands out for its effectiveness in measuring the quality of a test suite while finding defects in the program; however, it suffers from the high computational cost required for its execution. Some approaches aim to reduce the cost of Mutation Testing, among them Selective Mutation. Selective Mutation reduces the cost by applying a subset of mutation operators that generates fewer mutants while still achieving high testing effectiveness. The aim of this work is to find such a subset of mutation operators and to show that it is almost as good as the whole set, so that it can be used in Selective Mutation. The main difference between this work and other work on Selective Mutation is that faults are used to select the subset of operators: since Mutation Testing models faults that a program could contain, it is natural to use those very faults to guide operator selection.
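The mutation-score idea described in the abstract can be sketched in a few lines. This is a minimal illustration with a toy program and hand-written mutants standing in for the output of real mutation operators; all names are invented for the example.

```python
# Toy program under test.
def original(a, b):
    return a + b

# Each mutant differs from the original at exactly one point in the code.
mutants = [
    lambda a, b: a - b,   # operator mutation: '+' replaced by '-'
    lambda a, b: a * b,   # operator mutation: '+' replaced by '*'
    lambda a, b: a + b,   # equivalent mutant: behaves like the original
]

test_cases = [(1, 2), (0, 5)]

def mutation_score(program, mutants, tests):
    """Fraction of mutants 'killed', i.e. distinguished from the original
    by at least one test case."""
    killed = 0
    for m in mutants:
        if any(m(a, b) != program(a, b) for a, b in tests):
            killed += 1
    return killed / len(mutants)

score = mutation_score(original, mutants, test_cases)  # 2 of 3 mutants killed
```

Selective Mutation, as studied in the thesis, would try to pick a subset of the operators that produced these mutants such that the score computed this way stays close to the score over the full operator set.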
212

Composer-science: um framework para a composição de workflows científicos

Silva, Laryssa Aparecida Machado da 05 July 2010 (has links)
Funding: CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior)

An important concept in e-Science research is the scientific workflow, which is usually long and composed of several applications that together represent a scientific experiment. One way to assist in defining scientific workflows is to use tools that add semantics to the composition process. Semantic Web services offer technologies highly favorable to their composition into more complex processes: the use of Web standards, platform independence, programming-language independence, the possibility of distributed processing and, especially, semantic resources that enable automatic discovery, composition and invocation. To assist in the discovery of Web services for scientific workflow composition, we propose the development of a framework, named Composer-Science, that searches for semantic Web services and composes them, thus defining a scientific workflow. The overall objective of Composer-Science is to allow the researcher to describe a scientific workflow semantically and, based on this description, to automate, through semantic Web services and ontologies, the semantic search for services in repositories and the generation of scientific workflows from the resulting composition. This overall objective can be broken down into specific ones: registration and storage of domain ontologies (OWL) and semantic annotations of Web services (OWL-S) in the framework's distributed repositories (databases); semantic search over the distributed repositories, based on requirements provided by the researcher, to discover semantic Web services that match the given semantic requirements; syntactic analysis based on structural requirements (input and output data), together with semantic analysis of the services discovered by the semantic search, to obtain their possible compositions; and generation of WS-BPEL workflow models from the possible compositions. The models generated by the framework can thus be used in Workflow Management Systems (WMS) and composed with other workflow models.
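The syntactic-analysis step described above (matching services by their inputs and outputs to chain them into a workflow) can be sketched as follows. The service names and type labels are invented for the example; a real implementation would read them from OWL-S annotations rather than a dictionary.

```python
# Hypothetical service catalog: each service declares the data types it
# consumes ("in") and produces ("out"), standing in for OWL-S annotations.
services = {
    "FetchSequence": {"in": {"gene_id"},       "out": {"dna_sequence"}},
    "TranslateDNA":  {"in": {"dna_sequence"},  "out": {"protein"}},
    "AlignProteins": {"in": {"protein"},       "out": {"alignment"}},
}

def compose(start_type, goal_type, services):
    """Greedily chain services whose inputs are already satisfied until
    the goal type is produced; return the workflow or None."""
    chain, available = [], {start_type}
    remaining = dict(services)
    while goal_type not in available:
        for name, svc in list(remaining.items()):
            if svc["in"] <= available:     # all inputs satisfied
                available |= svc["out"]
                chain.append(name)
                del remaining[name]
                break
        else:
            return None                    # no service applicable: dead end
    return chain

workflow = compose("gene_id", "alignment", services)
# -> ["FetchSequence", "TranslateDNA", "AlignProteins"]
```

A framework like the one described would additionally perform semantic matching (subsumption over ontology concepts, not string equality) and emit the resulting chain as a WS-BPEL model instead of a Python list.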
213

SciProvMiner: captura e consulta de proveniência utilizando recursos Web semânticos para ampliação do conhecimento gerado e otimização do processo de coleta

Alves, Tatiane Ornelas Martins 06 September 2013 (has links)
Providing historical information about scientific experiments, in order to address the loss of the scientist's knowledge about an experiment, has been the focus of several research efforts. However, computational support for large-scale scientific experiments is still incipient and is considered one of the challenges set by the Brazilian Computer Society for the 2006 to 2016 period. This work aims to contribute to this area by presenting the SciProvMiner architecture, whose main objective is to collect prospective and retrospective provenance of scientific experiments, using ontologies and inference engines to provide useful information and increase the scientist's knowledge about a given experiment. The specific contributions of SciProvMiner are: development of a model that encompasses prospective and retrospective provenance as an extension of the Open Provenance Model (OPM), which in its original form deals only with retrospective provenance; specification and implementation of a provenance collector that uses Web service technology to capture both types of provenance (prospective and retrospective) according to this model; development of an ontology, named OPMO-e, that extends the Open Provenance Model Ontology (OPMO) to model prospective provenance in addition to the retrospective provenance already covered by OPMO, and that implements the completeness and inference rules defined in the OPM documentation. These rules increase the scientist's knowledge about the experiment by inferring information that was not explicitly provided by the user, making it possible to optimize the provenance capture mechanism and consequently reduce the scientist's work to instrument the workflow. Finally, a relational database is specified in which the provenance information captured by the collector is stored; it can be queried about the explicitly captured provenance and also provides data to the other functionalities of SciProvMiner.
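The effect of inference rules on captured provenance, as described above, can be illustrated with a deliberately simplified rule: if an artifact was generated by a process that used another artifact, a possible derivation between the two artifacts is inferred. The actual OPM completeness rules are more careful than this, and all node names here are invented; the point is only to show how inference enlarges the explicitly recorded knowledge.

```python
# Explicitly captured retrospective provenance (hypothetical workflow run):
# 'used' records which artifact a process consumed,
# 'was_generated_by' records which process produced an artifact.
used = {("align", "seq.fasta")}               # process "align" used "seq.fasta"
was_generated_by = {("tree.nwk", "align")}    # "tree.nwk" generated by "align"

def infer_derivations(used, was_generated_by):
    """Simplified OPM-style inference: artifact -> source-artifact pairs
    connected through a common process."""
    derived = set()
    for artifact, process in was_generated_by:
        for proc, source in used:
            if proc == process:
                derived.add((artifact, source))
    return derived

derivations = infer_derivations(used, was_generated_by)
# The user never stated that tree.nwk derives from seq.fasta;
# the rule infers it from the two captured edges.
```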
214

Novel optimization schemes for service composition in the cloud using learning automata-based matrix factorization

Shehu, Umar Galadima January 2015 (has links)
Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled, service-oriented applications (SOA). Web services are central to the concept of SOC. They possess several benefits useful to SOA, e.g. encapsulation, loose coupling and reusability. Using web services, an application can embed its functionality within the business processes of other applications. This is made possible through web service composition: web services are composed to provide more complex functions to a service consumer in the form of a value-added composite service. Research into how web services can be composed to yield a QoS (Quality of Service) optimal composite service has gathered significant attention. However, the number of services has risen, increasing the number of possible service combinations and also amplifying the impact of the network on composite service performance. QoS-based service composition in the cloud addresses two important sub-problems: prediction of network performance between web service nodes in the cloud, and QoS-based web service composition. We model the former as a prediction problem, while the latter is modelled as an NP-hard optimization problem due to its complex, constrained and multi-objective nature. This thesis contributes to the prediction problem by presenting a novel learning automata-based non-negative matrix factorization algorithm (LANMF) for estimating the end-to-end network latency of a composition in the cloud. LANMF encodes each web service node as an automaton, which allows it to estimate its network coordinate in such a way that prediction error is minimized. Experiments indicate that LANMF is more accurate than current approaches.
The thesis also contributes to the QoS-based service composition problem by proposing four evolutionary algorithms: a network-aware genetic algorithm (INSGA), a K-means based genetic algorithm (KNSGA), a multi-population particle swarm optimization algorithm (NMPSO), and a non-dominated sort fruit fly algorithm (NFOA). The algorithms adopt different evolutionary strategies, coupled with the LANMF method, to search for low-latency and QoS-optimal solutions. They also employ a unique constraint-handling method to penalize solutions that violate user-specified QoS constraints. Experiments demonstrate the efficiency and scalability of the algorithms in a large-scale environment; the algorithms outperform other evolutionary algorithms in terms of optimality and scalability. In addition, the thesis contributes to QoS-based web service composition in a dynamic environment. This is motivated by the ineffectiveness of the four proposed algorithms in a dynamically changing QoS environment, such as a real-world scenario. Hence, we propose a new cellular automata-based genetic algorithm (CellGA) to address the issue. Experimental results show the effectiveness of CellGA in solving QoS-based service composition in a dynamic QoS environment.
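The latency-prediction idea behind LANMF (completing a partially observed node-to-node latency matrix with a low-rank non-negative factorization) can be sketched as below. The learning-automata update scheme of the thesis is replaced here by plain stochastic gradient steps clipped to stay non-negative, and the latency values are invented; only the matrix-completion idea is illustrated.

```python
import random

# Observed node-to-node latencies (arbitrary units); the (1, 2) entry
# is unknown and will be predicted from the learned factors.
latency = {
    (0, 1): 2.0, (0, 2): 4.0,
    (1, 0): 2.0,
    (2, 0): 4.0, (2, 1): 6.0,
}

def factorize(obs, n=3, k=2, steps=5000, lr=0.01, seed=1):
    """Fit latency[i][j] ~= U[i] . V[j] on the observed entries only,
    keeping both factor matrices non-negative."""
    rng = random.Random(seed)
    U = [[rng.random() for _ in range(k)] for _ in range(n)]
    V = [[rng.random() for _ in range(k)] for _ in range(n)]
    for _ in range(steps):
        for (i, j), y in obs.items():
            err = y - sum(U[i][t] * V[j][t] for t in range(k))
            for t in range(k):
                u, v = U[i][t], V[j][t]
                U[i][t] = max(0.0, u + lr * err * v)  # clip to non-negative
                V[j][t] = max(0.0, v + lr * err * u)
    return U, V

U, V = factorize(latency)
predicted = sum(U[1][t] * V[2][t] for t in range(2))  # latency node 1 -> 2
```

In the thesis, each node estimates its own coordinate (a row of U and V) via a learning automaton, and the predicted latencies then feed the network-aware composition algorithms.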
215

Using Web Services for Transparent Access to Distributed Databases

Schneider, Jan, Cárdenas, Héctor, Talamantes, José Alfonso January 2007 (has links)
This thesis presents a strategy for integrating distributed systems with the aid of web services. The research focuses on three subjects: web services, distributed database systems, and their application in a real-life project. To define the context of this thesis, we present the research methodology that guides the investigation, together with the general concepts of the running environment and architecture of web services. The major contribution of this thesis is a solution for Chamber Trade in Sweden and VNemart in Vietnam, comprising the requirement specification derived from the needs of the SPIDER project and our software design specification using distributed databases and web services. As results, we present the software implementation and the way our software meets the previously defined requirements. For future web service developments, this document provides guidance on best practices in this subject.
216

SOA Governance jako další vývojový stupeň zavádění SOA architektury. / SOA Governance

Pršala, Ondřej January 2008 (has links)
The thesis focuses on the administration and supervision of Service-Oriented Architecture (SOA). The main idea of SOA is the decomposition of a system into functional units containing bounded, well-understandable functionality. This functionality is provided to other units, in the form of a service, through a clearly described interface, creating well-defined relationships between service providers and service consumers. SOA governance aims to manage these relationships, to monitor the quality of services, and to control adherence to stated rules and policies. These rules and policies are established by a central team responsible for integration and architecture development. The starting point of successful architecture development is knowledge of the service portfolio and the inter-relationships among services. Each service should go through a complete formal life-cycle, with special policies applied in its particular phases. The main interest is devoted to the run-time phase: run-time monitoring and run-time governance. The theory is complemented by practical examples of report generation and service monitoring with observation of SLA adherence. The thesis also contains a structural model of SOA governance that takes into account the infrastructural elements of SOA and the information flows between them.
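The run-time SLA monitoring mentioned above can be sketched as a check of measured service calls against agreed thresholds. The SLA fields, threshold values and call measurements here are all invented for the example.

```python
# Hypothetical SLA: bounds on average response time and error rate.
sla = {"max_avg_response_ms": 200, "max_error_rate": 0.05}

# Measured calls to one service: (duration in ms, succeeded?).
calls = [(120, True), (180, True), (90, True), (400, False), (150, True)]

def check_sla(calls, sla):
    """Summarize measurements and flag whether the SLA is being met."""
    avg = sum(d for d, _ in calls) / len(calls)
    err_rate = sum(1 for _, ok in calls if not ok) / len(calls)
    return {
        "avg_response_ms": avg,
        "error_rate": err_rate,
        "compliant": (avg <= sla["max_avg_response_ms"]
                      and err_rate <= sla["max_error_rate"]),
    }

report = check_sla(calls, sla)
# avg 188 ms is within bounds, but a 20% error rate violates the SLA.
```

A governance platform would feed such reports into the central team's dashboards and policy-enforcement tooling rather than returning a plain dictionary.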
217

Mobilní asistent pro cestování MHD / Mobile Public Transportation Assistant

Tůma, Jan January 2017 (has links)
This thesis documents the complete design and implementation of a mobile public transportation assistant for Brno. The resulting solution consists of a mobile application and a server part. The mobile application shows the user the current positions of public transport vehicles and, using the position of the smart device, helps the user navigate and determine an optimal route. The server part includes a web service for client-server communication.
218

Počítačové vidění jako webová služba / Computer Vision as a Service

Jež, Adam January 2017 (has links)
The goal of this thesis is to create a web service for sharing and easy access to computer vision algorithms. There is currently a large number of such algorithms, and it is beneficial for their authors to be able to share them easily with other people, even from other disciplines. The main part of the thesis consists of designing the web service architecture and proposing a method of request processing for running the algorithms. Part of the implemented service is a web interface that allows algorithms to be used with one's own data, and a client library that makes integration into other applications easier.
219

Mikroblog pro týmy / Team-Based Microblogging Service

Kotásek, Marek January 2013 (has links)
The aim of this master's thesis is the design and implementation of a team-based microblogging service to be used at the Faculty of Information Technology, Brno University of Technology. First, the concept of microblogging is described, together with an analysis of existing microblogging services and implementation tools. The next part of the work details the design, implementation, and testing of the developed microblogging service.
220

Implementace GIS služby WPS / Implementation of the WPS Service

Maďarka, Dušan January 2016 (has links)
The aim of this diploma thesis was to design and implement the web mapping service Web Processing Service (WPS). The purpose of this service is to provide geospatially oriented calculations over the internet. The introductory part is dedicated to an analysis of existing mapping service implementations and the defined standards. A conclusion was then drawn and the service was implemented with respect to computational efficiency, a pluggable design, and simplicity of integration into client applications. An interface to the geographic information system GRASS GIS is also part of the implemented service, bringing the possibility of using that tool from the service. The last chapter describes the testing of the implemented service in terms of functional and performance tests.
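A WPS request of the kind such a service handles can be sketched by building a key-value-pair (KVP) Execute URL. The endpoint and process name below are hypothetical; the parameter names (`service`, `version`, `request`, `identifier`, `datainputs`) follow the OGC WPS 1.0.0 KVP encoding.

```python
from urllib.parse import urlencode

endpoint = "http://example.org/wps"   # hypothetical WPS endpoint

params = {
    "service": "WPS",
    "version": "1.0.0",
    "request": "Execute",
    "identifier": "buffer",           # hypothetical process identifier
    "datainputs": "distance=10",      # process-specific input
}

url = endpoint + "?" + urlencode(params)
# A client would then issue an HTTP GET on this URL and parse the
# ExecuteResponse XML returned by the service.
```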
