101

The Embeddedness of Information Technology in the Workflow of Business Processes : How Can IT Support and Improve the Way Work is Done?

Fischer, Tobias Christian, Lawson, Elin January 2013 (has links)
Wise investments in Information Technology have become increasingly important for staying competitive in today's environment. Large numbers of people and IT systems are involved in the process of turning input into output. Since these employees and IT systems must be harmonized, it becomes relevant to study how employees' routines and habits relate to the usage and embeddedness of these systems. The purpose of this paper is therefore to investigate how embedded IT can lead to improved business processes. This is done by exploring how embedded IT is used in workflows and by examining what support and hindrance IT can offer. To that end, extensive theoretical research was conducted within the fields of habits and routines, business processes, and embedded IT, resulting in a framework for analysis. A case study was then conducted in which a specific insurance-claims process was thoroughly analyzed through interviews and work shadowing, enabling a within-case analysis. The results show the interdependency between the pillars of this study: workflow habits and routines influence IT usage, whereas IT aims to provide support through automatization and informatization. However, to enable this and achieve a significant improvement, the processes IT aims to support need to be fully known.
102

Vliv systému HACCP na kvalitu masných výrobků / The influence of the HACCP system on the quality of meat products

KOCINOVÁ, Marie January 2014 (has links)
This thesis presents a detailed analysis of the entire HACCP system in the enterprise Libor Novák - production of meat, sausages and specialties. The HACCP system in place at the enterprise was compared with the available literature, and on the basis of this knowledge several changes to the system of critical control points and to the system of checks were proposed. Production diagrams were designed for two lines of selected products (smoked meat and cooked products). For both production lines a new hazard analysis was carried out, and critical control points and control points were newly established. For the newly designed control points, the possible hazards and the corresponding monitoring and corrective actions were defined. Because the HACCP system has been in place in the company for the last 10 years and is well developed and stable, a reduction in the number of critical control points was proposed. The HACCP system is in constant evolution and must continually adapt to changes in production workflows; the newly proposed changes therefore remain valid only until something in the production technology changes.
103

Anotação semântica de dados geoespaciais / Semantic annotation of geospatial data

Macario, Carla Geovana do Nascimento 15 August 2018 (has links)
Advisor: Claudia Maria Bauzer Medeiros / Doctoral thesis (2009) - Universidade Estadual de Campinas, Instituto de Computação / Abstract: Geospatial data are a basis for decision making in a wide range of domains, such as traffic planning, provision of consumer services, or disaster control. To be used, however, these data have to be analyzed and interpreted, a laborious and error-prone task usually performed by experts. Despite this, the interpretations are seldom stored; when they are, they usually correspond to descriptive text kept in technical files. The absence of solutions to store these interpretations efficiently leads to problems such as rework and difficulties in information sharing. In this work we present a solution to these problems based on semantic annotations, an approach that promotes a common understanding of the concepts being used. We propose the use of scientific workflows to describe the annotation process for each kind of data, together with the adoption of a well-known metadata schema and ontologies, and we apply the solution to problems in agriculture. The contributions of this thesis are: (i) identification of a set of requirements for semantic search of geospatial data; (ii) identification of desirable features for annotation tools; (iii) proposal, and partial implementation, of a framework for the semantic annotation of different kinds of geospatial data; and (iv) identification of the challenges involved in adopting scientific workflows to describe the annotation process. The framework was partially validated through an implementation producing annotations for applications in agriculture. / Doctorate in Computer Science (Databases)
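The abstract above centers on semantic annotations that capture expert interpretations of geospatial data in machine-processable form. The following is a minimal sketch of what such an annotation record might look like, assuming a simple Python representation in which a dataset is linked to domain-ontology terms and to the workflow step that produced the interpretation; all URIs, field names, and values are hypothetical and do not reproduce the schema used in the thesis.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SemanticAnnotation:
    """Links a geospatial dataset to ontology concepts capturing its interpretation."""
    dataset_uri: str           # identifier of the annotated geospatial resource
    ontology_terms: List[str]  # URIs of domain-ontology concepts describing the interpretation
    produced_by: str           # workflow step that generated the interpretation
    annotator: str             # expert or tool responsible for the interpretation
    note: str = ""             # free-text remark, as currently kept in technical files

# Hypothetical example: annotating a satellite image tile used in an agricultural study.
annotation = SemanticAnnotation(
    dataset_uri="http://example.org/data/landsat/tile_223_067_2009.tif",
    ontology_terms=[
        "http://example.org/onto/agriculture#SugarcaneCrop",
        "http://example.org/onto/agriculture#HarvestSeason",
    ],
    produced_by="http://example.org/workflows/crop-classification#classify-step",
    annotator="domain-expert-01",
    note="Tile interpreted as sugarcane at late harvest stage.",
)
print(annotation.ontology_terms)
```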
104

Uma abordagem para linha de produtos de software científico baseada em ontologia e workflow / An ontology- and workflow-based approach for scientific software product lines

Costa, Gabriella Castro Barbosa 27 February 2013 (has links)
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / Abstract: A way to improve the reusability and maintainability of a family of software products is the Software Product Line (SPL) approach. In some situations, such as scientific applications for a given area, it is advantageous to develop a collection of related software products using an SPL approach. Scientific Software Product Lines (SSPL) differ from Software Product Lines in that an SSPL uses an abstract scientific workflow model. This workflow is defined according to the scientific domain, and the products of the SSPL are instantiated from this abstract workflow model. Analyzing the difficulties of specifying scientific experiments, and considering the need to compose scientific applications in order to implement them, more appropriate semantic support for the domain analysis phase is necessary. This work therefore proposes an approach based on the combination of feature models and ontologies, named PL-Science, to support the specification and conduction of scientific experiments. The PL-Science approach considers the context of SSPL and aims to assist scientists in defining a scientific experiment by specifying a workflow that encompasses the scientific applications of a given experiment. Using SPL concepts, scientists can reuse the models that specify the scientific product line and make decisions according to their needs. This work also emphasizes the use of ontologies to facilitate the process of applying Software Product Lines to scientific domains: by using an ontology as a domain model, additional information can be provided and more semantics added to the context of Scientific Software Product Lines.
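The approach above rests on associating a feature model with a domain ontology. Below is a minimal, hypothetical sketch of that association in Python: a toy mapping from features of a scientific product line to ontology concepts, plus one feature-model constraint; the feature names, concept URIs, and constraint are invented for illustration and are not taken from PL-Science.

```python
# Toy feature model of a scientific product line: each feature is tied to the
# domain-ontology concept that gives it meaning.
feature_to_concept = {
    "sequence_alignment": "http://example.org/onto/bio#SequenceAlignment",
    "phylogenetic_tree": "http://example.org/onto/bio#PhylogeneticAnalysis",
    "visualization": "http://example.org/onto/bio#ResultVisualization",
}

# Toy feature-model constraint: a feature and the features it requires.
requires = {"phylogenetic_tree": {"sequence_alignment"}}

def concepts_for_selection(selected):
    """Check a feature selection against the constraints and return its ontology concepts."""
    missing = {
        dep
        for feature in selected
        for dep in requires.get(feature, set())
        if dep not in selected
    }
    if missing:
        raise ValueError(f"selection violates feature-model constraints; missing: {missing}")
    return [feature_to_concept[f] for f in sorted(selected)]

# A scientist instantiates one product of the line by choosing features for the experiment.
print(concepts_for_selection({"sequence_alignment", "phylogenetic_tree"}))
```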
105

Crowdsourcing in pay-as-you-go data integration

Osorno Gutierrez, Fernando January 2016 (has links)
In pay-as-you-go data integration, feedback can inform the regeneration of different aspects of a data integration system and, as a result, help to improve the system's quality. However, feedback can be expensive: the amount of feedback required to annotate all possible integration artefacts is potentially large, while budgets may be limited. Feedback can also be used in different ways; feedback of different types, collected in different orders, can have different effects on the quality of the integration, and some feedback types yield more benefit than others. There is therefore a need for techniques to collect feedback effectively. Previous efforts have explored the benefit of feedback on a single aspect of the integration, but have not considered the benefit of different feedback types within a single integration task. We investigated the annotation of mapping results using crowdsourcing, implementing techniques for reliability. The results indicate that precision estimates derived from crowdsourcing improve rapidly, suggesting that crowdsourcing can be used as a cost-effective source of feedback. We propose an approach to maximize the improvement of data integration systems given a budget for feedback. Our approach takes into account the annotation of schema matchings, mapping results, and pairs of candidate record duplicates. We define a feedback plan, which indicates the type of feedback to collect, the amount of feedback to collect, and the order in which the different types of feedback are collected. We define a fitness function and a genetic algorithm to search for the most cost-effective feedback plans, and we implemented a framework to test the application of feedback plans and measure the improvement of different data integration systems. In the framework, we use a greedy algorithm for the selection of mappings, and we designed quality measures to estimate the quality of a dataspace after the application of a feedback plan. For the evaluation of our approach, we propose a method to generate synthetic data scenarios and evaluate the approach in scenarios with different characteristics. The results show that the generated feedback plans achieved higher quality values than randomly generated feedback plans in several scenarios.
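The abstract above describes feedback plans (type, amount, and order of feedback) searched by a genetic algorithm against a fitness function under a budget. The sketch below illustrates the idea rather than the thesis's actual algorithm: a toy Python encoding of a plan and a fitness that trades estimated quality gain against annotation cost, with random search standing in for the genetic algorithm; all costs, gains, and the budget are invented.

```python
import random

# Invented per-annotation cost and estimated quality gain for each feedback type.
COST = {"schema_matching": 1.0, "mapping_result": 2.0, "duplicate_pair": 1.5}
GAIN = {"schema_matching": 0.010, "mapping_result": 0.025, "duplicate_pair": 0.015}
BUDGET = 100.0

def random_plan():
    """A feedback plan: an ordered list of (feedback type, number of annotations) pairs."""
    types = list(COST)
    random.shuffle(types)  # the order in which feedback types are collected
    return [(t, random.randint(0, 40)) for t in types]

def fitness(plan):
    """Estimated quality improvement of a plan, penalizing plans that exceed the budget."""
    cost = sum(COST[t] * n for t, n in plan)
    gain = sum(GAIN[t] * n for t, n in plan)
    return gain if cost <= BUDGET else gain - (cost - BUDGET)

# Crude random search standing in for the genetic algorithm: keep the fittest of many plans.
best = max((random_plan() for _ in range(1000)), key=fitness)
print(best, round(fitness(best), 3))
```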
106

Accelerating the Throughput of Mass Spectrometry Analysis by Advanced Workflow and Instrumentation

Zhuoer Xie (9137873) 05 August 2020 (has links)
The exploratory profiling and quantitative bioassays of lipids, small metabolites, and peptides have always been challenging tasks. The most popular instrument platform deployed to solve these problems is chromatography coupled with mass spectrometry. However, it requires large amounts of instrument time, intensive labor, and frequent maintenance, and usually produces biased results; the pace of exploratory research is consequently one of poor efficacy and low throughput. The work in this dissertation provides two practical tactics to address these problems. The first is multiple reaction monitoring profiling (MRM-profiling), a new concept intended to shift exploratory research from current identification-centered metabolomics and lipidomics to functional group screening by taking advantage of precursor ion scans and product ion scans. MRM-profiling is also shown to be capable of quantifying the relative amount of lipids within the same subclass. In addition, an application of the whole workflow to investigate strain-level differences of bacteria is described; the results zero in on several potential lipid biomarkers and corresponding MRM transitions. The second strategy aims to increase the throughput of targeted bioassays by conducting induced nanoelectrospray ionization (nESI) in batch mode. A novel prototype instrument named the "Dip-and-Go" system is presented. Characterization of its ability to carry out reaction screening and bioassays demonstrates the versatility of the system, and its distinct electrophoretic cleaning mechanism removes salt during ionization, which ensures the accuracy of measurement.
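One claim above is that MRM-profiling can quantify the relative amount of lipids within the same subclass. As a minimal illustration of that arithmetic, the Python sketch below normalizes hypothetical MRM transition intensities within one subclass; the lipid names and intensity values are invented.

```python
# Hypothetical MRM transition intensities for lipids within one subclass (arbitrary units).
intensities = {"PC(34:1)": 1.8e6, "PC(36:2)": 9.5e5, "PC(38:4)": 4.0e5}

total = sum(intensities.values())
relative_amounts = {lipid: signal / total for lipid, signal in intensities.items()}

for lipid, fraction in relative_amounts.items():
    print(f"{lipid}: {fraction:.1%} of the subclass signal")
```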
107

Digital Educational Games: Methodologies for Development and Software Quality

Aslan, Serdar 02 November 2016 (has links)
Development of a game in the form of software for game-based learning poses significant technical challenges for educators, researchers, game designers, and software engineers. The game development consists of a set of complex processes requiring multi-faceted knowledge in multiple disciplines such as digital graphic design, education, gaming, instructional design, modeling and simulation, psychology, software engineering, visual arts, and the learning subject area. Planning and managing such a complex multidisciplinary development project require unifying methodologies for development and software quality evaluation and should not be performed in an ad hoc manner. This dissertation presents such methodologies named: GAMED (diGital educAtional gaMe dEvelopment methoDology) and IDEALLY (dIgital eDucational gamE softwAre quaLity evaLuation methodologY). GAMED consists of a body of methods, rules, and postulates and is embedded within a digital educational game life cycle. The life cycle describes a framework for organization of the phases, processes, work products, quality assurance activities, and project management activities required to develop, use, maintain, and evolve a digital educational game from birth to retirement. GAMED provides a modular structured approach for overcoming the development complexity and guides the developers throughout the entire life cycle. IDEALLY provides a hierarchy of 111 indicators consisting of 21 branch and 90 leaf indicators in the form of an acyclic graph for the measurement and evaluation of digital educational game software quality. We developed the GAMED and IDEALLY methodologies based on the experiences and knowledge we have gained in creating and publishing four digital educational games that run on the iOS (iPad, iPhone, and iPod touch) mobile devices: CandyFactory, CandySpan, CandyDepot, and CandyBot. The two methodologies provide a quality-centered structured approach for development of digital educational games and are essential for accomplishing demanding goals of game-based learning. Moreover, classifications provided in the literature are inadequate for the game designers, engineers and practitioners. To that end, we present a taxonomy of games that focuses on the characterization of games. / Ph. D.
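IDEALLY, as described above, organizes 111 quality indicators into branch and leaf nodes of an acyclic graph. The sketch below illustrates how such a hierarchical quality model could be scored in Python, with leaf indicators carrying measured scores and branch indicators aggregating weighted children; the indicator names, weights, and scores are hypothetical and are not the actual IDEALLY indicators.

```python
# Hypothetical indicator hierarchy: branch indicators aggregate weighted children,
# leaf indicators carry measured scores in [0, 1].
quality_model = {
    "name": "overall_quality",
    "children": [
        {"name": "usability", "weight": 0.5, "children": [
            {"name": "learnability", "weight": 0.6, "score": 0.80},
            {"name": "engagement", "weight": 0.4, "score": 0.90},
        ]},
        {"name": "reliability", "weight": 0.5, "children": [
            {"name": "crash_rate", "weight": 0.7, "score": 0.95},
            {"name": "data_loss", "weight": 0.3, "score": 1.00},
        ]},
    ],
}

def score(node):
    """Score of a leaf, or the weighted sum of child scores for a branch indicator."""
    if "children" not in node:
        return node["score"]
    return sum(child["weight"] * score(child) for child in node["children"])

print(f"overall quality: {score(quality_model):.3f}")
```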
108

Scientific Workflows for Hadoop

Bux, Marc Nicolas 07 August 2018 (has links)
Scientific workflows provide a means to model, execute, and exchange the increasingly complex analysis pipelines necessary for today's data-driven science. Over the last decades, scientific workflow management systems have emerged to facilitate the design, execution, and monitoring of such workflows. At the same time, the amounts of data generated in various areas of science have outpaced hardware advancements. Parallelization and distributed execution are generally proposed to deal with increasing amounts of data. However, the resources provided by distributed infrastructures are subject to heterogeneity, dynamic performance changes at runtime, and occasional failures. To leverage the scalability provided by these infrastructures despite the observed performance variability, workflow management systems have to progress: parallelization potential in scientific workflows has to be detected and exploited; simulation frameworks, which are commonly employed for the evaluation of scheduling mechanisms, have to consider the instability encountered on the infrastructures they emulate; adaptive scheduling mechanisms have to be employed to optimize resource utilization in the face of instability; and state-of-the-art systems for scalable distributed resource management and storage, such as Apache Hadoop, have to be supported. This dissertation presents novel solutions for these aspirations. First, we introduce DynamicCloudSim, a cloud computing simulation framework that adequately models the various aspects of variability encountered in computational clouds. Secondly, we outline ERA, an adaptive scheduling policy that optimizes workflow makespan by exploiting heterogeneity, replicating bottlenecks in workflow execution, and adapting to changes in the underlying infrastructure. Finally, we present Hi-WAY, an execution engine that integrates ERA and enables the highly scalable execution on Hadoop of scientific workflows written in a number of languages.
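ERA, as summarized above, replicates bottlenecks in workflow execution and adapts to changes in the infrastructure. The Python sketch below illustrates the general speculative-replication idea under simple assumptions - re-submit running tasks whose elapsed time far exceeds the median runtime of finished tasks of the same stage; the threshold factor and task data are invented and do not reflect ERA's actual policy.

```python
from statistics import median

def tasks_to_replicate(running, finished_runtimes, factor=1.5):
    """Return running tasks whose elapsed time exceeds factor x the median finished runtime.

    running: task id -> elapsed seconds; finished_runtimes: runtimes of completed tasks
    of the same workflow stage.
    """
    if not finished_runtimes:
        return []
    threshold = factor * median(finished_runtimes)
    return [task for task, elapsed in running.items() if elapsed > threshold]

# Hypothetical workflow stage: three tasks still running, five already finished.
running = {"align_7": 55.0, "align_8": 41.0, "align_9": 300.0}
finished = [38.0, 42.0, 40.0, 45.0, 39.0]
print(tasks_to_replicate(running, finished))  # ['align_9'] would get a speculative copy
```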
109

Uma plataforma de integração de middleware para computação ubíqua / A middleware integration platform for ubiquitous computing

Lopes, Frederico Araújo da Silva 18 November 2011 (has links)
One of the current challenges of Ubiquitous Computing is the development of complex applications - applications that are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware platforms, and developers need to know the peculiarities of each of them, mainly their communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context provision middleware. It provides a unified ontology-based context model, as well as an environment that enables easy development of ubiquitous applications via the definition of semantic workflows containing an abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan is a workflow instance containing activities that are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform supports execution adaptation in case of service failures, user mobility, and degradation of service quality. OpenCOPI is validated through case studies, specifically applications from the oil and gas industry (monitoring of wells and oil pipelines). In addition, this work evaluates the overhead introduced by OpenCOPI, contrasting it with the benefits it provides, and assesses the efficiency of OpenCOPI's selection and adaptation mechanisms.
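OpenCOPI, as described above, turns semantic workflows into execution plans by selecting and composing Web services, and adapts when services fail. The Python sketch below illustrates that general idea under simple assumptions: a toy registry of candidate services per abstract activity, best-QoS selection, and re-planning that excludes failed endpoints; the activity names, endpoints, and scores are hypothetical and do not reproduce OpenCOPI's mechanisms.

```python
# Hypothetical registry: candidate services per abstract activity, each with a QoS score.
registry = {
    "get_well_pressure": [
        {"endpoint": "http://example.org/middlewareA/pressure", "qos": 0.92},
        {"endpoint": "http://example.org/middlewareB/pressure", "qos": 0.85},
    ],
}

def build_execution_plan(activities, failed=frozenset()):
    """Bind each abstract activity to the best-QoS service that is not marked as failed."""
    plan = {}
    for activity in activities:
        candidates = [s for s in registry[activity] if s["endpoint"] not in failed]
        if not candidates:
            raise RuntimeError(f"no available service for activity {activity!r}")
        plan[activity] = max(candidates, key=lambda s: s["qos"])["endpoint"]
    return plan

plan = build_execution_plan(["get_well_pressure"])
print(plan)
# If the selected service fails at runtime, adaptation rebuilds the plan without it.
plan = build_execution_plan(["get_well_pressure"], failed={plan["get_well_pressure"]})
print(plan)
```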
110

Cloud Integrator: uma plataforma para composição de serviços em ambientes de computação em nuvem / Cloud Integrator: a platform for composition of services in cloud computing environments

Cavalcante, Everton Ranielly de Sousa 31 January 2013 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Abstract: With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all of an application's requirements. To fulfill such requirements it may be necessary, instead of a single service, a composition of services that aggregates services provided by different cloud platforms. In order to generate aggregated value for the user, such a composition of services provided by several Cloud Computing platforms requires a solution in terms of platform integration, which encompasses the manipulation of a large number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator, providing mechanisms for building applications through the composition and selection of semantic Web services that take into account metadata about the services, such as QoS (Quality of Service) and prices. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. Through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of its service composition, selection, and adaptation processes, as well as the potential of using this middleware in heterogeneous cloud scenarios.
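Cloud Integrator, as described above, selects among equivalent services from different cloud platforms using metadata such as QoS and price. The Python sketch below illustrates one simple way such a selection could be ranked, assuming a toy catalogue and a weighted utility over QoS and price; the platforms, values, and weights are invented and are not Cloud Integrator's actual selection criteria.

```python
# Hypothetical catalogue of equivalent services offered by different cloud platforms.
catalogue = [
    {"platform": "cloud-A", "service": "object-storage", "qos": 0.95, "price_per_gb": 0.025},
    {"platform": "cloud-B", "service": "object-storage", "qos": 0.90, "price_per_gb": 0.018},
    {"platform": "cloud-C", "service": "object-storage", "qos": 0.80, "price_per_gb": 0.010},
]

def select_service(candidates, qos_weight=0.7, price_weight=0.3):
    """Rank candidates by a weighted utility: maximize QoS, minimize price."""
    max_price = max(c["price_per_gb"] for c in candidates)

    def utility(c):
        return qos_weight * c["qos"] + price_weight * (1 - c["price_per_gb"] / max_price)

    return max(candidates, key=utility)

print(select_service(catalogue))
```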
