51

MDAPSP - Uma arquitetura modular distribuída para auxílio à predição de estruturas de proteínas / MDAPSP - A modular distributed architecture to support protein structure prediction

Edvard Martins de Oliveira 09 May 2018 (has links)
Protein Structure Prediction (PSP) is a research field that simulates the folding of amino acid chains in order to discover the functions of proteins in living organisms, a process that is highly expensive to carry out with in vivo methods. Within Bioinformatics, it is one of the most computationally demanding and challenging tasks today. Because of this complexity, many research efforts rely on scientific gateways to provide tools for executing and analysing these experiments, together with scientific workflows to organize tasks and make information available. However, these gateways can suffer from performance bottlenecks and structural failures, producing low-quality results. To address this multifaceted context and offer alternatives to some of these limitations, this thesis proposes a modular architecture based on Service Oriented Architecture (SOA) concepts for provisioning computing resources to scientific gateways, with a focus on PSP experiments. The Modular Distributed Architecture to support Protein Structure Prediction (MDAPSP) is described conceptually and validated in a computer simulation model that demonstrates its capabilities, details the operation of its modules, and highlights its potential. The experimental evaluation shows the quality of the proposed algorithms, increasing the serving capacity of a scientific gateway, reducing the time required for prediction experiments, and laying the groundwork for a prototype of a functional architecture. The developed modules achieve good optimization of PSP experiments in distributed environments and constitute a novel resource-provisioning model for scientific gateways.
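The abstract above describes MDAPSP only at the architectural level. As a rough illustration of the kind of SOA-style resource provisioning it refers to, the sketch below shows a toy dispatcher that routes gateway PSP jobs to the least-loaded registered compute service; the class and method names (ComputeService, Dispatcher, submit) are illustrative assumptions, not the thesis's actual components.

```python
# Minimal sketch (not from the thesis): a SOA-style dispatcher that routes PSP
# experiment requests from a scientific gateway to the least-loaded compute
# service. All names here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ComputeService:
    """A registered back-end service offered to the gateway."""
    name: str
    capacity: int          # maximum concurrent PSP jobs
    running: int = 0

    def load(self) -> float:
        return self.running / self.capacity


@dataclass
class Dispatcher:
    """Central module that provisions resources for incoming PSP experiments."""
    services: list = field(default_factory=list)

    def register(self, service: ComputeService) -> None:
        self.services.append(service)

    def submit(self, experiment_id: str) -> str:
        # Pick the least-loaded service that still has free slots.
        candidates = [s for s in self.services if s.running < s.capacity]
        if not candidates:
            raise RuntimeError("no capacity available for " + experiment_id)
        target = min(candidates, key=ComputeService.load)
        target.running += 1
        return f"{experiment_id} -> {target.name}"


if __name__ == "__main__":
    d = Dispatcher()
    d.register(ComputeService("cluster-a", capacity=2))
    d.register(ComputeService("cluster-b", capacity=4))
    for i in range(3):
        print(d.submit(f"psp-job-{i}"))
```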
52

Apoiando o reúso em uma plataforma de ecossistema de software científico através do gerenciamento de contexto e de proveniência / Supporting reuse in a scientific software ecosystem platform through context and provenance management

Ambrósio, Lenita Martins 14 September 2018 (has links)
Considering the current scenario of scientific experimentation and the growing use of large-scale applications, the management of experiment data is becoming increasingly complex. The scientific experimentation process requires support for collaborative and distributed activities, and managing contextual and provenance information plays a key role in this domain. A detailed record of the steps taken to produce results, together with contextual information about the experimentation environment, can allow scientists to reuse those results in future experiments and to reuse the experiment, or parts of it, in another context. The goal of this work is to present a provenance and context management approach that supports researchers in reusing knowledge about scientific experiments conducted on a collaborative and distributed platform. First, the phases of the context and provenance management life cycle were analyzed in light of existing models. Next, a conceptual framework was proposed to support the analysis of contextual elements and provenance data of scientific experiments, and an ontology capable of extracting implicit knowledge in this domain was specified. The approach was implemented in a scientific software ecosystem platform. An evaluation carried out through case studies showed that the architecture is able to help researchers reuse and reproduce scientific experiments, and that context elements and data provenance, combined with inference mechanisms, can support reuse in the scientific experimentation process.
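The approach above hinges on recording provenance and context so experiments can be reused and reproduced. The sketch below illustrates the general idea with a toy provenance store that logs each activity's inputs, outputs, and environment context and answers simple lineage queries; field and method names are assumptions for illustration, not the structures defined in the dissertation.

```python
# Minimal sketch (illustrative only): capturing provenance and context records
# for experiment steps so they can be queried later for reuse.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    activity: str
    inputs: list
    outputs: list
    context: dict                      # e.g. tool versions, OS, parameters
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


class ProvenanceStore:
    def __init__(self):
        self.records = []

    def log(self, activity, inputs, outputs, **context):
        rec = ProvenanceRecord(activity, list(inputs), list(outputs), context)
        self.records.append(rec)
        return rec

    def lineage(self, artifact):
        """Return the activities that directly produced a given artifact."""
        return [r for r in self.records if artifact in r.outputs]


if __name__ == "__main__":
    store = ProvenanceStore()
    store.log("align_sequences", ["raw.fasta"], ["aligned.fasta"],
              tool="clustalo", version="1.2.4")
    store.log("build_tree", ["aligned.fasta"], ["tree.nwk"], tool="raxml")
    for r in store.lineage("tree.nwk"):
        print(r.activity, r.context)
```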
53

[en] TEAM: AN ARCHITECTURE FOR E-WORKFLOW MANAGEMENT / [pt] TEAM: UMA ARQUITETURA PARA GERÊNCIA DE E-WORKFLOWS

LUIZ ANTONIO DE MORAES PEREIRA 30 August 2004 (has links)
[en] In distributed collaborative applications, the use of centralized repositories for storing shared data and programs compromises important characteristics of this kind of application, such as fault tolerance, scalability, and local autonomy. Applications such as Kazaa, Gnutella, and Edutella exemplify peer-to-peer (P2P) computing, which has proved an interesting alternative for addressing the problems mentioned above without imposing the typical restrictions of centralized, or even distributed, systems such as mediators and heterogeneous DBMSs (HDBMSs). In this work we present TEAM (Teamwork-support Environment Architectural Model), an architecture for managing workflows on the Web. Besides describing the components and connectors of the architecture, which is based on P2P computing, we address process modelling and the management of data, metadata, and process execution-control information. We also discuss the strategy adopted for disseminating queries and messages to the peers of the network in environments based on the architecture. We illustrate the application of TEAM in an e-learning case study.
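TEAM's query and message dissemination is described only at a high level above. As background, the sketch below shows the classic flooding-with-TTL dissemination used by Gnutella-style P2P networks cited in the abstract; it is a point of reference, not TEAM's actual protocol.

```python
# Minimal sketch (illustrative, not the TEAM protocol): flooding-style query
# dissemination with a time-to-live (TTL), as in Gnutella-like P2P networks.
class Peer:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.seen = set()          # query ids already handled

    def connect(self, other):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def receive(self, query_id, payload, ttl):
        if query_id in self.seen or ttl <= 0:
            return
        self.seen.add(query_id)
        print(f"{self.name} handles query {query_id}: {payload}")
        # Forward to every neighbor with a decremented TTL.
        for n in self.neighbors:
            n.receive(query_id, payload, ttl - 1)


if __name__ == "__main__":
    a, b, c, d = Peer("A"), Peer("B"), Peer("C"), Peer("D")
    a.connect(b); b.connect(c); c.connect(d); a.connect(c)
    a.receive("q1", "find workflow 'sequence alignment'", ttl=2)
```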
54

RAfEG: Referenz-Systemarchitektur und prototypische Umsetzung -- Ausschnitt aus dem Abschlussbericht zum Projekt "Referenzarchitektur für E-Government" (RAfEG) -- / RAfEG: Reference system architecture and prototype implementation -- excerpt from the final report of the project "Referenzarchitektur für E-Government" (RAfEG) --

Kunis, Raphael, Rünger, Gudula 07 December 2007 (has links)
The goal of the RAfEG project was to develop a reference architecture for e-government providing the components needed to build information and communication technology (ICT) systems for typical processes in agencies subordinate to the interior ministries of the German federal states. The RAfEG architecture is a holistic approach covering many essential aspects, from the formal description of the domain-level relationships to the development of distributed software components for administrative business processes. Taking hardware prerequisites into account, the architecture defines the structure of software components for automating administrative work. RAfEG was designed as a spatially distributed, component-based software system, which required developing concepts for the efficient use of heterogeneous systems for interactive e-government applications. The architecture was prototyped for plan approval and plan permission procedures (Planfeststellungsverfahren/Plangenehmigungsprozesse), using the Regierungspräsidium Leipzig as an example. The project was shaped by the development of an end-to-end concept for optimal ICT support of administrative processes, ranging from the modelling of the domain relationships (functional concept), through the development-oriented, methodical mapping of the matters to be implemented (data-processing concept), to component-based software development (implementation concept). This concept resulted in a reference architecture for typical e-government processes. In addition to the purely functional, task-related aspects, security aspects as well as technical and organizational interfaces were examined in detail. The consistent use of open source software yields a cost-efficient, flexible reference solution which, thanks to its component-based structure, can also be adapted very well to specific requirements.
55

Similarity measures for scientific workflows

Starlinger, Johannes 08 January 2016 (has links)
Over the last decade, scientific workflows have gained attention as a valuable tool for creating reproducible, data-processing in-silico experiments, into which local scripts and applications as well as web services can be integrated. Specialized online repositories have emerged that allow such workflows to be published, shared, and reused by the scientific community. With the increasing size of these repositories, methods to compare scientific workflows with respect to their functional similarity become a necessity, for example for duplicate detection, similarity search, or clustering of functionally similar workflows. This thesis investigates similarity measures for scientific workflows through four consecutive research tasks. First, we closely investigate the properties of scientific workflows relevant to their similarity and identify characteristics of the reuse of their components. Second, we review and dissect existing approaches to scientific workflow comparison into a defined set of subtasks necessary in the comparison process, and re-implement previous approaches to each subtask. We create a large gold-standard corpus of expert ratings on workflow similarity, with more than 2400 ratings provided for 485 pairs of workflows by 15 workflow experts from 6 institutions. For the first time, this allows a comprehensive, comparative evaluation of different scientific workflow similarity measures, confirming some previous findings but rejecting others. Third, we propose and evaluate a novel method for scientific workflow comparison. We show that this method provides results of both higher quality and higher consistency than previous approaches, and can easily be stacked and ensembled with other approaches for still better performance and higher speed. Fourth, we show how these findings can be leveraged to implement, from off-the-shelf tools, a search engine that performs fast, high-quality similarity search for scientific workflows at repository scale, a premier area of application for such similarity measures.
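To make the notion of a workflow similarity measure concrete, the sketch below computes a common structure-agnostic baseline: the Jaccard index over the sets of components two workflows use. It is the kind of baseline such evaluations compare against, not the measure proposed in this thesis.

```python
# Minimal sketch (illustrative only): Jaccard similarity over the sets of
# components (e.g. web services or scripts) used by two workflows.
def jaccard_similarity(components_a, components_b):
    a, b = set(components_a), set(components_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


if __name__ == "__main__":
    wf1 = ["fetch_sequences", "clustal_align", "plot_tree"]
    wf2 = ["fetch_sequences", "muscle_align", "plot_tree"]
    print(f"similarity = {jaccard_similarity(wf1, wf2):.2f}")  # 0.50
```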
56

HPC scheduling in a brave new world

Gonzalo P., Rodrigo January 2017 (has links)
Many breakthroughs in scientific and industrial research are supported by simulations and calculations performed on high performance computing (HPC) systems. These systems typically consist of uniform, largely parallel compute resources and high-bandwidth concurrent file systems interconnected by low-latency synchronous networks. HPC systems are managed by batch schedulers that order the execution of application jobs to maximize utilization while steering turnaround time. In the past, demands for greater capacity were met by building more powerful systems with more compute nodes, greater transistor densities, and higher processor operating frequencies. Unfortunately, the scope for further increases in processor frequency is restricted by the limitations of semiconductor technology. Instead, parallelism within processors and in numbers of compute nodes is increasing, while the capacity of single processing units remains unchanged. In addition, the memory and I/O hierarchies of HPC systems are becoming deeper and more complex to keep up with the systems' processing power. HPC applications are also changing: the need to analyze large data sets and simulation results is increasing the importance of data processing and data-intensive applications, and the composition of applications through workflows within HPC centers is becoming increasingly important. This thesis addresses the HPC scheduling challenges created by such new systems and applications. It begins with a detailed analysis of the evolution of the workloads of three reference HPC systems at the National Energy Research Scientific Computing Center (NERSC), with a focus on job heterogeneity and scheduler performance. This is followed by an analysis and improvement of a fairshare prioritization mechanism for HPC schedulers. The thesis then surveys the current state of the art and expected near-future developments in HPC hardware and applications, and identifies unaddressed scheduling challenges that they will introduce, including application diversity and issues with the scheduling of workflows and of I/O resources to support applications. Next, a cloud-inspired HPC scheduling model is presented that can accommodate application diversity, takes advantage of malleable applications, and enables short wait times for applications. Finally, to support ongoing scheduling research, an open source scheduling simulation framework is proposed that allows new scheduling algorithms to be implemented and evaluated in a production scheduler using workloads modeled on those of a real system. The thesis concludes with the presentation of a workflow scheduling algorithm that minimizes workflows' turnaround time without over-allocating resources. / Work also supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR); resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy, were used, both under Contract No. DE-AC02-05CH11231.
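One of the contributions mentioned above is the analysis and improvement of a fairshare prioritization mechanism. The sketch below shows a basic fairshare factor of the exponential form used by common HPC batch schedulers, giving higher priority to users whose recent usage falls below their target share; the constants and names are illustrative, and this is not the thesis's improved mechanism.

```python
# Minimal sketch (illustrative): a basic fairshare priority factor of the kind
# used by HPC batch schedulers. Users whose recent usage is below their target
# share get a factor closer to 1.0 and thus higher priority.
def fairshare_factor(target_share, usage, total_usage):
    """target_share: fraction of the machine promised to the user (0..1).
    usage / total_usage: the fraction the user actually consumed recently."""
    if total_usage == 0:
        return 1.0
    actual = usage / total_usage
    # Exponential form: equals 0.5 when usage exactly matches the target
    # share, approaches 1.0 when recent usage is low.
    return 2 ** (-actual / target_share)


if __name__ == "__main__":
    total = 1000.0  # core-hours consumed by everyone in the decay window
    for user, share, used in [("alice", 0.5, 100), ("bob", 0.25, 400)]:
        print(user, round(fairshare_factor(share, used, total), 3))
```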
57

Simulation komplexer Arbeitsabläufe im Bereich der digitalen Fabrik [Präsentationsfolien] / Simulation of complex workflows in the digital factory [presentation slides]

Kronfeld, Thomas, Brunnett, Guido 20 December 2016 (has links) (PDF)
No description available.
58

Uma arquitetura para processamento de grandes volumes de dados integrando sistemas de workflow científicos e o paradigma MapReduce / An architecture for processing large volumes of data by integrating scientific workflow systems and the MapReduce paradigm

Zorrilla Coz, Rocío Milagros 13 September 2012 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) / With the exponential growth of computational power and of the data generated by scientific experiments and simulations, it is now common to find simulations that generate terabytes of data and experiments that gather petabytes of data. The type of processing required for this data is currently known as data-intensive computing. The MapReduce paradigm, included in the Hadoop framework, is an increasingly used parallelization technique for executing distributed applications; the framework schedules job execution on clusters, provides fault tolerance, and manages all necessary communication between machines. For many kinds of complex applications, Scientific Workflow Management Systems offer advanced functionality that supports the development, execution, and evaluation of scientific experiments in different computational environments. In the Query Evaluation Framework (QEF), workflow activities are represented as algebraic operators and application-specific data types are encapsulated in a common tuple structure; QEF aims at automating computational processes and data management so that scientists can concentrate on the scientific problem. Several scientific workflow systems today provide components and task-parallelization strategies for distributed environments. However, scientific experiments tend to generate volumes of data that can limit execution scalability with respect to data locality: for instance, data transfers may delay task execution, or failures may occur when consolidating results. This work presents a proposal for integrating QEF with Hadoop, with the main objective of managing workflow execution in a data-locality-aware manner. In the proposal, Hadoop is responsible for scheduling tasks in a distributed environment, while the workflow activities and data sources are managed by QEF. The proposed environment is evaluated using a scientific workflow from astronomy as a case study, the deployment of the application in a virtualized environment is described in detail, and experiments are presented that evaluate the impact of the proposed environment on the perceived performance of the application, followed by a discussion of future work.
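For readers unfamiliar with the programming model that the QEF/Hadoop integration delegates to Hadoop, the sketch below mimics a MapReduce job in plain Python over tuple-like records. It only illustrates the map and reduce phases; it makes no attempt to reproduce QEF's algebraic operators or Hadoop's distributed, locality-aware execution, and the function and field names are assumptions.

```python
# Minimal sketch (illustrative only): an in-memory MapReduce over tuple-like
# records, mimicking the programming model used by Hadoop.
from collections import defaultdict


def map_phase(records, map_fn):
    intermediate = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            intermediate[key].append(value)
    return intermediate


def reduce_phase(intermediate, reduce_fn):
    return {key: reduce_fn(key, values) for key, values in intermediate.items()}


if __name__ == "__main__":
    # Toy activity: count observations per sky region in an astronomy catalog.
    catalog = [
        {"region": "M31", "magnitude": 3.4},
        {"region": "M31", "magnitude": 4.1},
        {"region": "LMC", "magnitude": 0.9},
    ]
    mapped = map_phase(catalog, lambda rec: [(rec["region"], 1)])
    counts = reduce_phase(mapped, lambda key, values: sum(values))
    print(counts)  # {'M31': 2, 'LMC': 1}
```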
59

Cost-efficient resource management for scientific workflows on the cloud

Pietri, Ilia January 2016 (has links)
Scientific workflows are used in many scientific fields to abstract complex computations (tasks) and the data or control-flow dependencies between them. High performance computing (HPC) systems have been widely used for the execution of scientific workflows. Cloud computing has gained popularity by offering users on-demand provisioning of resources and the ability to choose from a wide range of possible configurations: resources are made available in the form of virtual machines (VMs), each described by a set of resource characteristics, e.g. the amount of CPU and memory. The notion of VMs enables the use of different resource combinations, which facilitates the deployment of applications and the management of resources. A problem that arises is determining the configuration, such as the number and type of resources, that leads to efficient resource provisioning; for example, allocating a large amount of resources may reduce application execution time, but at the expense of increased cost. This thesis investigates the challenges that arise in resource provisioning and task scheduling of scientific workflows and explores ways to address them, developing approaches that improve energy efficiency for scientific workflows and meet the user's objectives, e.g. makespan and monetary cost. The motivation stems from the wide range of options that make it possible to select cost-efficient configurations and improve resource utilisation. The contributions of this thesis are the following. (i) A survey of the issues arising in resource management in cloud computing, focusing on VM management, cost efficiency, and the deployment of scientific workflows. (ii) A performance model to estimate workflow execution time for different numbers of resources based on the workflow structure; the model can be used to estimate the respective user and energy costs in order to determine configurations that lead to efficient resource provisioning and achieve a balance between conflicting goals. (iii) Two energy-aware scheduling algorithms that maximise the number of completed workflows from an ensemble under energy and budget or deadline constraints, addressing the problem of energy-aware resource provisioning and scheduling for scientific workflow ensembles. (iv) An energy-aware algorithm that selects the CPU frequency to be used for each workflow task in order to achieve energy savings without exceeding the workflow deadline, taking into account the different requirements and constraints that arise from the workflow and system characteristics. (v) Two cost-based frequency selection algorithms that choose the CPU frequency for each provisioned resource in order to achieve cost-efficient resource configurations for the user and complete the workflow within the deadline, with decision making based on both the workflow characteristics and the provider's pricing model.
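As an illustration of the frequency-selection idea described above, the sketch below picks, among the available CPU frequencies that still meet a task's deadline, the one with the lowest estimated energy, under a simple model in which only the CPU-bound fraction of runtime scales with frequency and dynamic power grows roughly with the cube of the frequency. The model and all constants are assumptions for illustration, not the algorithms developed in the thesis.

```python
# Minimal sketch (illustrative): choose, among the available CPU frequencies,
# one that meets a task's deadline at the lowest estimated energy.
def runtime(base_runtime, cpu_fraction, f, f_max):
    """Runtime at frequency f, given the runtime measured at f_max."""
    return base_runtime * (cpu_fraction * f_max / f + (1 - cpu_fraction))


def energy(base_runtime, cpu_fraction, f, f_max, p_static=20.0, c=1e-26):
    t = runtime(base_runtime, cpu_fraction, f, f_max)
    p_dynamic = c * f ** 3          # crude cubic dynamic-power model
    return t * (p_static + p_dynamic)


def pick_frequency(base_runtime, cpu_fraction, deadline, frequencies):
    f_max = max(frequencies)
    feasible = [f for f in frequencies
                if runtime(base_runtime, cpu_fraction, f, f_max) <= deadline]
    if not feasible:
        return None                 # deadline cannot be met at any frequency
    # Among feasible frequencies, pick the one with the lowest energy.
    return min(feasible, key=lambda f: energy(base_runtime, cpu_fraction, f, f_max))


if __name__ == "__main__":
    freqs = [1.2e9, 1.6e9, 2.0e9, 2.4e9]          # Hz
    f = pick_frequency(base_runtime=100.0, cpu_fraction=0.7,
                       deadline=130.0, frequencies=freqs)
    print("chosen frequency:", f)
```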
60

Managing system to supervise professional multimedia equipment

Azevedo, Luís Soares de January 2012 (has links)
Integrated Master's thesis (Tese de Mestrado Integrado). Informatics and Computing Engineering. Faculdade de Engenharia, Universidade do Porto, 2012.
