About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
511

A collaborative architecture against DDoS attacks for cloud computing systems. / Uma arquitetura colaborativa contra ataques distribuídos de negação de serviço para sistemas de computação em nuvem.

Thiago Rodrigues Meira de Almeida 14 December 2018 (has links)
Distributed attacks, such as Distributed Denial of Service (DDoS) attacks, require not only the deployment of standalone security mechanisms responsible for monitoring a limited portion of the network, but also distributed mechanisms that are able to jointly detect and mitigate the attack before the complete exhaustion of network resources. This need has led to the proposal of several collaborative security mechanisms, covering different phases of attack mitigation, from detection to the relief of the system after the attack subsides. Such mechanisms are expected to enable collaboration among security nodes through the distributed enforcement of security policies, by installing security rules (e.g., for packet filtering) and/or by provisioning new specialized security nodes on the network. Albeit promising, existing proposals that distribute security tasks among collaborative nodes usually do not consider an optimal allocation of computational resources. As a result, their operation may result in a poor Quality of Service for legitimate packet flows during the mitigation of a DDoS attack. To tackle this issue, this work proposes a collaborative solution against DDoS attacks with two main goals: (1) ensure an optimal use of resources already available in the attack's datapath in a proactive way, and (2) optimize the placement of security tasks among the collaborating security nodes. Regardless of the characteristics of each main goal, legitimate traffic must be preserved and packet loss reduced as much as possible. / No abstract in Portuguese.
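The abstract's first goal, making proactive use of filtering capacity already present on the attack's datapath, can be illustrated with a toy greedy allocation. This is a hypothetical sketch, not the thesis's actual algorithm; the node names, capacities, and the source-first policy are invented for illustration:

```python
# Hypothetical sketch: greedily place filtering load on security nodes along
# an attack's datapath, preferring nodes closest to the attack source so that
# malicious traffic is dropped as early as possible.

def place_filtering(path_nodes, attack_rate):
    """path_nodes: list of (name, spare_capacity) ordered from attack source
    toward the victim. Returns ({name: assigned_rate}, unfiltered_rate)."""
    placement = {}
    remaining = attack_rate
    for name, spare in path_nodes:
        if remaining <= 0:
            break
        assigned = min(spare, remaining)
        if assigned > 0:
            placement[name] = assigned
            remaining -= assigned
    # remaining > 0 would mean new security nodes must be provisioned
    return placement, remaining

path = [("edge-1", 400), ("core-1", 300), ("ingress-1", 500)]
placement, unfiltered = place_filtering(path, 1000)
```

Here the 1000-unit attack is fully absorbed by the three on-path nodes, so no extra node needs to be provisioned.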
512

Unveiling the interplay between timeliness and scalability in cloud monitoring systems / Desvelando a relação mútua entre escalabilidade e oportunidade em sistemas de monitoramento de nuvens computacionais

Rodrigues, Guilherme da Cunha January 2016 (has links)
Computação em nuvem é uma solução adequada para profissionais, empresas, centros de pesquisa e instituições que necessitam de acesso a recursos computacionais sob demanda. Atualmente, nuvens computacionais confiam no gerenciamento de sua estrutura para fornecer recursos computacionais com qualidade de serviço adequada às expectativas de seus clientes; tal qualidade de serviço é estabelecida através de acordos de nível de serviço. Nesse contexto, o monitoramento é uma função crítica de gerenciamento para se prover tal qualidade de serviço. Requisitos de monitoramento em nuvens computacionais são propriedades que um sistema de monitoramento de nuvem precisa reunir para executar suas funções de modo adequado, e atualmente existem diversos requisitos definidos pela literatura, tais como: oportunidade, elasticidade e escalabilidade. Entretanto, tais requisitos geralmente possuem influência mútua entre eles, que pode ser positiva ou negativa, e isso impossibilita o desenvolvimento de soluções de monitoramento completas. Dado o cenário descrito acima, essa tese tem como objetivo investigar a influência mútua entre escalabilidade e oportunidade. Especificamente, essa tese propõe um modelo matemático para estimar a influência mútua entre tais requisitos de monitoramento. A metodologia utilizada por essa tese para construir tal modelo matemático baseia-se em parâmetros de monitoramento tais como: topologia de monitoramento, quantidade de dados de monitoramento e frequência de amostragem. Além destes, a largura de banda de rede e o tempo de resposta também são importantes métricas do modelo matemático. A avaliação dos resultados obtidos foi realizada através da comparação entre os resultados do modelo matemático e de uma simulação. As maiores contribuições dessa tese são divididas em dois eixos, denominados Básico e Chave.
As contribuições do eixo básico são: (i) a discussão a respeito da estrutura de monitoramento de nuvem e a introdução do conceito de foco de monitoramento; (ii) o exame do conceito de requisito de monitoramento e a proposição do conceito de abilidade de monitoramento; (iii) a análise dos desafios e tendências a respeito de monitoramento de nuvens computacionais. As contribuições do eixo chave são: (i) a discussão a respeito de oportunidade e escalabilidade, incluindo métodos para lidar com a mútua influência entre tais requisitos e a relação desses requisitos com parâmetros de monitoramento; (ii) a identificação dos parâmetros de monitoramento que são essenciais na relação entre oportunidade e escalabilidade; (iii) a proposição de um modelo matemático baseado em parâmetros de monitoramento que visa estimar a relação mútua entre oportunidade e escalabilidade. / Cloud computing is a suitable solution for professionals, companies, research centres, and institutions that need access to computational resources on demand. Nowadays, clouds have to rely on proper management of their structure to provide customers with such computational resources at an adequate quality of service, which is established by Service Level Agreements (SLAs). In this context, cloud monitoring is a critical management function to achieve this. Cloud monitoring requirements are properties that a cloud monitoring system needs to meet to perform its functions properly, and currently there are several of them, such as timeliness, elasticity, and scalability. However, such requirements usually have mutual influence among themselves, which can be either positive or negative, and this has prevented the development of complete cloud monitoring solutions. Given the above, this thesis investigates the mutual influence between timeliness and scalability. It proposes a mathematical model to estimate such mutual influence in order to enhance cloud monitoring systems.
The methodology used in this thesis is based on monitoring parameters such as the monitoring topology, the amount of monitoring data, and the sampling frequency. It also considers network bandwidth and response time as important metrics. Finally, the evaluation is based on a comparison between the mathematical model's results and outcomes obtained via simulation. The main contributions of this thesis are divided into two axes, namely basic and key. The basic contributions are: (i) it discusses the cloud monitoring structure and introduces the concept of cloud monitoring focus; (ii) it examines the concept of cloud monitoring requirement and proposes to divide such requirements into two groups, defined as cloud monitoring requirements and cloud monitoring abilities; (iii) it analyses challenges and trends in cloud monitoring, pointing out research gaps that include the mutual influence between cloud monitoring requirements, which is core to the key contributions. The key contributions are: (i) it presents a discussion of timeliness and scalability that includes the methods currently used to cope with the mutual influence between them, and the relation between such requirements and monitoring parameters; (ii) it identifies the monitoring parameters that are essential in the relation between timeliness and scalability; (iii) it proposes a mathematical model based on monitoring parameters to estimate the mutual influence between timeliness and scalability.
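The kind of parameter-driven estimate the abstract describes can be illustrated with a toy calculation (a hypothetical sketch with invented numbers, not the thesis's actual model): in a centralized monitoring topology, the traffic arriving at the collector grows linearly with the number of monitored nodes, the data per sample, and the sampling frequency, which is one way scalability and timeliness pull against each other.

```python
# Illustrative sketch: aggregate monitoring traffic at a central collector,
# built from the parameters named in the abstract (number of nodes, amount
# of monitoring data per sample, sampling frequency). All values invented.

def monitoring_bandwidth(nodes, bytes_per_sample, samples_per_sec):
    """Aggregate monitoring traffic at the collector, in bytes/second."""
    return nodes * bytes_per_sample * samples_per_sec

# Scaling from 100 to 1000 nodes at 512 B per sample, one sample every 4 s:
bw_small = monitoring_bandwidth(100, 512, 0.25)
bw_large = monitoring_bandwidth(1000, 512, 0.25)
```

Scaling the cluster tenfold multiplies collector traffic tenfold, so keeping the same timeliness (sampling frequency) eventually saturates the collector's bandwidth.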
513

Uppe bland molnen : Hur affärssystemleverantörer hanterar de huvudsakliga riskerna med Cloud Computing. / In the clouds : how ERP providers manage the main risks with Cloud Computing.

Johansson, Tina, Bergström, Johannes January 2012 (has links)
The IT world is in constant change; a solution that was conceivable ten years ago may now be obsolete. One alternative that has grown within the IT world in recent years is Cloud Computing. Moving parts of, or the entire, ERP system to a cloud-based service over the internet is fully possible today. This can lead to cost reductions compared with a traditional ERP system, since the company does not need to spend money on its own storage servers. It is also flexible in the sense that the company can decide for itself how much storage space it wants. At the same time, research has shown that there are a number of risks with cloud-based services, for example how performance is affected by using a service located on the internet instead of hosted internally at the company. There is also uncertainty about how confidential information is stored. While there is much information indicating that the use of Cloud Computing is perceived as risky, it has not been defined which risks are the most worrying. It is also unclear how ERP providers manage the risks of cloud-based services. The purpose of this thesis is therefore to identify the main risks of Cloud Computing and to let ERP providers answer how they manage these risks. The starting point has been existing literature and studies, used to build the theoretical frame of reference. In the theory, the main risks of Cloud Computing have also been identified. These were then used as a basis for formulating the interview questions used to collect the empirical material. This was done through interviews with five ERP providers, who described how they manage the main risks of Cloud Computing. The answers were then documented and related to one another to create an overall picture of how the risks of Cloud Computing are managed. The results of the study show that there are different services within Cloud Computing.
Software as a Service, Platform as a Service, and Infrastructure as a Service are three different service models within Cloud Computing. Within Cloud Computing there are different security aspects to consider at different levels: the network, host, and application levels. Companies can also use a Service Level Agreement when introducing a cloud service, a contract regulating the obligations of both the customer and the provider in the use of the cloud service. Five main risks of Cloud Computing have been identified: confidentiality and integrity, legal requirements, availability, control and storage, and vendor dependence. In the empirical study, five ERP providers answered how these risks are managed. The way they manage the risks shows both similarities and differences between the ERP providers. / Program: Dataekonomutbildningen
514

Leverans av IT-system : Tentativa faktorer som påverkar valet / Delivery of IT systems : Tentative factors affecting the choice

Berggren, Theresé January 2012 (has links)
Cloud computing is discussed everywhere, yet there is still no common view of what it actually is. If ten specialists in the field were asked what cloud computing means, ten different answers would be given. A definition of "cloud" on its own does not carry the same meaning as the term "cloud computing". The word cloud is a common metaphor for the Internet, but when cloud is combined with computing the meaning becomes broader and less clear. The term cloud, in a stricter sense, refers to the use of computing resources online rather than across the entire Internet. Specialists in the field agree that the term cloud computing entails software, data storage, and processing power being available over the Internet. With the new trend of cloud computing, there is no longer a data center located at the company itself, as there is in traditional computing. The term traditional computing is not commonly used, but now that cloud computing is a current topic of discussion, some other form of computing is needed for comparison, in order to understand cloud computing better. Traditional computing means that companies themselves own and run their applications on their own infrastructure. The vendor installs software locally on the customer's machines, and the customer's data is stored on site. Many discussions revolve around how users experience this, rather than how vendors in the two areas perceive it. The fact that customers experience many advantages and disadvantages in both areas made it interesting to study how vendors experience delivering services via cloud computing and via traditional computing. For the study to produce a result, a theoretical investigation was first conducted to gain an understanding of the two areas, which would then support the empirical investigation. The empirical investigation was also a basis for interpreting and understanding which factors systems development companies believe vendors should pay attention to.
The empirical material was collected through qualitative interviews, and the theoretical material through literature studies. Following this study, the researcher has been able to present the tentative factors an IT vendor should pay attention to when choosing a delivery model. / Program: Dataekonomutbildningen
515

Cloud Computing Adoption in Iran as a Developing Country : A Tentative Framework Based on Experiences from Iran.

Mousavi Shoshtari, Seyed Farid January 2013 (has links)
The employment of the right technology in an organisation can provide major competitive advantages. Not only organisations but, at a higher level, governments are seeking new technologies to enhance their services while minimising costs. Although there might be no precise definition of cloud computing, the tremendous advantages and benefits of this new technology have turned cloud computing into the hottest topic in Information Technology. The remarkable effects of cloud computing on the economy have already stimulated developed countries to deploy this technology at a national level. Nonetheless, the adoption of cloud computing can transform the workflow in organisations. Therefore, in order to ensure a smooth transition with minimal casualties, preparations need to be made and a clear road map has to be followed. However, the approach to the cloud adoption process in developing countries can be entirely different. While it has been pointed out that cloud computing can bring more advantages to developing countries, its adoption can be profoundly challenging. Consequently, a set of fundamental and yet vital preparations is required to facilitate the process of cloud adoption. Moreover, a definite framework formed on the basis of the current state of the country is absolutely necessary. In this research, we focus on the process of cloud adoption in Iran as a developing country. We start by providing a comprehensive background on cloud computing, studying its aspects, features, advantages and disadvantages, and continue by identifying the vital cloud readiness criteria. Next, we conduct an empirical study to assess the state of cloud readiness in Iran through interviews, observations and discussions. Finally, after analysing the data from the empirical study, we present our results as a clear and definitive framework for cloud adoption in Iran. / Program: Masterutbildning i Informatik
516

Design and Optimization of Mobile Cloud Computing Systems with Networked Virtual Platforms

Jung, Young Hoon January 2016 (has links)
A Mobile Cloud Computing (MCC) system is a cloud-based system that users access through their own mobile devices. MCC systems are emerging as the product of two technology trends: 1) the migration of personal computing from desktop to mobile devices and 2) the growing integration of large-scale computing environments into cloud systems. Designers are developing a variety of new mobile cloud computing systems, each with different goals and under the influence of different design constraints, such as high network latency or a limited energy supply. Current MCC systems rely heavily on computation offloading, which however incurs new problems, such as scalability of the cloud, privacy concerns due to storing personal information in the cloud, and high energy consumption in cloud data centers. In this dissertation, I address these problems by exploring different options in the distribution of computation across different computing nodes in MCC systems. My thesis is that "the use of design and simulation tools optimized for design space exploration of the MCC systems is the key to optimize the distribution of computation in MCC." For a quantitative analysis of mobile cloud computing systems through design space exploration, I have developed netShip, the first generation of an innovative design and simulation tool that offers large scalability and heterogeneity support. With this tool, system designers and software programmers can efficiently develop, optimize, and validate large-scale, heterogeneous MCC systems. I have enhanced netShip to support the development of ever-evolving MCC applications with a variety of emerging needs, including the fast simulation of new devices, e.g., Internet-of-Things devices, and accelerators, e.g., mobile GPUs. Leveraging netShip, I developed three new MCC systems in which I applied three variations of a new computation-distribution technique, called Reverse Offloading.
By more actively leveraging the computational power on mobile devices, the MCC systems can reduce the total execution times, the burden of concentrated computations on the cloud, and the privacy concerns about storing personal information available in the cloud. This approach also creates opportunities for new services by utilizing the information available on the mobile device instead of accessing the cloud. Throughout my research I have enabled the design optimization of mobile applications and cloud-computing platforms. In particular, my design tool for MCC systems becomes a vehicle to optimize not only the performance but also the energy dissipation, an aspect of critical importance for any computing system.
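The reverse-offloading idea, keeping work on the mobile device when that beats the round trip to the cloud, can be reduced to a toy decision rule (a hypothetical sketch with invented numbers and names, not the dissertation's actual policy):

```python
# Hypothetical sketch of the reverse-offloading intuition: run a task on the
# mobile device when local execution is faster than offloading it, i.e.
# faster than the cloud compute time plus the network round trip.

def run_locally(local_ms, cloud_ms, rtt_ms):
    """True when local execution beats offloading to the cloud."""
    return local_ms < cloud_ms + rtt_ms

# Over a slow link the round trip dominates, so a light task stays on the
# device; a heavy task is still worth offloading despite the round trip.
slow_link = run_locally(80.0, cloud_ms=20.0, rtt_ms=100.0)
heavy_task = run_locally(500.0, cloud_ms=20.0, rtt_ms=100.0)
```

A real policy would also weigh energy cost and privacy, which the abstract lists as motivations for keeping computation on the device.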
517

On SIP Server Clusters and the Migration to Cloud Computing Platforms

Kim, Jong Yul January 2016 (has links)
This thesis looks in depth at telephony server clusters, the modern switchboards at the core of a packet-based telephony service. The most widely used de facto standard protocols for telecommunications are the Session Initiation Protocol (SIP) and the Real Time Protocol (RTP). SIP is a signaling protocol used to establish, maintain, and tear down communication channels between two or more parties. RTP is a media delivery protocol that allows packets to carry digitized voice, video, or text. SIP telephony server clusters that provide communications services, such as an emergency calling service, must be scalable and highly available. We evaluate existing commercial and open source telephony server clusters to see how they differ in scalability and high availability. We also investigate how a scalable SIP server cluster can be built on a cloud computing platform. Elasticity of resources is an attractive property for SIP server clusters because it allows the cluster to grow or shrink organically based on traffic load. However, simply deploying existing clusters to cloud computing platforms is not good enough to take full advantage of elasticity. We explore the design and implementation of clusters that scale in real time. The database tier of our cluster was modified to use a scalable key-value store so that both the SIP proxy tier and the database tier can scale separately. Load monitoring and reactive threshold-based scaling logic are presented and evaluated. Server clusters also need to reduce processing latency; otherwise, subscribers experience low quality of service, such as delayed call establishment, dropped calls, and inadequate media quality. Cloud computing platforms do not guarantee latency on virtual machines due to resource contention on the same physical host. These extra latencies from resource contention are temporary in nature.
Therefore, we propose and evaluate a mechanism that temporarily distributes more incoming calls to responsive SIP proxies, based on measurements of the processing delay in proxies. Availability of SIP server clusters is also a challenge on platforms where a node may fail at any time. We investigated how single component failures in a cluster can lead to a complete system outage. We found that for single component failures, simply having redundant components of the same type is enough to mask those failures. However, for client-facing components, smarter clients and DNS resolvers are necessary. Throughout the thesis, a prototype SIP proxy cluster is re-used, with variations in the architecture or configuration, to demonstrate and address the issues mentioned above. This allows us to tie all of our approaches for different issues into one coherent system that is dynamically scalable, is responsive despite the latency variations of virtual machines, and is tolerant of single component failures in cloud platforms.
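The delay-based call distribution described above can be sketched as an inverse-delay weighting (a hypothetical illustration; the proxy names and the exact weighting scheme are assumptions, not the thesis's implementation):

```python
# Hypothetical sketch: give each SIP proxy a share of incoming calls that is
# inversely proportional to its currently measured processing delay, so
# temporarily slow proxies receive fewer new calls.

def call_shares(delays_ms):
    """delays_ms: {proxy: measured processing delay in ms}.
    Returns the fraction of new calls each proxy should receive."""
    weights = {p: 1.0 / d for p, d in delays_ms.items()}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# proxy-a responds twice as fast as the others, so it takes half the calls
shares = call_shares({"proxy-a": 10.0, "proxy-b": 20.0, "proxy-c": 20.0})
```

Because the delays are re-measured continuously, the shares recover automatically once a VM's resource contention subsides, matching the "temporary" nature of the latency spikes the abstract describes.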
518

Essays on Cloud Pricing and Causal Inference

Kilcioglu, Cinar January 2016 (has links)
In this thesis, we study economics and operations of cloud computing, and we propose new matching methods in observational studies that enable us to estimate the effect of green building practices on market rents. In the first part, we study a stylized revenue maximization problem for a provider of cloud computing services, where the service provider (SP) operates an infinite capacity system in a market with heterogeneous customers with respect to their valuation and congestion sensitivity. The SP offers two service options: one with guaranteed service availability, and one where users bid for resource availability and only the "winning" bids at any point in time get access to the service. We show that even though capacity is unlimited, in several settings, depending on the relation between valuation and congestion sensitivity, the revenue maximizing service provider will choose to make the spot service option stochastically unavailable. This form of intentional service degradation is optimal in settings where user valuation per unit time increases sub-linearly with respect to their congestion sensitivity (i.e., their disutility per unit time when the service is unavailable) -- this is a form of "damaged goods." We provide some data evidence based on the analysis of price traces from the biggest cloud service provider, Amazon Web Services. In the second part, we study the competition on price and quality in cloud computing. The public "infrastructure as a service" cloud market possesses unique features that make it difficult to predict long-run economic behavior. On the one hand, major providers buy their hardware from the same manufacturers, operate in similar locations and offer a similar menu of products. On the other hand, the competitors use different proprietary "fabric" to manage virtualization, resource allocation and data transfer. 
The menus offered by each provider involve a discrete number of choices (virtual machine sizes) and allow providers to locate in different parts of the price-quality space. We document this differentiation empirically by running benchmarking tests. This allows us to calibrate a model of firm technology. Firm technology is an input into our theoretical model of price-quality competition. The monopoly case highlights the importance of competition in blocking a "bad equilibrium" where performance is intentionally slowed down or options are unduly limited. In duopoly, price competition is fierce, but prices do not converge to the same level because of price-quality differentiation. The model helps explain market trends, such as the healthy operating profit margin recently reported by Amazon Web Services. Our empirically calibrated model helps explain not only price-cutting behavior but also how providers can maintain a profit despite predictions that the market "should be" totally commoditized. The backbone of cloud computing is datacenters, whose energy consumption is enormous. In the past years, there has been an extensive effort to make datacenters more energy efficient. Similarly, buildings are in the process of going "green", as they have a major impact on the environment through excessive use of resources. In the last part of this thesis, we revisit a previous study about the economics of environmentally sustainable buildings and estimate the effect of green building practices on market rents. For this, we use new matching methods that take advantage of the clustered structure of the buildings data.
We propose a general framework for matching in observational studies and specific matching methods within this framework that simultaneously achieve three goals: (i) maximize the information content of a matched sample (and, in some cases, also minimize the variance of a difference-in-means effect estimator); (ii) form the matches using a flexible matching structure (such as a one-to-many/many-to-one structure); and (iii) directly attain covariate balance as specified ---before matching--- by the investigator. To our knowledge, existing matching methods are only able to achieve, at most, two of these goals simultaneously. Also, unlike most matching methods, the proposed methods do not require estimation of the propensity score or other dimensionality reduction techniques, although with the proposed methods these can be used as additional balancing covariates in the context of (iii). Using these matching methods, we find that green buildings have 3.3% higher rental rates per square foot than otherwise similar buildings without green ratings ---a moderately larger effect than the one previously found.
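Once matches are formed, the difference-in-means effect estimator mentioned in goal (i) is straightforward; the sketch below illustrates it on invented numbers (it is not the thesis's matching method itself, which additionally optimizes the matched structure and covariate balance before this step):

```python
# Illustrative sketch: after green ("treated") and non-green ("control")
# buildings have been matched, the effect on (log) rent is the average
# within-pair difference. All numbers are made up for illustration.

def matched_effect(pairs):
    """pairs: list of (treated_outcome, control_outcome) for matched units.
    Returns the mean within-pair difference (the effect estimate)."""
    diffs = [t - c for t, c in pairs]
    return sum(diffs) / len(diffs)

# log rents per square foot for three matched building pairs
effect = matched_effect([(3.40, 3.36), (3.10, 3.08), (3.55, 3.51)])
```

On a log scale, a small positive mean difference like this corresponds to a small percentage rent premium, which is the form in which the thesis reports its 3.3% result.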
519

O conhecimento e a pesquisa nas nuvens: uma pesquisa social sobre a aplicação das práticas de gestão do conhecimento associadas às tecnologias de computação em nuvem nos ambientes de pesquisas. / Knowledge and research in the clouds: a social survey on the implementation of knowledge management practices associated with cloud computing technologies in research environments.

Santos, Domingos Bernardo Gomes 17 March 2016 (has links)
As tecnologias de computação em nuvem estão se tornando uma tendência na indústria de TI. Tratam-se de tecnologias que buscam um melhor aproveitamento dos recursos computacionais que são utilizados no âmbito empresarial e que passaram a ser adotadas pelas universidades e instituições de pesquisas. A gestão do conhecimento está se transformando em um valioso recurso estratégico para as empresas e tem sido apontada por estudiosos, pesquisadores e cientistas como relevante e obrigatória para o crescimento das organizações nas mais variadas áreas de atuação. Este estudo teve como objetivo investigar em que medida as práticas de gestão do conhecimento associadas com as tecnologias de computação em nuvem podem contribuir com a produção do conhecimento nos ambientes de pesquisas. O estudo foi realizado através de uma pesquisa social, cujo instrumento de pesquisa foi um questionário aplicado a alunos de pós-graduação, pesquisadores mestres e doutores e professores nas áreas da computação e engenharia de universidades públicas no Brasil, cuja taxa de respostas obtida foi de 37.80%. Esta pesquisa social avaliou quais são os impactos causados pela adoção das práticas de gestão do conhecimento sobre a produção do conhecimento científico. Para tanto, optou-se por empregar como referência o modelo de gestão do conhecimento de Nonaka e Takeuchi (1997) para identificar e classificar ações de socialização, externalização, combinação e internalização dos conhecimentos científicos produzidos no ambiente de pesquisas. Conclui-se que a adoção das práticas de gestão do conhecimento pode estabelecer uma cultura organizacional com enfoque no conhecimento, onde são valorizadas todas as ações que venham contribuir com a produção do conhecimento científico. Esta pesquisa social também avaliou como as tecnologias de computação em nuvem podem favorecer o desenvolvimento das atividades relacionadas à pesquisa científica.
Apurou-se que as tecnologias computacionais se tornaram indispensáveis e a maioria dos entrevistados informou que utiliza ou já utilizou as tecnologias de computação em nuvem no desenvolvimento das atividades relacionadas com a pesquisa científica. Os resultados obtidos sugerem que a adoção das práticas de gestão do conhecimento associadas à utilização de tecnologias de computação em nuvem pode trazer diversos benefícios e contribuições aos ambientes de pesquisas e, consequentemente, à produção do conhecimento científico. Por fim, espera-se também que os gestores de grupos de pesquisas possam utilizar as informações apresentadas neste trabalho para apoiar a adoção de práticas de gestão do conhecimento e incentivar a utilização das tecnologias computacionais em nuvem que se encontram disponíveis nos ambientes de pesquisas científicas. / Cloud computing technologies are becoming a trend in the IT industry. These technologies seek to make better use of the computing resources employed in the business sector and have begun to be adopted by universities and research institutions. Knowledge management is becoming a valuable strategic resource for companies and has been pointed out by scholars, researchers, and scientists as relevant and required for the growth of organizations in various areas of expertise. This study aimed to investigate the extent to which knowledge management practices associated with cloud computing technologies can contribute to the production of knowledge in research environments. The study was conducted through a social survey whose research instrument was a questionnaire applied to graduate students, researchers, and professors in the areas of computing and engineering at public universities in Brazil; the response rate obtained was 37.80%. This social survey assessed the impacts caused by the adoption of knowledge management practices on the production of scientific knowledge.
The knowledge management model of Nonaka and Takeuchi (1997) was used as a reference to identify and classify actions of socialization, externalization, combination, and internalization of the scientific knowledge produced in the research environment. In conclusion, the adoption of knowledge management practices can establish an organizational culture focused on knowledge, in which all actions that contribute to the production of scientific knowledge are valued. This social survey also assessed how cloud computing technologies can foster the development of activities related to scientific research. It was found that computing technologies have become indispensable, and the majority of respondents reported that they use or have used cloud computing technologies in the development of activities related to scientific research. The results suggest that the adoption of knowledge management practices associated with the use of cloud computing technologies can bring many benefits and contributions to research environments and, consequently, to the production of scientific knowledge. Finally, it is also expected that managers of research groups can use the information presented in this work to support the adoption of knowledge management practices and to encourage the use of the cloud computing technologies available in scientific research environments.
520

Oferecimento de QoS para computação em nuvens por meio de metaescalonamento / Providing QoS to cloud computing by means of metascheduling

Peixoto, Maycon Leone Maciel 13 August 2012 (has links)
Este projeto apresenta a proposta de uma arquitetura de metaescalonador que leva em consideração o emprego de qualidade de serviço (QoS) para o ambiente de Computação em Nuvem. O metaescalonador é capaz de realizar a alocação dos recursos dinamicamente, procurando atender às restrições temporais. Em resposta a esse dilema de escalonamento aplicado à Computação em Nuvem, este projeto propõe uma abordagem chamada MACC: Metascheduler Architecture to provide QoS in Cloud Computing. A função principal do MACC é distribuir e gerenciar o processamento das requisições de serviços entre os recursos disponíveis, cumprindo os termos agregados na SLA - Service Level Agreement. São apresentados resultados obtidos considerando-se diferentes algoritmos de roteamento e de alocação de máquinas virtuais. Os resultados apresentados são discutidos e analisados de acordo com as técnicas de planejamento de experimentos. / This project proposes a metascheduler architecture that takes quality of service (QoS) into account for the cloud computing environment. The metascheduler is capable of dynamically allocating resources while trying to meet timing constraints. In response to this scheduling dilemma applied to cloud computing, this project proposes an approach called MACC - Metascheduler Architecture to Provide QoS in Cloud Computing. The main function of MACC is to distribute and manage service requests among the available resources, meeting the aggregate terms in the SLA - Service Level Agreement. Results are presented considering different algorithms for routing and for the allocation of virtual machines. The results are discussed and analyzed in accordance with the techniques of experimental design.
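The core metascheduling task the abstract describes, distributing requests across resources while honoring SLA terms, can be reduced to a toy policy (a hypothetical sketch; the VM names, the shortest-queue rule, and the rejection behavior are invented, not MACC's actual algorithms):

```python
# Hypothetical sketch of SLA-aware dispatching: send each service request to
# the resource that can finish it earliest, and reject it when no resource
# can meet the SLA deadline.

def schedule(request_ms, deadline_ms, queues_ms):
    """queues_ms: {resource: current backlog in ms}. Assigns the request and
    returns the chosen resource, or None if the deadline cannot be met."""
    best = min(queues_ms, key=lambda r: queues_ms[r])
    if queues_ms[best] + request_ms > deadline_ms:
        return None  # SLA violation: no resource can finish in time
    queues_ms[best] += request_ms
    return best

queues = {"vm-1": 120.0, "vm-2": 40.0}
first = schedule(50.0, deadline_ms=200.0, queues_ms=queues)    # shortest queue wins
second = schedule(300.0, deadline_ms=200.0, queues_ms=queues)  # infeasible request
```

A full metascheduler would react to a rejection by provisioning more resources or renegotiating the SLA rather than simply dropping the request.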
