661 |
Uso de mecanismos de elasticidade como solução de tolerância a falhas para ambiente de computação em nuvem
Divino, Kleber da Silva January 2014 (has links)
Advisor: Prof. Dr. Carlos Kamienski / Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2014. / More and more companies choose to use services based on cloud computing, either provided by a third party or built on their own private cloud infrastructure. In this context, companies increasingly depend on applications running in this kind of environment. Consequently, cloud-based solutions must deliver high availability and must not compromise service delivery in case of failure. In a cloud computing environment every application must meet a minimum availability level, but because of the pay-per-use pricing model a client may require that some applications have a higher level of reliability, so that in case of failure the client experiences no downtime. Consolidated fault tolerance mechanisms with well-established techniques can also be used in cloud environments; however, the elasticity concept can, in specific situations, eliminate the need for a dedicated protection mechanism for certain components of the environment, with the elasticity mechanism itself acting as the fault tolerance mechanism.
The main contributions of this work are an understanding of how failures occur in an elastic environment, of the best ways to deal with failures in this kind of environment, and of the consequences such failures generate.
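The elasticity-as-fault-tolerance idea described above can be sketched as a reconciliation step that restores a desired instance count after failures. This is an illustrative sketch only, not code from the dissertation; all names are hypothetical.

```python
# Illustrative sketch: an elasticity controller that restores a desired
# instance count after failures, so the scaling mechanism itself doubles
# as a fault tolerance mechanism. All names are hypothetical.

def reconcile(desired, instances):
    """Return which failed instances to terminate and how many to launch.

    `instances` maps instance id -> health status ("healthy" / "failed").
    """
    failed = [iid for iid, status in instances.items() if status == "failed"]
    healthy = len(instances) - len(failed)
    to_launch = max(0, desired - healthy)
    return {"terminate": failed,
            "launch": ["new-%d" % i for i in range(to_launch)]}
```

Run periodically, such a loop makes failure recovery a by-product of ordinary scaling: a failed instance is simply capacity to be re-provisioned.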
|
662 |
Algoritmo de escalonamento de instância de máquina virtual na computação em nuvem
Bachiega, Naylor Garcia [UNESP] 19 May 2014 (has links) (PDF)
In an attempt to reduce costs by using computing resources efficiently, newly developed technologies and architectures are gaining wide acceptance in the market. One such technology is cloud computing, which tries to solve problems such as energy consumption and the allocation of physical space in data centers and large companies. The cloud is an environment shared by several clients and allows elastic growth, where new resources such as hardware or software can be acquired or released at any time. In this model, clients pay for the resources they use, not for the whole underlying architecture. It is therefore important to determine efficiently how these resources are distributed in the cloud. This work aimed to develop a cloud scheduling algorithm that efficiently determines the distribution of resources within the architecture. To achieve this goal, experiments were conducted with open-source cloud managers, exposing the deficiencies of the current algorithms. The developed algorithm was compared with the current algorithm of OpenStack Essex, an open-source cloud manager. The experimental results showed that the new algorithm was able to identify the least loaded machines in the cloud, thereby distributing the processing load within the private environment.
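The least-loaded placement idea can be illustrated with a small sketch: place a new VM on the host with the most spare capacity that can still fit the request. Host fields, the RAM-first tie-breaking rule, and the sample data are assumptions for illustration, not OpenStack's scheduler.

```python
# Hypothetical sketch of least-loaded VM placement. Host attributes and
# the weighting rule are illustrative, not taken from OpenStack.

def pick_host(hosts, vcpus_needed, ram_needed):
    """hosts: list of dicts with free_vcpus and free_ram_mb."""
    candidates = [h for h in hosts
                  if h["free_vcpus"] >= vcpus_needed
                  and h["free_ram_mb"] >= ram_needed]
    if not candidates:
        return None  # no host can fit the request
    # least loaded = most spare capacity remaining after placement
    return max(candidates,
               key=lambda h: (h["free_ram_mb"] - ram_needed,
                              h["free_vcpus"] - vcpus_needed))

hosts = [
    {"name": "node1", "free_vcpus": 2, "free_ram_mb": 2048},
    {"name": "node2", "free_vcpus": 8, "free_ram_mb": 16384},
]
```

With the sample data, a request for 4 vCPUs and 4096 MB filters out `node1` and lands on `node2`; an oversized request yields no placement.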
|
663 |
Extending the battery life of mobile device by computation offloading
Qian, Hao January 1900 (has links)
Doctor of Philosophy / Computing and Information Sciences / Daniel A. Andresen / The need for increased performance of mobile devices directly conflicts with the desire for longer battery life. Offloading computation to resourceful servers is an effective way to reduce energy consumption and enhance performance for mobile applications. Today, most mobile devices have fast wireless links such as 4G and Wi-Fi, making computation offloading a reasonable way to extend battery life. Android provides mechanisms for creating mobile applications but lacks a native scheduling system for determining where code should be executed. We present Jade, a system that adds sophisticated energy-aware computation offloading capabilities to Android applications. Jade monitors device and application status and automatically decides where code should be executed. Jade dynamically adjusts its offloading strategy by adapting to workload variation, communication costs, and device status. Jade minimizes the burden on developers to build applications with computation offloading by providing an easy-to-use Jade API. Evaluation shows that Jade can reduce the average power consumption of a mobile device by up to 37% while improving application performance.
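A minimal sketch of the kind of energy-aware offload decision described above: offload when the radio energy needed to ship the input is lower than the CPU energy needed to compute locally. The energy model and every parameter name are assumptions for illustration, not Jade's actual API or policy.

```python
# Hedged sketch of an energy-aware offload decision. The linear energy
# model and all constants are assumptions, not Jade's implementation.

def should_offload(cycles, input_bytes, cpu_joules_per_cycle,
                   uplink_bytes_per_sec, radio_watts):
    """Offload iff sending the input costs less energy than local compute."""
    local_energy = cycles * cpu_joules_per_cycle            # joules on CPU
    transfer_time = input_bytes / uplink_bytes_per_sec      # seconds on radio
    transfer_energy = transfer_time * radio_watts           # joules on radio
    return transfer_energy < local_energy
```

The sketch captures the qualitative rule the abstract implies: compute-heavy tasks with small inputs favor offloading, while data-heavy tasks with little computation favor local execution; a real scheduler would also weigh latency and server load.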
|
664 |
Sustainable Cloud Computing
January 2014 (has links)
abstract: Energy consumption of data centers worldwide is growing rapidly, fueled by the ever-increasing demand for Cloud computing applications ranging from social networking to e-commerce. Understandably, ensuring the energy efficiency and sustainability of Cloud data centers without compromising performance is important for both economic and environmental reasons. This dissertation develops a cyber-physical multi-tier server and workload management architecture which operates at the local and the global (geo-distributed) data center levels. We devise optimization frameworks for each tier to optimize the energy consumption, energy cost, and carbon footprint of the data centers. The proposed solutions are aware of the various energy management tradeoffs that manifest due to the cyber-physical interactions in data centers, while providing provable guarantees on the solutions' computational efficiency and energy/cost efficiency. The local data center level energy management takes into account the impact of server consolidation on cooling energy, avoids the cooling-computing power tradeoff, and optimizes the total energy (computing plus cooling) considering data center technology trends (servers' power proportionality and cooling system power efficiency). The global data center level cost management exploits the diversity of the data centers to minimize the utility cost while satisfying the carbon cap requirement of the Cloud and dealing with the adversity of prediction error on the data center parameters. Finally, the synergy of the local and the global data center energy and cost optimization is shown to help achieve carbon neutrality (net zero) in a cost-efficient manner. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2014
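The consolidation tradeoff mentioned above (fewer active servers save idle power but concentrate heat and raise cooling power) can be illustrated with a toy model. The power model and coefficients below are invented for illustration only, with the cooling term deliberately exaggerated so the tradeoff is visible; they are not the dissertation's formulation.

```python
# Toy model of the computing/cooling tradeoff. All coefficients are
# illustrative assumptions, not measured or taken from the dissertation.

def total_power(active_servers, total_load, p_idle=100.0, p_peak=250.0,
                cooling_coeff=0.02):
    """Computing power (linear in utilization) plus a cooling term that
    grows with per-server heat density."""
    util = total_load / active_servers  # per-server load share, in [0, 1]
    computing = active_servers * (p_idle + (p_peak - p_idle) * util)
    cooling = cooling_coeff * computing ** 2 / active_servers
    return computing + cooling

def best_server_count(total_load, max_servers):
    """Pick the active-server count minimizing total power for the load."""
    min_servers = max(1, int(total_load + 0.999))  # must fit the load
    return min(range(min_servers, max_servers + 1),
               key=lambda n: total_power(n, total_load))
```

In this toy setting, running four servers' worth of load on exactly four fully packed servers is not optimal: spreading it over five reduces heat density enough that the cooling savings outweigh the extra idle power, which is the kind of cyber-physical tradeoff the dissertation reasons about.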
|
665 |
SDN-based Proactive Defense Mechanism in a Cloud System
January 2015 (has links)
abstract: Cloud computing is known as a new and powerful computing paradigm. This new generation of network computing delivers both software and hardware as on-demand resources and various services over the Internet. However, security concerns prevent users from adopting cloud-based solutions for many business-critical computing tasks. Due to the resource-sharing and multi-tenant nature of cloud-based solutions, security is of particular concern in Infrastructure as a Service (IaaS), and it has been attracting a great deal of research and development effort in the past few years.
Virtualization is the main technology of cloud computing for enabling multi-tenancy.
Computing power, storage, and network can all be virtualized and shared in an IaaS system. This important technology makes abstract infrastructure and resources available to users as isolated virtual machines (VMs) and virtual networks (VNs). However, it also increases the vulnerabilities and possible attack surfaces of the system, since all users in a cloud share these resources with others, or even with attackers. A robust protection mechanism is required to ensure strong isolation, mediated sharing, and secure communication between VMs. Technologies for detecting anomalous traffic and protecting normal traffic in VNs are also needed. Therefore, how to secure private traffic in VNs and how to prevent malicious traffic from shared resources are major security research challenges in a cloud system.
This dissertation proposes four novel frameworks to address the challenges mentioned above. The first work is a new multi-phase distributed vulnerability measurement and countermeasure selection mechanism based on an attack graph analytical model. The second work is a hybrid intrusion detection and prevention system that protects VNs and VMs using virtual machine introspection (VMI) and software-defined networking (SDN) technologies. The third work further improves the previous works by introducing a VM profiler and a VM Security Index (VSI) to keep track of the security status of each VM and suggest the optimal countermeasure to mitigate potential threats. The final work is an SDN-based proactive defense mechanism for a cloud system that uses a reconfiguration model and moving target defense approaches to actively and dynamically change the virtual network configuration of the cloud. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
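The moving-target reconfiguration idea can be sketched as a periodic re-mapping of VM addresses so that an attacker's reconnaissance data goes stale between probes. Names and the address pool are hypothetical; a real SDN deployment would push the new mapping to switches as flow rules rather than return a dict.

```python
# Hedged sketch of address-shuffling moving target defense. The address
# pool and naming are hypothetical; real systems would install the new
# mapping as SDN flow rules.
import random

def shuffle_addresses(vms, address_pool, rng=random.Random(42)):
    """Assign each VM a fresh externally visible address from the pool."""
    assert len(address_pool) >= len(vms), "pool too small"
    new_addrs = rng.sample(address_pool, len(vms))  # distinct addresses
    return dict(zip(vms, new_addrs))
```

Called on a timer, each round invalidates whatever VM-to-address mapping an attacker has scanned, which is the "actively and dynamically change the virtual network configuration" behavior the abstract describes.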
|
666 |
Modeling, Simulation and Analysis for Software-as-Service in Cloud
January 2015 (has links)
abstract: Software-as-a-Service (SaaS) has received significant attention in recent years as major computer companies such as Google, Microsoft, Amazon, and Salesforce adopt this new approach to developing software and systems. Cloud computing is a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamic, scalable, and virtualized manner. Computer simulations are widely used to analyze the behavior of software and to test it before full implementation. Simulation can further benefit SaaS applications in a cost-effective way by taking advantage of cloud properties such as customizability, configurability, and multi-tenancy.
This research introduces modeling, simulation, and analysis for Software-as-a-Service in the cloud. It covers the following topics: service modeling, policy specification, code generation, dynamic simulation, timing, and event and log analysis. Moreover, the framework integrates current advantages of the cloud: configurability, multi-tenancy, scalability, and recoverability.
The architecture is covered in the following chapters:
Multi-Tenancy Simulation Software-as-a-Service.
Policy Specification for MTA simulation environment.
Model Driven PaaS Based SaaS modeling.
Dynamic analysis and dynamic calibration for timing analysis.
Event-driven Service-Oriented Simulation Framework.
LTBD: A Triage Solution for SaaS. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2015
|
667 |
Toward Customizable Multi-tenant SaaS Applications
January 2016 (has links)
abstract: Nowadays, computing is so pervasive that it has indeed become the 5th utility (after water, electricity, gas, and telephony), as Leonard Kleinrock once envisioned. Evolved from utility computing, cloud computing has emerged as a computing infrastructure that enables rapid delivery of computing resources as a utility in a dynamically scalable, virtualized manner. However, current industrial cloud computing implementations promote segregation among different cloud providers, which leads to user lock-in because of prohibitive migration costs. On the other hand, Service-Oriented Computing (SOC), including service-oriented architecture (SOA) and Web Services (WS), promotes standardization and openness through its enabling standards and communication protocols. This thesis proposes a Service-Oriented Cloud Computing Architecture (SOCCA) that combines the best attributes of the two paradigms to promote an open, interoperable environment for cloud computing development. Multi-tenant SaaS applications built on top of SOCCA have more flexibility and are not locked into a particular platform. Tenants residing on a multi-tenant application appear to be the sole owners of the application and are not aware of the existence of others. A multi-tenant SaaS application accommodates each tenant's unique requirements by allowing tenant-level customization. A complex SaaS application that supports hundreds or even thousands of tenants could have hundreds of customization points, each providing multiple options, and this can result in a huge number of ways to customize the application. This dissertation also proposes innovative customization approaches that study similar tenants' customization choices and each individual user's behavior, and then provide a guided, semi-automated customization process for future tenants. A semi-automated customization process enables tenants to quickly implement the customization that best suits their business needs.
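The guided customization idea, mining similar tenants' choices to suggest an option for a new tenant, can be sketched as a nearest-neighbour vote. The data model, similarity measure, and all field names are assumptions for illustration, not the dissertation's algorithm.

```python
# Illustrative sketch: suggest a value for a customization point by
# majority vote among the most similar existing tenants. Data model and
# similarity measure are hypothetical.
from collections import Counter

def recommend(new_choices, tenant_profiles, point, k=3):
    """Suggest a value for customization `point` for a new tenant."""
    def similarity(profile):
        shared = set(new_choices) & set(profile)
        return sum(new_choices[key] == profile[key] for key in shared)
    ranked = sorted(tenant_profiles, key=similarity, reverse=True)
    votes = Counter(p[point] for p in ranked[:k] if point in p)
    return votes.most_common(1)[0][0] if votes else None
```

For instance, a tenant whose early choices match two "dark theme, Swedish locale" tenants would be steered toward those tenants' billing setting, which is the sense in which similar tenants' histories can guide future customization.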
/ Dissertation/Thesis / Doctoral Dissertation Computer Science 2016
|
668 |
Study of Knowledge Transfer Techniques For Deep Learning on Edge Devices
January 2018 (has links)
abstract: With the emergence of the edge computing paradigm, many applications such as image recognition and augmented reality need to perform machine learning (ML) and artificial intelligence (AI) tasks on edge devices. Most AI and ML models are large and computationally heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced in order to be placed on edge devices, but they may lose capability and may not generalize and perform as well as large models. Recent works have used knowledge transfer techniques to transfer information from a large network (termed the teacher) to a small one (termed the student) in order to improve the performance of the latter. This approach seems promising for learning on edge devices, but a thorough investigation of its effectiveness is lacking.
The purpose of this work is to provide an extensive study of the performance (in terms of both accuracy and convergence speed) of knowledge transfer, considering different student-teacher architectures, datasets, and different techniques for transferring knowledge from teacher to student.
A good performance improvement is obtained by transferring knowledge from both the intermediate layers and the last layer of the teacher to a shallower student. But other architectures and transfer techniques do not fare as well, and some of them even have a negative performance impact. For example, a smaller and shorter network trained with knowledge transfer on Caltech 101 achieved a significant accuracy improvement of 7.36% and converged 16 times faster than the same network trained without knowledge transfer. On the other hand, a smaller network that is thinner than the teacher network performed worse, with an accuracy drop of 9.48% on Caltech 101, even with knowledge transfer. / Dissertation/Thesis / Masters Thesis Computer Science 2018
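The teacher-student transfer the study evaluates is commonly implemented as Hinton-style distillation: the student is trained to match the teacher's temperature-softened output distribution. The sketch below shows that loss in plain Python; the temperature and logits are illustrative, and this is the generic formulation rather than the thesis's exact setup.

```python
# Generic distillation loss sketch (Hinton-style), not the thesis's exact
# training setup. Temperature softens both distributions before comparison.
import math

def softmax(logits, temperature=1.0):
    scaled = [v / temperature for v in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(v - m) for v in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -sum(pt * math.log(ps + 1e-12)
                for pt, ps in zip(p_teacher, p_student))
```

The loss is smallest when the student reproduces the teacher's softened distribution, which is how "dark knowledge" about inter-class similarity flows from the large network to the small one; transferring from intermediate layers, as the best configuration above does, adds analogous matching terms on hidden activations.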
|
669 |
An investigation of readiness assessments for e-government information system and cloud computing using Saudi Arabia as a case study
Kurdi, Rabea F. January 2013 (has links)
In the on-going ICT world revolution, e-government applications are considered one of the modern, growing, and important classes of applications delivered over the Internet. These applications, which enable citizens to interact with government, have emerged in recent years and are likely to have a positive impact on citizens, government, business, and society. E-government is a relatively new concept, so much effort is needed to achieve its prime objectives and to develop assessment strategies for both the public and private sectors. In this context, new technologies provide several benefits to government over traditional technologies. The literature review completed by the researcher indicated a gap between practice and theory, identified by the absence of a comprehensive assessment framework for e-government systems and readiness. Most of the assessment frameworks reviewed for the study vary in their philosophies, objectives, methodologies, and approaches, which implies that no single existing assessment framework is likely to cover all aspects of e-government readiness. This research set out to develop a comprehensive framework with associated guidelines and tools to support e-government Information Systems Readiness (EGISR) and cloud computing. The developed framework contains the internal as well as external factors affecting e-government readiness, categorised into four main layers: technology readiness, organisation readiness, people/stakeholders readiness, and environment readiness. It is important to mention that the developed framework has been empirically tested and validated in a real environment, taking the Kingdom of Saudi Arabia as a case study and surveying 600 citizens, 125 staff, and 25 officials. This research is one of the first studies in the Arab world to focus on these three samples/perspectives and on cloud computing.
The finalised framework provides a comprehensive structure for the e-government readiness assessment process and cloud computing, to help decision makers in government set a vision and a strategic action plan for the future of e-government. In addition, it identifies the key elements and stages needed to implement such action plans. We believe the assessment framework establishes an appropriate tool for assessing e-government readiness. It can also be used as an effective evaluation framework to determine the degree of progress already made by government organisations towards e-government implementation and maintenance.
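A readiness assessment over the four layers above (technology, organisation, people/stakeholders, environment) can be illustrated as a weighted aggregate score. The weights and indicator values below are invented for illustration; the actual framework's instruments are survey-based and richer than a single number.

```python
# Illustrative sketch of a layered readiness score. Layer weights and
# indicator values are hypothetical, not the thesis's instrument.

LAYERS = ("technology", "organisation", "people", "environment")

def readiness_score(indicators, weights=None):
    """Weighted mean of per-layer scores, each expected in [0, 1]."""
    weights = weights or {layer: 1.0 for layer in LAYERS}
    total_weight = sum(weights[layer] for layer in LAYERS)
    return sum(weights[layer] * indicators[layer]
               for layer in LAYERS) / total_weight
```

Such a roll-up lets decision makers compare organisations or track progress over time on one scale, while the per-layer scores show where (say) people readiness lags technology readiness.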
|
670 |
Industry 4.0: An Opportunity or a Threat? : A Qualitative Study Among Manufacturing Companies
Venema, Sven, Anger Bergström, Albin January 2018 (has links)
Manufacturing companies are currently going through exciting times. Technological developments follow one another at a high pace, and many opportunities arise for companies to be smarter than their competitors. These developments are so disruptive that people speak of a new, fourth, industrial revolution. This industrial revolution, characterized and driven by seven drivers, is called Industry 4.0. Its popularity is seemingly apparent everywhere, and it is being described by some as "manufacturing's next act". Even though this sounds promising and applicable to every company, the practical consequences and feasibility are most of the time overlooked. In particular, a theoretical foundation on differences in feasibility between small and medium-sized enterprises (SMEs) and large firms is missing. In this thesis, we take the reader through a journey that will help them understand the positioning and perspectives of firms regarding Industry 4.0, and eventually present the practical effects of Industry 4.0 on the business models of manufacturing firms. This research provides enough clarity on the topic to answer the research questions. The thesis aims to fill the gap in available research linking business model change to Industry 4.0. Due to the novelty of Industry 4.0, its practical effects are not yet fully explored in the literature, and business model research, a more traditional area, has not yet touched upon the effects Industry 4.0 has on companies' business models. Our purpose is to combine these two topics and provide both SMEs and large firms with an overview of the effects of Industry 4.0 in practice.
Furthermore, the perspectives and positioning of our sample firms can provide clarity for potential implementers, since a wide range of participants provide different insights on the topic and thereby clarify the practical use of Industry 4.0. In doing so, the researchers follow an inductive approach, converting observations and findings into theory. The study uses a qualitative design, and semi-structured interviews were conducted to collect the data. Our sample consists of both SMEs and large firms, all located within Europe. The researchers found some key differences in attitudes towards Industry 4.0 between the academic and business worlds. Companies may be highly automated and may have implemented some of the drivers of Industry 4.0, but the term itself is not popular. Where some of our sample firms are convinced that Industry 4.0 is the new way of working, most of them use the technologies simply because they are the best on the market and help them follow their strategy. Industry 4.0 can be seen as an interesting tool for firms to become smarter and achieve better results, but not at all costs. Especially for SMEs, implementing Industry 4.0 should not be the sole goal of the company, since many factors decide whether or not Industry 4.0 will succeed in the company. In terms of business models, Industry 4.0 causes many changes; its role can be seen as that of an enabler for change, rather than a reason to build a business model around. / Social science; Business Administration
|