1

Cloud Based Point-of-Sale / Molnbaserad kassalösning för butik

Lehndal, Anders January 2013
A point-of-sale (POS) system is a transaction system that allows retail transactions to be completed. The purpose of this degree project was to examine the potential benefits and risks of implementing a POS as a cloud-based application, in comparison to a traditional on-site POS solution. The main focus of the project was on identifying the benefits and possibilities of a cloud solution, but effort was also directed towards exploring the functionality of an existing web-based POS prototype as a cloud client. A further aspect of the project was to address some of the problems associated with the use of peripheral equipment while maintaining a thin client. This was achieved by studying cloud computing theory in current literature, analysing and working with traditional POS solutions, and hands-on testing with peripheral equipment. The project results support the notion that a cloud-based POS solution is not only feasible but may provide some benefits over a traditional POS solution. However, great care must be taken, both to avoid vendor lock-in and to design the cloud-based system so that the client and any peripheral equipment can continue operating during an Internet Service Provider (ISP) or Cloud Service Provider (CSP) outage.
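The outage scenario flagged in the abstract is the crux of the thin-client trade-off. As a purely illustrative sketch (not code from the thesis), the following Python fragment shows one way a thin POS client could queue transactions locally and replay them once the ISP or CSP connection returns; the endpoint, class and method names are all hypothetical.

```python
import json
import queue
import urllib.request


class OfflineFirstPOSClient:
    """Accept sales locally first; sync to the cloud POS when reachable."""

    def __init__(self, cloud_url):
        self.cloud_url = cloud_url    # hypothetical cloud POS endpoint
        self.pending = queue.Queue()  # transactions awaiting upload

    def record_sale(self, transaction):
        self.pending.put(transaction)  # the sale always succeeds locally
        self.flush()

    def flush(self):
        """Push queued transactions; on an outage, keep them for later."""
        while not self.pending.empty():
            tx = self.pending.queue[0]  # peek without removing
            try:
                req = urllib.request.Request(
                    self.cloud_url, data=json.dumps(tx).encode(),
                    headers={"Content-Type": "application/json"})
                urllib.request.urlopen(req, timeout=2)
                self.pending.get()      # acknowledged by the cloud, drop it
            except OSError:
                break                   # ISP/CSP outage: retry on next flush
```

A design like this keeps the client thin in normal operation while degrading gracefully, which is the continuity property the abstract argues a cloud POS must preserve.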
2

An Overview of Virtualization Technologies for Cloud Computing

Chen, Wei-Min 07 September 2012
Cloud computing is a new concept that incorporates many existing technologies, such as virtualization. Virtualization is essential to the establishment of cloud computing: with it, cloud computing can virtualize hardware resources into a huge resource pool for users to utilize. This thesis begins with an introduction to how a widely used service model classifies cloud computing into three layers; from the bottom up, these are IaaS, PaaS, and SaaS. Some service providers are taken as examples for each service model, such as Amazon Beanstalk and Google App Engine for PaaS, and Amazon CloudFormation and Microsoft mCloud for IaaS. Next, we turn our discussion to hypervisors and the technologies for virtualizing hardware resources, such as CPUs, memory, and devices. Then, storage and network virtualization techniques are discussed. Finally, conclusions are drawn and future directions for virtualization are outlined.
3

Cloud-assisted multimedia content delivery

Wu, Yu, 吴宇 January 2013
Cloud computing, among the trendiest computing paradigms of recent years, is believed to be particularly well suited to supporting network-centric applications, since it provides elastic amounts of bandwidth for accessing a wide range of resources on the fly. In particular, geo-distributed cloud systems are widely under construction nowadays. They span multiple data centers at different geographical locations, and thus offer many advantages to large-scale multimedia applications thanks to their abundant on-demand storage/bandwidth capacities and their geographical proximity to different groups of users. In this thesis, we investigate, from several perspectives, the fundamental challenges in efficiently leveraging cloud resources to facilitate multimedia content delivery in modern real-world applications. First, from the perspective of application providers, we propose tractable procedures for both model analysis and system design to support representative large-scale multimedia applications in a cloud system, namely VoD streaming applications and social media applications. We further verify the effectiveness of these algorithms and the feasibility of their deployment under dynamic, realistic settings in real-life cloud systems. Second, from the perspective of end users, we focus on mobile users. The rapidly increasing power of personal mobile devices, now rivalling even high-end devices, is providing much richer content and social interactions to users on the move, and many more challenging applications are on the horizon. We explore the tough challenges of effectively exploiting cloud resources to facilitate mobile services by introducing two cloud-assisted mobile systems (CloudMoV and vSkyConf) and explain their design philosophies and implementation in detail. Finally, from the perspective of cloud providers, our hands-on experience with public cloud systems shows that existing data center networks lack the flexibility to support many core services. One specific problem is "bulk data transfers across geo-distributed datacenters". After formulating a novel, well-formed optimization model for the data migration problem, we design and implement a Delay Tolerant Migration (DTM) system based on the Beacon platform and standard OpenFlow APIs. The system realizes a reliable datacenter-to-datacenter (D2D) network by applying the software-defined networking (SDN) paradigm. Real-world experiments under realistic network traffic demonstrate the efficiency of the design. / Doctor of Philosophy, Computer Science
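The "bulk data transfers across geo-distributed datacenters" problem lends itself to a small illustration. The thesis formulates a proper optimization model and implements it over Beacon and OpenFlow; the greedy Python sketch below only conveys the delay-tolerant idea, i.e. placing a deadline-bound transfer into the emptiest time slots first. All numbers and names are invented for illustration.

```python
def schedule_bulk_transfer(volume_gb, deadline_slots, free_capacity_gb):
    """Greedy delay-tolerant schedule: fill the emptiest slots first.

    free_capacity_gb: spare inter-datacenter link capacity per time slot.
    Returns gigabytes to send in each slot, or None if infeasible.
    """
    slots = sorted(range(deadline_slots), key=lambda s: -free_capacity_gb[s])
    plan = [0.0] * deadline_slots
    remaining = volume_gb
    for s in slots:
        send = min(remaining, free_capacity_gb[s])
        plan[s] = send
        remaining -= send
        if remaining <= 0:
            return plan
    return None  # the deadline cannot be met with the spare capacity


# 300 GB due within 4 slots; the off-peak slots absorb the transfer.
print(schedule_bulk_transfer(300, 4, [50, 200, 120, 80]))  # [0.0, 200, 100, 0.0]
```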
4

Accounting for stewardship in the cloud

Duncan, Robert A. K. January 2016
Managing information security in the cloud is a challenge. Traditional checklist approaches to standards compliance might well deliver formal compliance, but may not provide adequate security assurance. The complexity of cloud relationships must be acknowledged and explicitly managed, recognising the implications of the self-interest of each party involved. We develop a conceptual modelling framework for cloud security assurance that can be used as a starting point for achieving effective continuous security assurance, together with a high level of compliance.
5

Rozšíření SOA do platformy Cloud Computing / Extending SOA into the Cloud Computing Platform

Qylafku, Denis January 2010
The aim of my diploma thesis is to introduce cloud computing as an alternative to traditional internal information technology and to show its benefits for a company. The thesis focuses on three main goals. The first concerns the advantages and disadvantages of cloud computing in comparison to internal information technology. The second is the identification of processes and services suitable for migration into cloud computing. The third is the development of an investment analysis that compares not only the initial costs of internal information technology and cloud computing, but also the costs of both variants over three years. The main contribution of the thesis is to determine whether cloud computing is economically beneficial for the company or not. One argument in favour of cloud computing is that the company does not have to use all services within the cloud, only those it considers most beneficial from a cost and operations point of view. Another contribution of the thesis is the deployment of data, services, and processes onto a chosen cloud computing platform. The investment analysis allows the company, through a cost comparison of both options, to understand whether it is more beneficial to choose cloud computing or an internal information technology platform. In making this decision, the company also considers the character of its business and whether it operates locally or globally.
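To make the cost-comparison idea concrete, here is a minimal sketch with entirely invented figures (the thesis derives its own numbers and a richer model): cloud computing typically swaps a large up-front investment for recurring fees, which is why the three-year horizon matters.

```python
def three_year_cost(initial, annual_costs):
    """Total cost of ownership over three years (no discounting, for simplicity)."""
    return initial + sum(annual_costs[:3])


# Illustrative figures only, not the thesis's data.
internal_it = three_year_cost(initial=40_000, annual_costs=[8_000] * 3)
cloud = three_year_cost(initial=2_000, annual_costs=[15_000] * 3)
print(f"internal IT: {internal_it}, cloud: {cloud}")
# internal IT: 64000, cloud: 47000 -- capex traded for opex
```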
6

The adoption of cloud-based Software as a Service (SaaS): a descriptive and empirical study of South African SMEs

Maserumule, Mabuke Dorcus 31 October 2019 (has links)
A research report submitted to the Faculty of Commerce, Law and Management, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Commerce (MCom) in the field of Information Systems, 2019 / The purpose of this study was to describe the state of cloud-based software (SaaS) adoption among South African SMEs and to investigate the factors affecting their adoption of SaaS solutions. The technological, organisational and environmental (TOE) factors influencing cloud-based software adoption within SMEs were identified through a review of existing TOE literature; institutional theory and diffusion of innovation theory were also used to underpin the study. A research model hypothesising the effect of the identified TOE factors on the adoption of cloud-based software was developed and tested. Specifically, the factors hypothesised to influence SaaS adoption were compatibility, security concern, top management support and coercive pressures. The study employed a relational, quantitative research approach: a structured questionnaire was developed and administered as an online survey, and data was collected from a sample of 134 small and medium enterprises (SMEs) that provided usable responses. The collected data was used, first, to describe the state of adoption and, second, to examine through multiple regression the extent to which the various TOE factors affect adoption. It was found that compatibility, security concern, top management support and coercive pressures influence adoption, while trust, cost, relative advantage, complexity, geographic dispersion, and normative and mimetic pressures did not have significant effects. This study adds value to the Information Systems literature, as it uses the TOE framework alongside institutional theory and diffusion of innovation theory to explain the adoption of cloud-based software solutions by South African SMEs. It describes the current state of cloud-based software adoption within South African SMEs, identifies the factors contributing to that adoption, and shows that for adoption to succeed, technological, organisational and environmental factors must all be taken into consideration. The results assist organisations wanting to implement cloud-based software solutions: in particular, they give SMEs a benchmark of where they stand on SaaS adoption relative to other organisations (for example, whether they are lagging behind, on par, or innovators), which can inform IT procurement decisions such as whether cloud-based software solutions are strategic and necessary to keep abreast of peers and competitors.
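As a hedged illustration of the regression step described above, the Python sketch below fits an ordinary least squares model of adoption on four TOE factor scores. The data is synthetic and the variable names are ours; the study's actual instrument, scales and estimates are in the report itself.

```python
import numpy as np

# Synthetic stand-in for the survey: 134 SMEs, four TOE factor scores
# (compatibility, security concern, top management support, coercive pressure).
rng = np.random.default_rng(0)
X = rng.uniform(1, 5, size=(134, 4))           # Likert-style predictors
y = X @ np.array([0.4, 0.3, 0.5, 0.2]) + rng.normal(0, 0.5, 134)

# Multiple regression with an intercept, as in the study's second step.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
print(dict(zip(["intercept", "compatibility", "security", "top_mgmt", "coercive"],
               coef.round(2))))
```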
7

Improving energy efficiency of virtualized datacenters / Améliorer l'efficacité énergétique des datacenters virtualisés

Nitu, Vlad-Tiberiu 28 September 2018
Nowadays, many organizations are choosing to adopt the cloud computing approach. More specifically, as customers, these organizations outsource the management of their physical infrastructure to data centers (or cloud computing platforms). Energy consumption is a primary concern for datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 the US DCs alone will spend about $13 billion on energy bills. Generally, datacenter servers are manufactured in such a way that they achieve high energy efficiency at high utilizations. Thus, for a low cost per computation, all datacenter servers should push utilization as high as possible. In order to fight the historically low utilization, cloud computing adopted server virtualization, which allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called 'holes'). My first contribution is a cloud management system that dynamically splits or merges VMs so that they can better fill the holes. This solution is effective only for elastic applications, i.e. applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters the two resources are strongly coupled, since they are bound to a physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server; the two resources can then vary independently, depending on demand. My third and fourth contributions present a practical system that exploits the second contribution. The underutilization observed on physical servers also holds for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources, because cloud customers are not able to correctly estimate the resource amount necessary for their applications. My third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM, with low overhead and high accuracy. We can thus consolidate VMs based on their working set size rather than their booked memory. The drawback of this approach, however, is the risk of memory starvation: if one or more VMs see a sharp increase in memory demand, the physical server may run out of memory. This is undesirable, because the cloud platform is then unable to provide the client with the memory it paid for. My fourth contribution is a system that allows a VM to use remote memory provided by a different server in the rack; in the case of a peak memory demand, the VM can allocate memory on a remote physical server.
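Since the third contribution consolidates VMs by working set size rather than booked memory, a tiny packing sketch may help fix ideas. This is our own first-fit-decreasing illustration, not the thesis's algorithm; the headroom guard is an invented stand-in for the starvation risk the abstract discusses (which the fourth contribution addresses with remote memory).

```python
def consolidate_by_wss(vm_wss_gb, host_mem_gb, headroom=0.2):
    """First-fit-decreasing packing of VMs onto hosts by working set size.

    headroom reserves a fraction of each host's memory against the risk
    that some working sets spike above their estimates.
    """
    budget = host_mem_gb * (1 - headroom)
    hosts = []  # each host holds (vm, wss) pairs whose total fits the budget
    for vm, wss in sorted(vm_wss_gb.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(w for _, w in host) + wss <= budget:
                host.append((vm, wss))
                break
        else:
            hosts.append([(vm, wss)])  # no existing host fits: power one on
    return hosts


# Packing by WSS (not booked memory) squeezes these onto two 32 GB hosts.
print(consolidate_by_wss({"vm1": 20, "vm2": 9, "vm3": 14, "vm4": 5}, 32))
```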
8

Cloud Services Brokerage for Mobile Ubiquitous Computing

June 2015
Recently, companies have been adopting Mobile Cloud Computing (MCC) to efficiently deliver enterprise services to users (or consumers) on their personalized devices. MCC is the use of mobile devices (e.g., smartphones, tablets, notebooks, and smart watches) to access virtualized services such as software applications, servers, storage, and network services over the Internet. With the advancement and diversification of the mobile landscape, there has been a growing trend in consumer behaviour whereby a single user owns multiple mobile devices. This paradigm of supporting a single user or consumer in accessing multiple services from n devices is referred to as Ubiquitous Cloud Computing (UCC) or Personal Cloud Computing. In the UCC era, consumers expect application and data consistency across their multiple devices, in real time. This expectation, however, can be hindered by intermittent loss of connectivity in wireless networks, user mobility, and peak load demands. Hence, this dissertation presents an architectural framework called Cloud Services Brokerage for Mobile Ubiquitous Cloud Computing (CSB-UCC), which ensures soft real-time and reliable consumption of services on users' multiple devices. The CSB-UCC acts as an application middleware broker that connects the n devices of users to multi-cloud services: the multi-cloud services are determined from the user's subscriptions, and the n devices are determined through device registration on the broker. Preliminary evaluations of the designed system show that the following are achieved: 1) high scalability, through the adoption of a distributed architecture for the brokerage service; 2) soft real-time application synchronization for a consistent user experience, through an enhanced mobile-to-cloud proximity-based access technique; 3) reliable recovery from system failure, through transactional re-assignment of services to active nodes; and 4) a transparent audit trail, through access-level and context-centric provenance.
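The broker's core bookkeeping — which devices a user has registered and which services they subscribe to — can be pictured with a toy registry. This Python sketch is purely illustrative; the class and method names are ours, not the dissertation's API.

```python
from collections import defaultdict


class CloudServicesBroker:
    """Toy registry in the spirit of CSB-UCC: maps each user's registered
    n devices to the multi-cloud services they subscribe to."""

    def __init__(self):
        self.devices = defaultdict(set)        # user -> registered device ids
        self.subscriptions = defaultdict(set)  # user -> subscribed services

    def register_device(self, user, device_id):
        self.devices[user].add(device_id)

    def subscribe(self, user, service):
        self.subscriptions[user].add(service)

    def sync_targets(self, user, service):
        """Every device that must receive a state update for this service."""
        if service not in self.subscriptions[user]:
            return set()
        return set(self.devices[user])


broker = CloudServicesBroker()
broker.register_device("alice", "phone")
broker.register_device("alice", "tablet")
broker.subscribe("alice", "notes-app")
print(broker.sync_targets("alice", "notes-app"))  # {'phone', 'tablet'}
```

In the dissertation's terms, keeping this mapping inside a distributed brokerage service is what lets synchronization scale with the number of users and devices.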
9

Le phénomène de circulation des données à caractère personnel dans le cloud : étude de droit matériel dans le contexte de l'Union européenne / The flow of personal data in the cloud : a study of substantive law within the European Union context

Tourne, Elise 11 June 2018
The legal framework applicable to the collection and processing by cloud service providers of their users' personal data raises questions for those users. De facto, no organized legal framework currently allows the flow of personal data in the cloud to be regulated as a whole at the European Union level, whether directly or indirectly. It thus seems necessary to examine how the law has organized itself in consequence, and to analyze the complementary and/or alternative treatments the law offers, which are less structurally organized and more mosaic-like, but more pragmatic, realistic and politically sustainable. Historically, the flow of personal data has been dealt with almost exclusively via the specific right to the protection of personal data deriving from the European Union. That right, often considered in opposition to the right to the free circulation of data, was initially an emanation of the right to privacy before being established as a fundamental right of the European Union. Although the treatment provided by the right to the protection of personal data directly targets the data at the heart of the flow phenomenon, it only partly covers that phenomenon. In addition, despite the entry into force of Regulation 2016/679 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, its effectiveness is questionable: it offers no harmonized solution within the European Union and is highly dependent on the goodwill and the financial, organizational and human means of the Member States. The complementary and/or alternative treatments to the right to the protection of personal data that exist within the European Union, which may be divided among technical, contractual and regulatory tools, approach the data flow phenomenon only indirectly, by providing a framework for its environment. Individually, each targets only a very limited aspect of the phenomenon, with varying effectiveness. Furthermore, technical and contractual tools lack the legitimacy attached to regulatory tools. Associated with one another, however, they allow the data flow phenomenon to be targeted more globally and efficiently.
10

Wireless Distributed Computing in Cloud Computing Networks

Datla, Dinesh 25 October 2013
The explosive growth of smart wireless devices has increased the ubiquitous presence of computational resources and location-based data. This new reality of numerous wireless devices capable of collecting, sharing, and processing information opens an avenue for new, enhanced applications. Multiple radio nodes with diverse functionalities can form a wireless cloud computing network (WCCN) and collaborate on executing complex applications using wireless distributed computing (WDC). Such a dynamically composed virtual cloud environment can offer services and resources hosted by individual nodes for consumption by user applications. This dissertation proposes an architectural framework for WCCNs and presents the different phases of its development, namely: development of a mathematical system model of WCCNs; simulation analysis of the performance benefits offered by WCCNs; design of the decision-making mechanisms in the architecture; and development of a prototype to validate the proposed architecture. The dissertation presents a system model that captures the power consumption, energy consumption, and latency experienced by computational and communication activities in a typical WCCN. In addition, it derives a stochastic model of the response time experienced by a user application when executed in a WCCN. Decision-making and resource allocation play a critical role in the proposed architecture; two adaptive algorithms are presented, namely a workload allocation algorithm and a task allocation and scheduling algorithm. The proposed algorithms are analyzed for the power efficiency, energy efficiency, and improvement in execution time of user applications achieved by workload distribution. Experimental results gathered from a software-defined radio network prototype of the proposed architecture validate the theoretical analysis and show that it is possible to achieve an 80% improvement in execution time with the help of just three nodes in the network. / Ph.D.
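To give a feel for the latency/energy trade-off such a system model captures, here is a toy Python version with invented constants; the dissertation's actual model is far more detailed (and stochastic), so treat this purely as an illustration of why adding nodes cuts execution time at some energy cost.

```python
def wdc_execution_cost(task_cycles, nodes, link_rate_bps, data_bits):
    """Toy WDC model: latency and energy when a workload is split evenly
    across collaborating radio nodes. All constants are invented."""
    cpu_hz, p_compute_w, p_radio_w = 1e9, 0.8, 0.3   # per-node assumptions
    compute_time = (task_cycles / nodes) / cpu_hz    # parallelized compute
    # Distributing the workload over the air costs time only when n > 1.
    comm_time = data_bits / link_rate_bps if nodes > 1 else 0.0
    latency = compute_time + comm_time
    energy = nodes * (p_compute_w * compute_time + p_radio_w * comm_time)
    return latency, energy


for n in (1, 3):
    lat, e = wdc_execution_cost(12e9, n, 2e6, 8e6)
    print(f"{n} node(s): latency {lat:.1f} s, energy {e:.1f} J")
# 1 node(s): latency 12.0 s, energy 9.6 J
# 3 node(s): latency 8.0 s, energy 13.2 J
```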
