371

The Cloud Marketplace: A Capability-Based Framework for Cloud Ecosystem Governance

Falk, Sebastian, Shyshka, Andriy January 2014 (has links)
Within the last five years, the market for cloud computing has grown rapidly. Despite this increasing popularity, however, researchers highlight numerous concerns regarding the limited interoperability of systems hosted by different cloud providers as well as the restricted customization of cloud solutions. To counter the aforementioned challenges, this study investigates the idea of introducing a marketplace for cloud services that leverages the service-oriented architecture (SOA) paradigm and offers software solutions, computing capabilities from cloud providers, components developed by third parties, as well as access to integration and audit services. The goal of the study lies in conceptualizing the idea and evaluating the demand it may raise from the key cloud actors. In this regard, existing frameworks for cloud computing and SOA contributed to the development of an initial model that was further improved through an interviewing process. The results of this study include a capability-based framework for the cloud marketplace which not only clarifies the roles and activities of the different actors but also identifies the marketplace features needed to ensure a proper workflow. In addition, the actors' incentives and concerns regarding the marketplace were analyzed using a SWOT analysis. While the analysis revealed both positive interest and present demand among the actors, the identified weaknesses and threats highlight the need for further investigation before the idea can be put into practice.
372

Combining constraint programming and machine learning to come up with an energy-aware model for small/medium size data centers

Madi wamba, Gilles 27 October 2017 (has links)
Over the last decade, cloud computing technologies have grown considerably, which translates into a surge in data center power consumption. The magnitude of the problem has motivated numerous research studies on static and dynamic solutions for reducing the overall energy consumption of a data center. The aim of this thesis is to integrate renewable energy sources into dynamic energy optimization models for data centers, using constraint programming together with machine learning techniques. First, we propose a global constraint for task intersection that takes into account a resource with variable cost. Second, we propose two learning models, one for predicting the workload of a data center and one for generating such workload curves. Finally, we formalize the green-energy-aware scheduling problem (GEASP) and propose a global constraint programming model as well as a search heuristic to solve it efficiently. The proposed model integrates the various aspects inherent to the dynamic planning problem in a data center: heterogeneous physical machines, various application types (i.e., interactive applications and batch applications), the actions and energy costs of turning physical machines on and off, interrupting and resuming batch applications, the CPU and RAM consumption of applications, task migration and its associated energy costs, prediction of green energy availability, and the variable energy consumption of physical machines.
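As a rough illustration of the variable-cost idea (a sketch only; the thesis uses a global constraint inside a full CP model, and the function and forecast below are hypothetical), a scheduler can score each candidate start time of a batch task by how much of its power demand the green-energy forecast fails to cover:

```python
def best_start(task_len, power_kw, green_kw_forecast):
    """Return the start slot whose window draws the least non-green energy."""
    best, best_brown = 0, float("inf")
    for start in range(len(green_kw_forecast) - task_len + 1):
        # Brown energy = demand not covered by the green forecast in each slot.
        brown = sum(max(0.0, power_kw - green_kw_forecast[start + t])
                    for t in range(task_len))
        if brown < best_brown:
            best, best_brown = start, brown
    return best, best_brown

# Example: a 3-slot task drawing 2 kW against an hourly solar forecast.
forecast = [0.0, 0.5, 1.8, 2.2, 2.0, 0.7]
print(best_start(3, 2.0, forecast))  # best window starts at slot 2
```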
373

Reaching High Availability in Connected Car Backend Applications

Yadav, Arpit 08 September 2017 (has links) (PDF)
The connected car segment places high demands on the exchange of data between cars on the road and a variety of backend services. According to the Telefónica Connected Car Industry Report 2014, connected services will be mainstream automotive offerings by the end of 2020: the share of vehicles with built-in internet connectivity will increase from 10% of the overall market today to 90% by the end of the decade [1]. Connected car solutions will soon become one of the major business drivers for the industry; they already have a significant impact on the development of existing solutions and on the aftersales market. It has been more than three decades since the introduction of the first software component in cars, and since then a vast number of different services have been introduced, creating an ecosystem of complex applications, architectures, and platforms. The complexity of the connected car ecosystem results in a range of new challenges. Backend applications must be scalable and flexible enough to accommodate loads created by random user and device behavior. To deliver superior uptime, backend systems must be highly integrated and automated to guarantee the lowest possible failure rate, high availability, and the fastest time-to-market. Connected car services increasingly rely on cloud-based service delivery models to improve user experiences and enhance features for millions of vehicles and their users on a daily basis. Software applications are becoming more complex, and the number of components that interact with each other is extremely large. In such systems, a fault can easily propagate and affect other components, resulting in problems that are difficult to detect and debug; a robust and resilient architecture is therefore needed, one that ensures the continuous availability of the system in the wake of component failures. The goal of the thesis is to gain insight into the development of highly available applications and to explore the area of fault tolerance. It outlines different design patterns, describes the capabilities of fault tolerance libraries for the Java platform, designs the most appropriate solution for developing a highly available application, and evaluates its behavior with stress and load testing using Chaos Monkey methodologies.
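One of the patterns such fault-tolerance libraries (e.g., Hystrix or resilience4j) implement is the circuit breaker, which stops calling a failing dependency for a cooldown period so faults do not cascade. A minimal, language-agnostic sketch in Python (hypothetical names; real libraries add half-open states, metrics, and thread isolation):

```python
import time

class CircuitBreaker:
    """Open the circuit after max_failures consecutive errors;
    fail fast until reset_after seconds have passed."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures, self.reset_after = max_failures, reset_after
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at, self.failures = None, 0  # cooldown over, probe again
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()    # trip the breaker
            raise
        self.failures = 0                            # success resets the count
        return result
```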
374

Energy Efficient Cloud Computing: Techniques and Tools

Knauth, Thomas 22 April 2015 (has links) (PDF)
Data centers hosting internet-scale services consume megawatts of power. Mainly for cost reasons, but also to appease environmental concerns, data center operators are interested in reducing their energy use. This thesis investigates if and how hardware virtualization helps to improve the energy efficiency of modern cloud data centers. Our main motivation is to power off unused servers to save energy. The work encompasses three major parts. First, a simulation-driven analysis quantifies the benefits of known reservation times in infrastructure clouds: virtual machines with similar expiration times are co-located to increase the probability of powering down unused physical hosts. Second, we propose and prototype a system to deliver truly on-demand cloud services, in which idle virtual machines are suspended to free resources, as a first step toward powering off the physical server. Third, a novel block-level data synchronization tool enables fast and efficient state replication; frequent state synchronization is necessary to prevent data unavailability, because powering down a server disables access to its locally attached disks and any data stored on them. The techniques effectively reduce the overall number of required servers, either through optimized scheduling or by suspending idle virtual machines. Fewer live servers translate into proportional energy savings, as the unused servers need no longer be powered.
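A toy sketch of the co-location idea (an illustration under stated assumptions, not the thesis's simulator): greedily pack each new VM onto the host whose latest lease expiry is closest to the VM's own, so hosts empty out together and can be powered down:

```python
def place_vm(vm_expiry, hosts, capacity):
    """hosts: list of per-host lists of VM expiry timestamps.
    Prefer the non-empty host whose latest expiry is closest to vm_expiry,
    so VMs that finish together share a host that can then be powered off."""
    candidates = [h for h in hosts if h and len(h) < capacity]
    if candidates:
        best = min(candidates, key=lambda h: abs(vm_expiry - max(h)))
    else:
        best = []                  # no suitable host: power on a fresh one
        hosts.append(best)
    best.append(vm_expiry)
    return best

hosts = []
for expiry in [100, 105, 300, 102, 290]:
    place_vm(expiry, hosts, capacity=3)
print(hosts)  # -> [[100, 105, 102], [300, 290]]
```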
375

Contributions to massively distributed Cloud Computing infrastructures

Pastor, Jonathan 18 October 2016 (has links)
The continuous increase in computing power needs has favored the triumph of the Cloud Computing model: customers obtain computing power over the Internet from providers of Cloud Computing infrastructures. To achieve economies of scale, these infrastructures are increasingly large and concentrated in a few attractive places, leading to problems such as energy supply, fault tolerance, and the fact that they are far from most of their end users. During this thesis we studied the implementation of a fully distributed and decentralized IaaS system operating a network of micro data centers deployed across the Internet backbone, using a version of OpenStack modified during this thesis to provide non-intrusive support for non-relational databases. A prototype was experimentally validated on Grid'5000, showing interesting performance results, although limited by the fact that OpenStack does not natively take advantage of geographically distributed operation. We therefore focused on adding support for network locality, improving the performance of distributed Cloud Computing services by favoring collaborations between nearby nodes. A prototype of the DVMS virtual machine placement algorithm, working over an unstructured topology based on the Vivaldi algorithm, was validated on Grid'5000; it won first prize at the large-scale challenge of the Grid'5000 spring school in 2014. Finally, this work led us to participate in the development of the VMPlaceS simulator.
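Vivaldi assigns each node synthetic coordinates whose Euclidean distance approximates round-trip time, letting an algorithm like DVMS favor nearby peers without global measurements. A simplified sketch of one coordinate update step (constant gain; the full algorithm adds adaptive error weighting):

```python
import math

def vivaldi_step(xi, xj, rtt_ms, gain=0.25):
    """Move node i's coordinate xi so that ||xi - xj|| better
    matches the RTT just measured to node j (simplified Vivaldi)."""
    dist = math.dist(xi, xj) or 1e-9           # avoid a zero-length direction
    error = rtt_ms - dist                      # >0: coordinates are too close
    unit = [(a - b) / dist for a, b in zip(xi, xj)]
    return [a + gain * error * u for a, u in zip(xi, unit)]

# Node at the origin measures a 10 ms RTT to a node at (3, 4): pushed away.
print(vivaldi_step([0.0, 0.0], [3.0, 4.0], rtt_ms=10.0))  # -> [-0.75, -1.0]
```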
376

Sampling, qualification and analysis of data streams

El Sibai, Rayane 04 July 2018 (has links)
An environmental monitoring system continuously collects and analyzes the data streams generated by environmental sensors. The goal of the monitoring process is to filter out useful and reliable information and to infer new knowledge that helps the network operator make the right decisions quickly. This whole process, from data collection to data analysis, raises two key problems: data volume and data quality. On the one hand, the throughput of the generated data streams has kept increasing over recent years, producing a large volume of data continuously sent to the monitoring system. The data arrival rate is very high compared to the monitoring system's available processing and storage capacities, so permanent and exhaustive storage of the data is very expensive, and sometimes impossible. On the other hand, in real-world settings such as sensor environments, the data are often dirty: they contain noisy, erroneous, and missing values, which can lead to faulty and defective results. In this thesis, we propose a solution called native filtering to deal with the problems of data quality and data volume. Upon receipt of the data streams, the quality of the data is evaluated and improved in real time based on a data quality management model that we also propose in this thesis. Once qualified, the data are summarized using sampling algorithms. In particular, we focus on the analysis of the Chain-sample algorithm, which we compare against other reference algorithms such as probabilistic sampling, deterministic sampling, and weighted sampling. We also propose two new versions of the Chain-sample algorithm that significantly improve its execution time. Data stream analysis is also discussed in this thesis, with a particular interest in anomaly detection. Two algorithms are studied: Moran scatterplot for the detection of spatial anomalies and CUSUM for the detection of temporal anomalies. We designed a method that improves the estimation of the start and end times of the anomaly detected by CUSUM. Our work was validated by simulations and by experiments on two different real data sets: data from sensors in the drinking water distribution network provided as part of the Waves project, and data from the Vélib bike-sharing system.
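For intuition, a one-sided CUSUM detector accumulates deviations above an expected level and raises an alarm when the cumulative sum crosses a threshold. A textbook sketch with illustrative parameters (the thesis's start/end-time estimation refinement is not shown):

```python
def cusum(stream, target, slack, threshold):
    """Yield indices where the upper one-sided CUSUM statistic
    S_t = max(0, S_{t-1} + x_t - target - slack) exceeds the threshold."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + x - target - slack)
        if s > threshold:
            yield i
            s = 0.0               # restart after signaling an anomaly

# Sensor readings drift upward around index 4; the alarm fires at index 5.
readings = [5.0, 5.1, 4.9, 5.0, 7.5, 7.8, 7.6, 5.0]
print(list(cusum(readings, target=5.0, slack=0.5, threshold=4.0)))  # -> [5]
```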
377

Formal verification of business process configuration in the Cloud

Boubaker, Souha 14 May 2018 (has links)
Motivated by the need for design by reuse, configurable process models have been proposed to represent similar process models in a generic manner. They need to be configured according to an organization's needs by selecting design options. As configurable process models may be large and complex, configuring them without assistance is undoubtedly a difficult, time-consuming, and error-prone task. Moreover, organizations are increasingly adopting cloud environments for deploying and executing their processes, in order to benefit from dynamically scalable resources on demand. Nevertheless, because existing business processes lack an explicit and formal description of the resource perspective, the correctness of Cloud resource management cannot be verified. In this thesis, we aim to (i) provide guidance and assistance to analysts in configuring process models with correct options, and (ii) improve the support for Cloud resource specification and verification in business processes. To do so, we propose a formal approach for assisting the configuration step by step with respect to structural and business domain constraints. We then propose a behavioral approach for configuration verification that reduces the well-known state space explosion problem; this work extracts, in one pass, the configuration choices that satisfy the deadlock-freeness property. Finally, we propose a formal specification of Cloud resource allocation behavior in business process models. This specification is used to formally validate and check the consistency of Cloud resource allocation in process models according to user requirements and resource capabilities.
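Deadlock-freeness is typically checked by exploring a model's reachable states and flagging non-final states with no outgoing transitions. A tiny breadth-first sketch over an explicit transition relation (an illustration of the property itself, not the thesis's state-space reduction technique):

```python
from collections import deque

def deadlocks(initial, transitions, finals):
    """Return reachable non-final states with no outgoing transition.
    transitions: dict mapping a state to its list of successor states."""
    seen, queue, dead = {initial}, deque([initial]), []
    while queue:
        s = queue.popleft()
        succs = transitions.get(s, [])
        if not succs and s not in finals:
            dead.append(s)        # reachable, stuck, and not an end state
        for t in succs:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return dead

# A toy process: "c" is a deadlock, while "d" is a legitimate final state.
print(deadlocks("a", {"a": ["b", "c"], "b": ["d"]}, finals={"d"}))  # -> ['c']
```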
378

Cloud Computing Adoption in Afghanistan: A Quantitative Study Based on the Technology Acceptance Model

Nassif, George T. 01 January 2019 (has links)
Cloud computing emerged as an alternative to traditional in-house data centers that businesses can leverage to increase operational agility and employee productivity. IT solution architects are tasked with presenting to IT managers analyses of the critical barriers and challenges to cloud computing adoption. This quantitative correlational study established an enhanced technology acceptance model (TAM) with four external variables: perceived security (PeS), perceived privacy (PeP), perceived connectedness (PeN), and perceived complexity (PeC), as antecedents of perceived usefulness (PU) and perceived ease of use (PEoU) in a cloud computing context. Data were collected from 125 participants who responded to an online survey invitation, focusing on Afghanistan's main cities of Kabul, Mazar, and Herat. The analysis showed that PEoU was a predictor of the behavioral intention to adopt cloud computing, consistent with the TAM; PEoU (R² = .15) had a stronger influence than PU (R² = .023) on the behavioral intention to adopt and use cloud computing. PeN, PeS, and PeP significantly influenced IT architects' behavioral intentions to adopt and use the technology, while PeC was not a significant barrier to cloud computing adoption in Afghanistan. By adopting cloud services, employees gain access to various tools that can help increase business productivity and improve the work environment. Cloud services, as an alternative to in-house data centers, can also help businesses reduce power consumption and consequently decrease carbon dioxide emissions through lower power demand.
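As a reminder of what those R² figures measure, an ordinary-least-squares fit reports the share of variance in intention explained by a predictor. A sketch on synthetic data (the study's actual survey items and model are not reproduced here):

```python
import numpy as np

def r_squared(x, y):
    """R² of a one-predictor least-squares fit of y on x."""
    X = np.column_stack([np.ones_like(x), x])      # intercept + predictor
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ beta
    return 1.0 - residual.var() / y.var()

rng = np.random.default_rng(0)
peou = rng.normal(size=125)                        # e.g., ease-of-use scores
intention = 0.4 * peou + rng.normal(size=125)      # synthetic intention scores
print(round(r_squared(peou, intention), 3))        # share of variance explained
```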
379

Addressing the Data Location Assurance Problem of Cloud Storage Environments

Noman, Ali 09 April 2018 (has links)
In a cloud storage environment, providing geo-location assurance of data to a cloud user is very challenging: the cloud storage provider physically controls the data, and it is hard for the user to detect whether the data is stored in datacenters or storage servers other than the one where it is supposed to be. We name this the "Data Location Assurance Problem" of a cloud storage environment. Aside from privacy and security concerns, the lack of geo-location assurance for cloud data has been identified as one of the main reasons why organizations that deal with sensitive data (e.g., financial data, health-related data, and data related to Personally Identifiable Information, PII) cannot adopt a cloud storage solution even if they might wish to. It might seem that cryptographic techniques such as Proof of Data Possession (PDP) could solve this problem; however, we show that those cryptographic techniques alone cannot. In this thesis, we address the data location assurance (DLA) problem of the cloud storage environment, which includes, but is not limited to, investigating the necessity of a good data location assurance solution as well as the challenges involved in providing one; we then develop efficient solutions for the DLA problem. Note that for a totally dishonest cloud storage server attack model, it may be impossible to offer a solution to the DLA problem. The main objective of this thesis is therefore to devise solutions to the DLA problem for the different system and attack models (from less adversarial to more adversarial ones) found in existing cloud storage environments, so that it can meet the needs of today's cloud storage applications.
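A common non-cryptographic building block in this literature is delay-based distance bounding: the round-trip time from a known landmark upper-bounds how far away the stored data can be, since a reply cannot travel faster than light in fiber. A hedged sketch (the thesis's actual protocols are richer; the 2/3-of-c propagation factor is a standard approximation):

```python
SPEED_IN_FIBER_KM_PER_MS = 200.0   # ~2/3 of c, a standard approximation

def max_distance_km(rtt_ms, processing_ms=0.0):
    """Upper bound on the landmark-to-data distance implied by one RTT."""
    one_way_ms = max(0.0, rtt_ms - processing_ms) / 2.0
    return one_way_ms * SPEED_IN_FIBER_KM_PER_MS

def location_plausible(rtt_ms, claimed_distance_km, processing_ms=1.0):
    """Reject a claimed location that even light in fiber couldn't reach."""
    return claimed_distance_km <= max_distance_km(rtt_ms, processing_ms)

# A 5 ms RTT caps the distance at ~400 km; a datacenter 2000 km away is ruled out.
print(location_plausible(5.0, 2000.0))  # -> False
```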
380

Evaluation of Cloud Native Solutions for Trading Activity Analysis

Johansson, Jonas January 2021 (has links)
Cloud computing has become increasingly popular over recent years, allowing computing resources to be scaled on demand. Cloud native applications are created specifically to run on the cloud service model. Currently, there is a research gap regarding the design and implementation of cloud native applications, especially regarding how design decisions affect metrics such as execution time and scalability. The problem investigated in this thesis is whether the execution time and quality scalability, ηt, of cloud native solutions are affected when the functionality of multiple use cases is housed within the same cloud native application. In this work, a cloud native application for trading data analysis is presented, implementing the functionality of three use cases: (1) creating reports of trade prices, (2) anomaly detection, and (3) analysis of trade relation diagrams. The execution time and scalability of the application are evaluated and compared to readily available solutions, which serve as a baseline: use cases 1 and 2 are compared to Amazon Athena, while use case 3 is compared to Amazon Neptune. The results suggest that combining functionalities in the same application can improve both the execution time and the scalability of the system, with an impact that depends on the use case and hardware configuration. When executing the use cases in a sequence, the mean execution time of the implemented system decreased by up to 17.2%, while the quality scalability score improved by 10.3% for use case 2. The implemented application had significantly lower execution time than Amazon Neptune but did not surpass Amazon Athena on the corresponding use cases, and the scalability of the systems varied by use case. While the baseline is not surpassed in all use cases, the results show that the execution time of a cloud native system can be improved by housing the functionality of multiple use cases within one system. The potential performance gains differ by use case, however, and may be smaller than those of choosing another solution.
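The exact definition of ηt is the thesis's own; a common scalability score of this flavor is parallel efficiency, i.e., speedup divided by the resource multiple, which the sketch below computes under that assumption (timings are hypothetical):

```python
def efficiency(t_base, t_scaled, scale_factor):
    """Speedup per unit of added resources; 1.0 is ideal linear scaling.
    NOTE: an assumed stand-in for the thesis's quality scalability metric."""
    speedup = t_base / t_scaled
    return speedup / scale_factor

# Hypothetical: 120 s on 1 node vs. 35 s on 4 nodes.
print(round(efficiency(120.0, 35.0, 4), 2))  # -> 0.86
```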
