791 |
The impact of network related factors on Internet based technology in South Africa : a cloud computing perspective / Ramagoffu, Madisa Modisaotsile, 24 February 2013 (has links)
Outsourcing, consolidation, and cost savings of IT services are increasingly becoming an imperative source of competitive advantage and a great challenge for most local and global businesses. These challenges affect not only consumers but also the service provider community. As IT slowly becomes commoditised, consumers such as business organisations increasingly expect IT services that mimic other utility services such as water, electricity, and telecommunications. To this end, no single model has been able to emulate these utilities in the computing arena. Cloud Computing is the recent computing phenomenon that attempts to answer most business IT requirements. This phenomenon is gaining traction in the IT industry, with a promise of advantages such as cost reduction, elimination of upfront capital outlay, pay-per-use models, shared infrastructure, and high flexibility allowing users and providers to handle high elasticity of demand. The critical question that remains unanswered for most IT organisations and their management is: what is the effect of communication network factors on Internet-based technology such as Cloud Computing, given the emerging-market context? This study therefore investigates the effect of four communication network factors (price, availability, reliability, and security) on the adoption of Cloud Computing by IT managers in a South African context, including their propensity to adopt the technology. The study reviews numerous technology adoption theories and selects the Technology, Organisation and Environment (TOE) framework because it has an organisational rather than an individual focus. Based on the results, this study proposes that Bandwidth (Pricing and Security) should be included in any adoption model that involves services running on the Internet.
The study attempts to contribute to the emerging literature on Cloud Computing and the Internet in South Africa, and offers organisations considering adoption, as well as Cloud Providers, significant ideas to consider for Cloud Computing adoption. / Dissertation (MBA)--University of Pretoria, 2012. / Gordon Institute of Business Science (GIBS) / unrestricted
|
792 |
Distributed Orchestration Framework for Fog Computing / Rahafrouz, Amir, January 2019 (has links)
The rise of IoT-based systems is making an impact on our daily lives and environment. Fog Computing is a paradigm that uses IoT data and processes it at the first hop of the access network instead of in distant clouds, and it promises compelling new applications. However, a mature framework for fog computing is still lacking. In this study, we propose an approach for monitoring fog nodes in a distributed system using the FogFlow framework. We extend the functionality of FogFlow by adding monitoring of Docker containers using cAdvisor, and we use Prometheus to collect and aggregate the distributed monitoring data. The monitoring data of the entire distributed system of fog nodes is accessed via the Prometheus API. Furthermore, the monitoring data is used to rank fog nodes and choose where to place serverless functions (Fog Functions). The ranking mechanism uses the Analytic Hierarchy Process (AHP) to place a fog function according to the resource utilization and saturation of fog nodes' hardware. Finally, an experimental test-bed is set up with an image-processing application that detects faces. The effect of our ranking approach on Quality of Service is measured and compared to the current FogFlow placement.
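The node-ranking step described above can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the criteria, pairwise judgments, and metric values are invented, and the AHP weights are approximated with the common geometric-mean method rather than an exact principal-eigenvector computation.

```python
import math

# Hypothetical pairwise-comparison matrix over three criteria
# (CPU utilization, memory utilization, I/O saturation); the
# actual criteria and judgments used in the thesis may differ.
comparisons = [
    [1.0, 3.0, 5.0],   # CPU vs (CPU, memory, I/O)
    [1/3, 1.0, 3.0],   # memory vs (CPU, memory, I/O)
    [1/5, 1/3, 1.0],   # I/O vs (CPU, memory, I/O)
]

def ahp_weights(matrix):
    """Approximate the principal eigenvector by the geometric-mean method."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gms)
    return [g / total for g in gms]

def rank_nodes(nodes, weights):
    """Score each node: lower weighted utilization means a better placement target."""
    scored = []
    for name, metrics in nodes.items():
        score = sum(w * m for w, m in zip(weights, metrics))
        scored.append((score, name))
    return [name for _, name in sorted(scored)]

# Example monitoring snapshot (utilization fractions per criterion),
# standing in for data scraped from cAdvisor via Prometheus.
nodes = {
    "fog-node-a": (0.80, 0.40, 0.30),
    "fog-node-b": (0.20, 0.50, 0.10),
    "fog-node-c": (0.60, 0.90, 0.70),
}

weights = ahp_weights(comparisons)
print(rank_nodes(nodes, weights))  # least-loaded candidate first
```

In this sketch the fog function would be placed on the first node in the returned list; a real orchestrator would re-rank on every fresh Prometheus scrape.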
|
793 |
Digital transformation proposal for the consultation of contents in microforms considering the evaluation of IT processes for a company in the news processing and dissemination sector / Chois Pimentel, Carlos Arturo; Portocarrero Lino, José Luis, 09 November 2019 (has links)
The objective of this work is to propose a Digital Transformation solution for a State Entity in the news processing and dissemination sector, with the purpose of solving a problem of access to and consultation of the extensive historical information contained in microforms, which is needed and required by the State and the citizenry.
The focus of this proposal is on the analysis of the target organization, identifying information regarding its strategic plan, strategic objectives, and processes, with emphasis on the processes that make up the business value chain. Accordingly, the Information Technology (IT) processes that will ultimately support and execute the implementation of the digital transformation proposal were also analyzed.
In that sense, through an analysis and evaluation of the current IT processes based on the COBIT 5 PAM framework, we identified the current capability level and the gap that must be closed to reach the IT capability level needed to align with the organization's strategic objectives.
Finally, the proposal concerns the process "Production of Microforms with Legal Value", which holds the archive of all the newspapers produced since 1825; based on Cloud Computing solutions and processing with Deep Learning algorithms, these will be made available to the public through Web consultation tools. / Tesis
|
794 |
Secure Distributed MapReduce Protocols : How to have privacy-preserving cloud applications? / Giraud, Matthieu, 24 September 2019 (has links)
In the age of social networks and connected objects, large and diverse volumes of data are produced at every moment. The analysis of these data has given rise to a new science called "Big Data", and new computation methods have emerged to handle this constant flow of data. This thesis focuses on cryptography applied to the processing of large volumes of data, with the aim of protecting user data. In particular, we focus on securing algorithms that use the distributed computing paradigm MapReduce to perform a number of primitives (or algorithms) essential to data processing, ranging from the computation of graph metrics (e.g. PageRank) to SQL queries (i.e. set intersection, aggregation, natural join). In the first part of this thesis, we address matrix multiplication. We first describe a standard, secure matrix multiplication for the MapReduce architecture that is based on Paillier's additive encryption scheme to guarantee the confidentiality of the data. The proposed algorithms correspond to a specific security hypothesis: collusion or non-collusion of the MapReduce cluster nodes, with the general security model being honest-but-curious. The aim is to protect the confidentiality of both matrices, as well as the final result, for all participants (matrix owners, computation nodes, and the user wishing to compute the result). We also adapt the Strassen-Winograd matrix multiplication algorithm, whose asymptotic complexity is O(n^log2(7)), or about O(n^2.81), an improvement over standard matrix multiplication, to the MapReduce paradigm. The security assumption adopted here is limited to non-collusion between the cloud and the end user, and the secure version again uses Paillier's encryption scheme.
The second part of this thesis focuses on data protection when relational algebra operations are delegated to a public cloud server that implements the MapReduce paradigm. In particular, we present a secure intersection solution that allows a cloud user to obtain the intersection of n > 1 relations belonging to n data owners. In this solution, all data owners share one key, and a selected data owner additionally shares a key with each of the remaining owners. Therefore, while this specific data owner stores n keys, the other owners store only two. The encryption of an actual relation tuple combines asymmetric encryption with a pseudo-random function. Once the data is stored in the cloud, each reducer is assigned a particular relation; if there are n different elements, XOR operations are performed, so the proposed solution remains very efficient. Next, we describe privacy-preserving variants of the grouping and aggregation operations, in terms of both performance and security. The proposed solutions combine pseudo-random functions with homomorphic encryption for the COUNT, SUM, and AVG operations, and with order-preserving encryption for the MIN and MAX operations. Finally, we propose secure versions of two join protocols (cascade and hypercube) adapted to the MapReduce paradigm; the solutions use pseudo-random functions to perform equality checks and thus enable join operations when common components are detected. All the solutions described above are evaluated and their security proven.
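The additive homomorphism of Paillier's scheme, which both parts of the thesis rely on, can be demonstrated in a few lines: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts. The sketch below uses deliberately tiny fixed primes for readability; it is not secure and is not the thesis implementation.

```python
import math
import random

# Toy Paillier keypair from small fixed primes -- illustration only,
# never secure. g = n + 1 is the standard simplification.
p, q = 17, 19
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)          # lambda = lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # mu = (L(g^lambda mod n^2))^-1 mod n

def encrypt(m):
    # Fresh randomness r coprime to n makes encryption probabilistic.
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
c = (encrypt(12) * encrypt(30)) % n2
print(decrypt(c))  # 42
```

In the secure matrix-multiplication setting, each reducer can therefore sum encrypted partial products for one output cell without ever seeing the matrix entries in the clear.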
|
795 |
Intellectual property rights, Cloud Computing and e-performance of firms / Maherzi Zahar, Teja, 19 May 2017 (has links)
The objective of this thesis is to analyze how the use of Cloud Computing (CC), presented as a new form of intellectual property right (IPR), can modify the intensity and use of Information and Communication Technology (ICT) within companies. Among the innovative results of this thesis, three stand out. First, when firms seek to innovate, the adoption of CC depends on technological absorptive capacity. This capacity, as we have redefined it, is built by accumulating knowledge and processes for managing innovations and technologies, and allows new technologies to be integrated more easily into the company's ways of working. Second, digital skills are fundamental to the adoption decision: they insert CC into the continuity of earlier ICT and help manage the perceived complexity of the technology and the associated risks. Finally, the diffusion of CC depends largely on consumers' perception of this new technology. The more consumers trust the security of CC, the more competition among companies relaxes price competition. Consumers' perceptions of CC security affect price and quality competition among firms (the service providers) and thus determine the degree of diffusion; these perceptions play an important role in Cloud penetration.
|
796 |
Generic connection of test drive data sources to an automotive cloud system (Generische Anbindung von Testfahrtdatenquellen an ein Automotive-Cloud-System) / Mühlmann, Isabel, 15 August 2019 (has links)
Managing large volumes of test drive data from different vehicles is a challenge in the development of autonomous driver assistance systems. A cloud platform is a forward-looking way to provide a central point of contact that can be accessed from anywhere. This thesis conceptually develops such a platform, its interfaces, and the client programs required to connect the test drive data sources to the cloud platform, implements them in concrete examples, and evaluates these implementations.
|
797 |
Resource management in the cloud: An end-to-end approach / Ma, Kun, January 2020
Philosophiae Doctor - PhD / Cloud Computing enables users to gain ubiquitous, on-demand, and convenient access to a variety of shared computing resources, such as servers, networks, storage, applications, and more. As a business model, Cloud Computing has been warmly welcomed by users and has become one of the research hotspots in the field of information and communication technology, because it provides users with on-demand customization and pay-per-use resource acquisition.
|
798 |
JOB SCHEDULING FOR STREAMING APPLICATIONS IN HETEROGENEOUS DISTRIBUTED PROCESSING SYSTEMS / Al-Sinayyid, Ali, 01 December 2020 (has links)
The colossal amounts of data generated daily are increasing exponentially at a never-before-seen pace. A variety of applications, including stock trading, banking systems, healthcare, the Internet of Things (IoT), and social media networks, have created an unprecedented volume of real-time stream data, estimated to reach billions of terabytes in the near future. As a result, we are currently living in the so-called Big Data era and witnessing a transition to the so-called IoT era. Enterprises and organizations are tackling the challenge of interpreting the enormous amount of raw data streams to achieve an improved understanding of their data, and thus make efficient and well-informed (i.e., data-driven) decisions. Researchers have designed distributed data stream processing systems that can process data directly in near real-time. To extract valuable information from raw data streams, analysts need to create and implement data stream processing applications structured as directed acyclic graphs (DAGs). The infrastructure of distributed data stream processing systems, as well as the various requirements of stream applications, impose new challenges. Cluster heterogeneity in a distributed environment results in varying cluster resources for task execution and data transmission, which makes optimal scheduling an NP-complete problem. Scheduling streaming applications plays a key role in optimizing system performance, particularly in maximizing the frame rate, i.e., how many instances of data sets can be processed per unit of time. The scheduling algorithm must consider data locality, resource heterogeneity, and communication and computation latencies; the latency associated with the computation or transmission bottleneck must be minimized when mapping the application to the heterogeneous and distributed cluster resources. Recent work on task scheduling for distributed data stream processing systems has a number of limitations.
Most current schedulers are not designed to manage heterogeneous clusters. They also lack the ability to consider both task and machine characteristics in scheduling decisions. Furthermore, current default schedulers do not allow the user to control data-locality aspects in application deployment. In this thesis, we investigate the problem of scheduling streaming applications in a heterogeneous cluster environment and develop the maximum-throughput scheduler algorithm (MT-Scheduler) for streaming applications. The proposed algorithm uses a dynamic programming technique to efficiently map the application topology onto a heterogeneous distributed system based on computing and data-transfer requirements, while also taking into account the capacity of the underlying cluster resources. The proposed approach maximizes system throughput by identifying and minimizing the time incurred at the computing/transfer bottleneck. The MT-Scheduler supports scheduling applications structured as DAGs, such as Amazon Timestream, Google MillWheel, and Twitter Heron. We conducted experiments using three Storm microbenchmark topologies in both simulated and real Apache Storm environments. To evaluate performance, we compared the proposed MT-Scheduler with the simulated round-robin and the default Storm scheduler algorithms. The results indicate that the MT-Scheduler outperforms the default round-robin approach in terms of both average system latency and throughput.
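The bottleneck idea behind throughput maximization can be illustrated with a toy placement search: the throughput of a pipelined streaming application is the reciprocal of its slowest stage, so the scheduler should pick the mapping that minimizes that stage's time. This sketch is not the MT-Scheduler itself: it brute-forces a three-operator chain instead of running dynamic programming on a general DAG, and all demands, speeds, and transfer costs are invented numbers.

```python
from itertools import product

# Illustrative three-operator chain (e.g., spout -> transform -> sink) with
# per-operator compute demand, and heterogeneous machines with speed factors.
operators = {"source": 4.0, "transform": 9.0, "sink": 2.0}
machines = {"fast": 3.0, "medium": 2.0, "slow": 1.0}
# Fixed transfer cost charged when consecutive operators sit on different machines.
transfer_time = 1.5

def bottleneck(placement):
    """Max per-stage time: compute time, plus transfer when crossing machines."""
    names = list(operators)
    times = []
    for i, op in enumerate(names):
        t = operators[op] / machines[placement[i]]
        if i > 0 and placement[i] != placement[i - 1]:
            t += transfer_time
        times.append(t)
    return max(times)

def best_placement():
    """Exhaustive search; fine for tiny chains, standing in for the DP."""
    return min(product(machines, repeat=len(operators)), key=bottleneck)

plan = best_placement()
print(plan, round(1.0 / bottleneck(plan), 3))  # chosen placement and its throughput
```

Here the heavy `transform` operator pins the bottleneck, so the search gravitates toward placements that give it the fastest machine without paying avoidable transfer penalties.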
|
799 |
Metadata Management in Multi-Grids and Multi-Clouds / Espling, Daniel, January 2011 (has links)
Grid computing and cloud computing are two related paradigms used to access and use vast amounts of computational resources. The resources are often owned and managed by a third party, relieving users from the costs and burdens of acquiring and managing a considerably large infrastructure themselves. Commonly, the resources are either contributed by different stakeholders participating in shared projects (grids), or owned and managed by a single entity and made available to its users with charging based on actual resource consumption (clouds). Individual grid or cloud sites can form collaborations with other sites, giving each site access to more resources for executing tasks submitted by users. There are several different models of collaboration between sites, each suitable for different scenarios and each posing additional requirements on the underlying technologies. Metadata concerning the status and resource consumption of tasks is created during task execution on the infrastructure. This metadata is the primary input to many core management processes, e.g., as a base for accounting and billing, as input when prioritizing and placing incoming tasks, and as a base for managing the amount of resources allocated to different tasks. Focusing on the management and utilization of metadata, this thesis contributes to a better understanding of the requirements and challenges imposed by different collaboration models in both grids and clouds. The underlying design criteria and resulting architectures of several software systems are presented in detail. Each system addresses different challenges imposed by cross-site grid and cloud architectures: The LUTSfed approach provides a lean and optional mechanism for filtering and managing usage data between grid or cloud sites.
An accounting and billing system designed natively to support cross-site clouds demonstrates usage-data management despite unknown placement and dynamic task resource allocation. The FSGrid system enables fairshare job prioritization across different grid sites, mitigating the problems of heterogeneous scheduling software and local management policies. The results and experiences from these systems are both theoretical and practical, as full-scale implementations of each system have been developed and analyzed as part of this work. Early theoretical work on structure-based service management forms a foundation for future work on structure-aware service placement in cross-site clouds.
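A fairshare prioritization rule of the kind FSGrid implements can be sketched as follows. The formula and numbers are hypothetical (loosely modeled on Slurm-style fairshare factors), not FSGrid's actual policy: groups whose recent usage falls short of their entitled share get boosted priority, and over-consuming groups get demoted.

```python
# Hypothetical fairshare snapshot: each group's target share of the
# infrastructure and its recent usage, both as fractions of the whole.
groups = {
    "astro": {"target": 0.50, "usage": 0.70},
    "bio":   {"target": 0.30, "usage": 0.10},
    "chem":  {"target": 0.20, "usage": 0.20},
}

def fairshare_priority(target, usage, decay=1.0):
    """Under-served groups (usage < target) get priority > 1, over-served < 1."""
    # 2^((target - usage)/target) is one common shape for a fairshare factor
    # (cf. Slurm); it is shown here purely as an illustrative formula.
    return 2.0 ** (decay * (target - usage) / target)

def next_group(groups):
    """Pick the group whose pending job should run next."""
    return max(groups, key=lambda g: fairshare_priority(**groups[g]))

print(next_group(groups))  # the most under-served group wins
```

In a multi-site setting the `usage` figures would be aggregated from the per-site usage metadata discussed above, which is exactly why consistent cross-site usage-data management matters.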
|
800 |
Definition of a methodology to analyze the Product Portfolio Management : Example analysis of the cloud computing market PPM / Menéndez Torre, Carlos Alberto; Yadav, Rahul Kumar, January 2021 (has links)
Companies invest their resources in different products. That constellation of products, how they interact with each other, and how they are positioned defines the company's Product Portfolio, and it is critical to the company's financial success. Evaluating the Product Portfolio is essential to assess whether the company's resources are invested in the most efficient way or whether optimizations could improve the results. A key insight is that in order to optimize the Product Portfolio, a company must first evaluate and characterize it. This work aims to define a methodology for holistically evaluating a company's portfolio by analyzing different parameters, and applies the new methodology to an example market. We ran the evaluation in the cloud computing market, a market that is still growing but in which a few remarkable players account for more than 50% of total revenues. In analyzing the cloud computing market and its main suppliers, we apply the suggested methodology, summarize the main characteristics of the leading players' portfolios, and provide optimization recommendations that would improve the portfolios' quality and ultimately those companies' results.
|