81 |
Разработка приложения потоковой передачи непрерывно генерируемых данных для облачной инфраструктуры : магистерская диссертация / Development of an application for cloud-based data streaming infrastructure. Петров, С. Н., Petrov, S. N. January 2024 (has links)
Интернет вещей (IoT) становится все более популярным по мере того, как становятся известны ценные варианты использования. Однако ключевой проблемой является интеграция устройств и машин для обработки данных в режиме реального времени и в большом масштабе. Промышленные компании интегрируют машины и роботов для оптимизации своих бизнес-процессов и снижения затрат. Цель работы – разработка распределенной системы обмена сообщениями, которая подходит для современных приложений с интенсивным использованием данных и предоставляет облачную инфраструктуру потоковой передачи данных. / The Internet of Things is becoming increasingly popular as valuable use cases become known. However, the key issue is the integration of devices and machines for processing data in real time and on a large scale. Industrial companies integrate machines and robots to optimize their business processes and reduce costs. The aim of the work is to develop a distributed messaging system that is suitable for modern data-intensive applications and provides a cloud-based data streaming infrastructure.
|
82 |
Efficient techniques for large-scale Web data management / Techniques efficaces de gestion de données Web à grande échelle. Camacho Rodriguez, Jesus. 25 September 2014 (has links)
Le développement récent des offres commerciales autour du cloud computing a fortement influé sur la recherche et le développement des plateformes de distribution numérique. Les fournisseurs du cloud offrent une infrastructure de distribution extensible qui peut être utilisée pour le stockage et le traitement des données. En parallèle avec le développement des plates-formes de cloud computing, les modèles de programmation qui parallélisent de manière transparente l'exécution des tâches gourmandes en données sur des machines standards ont suscité un intérêt considérable, à commencer par le modèle MapReduce très connu aujourd'hui puis par d'autres frameworks plus récents et complets. Puisque ces modèles sont de plus en plus utilisés pour exprimer les tâches de traitement de données analytiques, la nécessité se fait ressentir dans l'utilisation des langages de haut niveau qui facilitent la charge de l'écriture des requêtes complexes pour ces systèmes. Cette thèse porte sur des modèles et techniques d'optimisation pour le traitement efficace de grandes masses de données du Web sur des infrastructures à grande échelle. Plus particulièrement, nous étudions la performance et le coût d'exploitation des services de cloud computing pour construire des entrepôts de données Web ainsi que la parallélisation et l'optimisation des langages de requêtes conçus sur mesure selon les données déclaratives du Web. Tout d'abord, nous présentons AMADA, une architecture d'entreposage de données Web à grande échelle dans les plateformes commerciales de cloud computing. AMADA opère comme logiciel en tant que service, permettant aux utilisateurs de télécharger, stocker et interroger de grands volumes de données Web. 
Sachant que les utilisateurs du cloud prennent en charge les coûts monétaires directement liés à leur consommation de ressources, notre objectif n'est pas seulement la minimisation du temps d'exécution des requêtes, mais aussi la minimisation des coûts financiers associés aux traitements de données. Plus précisément, nous étudions l'applicabilité de plusieurs stratégies d'indexation de contenus et nous montrons qu'elles permettent non seulement de réduire le temps d'exécution des requêtes mais aussi, et surtout, de diminuer les coûts monétaires liés à l'exploitation de l'entrepôt basé sur le cloud. Ensuite, nous étudions la parallélisation efficace de l'exécution de requêtes complexes sur des documents XML, mise en œuvre au sein de notre système PAXQuery. Nous fournissons de nouveaux algorithmes montrant comment traduire ces requêtes dans des plans exprimés par le modèle de programmation PACT (PArallelization ConTracts). Ces plans sont ensuite optimisés et exécutés en parallèle par le système Stratosphere. Nous démontrons l'efficacité et l'extensibilité de notre approche à travers des expérimentations sur des centaines de Go de données XML. Enfin, nous présentons une nouvelle approche pour l'identification et la réutilisation des sous-expressions communes qui surviennent dans les scripts Pig Latin. Notre algorithme, nommé PigReuse, agit sur les représentations algébriques des scripts Pig Latin, identifie les possibilités de fusion des sous-expressions, sélectionne les meilleures à exécuter en fonction du coût et fusionne d'autres expressions équivalentes pour partager leurs résultats. Nous apportons plusieurs extensions à l'algorithme afin d'améliorer sa performance. Nos résultats expérimentaux démontrent l'efficacité et la rapidité de nos algorithmes basés sur la réutilisation et des stratégies d'optimisation. / The recent development of commercial cloud computing environments has strongly impacted research and development in distributed software platforms. 
Cloud providers offer a distributed, shared-nothing infrastructure that may be used for data storage and processing. In parallel with the development of cloud platforms, programming models that seamlessly parallelize the execution of data-intensive tasks over large clusters of commodity machines have received significant attention, starting with the MapReduce model very well known by now, and continuing through other novel and more expressive frameworks. As these models are increasingly used to express analytical-style data processing tasks, the need for higher-level languages that ease the burden of writing complex queries for these systems arises. This thesis investigates the efficient management of Web data on large-scale infrastructures. In particular, we study the performance and cost of exploiting cloud services to build Web data warehouses, and the parallelization and optimization of query languages that are tailored towards querying Web data declaratively. First, we present AMADA, an architecture for warehousing large-scale Web data in commercial cloud platforms. AMADA operates in a Software as a Service (SaaS) approach, allowing users to upload, store, and query large volumes of Web data. Since cloud users bear monetary costs directly connected to their consumption of resources, our focus is not only on query performance from an execution time perspective, but also on the monetary costs associated with this processing. In particular, we study the applicability of several content indexing strategies, and show that they lead not only to reducing query evaluation time, but also, importantly, to reducing the monetary costs associated with the exploitation of the cloud-based warehouse. Second, we consider the efficient parallelization of the execution of complex queries over XML documents, implemented within our system PAXQuery. 
We provide novel algorithms showing how to translate such queries into plans expressed in the PArallelization ConTracts (PACT) programming model. These plans are then optimized and executed in parallel by the Stratosphere system. We demonstrate the efficiency and scalability of our approach through experiments on hundreds of GB of XML data. Finally, we present a novel approach for identifying and reusing common subexpressions occurring in Pig Latin scripts. In particular, we lay the foundation of our reuse-based algorithms by formalizing the semantics of the Pig Latin query language with extended nested relational algebra for bags. Our algorithm, named PigReuse, operates on the algebraic representations of Pig Latin scripts, identifies subexpression merging opportunities, selects the best ones to execute based on a cost function, and merges other equivalent expressions to share their results. We bring several extensions to the algorithm to improve its performance. Our experimental results demonstrate the efficiency and effectiveness of our reuse-based algorithms and optimization strategies.
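The subexpression-reuse idea described in this abstract can be sketched in a few lines: represent each script's algebraic plan as a tree, enumerate all subtrees, and flag those occurring in more than one plan as merge candidates. This is an illustrative reconstruction under assumed representations (nested tuples, made-up operator names), not PigReuse's actual algorithm or cost model.

```python
# Illustrative sketch: finding common subexpressions across dataflow plans
# by enumerating subtrees. Operator names and plan encoding are hypothetical.

def find_common_subexpressions(plans):
    """Map each subexpression to the list of plan ids that contain it."""
    seen = {}
    for plan_id, plan in plans.items():
        for subexpr in enumerate_subexpressions(plan):
            seen.setdefault(subexpr, []).append(plan_id)
    # Only subexpressions shared by at least two plans are merge candidates.
    return {expr: ids for expr, ids in seen.items() if len(ids) > 1}

def enumerate_subexpressions(plan):
    """Yield every subtree of a plan given as nested tuples, e.g.
    ('JOIN', ('LOAD', 'a.csv'), ('FILTER', ('LOAD', 'b.csv'), 'x>0'))."""
    yield plan
    for child in plan[1:]:
        if isinstance(child, tuple):
            yield from enumerate_subexpressions(child)

plans = {
    "script1": ("JOIN", ("LOAD", "a.csv"), ("FILTER", ("LOAD", "b.csv"), "x>0")),
    "script2": ("GROUP", ("FILTER", ("LOAD", "b.csv"), "x>0")),
}
shared = find_common_subexpressions(plans)
# The FILTER-over-LOAD subtree appears in both scripts, so it is a candidate
# to be computed once, with its result shared by both scripts.
```

A real optimizer would additionally normalize equivalent-but-not-identical expressions and pick which candidates to materialize using a cost function, as the abstract describes.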
|
83 |
Cloud Integrator: uma plataforma para composição de serviços em ambientes de computação em nuvem / Cloud Integrator: a platform for composition of services in cloud computing environments. Cavalcante, Everton Ranielly de Sousa. 31 January 2013 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all the application requirements. To fulfill such requirements, it may be necessary to use, instead of a single service, a composition of services that aggregates services provided by different cloud platforms. In order to generate aggregated value for the user, this composition of services provided by several Cloud Computing platforms requires a solution in terms of platform integration, which encompasses the manipulation of a wide number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services that take into account metadata about the services, such as QoS (Quality of Service), prices, etc. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. In this work, through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of the performed service composition, selection and adaptation processes, as well as the potential of using this middleware in heterogeneous computational cloud scenarios. / Com o avanço do paradigma de Computação em Nuvem, um único serviço oferecido por uma plataforma de nuvem pode não ser suficiente para satisfazer todos os requisitos da aplicação. Para satisfazer tais requisitos, ao invés de um único serviço, pode ser necessária uma composição que agrega serviços providos por diferentes plataformas de nuvem. A fim de gerar valor agregado para o usuário, essa composição de serviços providos por diferentes plataformas de Computação em Nuvem requer uma solução em termos de integração de plataformas, envolvendo a manipulação de um vasto número de APIs e protocolos não interoperáveis de diferentes provedores. Nesse cenário, este trabalho apresenta o Cloud Integrator, uma plataforma de middleware para composição de serviços providos por diferentes plataformas de Computação em Nuvem. Além de prover um ambiente que facilita o desenvolvimento e a execução de aplicações que utilizam tais serviços, o Cloud Integrator funciona como um mediador, provendo mecanismos para a construção de aplicações através da composição e seleção de serviços Web semânticos que consideram metadados acerca dos serviços, como QoS (Quality of Service), preços etc. Adicionalmente, a plataforma de middleware proposta provê um mecanismo de adaptação que pode ser disparado em caso de falha ou degradação da qualidade de um ou mais serviços utilizados pela aplicação em questão, a fim de garantir sua qualidade e disponibilidade. Neste trabalho, através de um estudo de caso que consiste de uma aplicação que utiliza serviços providos por diferentes plataformas de nuvem, o Cloud Integrator é avaliado em termos da eficiência dos processos de composição de serviços, seleção e adaptação realizados, bem como da potencialidade do seu uso em cenários de nuvens computacionais heterogêneas.
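The QoS-aware selection mechanism this abstract describes can be illustrated with a minimal sketch: given candidate services annotated with QoS metadata and price, pick the one with the best weighted score. The metadata fields, weights, and service names below are hypothetical assumptions for illustration, not Cloud Integrator's actual model.

```python
# Hedged sketch of QoS-aware service selection: score each candidate from
# its (assumed) metadata and choose the best. Fields and weights are made up.

def select_service(candidates, weights):
    """Pick the candidate with the best weighted score.
    Higher availability is better; lower latency and price are better."""
    def score(svc):
        return (weights["availability"] * svc["availability"]
                - weights["latency"] * svc["latency_ms"]
                - weights["price"] * svc["price_per_call"])
    return max(candidates, key=score)

candidates = [
    {"name": "storage-A", "availability": 0.999, "latency_ms": 120, "price_per_call": 0.002},
    {"name": "storage-B", "availability": 0.990, "latency_ms": 40,  "price_per_call": 0.004},
]
weights = {"availability": 100.0, "latency": 0.1, "price": 1000.0}
best = select_service(candidates, weights)
# With these weights, storage-B's much lower latency outweighs its slightly
# lower availability and higher price.
```

The adaptation mechanism mentioned in the abstract would amount to re-running such a selection when a monitored service fails or its QoS degrades.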
|
84 |
Scalable algorithms for cloud-based Semantic Web data management / Algorithmes passant à l'échelle pour la gestion de données du Web sémantique sur les plateformes cloud. Zampetakis, Stamatis. 21 September 2015 (has links)
Afin de construire des systèmes intelligents, où les machines sont capables de raisonner exactement comme les humains, les données avec sémantique sont une exigence majeure. Ce besoin a conduit à l'apparition du Web sémantique, qui propose des technologies standards pour représenter et interroger les données avec sémantique. RDF est le modèle répandu destiné à décrire de façon formelle les ressources Web, et SPARQL est le langage de requête qui permet de rechercher, d'ajouter, de modifier ou de supprimer des données RDF. Être capable de stocker et de rechercher des données avec sémantique a engendré le développement de nombreux systèmes de gestion des données RDF. L'évolution rapide du Web sémantique a provoqué le passage de systèmes de gestion des données centralisées à ceux distribués. Les premiers systèmes étaient fondés sur les architectures pair-à-pair et client-serveur, alors que récemment l'attention se porte sur le cloud computing. Les environnements de cloud computing ont fortement impacté la recherche et développement dans les systèmes distribués. Les fournisseurs de cloud offrent des infrastructures distribuées autonomes pouvant être utilisées pour le stockage et le traitement des données. Les principales caractéristiques du cloud computing impliquent l'évolutivité, la tolérance aux pannes et l'allocation élastique des ressources informatiques et de stockage en fonction des besoins des utilisateurs. Cette thèse étudie la conception et la mise en œuvre d'algorithmes et de systèmes passant à l'échelle pour la gestion des données du Web sémantique sur des plateformes cloud. 
Plus particulièrement, nous étudions la performance et le coût d'exploitation des services de cloud computing pour construire des entrepôts de données du Web sémantique, ainsi que l'optimisation de requêtes SPARQL pour les cadres massivement parallèles. Tout d'abord, nous introduisons les concepts de base concernant le Web sémantique et les principaux composants des systèmes fondés sur le cloud. En outre, nous présentons un aperçu des systèmes de gestion des données RDF (centralisés et distribués), en mettant l'accent sur les concepts critiques de stockage, d'indexation, d'optimisation des requêtes et d'infrastructure. Ensuite, nous présentons AMADA, une architecture de gestion de données RDF utilisant les infrastructures de cloud public. Nous adoptons le modèle de logiciel en tant que service (software as a service - SaaS), où la plateforme réside dans le cloud et des APIs appropriées sont mises à disposition des utilisateurs, afin qu'ils soient capables de stocker et de récupérer des données RDF. Nous explorons diverses stratégies de stockage et d'interrogation, et nous étudions leurs avantages et inconvénients au regard de la performance et du coût monétaire, qui est une nouvelle dimension importante à considérer dans les services de cloud public. Enfin, nous présentons CliqueSquare, un système distribué de gestion des données RDF basé sur Hadoop. CliqueSquare intègre un nouvel algorithme d'optimisation qui est capable de produire des plans massivement parallèles pour des requêtes SPARQL. Nous présentons une famille d'algorithmes d'optimisation, s'appuyant sur les équijointures n-aires pour générer des plans plats, et nous comparons leur capacité à trouver les plans les plus plats possibles. Inspirés par des techniques de partitionnement et d'indexation existantes, nous présentons une stratégie de stockage générique appropriée au stockage de données RDF dans HDFS (Hadoop Distributed File System). 
Nos résultats expérimentaux valident l'effectivité et l'efficacité de l'algorithme d'optimisation démontrant également la performance globale du système. / In order to build smart systems, where machines are able to reason exactly like humans, data with semantics is a major requirement. This need led to the advent of the Semantic Web, proposing standard ways for representing and querying data with semantics. RDF is the prevalent data model used to describe web resources, and SPARQL is the query language that allows expressing queries over RDF data. Being able to store and query data with semantics triggered the development of many RDF data management systems. The rapid evolution of the Semantic Web provoked the shift from centralized data management systems to distributed ones. The first systems to appear relied on P2P and client-server architectures, while recently the focus moved to cloud computing. Cloud computing environments have strongly impacted research and development in distributed software platforms. Cloud providers offer distributed, shared-nothing infrastructures that may be used for data storage and processing. The main features of cloud computing involve scalability, fault-tolerance, and elastic allocation of computing and storage resources following the needs of the users. This thesis investigates the design and implementation of scalable algorithms and systems for cloud-based Semantic Web data management. In particular, we study the performance and cost of exploiting commercial cloud infrastructures to build Semantic Web data repositories, and the optimization of SPARQL queries for massively parallel frameworks. First, we introduce the basic concepts around Semantic Web and the main components and frameworks interacting in massively parallel cloud-based systems. 
In addition, we provide an extended overview of existing RDF data management systems in the centralized and distributed settings, emphasizing the critical concepts of storage, indexing, query optimization, and infrastructure. Second, we present AMADA, an architecture for RDF data management using public cloud infrastructures. We follow the Software as a Service (SaaS) model, where the complete platform is running in the cloud and appropriate APIs are provided to the end-users for storing and retrieving RDF data. We explore various storage and querying strategies, revealing pros and cons with respect to performance and also to monetary cost, which is an important new dimension to consider in public cloud services. Finally, we present CliqueSquare, a distributed RDF data management system built on top of Hadoop, incorporating a novel optimization algorithm that is able to produce massively parallel plans for SPARQL queries. We present a family of optimization algorithms, relying on n-ary (star) equality joins to build flat plans, and compare their ability to find the flattest plans possible. Inspired by existing partitioning and indexing techniques, we present a generic storage strategy suitable for storing RDF data in HDFS (Hadoop's Distributed File System). Our experimental results validate the efficiency and effectiveness of the optimization algorithm, also demonstrating the overall performance of the system.
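The flat-plan idea behind the n-ary (star) equality joins mentioned above can be illustrated with a small sketch: triple patterns of a SPARQL query that share a variable are grouped so that each group can be evaluated as a single n-ary join instead of a cascade of binary joins. This is a simplified illustration of the principle, not CliqueSquare's actual clique-decomposition algorithm.

```python
# Illustrative sketch: grouping SPARQL triple patterns by shared variable.
# Variable syntax follows SPARQL ('?x'); the query below is a toy example.

def star_groups(triple_patterns):
    """Group triple patterns by each variable they contain."""
    groups = {}
    for tp in triple_patterns:
        for term in tp:
            if term.startswith("?"):
                groups.setdefault(term, []).append(tp)
    # A variable shared by >= 2 patterns induces one n-ary join node.
    return {var: tps for var, tps in groups.items() if len(tps) > 1}

query = [
    ("?p", "rdf:type", "foaf:Person"),
    ("?p", "foaf:name", "?n"),
    ("?p", "foaf:knows", "?q"),
]
joins = star_groups(query)
# All three patterns share '?p', so they form one star that a flat plan can
# evaluate as a single 3-ary equality join on '?p'.
```

Keeping joins n-ary rather than binary reduces plan height, which is what the abstract means by "flat" massively parallel plans.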
|
85 |
Digital curation of records in the cloud to support e-government services in South Africa. Shibambu, Badimuni Amos. 05 January 2021 (has links)
Many scholars lament the poor infrastructure for managing and preserving digital records
within the public sector in South Africa to support electronic government (e-government).
For example, in South Africa, the national archives' repository and its
subsidiary provincial archives do not have infrastructure to ingest digital records into
archival custody. As a result, digital records are left to the creating agencies to manage
and preserve. The problem is compounded by the fact that very few public sector
organisations in South Africa have procured systems to manage digital records.
Therefore, the question is: how are digital records managed and stored in these
organisations to support e-government? Do public organisations entrust their records to
the cloud as alternative storage, given that both physical and virtual storage
are a problem? If they do, how do they ensure accessibility, governance, security and
long-term preservation of records in the cloud? Utilising the Digital Curation Centre
(DCC) Lifecycle Model as a guiding framework, this qualitative study sought to
explore digital curation of records in the cloud to support e-government services in
South Africa with the view to propose a framework that would guide the public sector
to migrate records to the cloud storage. Semi-structured interviews were employed to
collect data from the purposively selected Chief Information Officers in the national
government departments that have implemented some of the electronic services such
as the Department of Arts and Culture, Department of Home Affairs, Department of
Higher Education and Training and the Department of Basic Education.
Furthermore, the National Archives and Records Services of South Africa was also
chosen as it is charged with the statutory regulatory role of records management in
governmental bodies. So is the State Information Technology Agency (SITA), a public
sector ICT company established in 1999 to consolidate and coordinate the state’s
information technology resources in order to achieve cost savings through scale,
increase delivery capabilities and enhance interoperability. Interview data were
augmented through document analysis of legislation and policies pertaining to data
storage. Data were analysed thematically and interpreted in accordance with the
objectives of the study. The key finding suggests that although public servants
informally and unconsciously put some records in the cloud, government departments in South Africa are sceptical about entrusting their records to the cloud for a number of
reasons, such as the lack of a policy and legislative framework, lack of trust in cloud
storage, jurisdiction, legal implications, privacy, ownership and security risks. This
study recommends that given the evolution of technology, the government should
regulate cloud storage through policy and legislative promulgation, as well as
developing a government-owned cloud managed through SITA in order for all
government departments to use it. This study suggests a framework to migrate paperbased
records to cloud storage that is controlled by the government. / Information Science / D.Lit. et Phil. (Information Science)
|
86 |
SaaS-baserade affärsmodeller : Utvecklingen av en SaaS-baserad affärsmodell och dess fundamentala komponenter / SaaS Business Models : The Development of a SaaS Business Model and its Fundamental Components. Kamil, Maryam; Nordenback, Isak. January 2023 (has links)
Digital teknik används alltmer av företag för att utveckla sina erbjudanden, och framstegen inom området har haft en betydande inverkan på att företag omprövar och förändrar sina affärsmodeller. En viktig teknologi inom detta område är molntjänster (eng. Cloud Computing), som möjliggör lagring och tillhandahållande av information som en tjänst via internet. Inom molntjänster finns olika tjänstemodeller, och en av dem är Software-as-a-Service (SaaS). SaaS utgör en tredjedel av den totala mjukvarumarknaden och förväntas fortsätta växa inom industrin för internetbaserad mjukvara och tjänster. SaaS har potentialen att främja transformationen av affärsmodeller genom att introducera en ny logik för att skapa, leverera och fånga värde. För att lyckas med användningen av SaaS behöver tjänsteleverantören implementera och införa nya affärsmodeller. Trots att många företag strävar efter att utveckla digitala tjänster, upplever många företag svårigheter med att skapa verkligt kundvärde med sina digitala tjänster. Dessutom möter företagen utmaningar när det gäller att generera en lönsam intäktsström och bedöma vilken intäktsmodell som är lämpligast för tjänsten. Med grund i detta formulerades studiens syfte: att undersöka hur ett företag kan styra affärsmodellens utveckling mot ett SaaS-baserat erbjudande. En litteraturstudie genomfördes inom områdena affärsmodeller och affärsmodellsinnovation. I denna studie betraktas affärsmodellen som en modell bestående av tre komponenter: skapa, fånga och leverera värde, där varje komponent byggs upp av aktiviteter. Baserat på dessa teoretiska områden utvecklades en analysmodell. I litteraturstudien presenteras även tjänstefiering och digital tjänstefiering för att ge förståelse för drivkraften och relevansen av molntjänster. Studien utfördes som en flerfallsstudie av åtta företag som erbjuder en SaaS-lösning. Dessa företag är verksamma inom olika branscher och befinner sig på olika stadier i utvecklingen av sitt SaaS-erbjudande. 
Empirisk data samlades in genom 15 semistrukturerade intervjuer, där respondenterna var personer med relevant kompetens inom studieområdet. Studien visade att kunderna spelar en central roll i utvecklingen av affärsmodellen för en SaaS-baserad lösning. Kunden har särskild betydelse i de tidiga faserna av affärsmodellens utveckling, och deras påverkan på styrningen av affärsmodellens utveckling är av stor vikt. När det gäller affärsmodellens komponenter framkom det att komponenten för att fånga värde spelar en betydande roll i styrningen av affärsmodellens utveckling. Vidare har studien visat att mognadsgraden och vidareutvecklingen av affärsmodellens komponenter sker i en särskild ordning (skapa värde→leverera värde→fånga värde), där företagets mognadsgrad styr vilken affärsmodellskomponent deras utvecklingsaktiviteter fokuseras på. Studien visade också att utvecklingsprocessen mot en SaaS-baserad affärsmodell är en iterativ process, där det är svårt att undvika behovet av att aktivt iterera affärsmodellen mot kunder för att fortsätta utveckla den. / Digital technology is increasingly being used by companies to develop their offerings, and advances in the field have had a significant impact on businesses reconsidering and changing their business models. One important technology in this area is cloud computing, which enables the storage and provision of information as a service over the internet to customers. Within cloud services, there are different service models, and one of them is Software-as-a-Service (SaaS). SaaS accounts for one-third of the total software market and is expected to continue growing in the industry of internet-based software and services. SaaS has the potential to promote the transformation of business models by introducing a new logic for creating, delivering, and capturing value. To succeed in the use of SaaS, the service provider needs to implement and adopt new business models. 
Despite many companies striving to develop digital services, many of them struggle to create real customer value with their digital services. Additionally, companies face challenges in generating a profitable revenue stream and choosing the appropriate revenue model for their service. Based on this, the following purpose of the study was formulated: to investigate how a company can steer the development of its business model towards a SaaS-based offering. A literature review was conducted in the areas of business models and business model innovation. In this study, the business model is considered as a model consisting of three components: creating, capturing, and delivering value, where these components consist of activities. Based on these theoretical areas, an analytical model was developed. The literature review also presents servitization and digital servitization to provide an understanding of the driving force and the relevance of cloud services. The study was conducted as a multiple case study of eight companies offering a SaaS solution. These companies operate in different industries and are at different stages in the development of their SaaS offering. Empirical data was collected through 15 semi-structured interviews, where the respondents were individuals with relevant expertise in the study area. The study showed that customers play a central role in the development of the business model for a SaaS-based solution. Customers have particular significance in the early stages of business model development, and their influence on the development of the business model is of great importance. Regarding the components of the business model, it emerged that the value capture component plays a significant role in guiding the development of the business model. 
Furthermore, the study has shown that the maturity and further development of the business model components occur in a specific order (value creation→value delivery→value capture), where the company's maturity level determines which business model component their development activities focus on. The study also demonstrated that the development of a SaaS-based business model is an iterative process, where companies need to actively iterate their business model with customers to continue its development.
|
87 |
En jämförelse i kostnad och prestanda för molnbaserad datalagring / A comparison in cost and performance for cloud-based data storage. Burgess, Olivia; Oucif, Sara. January 2024 (has links)
I takt med att datakvantiteter växer och kraven på skalbarhet och tillgänglighet inom molntjänster växer, framhävs behovet av undersökningar kring dess prestanda och kostnadseffektivitet. Dessa analyser är avgörande för att optimera tjänster och bistå företag med värdefulla rekommendationer för att fatta välgrundade beslut om datalagring i molnet. Detta examensarbete undersöker kostnad samt prestanda hos relationella och icke-relationella datalagringslösningar implementerade på Microsoft Azure och Google Cloud Platform. Verktyget Hyperfine används för att mäta latens och tjänsternas kostnadseffektivitet beräknas baserat på detta resultat samt dess beräknade månadskostnader. Studiens resultat indikerar att för de utvärderade relationella databastjänsterna uppvisar Azure SQL Database initialt en låg latens som sedan ökar proportionellt med datamängden, medan Google Cloud SQL visar en något högre latens vid lägre datamängder men mer konstant latens vid högre datamängder. Azure SQL visar sig vara mer kostnadseffektiv i förhållande till Google Cloud SQL, vilket gör den till ett mer fördelaktigt alternativ för företag som eftersträvar hög prestanda till lägre kostnader. Vid jämförelse mellan de två icke-relationella databastjänsterna Azure Cosmos DB och Google Cloud Datastore uppvisar Azure Cosmos DB genomgående jämförelsevis lägre latens och överlägsen kostnadseffektivitet. Detta gör Azure Cosmos DB till en fördelaktig lösning för företag som prioriterar ekonomisk effektivitet i sin databashantering. / As data volumes grow and the demands for scalability and availability within cloud services increase, the need for studies on their performance and cost-effectiveness is emphasized. These analyses are crucial for optimizing services and providing businesses with valuable recommendations to make well-grounded decisions about cloud data storage. 
This thesis examines cost and performance for relational and non-relational data storage solutions implemented on Microsoft Azure and Google Cloud Platform. The tool Hyperfine is used to evaluate latency, and the cloud services' cost efficiency is calculated from this result together with their monthly cost. The study's results regarding relational data storage indicate that Azure SQL Database initially exhibits low latency, which then increases proportionally with the data volume, while Google Cloud SQL shows slightly higher latency at smaller data volumes but more consistent latency with more data. Azure SQL Database is more cost-effective, making it a more favorable option than Google Cloud SQL for companies seeking high performance at lower costs. Regarding the evaluated services for non-relational data storage, Azure Cosmos DB consistently demonstrates lower latency and superior cost efficiency compared to Google Cloud Datastore, making it the preferred solution for companies prioritizing economic efficiency in their database management.
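The cost-efficiency computation described here (combining a measured latency with a monthly price into one comparable figure) can be sketched as follows. The metric (a throughput proxy per unit of cost) and all the numbers are illustrative assumptions, not the thesis's actual data or formula.

```python
# Hedged sketch: turning measured latency and monthly cost into a single
# cost-effectiveness figure. Metric and inputs are illustrative only.

def cost_effectiveness(mean_latency_s, monthly_cost):
    """Operations-per-second proxy divided by monthly cost:
    higher means more performance per unit of money."""
    ops_per_second = 1.0 / mean_latency_s
    return ops_per_second / monthly_cost

service_a = cost_effectiveness(mean_latency_s=0.050, monthly_cost=200.0)
service_b = cost_effectiveness(mean_latency_s=0.040, monthly_cost=400.0)
# Service B has lower latency, but at double the monthly price it delivers
# less performance per unit of cost than service A.
```

This is the kind of trade-off the thesis highlights: the lowest-latency service is not automatically the most cost-effective one.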
|
88 |
Systém pro automatickou správu serverů / System for Automated Server AdministrationPavelka, Martin January 2019 (has links)
The goal of this diploma thesis is to design the user interface and implement the information system as a web application. Using a custom library, the system communicates with a GraphQL server that manages the client data. The thesis describes possible solutions for automating physical servers. The application provides an API for managing virtual servers, so automation requires no human interaction. Connections to the virtualization technologies are handled through their web APIs or through custom scripts run in the virtual system's terminal. A monitoring system is built on top of the project components. The thesis also describes continuous integration using GitLab tools. Recurring configuration tasks are scheduled using the Unix cron system.
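The abstract mentions running configuration tasks via the Unix cron system. As a minimal sketch of the five-field cron schedule format, with a hypothetical script path and schedule (neither is taken from the thesis):

```python
# Hedged sketch: composing a crontab entry for a recurring configuration
# task. The script path and the daily 03:30 schedule are hypothetical.

def crontab_line(minute: str, hour: str, command: str) -> str:
    """Return a crontab entry: minute hour day-of-month month day-of-week command."""
    return f"{minute} {hour} * * * {command}"

# Run the (hypothetical) configuration script every day at 03:30.
entry = crontab_line("30", "3", "/opt/server-admin/run_configuration.sh")
print(entry)  # 30 3 * * * /opt/server-admin/run_configuration.sh
```

Such a line would typically be installed with `crontab -e` on the host that runs the configuration task.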
|
89 |
Jämförelse av cache-tjänster: WSUS Och LanCache / Comparison of cache services: WSUS and LanCacheShammaa, Mohammad Hamdi, Aldrea, Sumaia January 2023 (has links)
Inom nätverkstekniken och datakommunikationen råder idag en tro på tekniken nätverkscache som kan spara data för att senare kunna hämta hem det snabbare. Tekniken har genom åren visat att den effektivt kan skicka den önskade data till sina klienter. Det finns flera cache-tjänster som använder tekniken för Windows-uppdateringar. Bland dessa finns Windows Server Update Services (WSUS) och LanCache. På uppdrag från företaget TNS Gaming AB jämförs dessa tjänster med varandra under examensarbetet. Nätverkscache är ett intressant forskningsområde för framtida kommunikationssystem och nätverk tack vare sina fördelar. Likaså är uppgiften om att jämföra cache-tjänsterna WSUS och LanCache intressant i och med det öppnar upp insikt om vilken tjänst är bättre för företaget eller andra intressenter. Både forskningsområdet och uppgiften är viktiga och intressanta när användare vill effektivisera användningen av internetanslutningen och bespara nätverksresurser. Därmed kan tekniken minska nedladdningstiden. Till det här arbetet besvaras frågor om vilken nätverksprestanda, resursanvändning och administrationstid respektive cache-tjänst har, och vilken cache-tjänst som lämpar sig bättre för företagets behov. I arbetet genomförs experiment, som omfattar tre huvudmättningar, och följs av en enfallstudie. Syftet med arbetet är att med hjälp av experimentets mätningar få en jämförelse mellan WSUS och LanCache. Resultatet av arbetet utgör sedan ett underlag för det framtida lösningsvalet. Resultaten består av två delar. Den första visar att båda cache-tjänsterna bidrar till kortare nedladdningstider. Den andra är att LanCache är bättre än WSUS när det gäller nätverksprestanda och resursanvändning, samt mindre administrationstid jämfört med WSUS. Givet resultat dras slutsatsen att LanCache är cache-tjänsten som är mest lämpad i det här fallet. 
/ In the field of network technology and data communication, there is today confidence in network caching, a technique that stores data so it can later be retrieved more quickly. Over the years, this technology has proven its ability to deliver the requested data efficiently to its clients. Several caching services use this technique for Windows updates, among them Windows Server Update Services (WSUS) and LanCache. On behalf of the company TNS Gaming AB, these services are compared with each other in this thesis. Network caching is an interesting area of research for future communication systems and networks due to its benefits. Likewise, the task of comparing the cache services WSUS and LanCache is worthwhile, as it provides insight into which service is better suited for the company and other stakeholders. Both the research area and the task are important when users seek to use their internet connection more efficiently and conserve network resources; the technique can thereby reduce download times. This work answers questions about the network performance, resource usage, and administration time of each cache service, and about which cache service is better suited to the company's needs. The work involves experiments comprising three main measurements, followed by a single-case study. The purpose of the work is to compare WSUS and LanCache using the measurements from the experiments; the outcome then forms a basis for the future choice of solution. The results consist of two parts. The first shows that both cache services contribute to shorter download times. The second is that LanCache outperforms WSUS in terms of network performance and resource usage, and also requires less administration time. Given these results, the conclusion is drawn that LanCache is the more suitable caching service in this case.
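The abstract's central claim is that a cache shortens download times by serving repeated content from the local network instead of the WAN. A hedged back-of-the-envelope model of that effect, with illustrative hit-ratio and bandwidth figures that are assumptions, not measurements from the thesis experiments:

```python
# Hedged sketch: a simple model of how a network cache (e.g. WSUS or
# LanCache) shortens total download time. All figures below are
# illustrative assumptions, not results from the thesis.

def total_download_time(size_gb: float, hit_ratio: float,
                        wan_mbps: float, lan_mbps: float) -> float:
    """Seconds to fetch `size_gb` when `hit_ratio` of it is served from cache."""
    size_mbit = size_gb * 8000.0                        # GB -> megabits (decimal)
    cached = size_mbit * hit_ratio / lan_mbps           # served from the LAN cache
    uncached = size_mbit * (1 - hit_ratio) / wan_mbps   # fetched over the WAN
    return cached + uncached

# 10 GB of updates, 100 Mbit/s WAN, 1 Gbit/s LAN:
no_cache = total_download_time(10.0, 0.0, wan_mbps=100.0, lan_mbps=1000.0)
with_cache = total_download_time(10.0, 0.9, wan_mbps=100.0, lan_mbps=1000.0)
print(no_cache, with_cache)  # 800.0 152.0 -- the cache cuts time substantially
```

The model ignores protocol overhead and cache warm-up, but it illustrates why the first result part (shorter download times with either cache service) is expected whenever the hit ratio is non-trivial.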
|