1 |
InCloud - Towards Infotainment Services For VANETs. Guo, Haolin. January 2014 (has links)
In order to realize effective infotainment systems for vehicles, we need context-aware applications that use the latest (live) information for an enhanced user experience. Such up-to-date information is now abundantly available on the Internet, thanks to the explosive growth of Web 2.0. It used to be difficult and expensive for vehicles to connect to the Internet, but recent advances in vehicular ad-hoc networks (VANETs) have enabled vehicles to connect through roadside infrastructure at little to no additional cost. However, there are several problems with directly using Internet data in a vehicle: (1) Internet data sources have their own interfaces, which keep changing over time and thus require frequent application updates; (2) information provided by multiple data sources needs to be preprocessed and fused before use; and (3) vehicles employ proprietary platforms for infotainment systems, which makes application updates even more cumbersome. Furthermore, accessing multiple Internet sources may cause unnecessary overhead on the VANET bandwidth. In this thesis, we propose a cloud-based middleware framework for vehicular infotainment application development. The proposed framework follows a service-oriented architecture in which data filtering and fusion functionalities are delegated to the cloud. Data filtering and fusion reduce the data flow over the VANET. Furthermore, because most of the processing is done on the cloud, the client becomes lightweight and loosely coupled with Internet resources and the underlying platforms. We also propose a class-based fusion method to combine information from multiple sources. The efficacy of the proposed framework is validated by developing an enhanced navigation (eDirection) application for the vehicle, as well as three infotainment applications: context-aware music, news, and weather.
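The abstract does not detail the class-based fusion method, but a minimal sketch of one plausible reading, grouping records from multiple Internet sources by a shared class label and merging their fields, could look like this (all names, fields, and the merge policy are illustrative assumptions, not the thesis's implementation):

```python
# Hypothetical sketch of class-based fusion: records from several Internet
# sources are grouped by a shared class label (e.g. "weather", "news") and
# merged into one record per class.
from collections import defaultdict

def fuse_by_class(records):
    """records: iterable of dicts, each with a 'class' key and data fields."""
    grouped = defaultdict(list)
    for rec in records:
        grouped[rec["class"]].append(rec)

    fused = {}
    for cls, recs in grouped.items():
        merged = {}
        for rec in recs:
            for key, value in rec.items():
                if key != "class":
                    merged.setdefault(key, value)  # first source wins
        fused[cls] = merged
    return fused

sources = [
    {"class": "weather", "temp_c": 21, "provider": "A"},
    {"class": "weather", "humidity": 0.4, "provider": "B"},
    {"class": "news", "headline": "Road closure on Main St", "provider": "C"},
]
print(fuse_by_class(sources))
```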
|
2 |
Evaluation of a backend for computer games using a cloud service. Lundberg, Malin. January 2017 (has links)
Cloud services are popular for hosting applications because they offer simplicity and cost efficiency. There are many different providers offering many different services, which can make it hard to find the one most suitable for you. To make informed decisions, you need to evaluate different services. This project evaluates a cloud service, called Amazon Lambda, from one of the biggest cloud service providers. Amazon Lambda is a simple service which runs one function in response to an event. In this project it is evaluated on suitability, performance and cost. To evaluate suitability, a certain kind of application, games, was selected. The game industry is innovative and puts high requirements on performance. A few simple Lambda functions were implemented and integrated into a prototype game. Calculations were made to determine the cost of hosting such a game on Amazon Lambda, and a few tests were implemented and run to further evaluate the performance.
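For context, Lambda's programming model is indeed a single handler invoked once per event. A minimal Python handler for a game backend of the kind evaluated here might look like the following; the payload shape (player_id, score) and the persistence step are assumptions, not taken from the thesis:

```python
import json

# Minimal AWS Lambda handler: the service invokes this function per event.
def lambda_handler(event, context):
    body = json.loads(event.get("body", "{}"))
    player = body.get("player_id", "anonymous")
    score = int(body.get("score", 0))

    # A real backend would persist the score (e.g. to a database);
    # here we just echo a result back to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"player": player, "accepted": score >= 0}),
    }
```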
|
3 |
A comparison of image and object level annotation performance of image recognition cloud services and custom Convolutional Neural Network models. Nilsson, Kristian; Jönsson, Hans-Eric. January 2019 (has links)
Recent advancements in machine learning have contributed to explosive growth in the image recognition field. Simultaneously, multiple Information Technology (IT) service providers such as Google and Amazon have embraced cloud solutions and software as a service. These factors have helped mature many computer vision tasks from scientific curiosities into practical applications. As image recognition is now accessible to the general developer community, a need arises for a comparison of its capabilities and of what can be gained from choosing a cloud service over a custom implementation. This thesis empirically studies the performance of five general image recognition services (Google Cloud Vision, Microsoft Computer Vision, IBM Watson, Clarifai and Amazon Rekognition) and of image recognition models of the Convolutional Neural Network (CNN) architecture that we configured and trained ourselves. Image- and object-level annotations of images extracted from different datasets were tested, both in their original state and after being subjected to one of six types of distortion: brightness, color, compression, contrast, blurriness and rotation. The output labels and confidence scores were compared to the ground truth at multiple levels of concepts, such as food, soup and clam chowder. The results show that, of the services tested, there is currently no clear top performer across all categories, and they all show variations and similarities in their output, but on average Google Cloud Vision performs best by a small margin. The services are all adept at identifying high-level concepts such as food and most mid-level ones such as soup. However, for further specifics, such as clam chowder, they start to vary, some performing better than others in different categories. Amazon was found to be the most capable at identifying multiple unique objects within the same image on the chosen dataset. Additionally, using synonyms of the ground-truth labels increased performance, as it narrowed the semantic gap between our expectations and the actual output from the services. The services all showed vulnerability to image distortions, especially compression, blurriness and rotation. The custom models all performed noticeably worse, about half as well as the cloud services, possibly due to the difference in training data standards. The best model, configured with three convolutional layers, 128 nodes and a layer density of two, reached an average performance of almost 0.2, or 20%. In conclusion, if one is limited by a lack of machine learning experience, computational resources or time, it is recommended to use one of the cloud services to reach a more acceptable performance level. Which to choose depends on the intended application, as the services perform differently in certain categories. All of the services are vulnerable to multiple image distortions, potentially allowing adversarial attacks. Finally, there is definitely room for improvement in the performance of these services and the computer vision field as a whole.
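As a rough illustration, the best custom model's description (three convolutional layers, 128 nodes, a layer density of two) could be read as the Keras sketch below, with "layer density of two" taken as two dense layers; the input size, pooling, and class count are assumptions, and the thesis's exact configuration may differ:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3), num_classes=10):
    # Three conv layers with 128 filters each, followed by two dense layers.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, (3, 3), activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
```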
|
4 |
State-Of-The-Art on eHealth@home System Architectures. Heravi, Benjamin. January 2019 (has links)
With growing life expectancy and decreasing fertility rates, demand for healthcare services is increasing day by day, which leads to higher medical care costs. Modern technology can play an important role in reducing healthcare costs. In the new era of IoT, secure, fast, low-energy, and reliable connectivity is necessary to meet the demands of health services. New protocols such as IEEE 802.11ax and the fifth generation of mobile broadband are having a revolutionary impact on wireless connectivity. At the same time, new technologies such as cloud computing and Closed-Loop Medication Management open a new horizon in the medical environment. This thesis studies different eHealth@home architectures in terms of their wireless communication technologies and their data collection and data storage strategies. The functionality, benefits, and gaps of current remote health monitoring architectures are presented and discussed. Additionally, this thesis proposes solutions for integrating new wireless technologies for massive device connectivity, low end-to-end latency, high security, an edge-computing mechanism, Closed-Loop Medication Management, and cloud services.
|
5 |
QoS Representation, Negotiation and Assurance in Cloud Services. Zheng, Xianrong. 20 February 2014
Cloud services are Internet-based IT services. Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are three representative examples. As the cloud market becomes more open and competitive, Quality of Service (QoS) will be more important. However, cloud providers and cloud consumers have different and sometimes opposite preferences. If such a conflict occurs, a Service Level Agreement (SLA) cannot be established without negotiation.
To allow service consumers to express their QoS requirements and negotiate them with service providers, we argue for cloud service negotiation, which aims to establish and enforce SLAs for cloud services. Specifically, we study how to measure, negotiate, and enforce QoS requirements for cloud services, and so formulate three research problems: QoS measurement, QoS negotiation, and QoS assurance. In scope, the topic covers business-side automated negotiation and technical-side resource allocation techniques. As a result, it has a potential impact on cloud service adoption.
To address QoS measurement, we introduce a quality model named CLOUDQUAL for cloud services: a model with quality dimensions and metrics that targets general cloud services. CLOUDQUAL contains six quality dimensions, i.e., usability, availability, reliability, responsiveness, security, and elasticity, of which usability is subjective, whereas the others are objective.
To address QoS negotiation, we present a mixed negotiation approach for cloud services, which is based on the “game of chicken”. In particular, if a party is uncertain about the strategy of its counterpart, it is best to mix concession and tradeoff strategies in negotiation. In fact, the mixed approach, which exhibits a certain degree of intelligence, can achieve a higher utility than a concession approach, while incurring fewer failures than a tradeoff approach.
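A toy sketch of the mixed idea, randomizing between a concession step and a tradeoff step when the counterpart's strategy is unknown, might look like this; the mixing probability, step sizes, and QoS dimensions are illustrative assumptions, not the thesis's calibrated values:

```python
import random

# Mixed negotiation sketch: concede (lower overall utility) or trade off
# (keep utility, shift it across QoS dimensions), chosen at random.
def next_offer(current_offer, mix_prob=0.5, concession_step=0.05):
    utility, dimensions = current_offer
    if random.random() < mix_prob:
        # Concession: give up some overall utility.
        return (utility - concession_step, dimensions)
    # Tradeoff: same utility, redistributed across two QoS dimensions.
    shifted = dict(dimensions)
    shifted["availability"] = shifted.get("availability", 0.5) + 0.05
    shifted["responsiveness"] = shifted.get("responsiveness", 0.5) - 0.05
    return (utility, shifted)

offer = (0.9, {"availability": 0.6, "responsiveness": 0.4})
for round_no in range(3):
    offer = next_offer(offer)
    print(round_no, offer)
```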
To address QoS assurance, we propose a QoS-driven resource allocation method for cloud services that can meet users' QoS requirements while minimizing the resources consumed. In particular, to honor a QoS level specified in an SLA, we develop QoS assurance mechanisms and determine the minimum resources that should be allocated. As a result, the method makes both technical and economic sense for cloud providers. / Thesis (Ph.D., Computing) -- Queen's University, 2014
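As a hedged illustration of the allocation problem rather than the thesis's actual method: under a simple linear throughput model, the minimum number of identical instances that keeps utilization below a cap (so an SLA response-time bound stays plausible) can be computed as follows; all numbers are assumptions:

```python
import math

# Toy QoS-driven allocation: smallest instance count keeping utilization
# under a target. Rates are in requests/second.
def min_instances(arrival_rate, service_rate_per_instance, max_utilization=0.7):
    needed = arrival_rate / (service_rate_per_instance * max_utilization)
    return max(1, math.ceil(needed))

print(min_instances(arrival_rate=450, service_rate_per_instance=80))  # -> 9
```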
|
6 |
Cloud Services Brokerage for Mobile Ubiquitous Computing. June 2015 (has links)
Recently, companies have been adopting Mobile Cloud Computing (MCC) to efficiently deliver enterprise services to users (or consumers) on their personalized devices. MCC enables mobile devices (e.g., smartphones, tablets, notebooks, and smart watches) to access virtualized services such as software applications, servers, storage, and network services over the Internet. With the advancement and diversification of the mobile landscape, there has been a growing trend in consumer attitudes where a single user owns multiple mobile devices. This paradigm of supporting a single user or consumer accessing multiple services from n-devices is referred to as Ubiquitous Cloud Computing (UCC) or Personal Cloud Computing.
In the UCC era, consumers expect to have application and data consistency across their multiple devices and in real time. However, this expectation can be hindered by the intermittent loss of connectivity in wireless networks, user mobility, and peak load demands.
Hence, this dissertation presents an architectural framework called Cloud Services Brokerage for Mobile Ubiquitous Cloud Computing (CSB-UCC), which ensures soft real-time and reliable service consumption on users' multiple devices. The CSB-UCC acts as an application middleware broker that connects the n-devices of users to multi-cloud services. The system determines the multi-cloud services based on the user's subscriptions, and the n-devices are determined through device registration on the broker. Preliminary evaluations of the designed system show that the following are achieved: 1) high scalability through the adoption of a distributed architecture for the brokerage service, 2) soft real-time application synchronization for a consistent user experience through an enhanced mobile-to-cloud proximity-based access technique, 3) reliable recovery from system failure through transactional re-assignment of services to active nodes, and 4) a transparent audit trail through access-level and context-centric provenance.
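A minimal sketch of the broker's bookkeeping, mapping each user's registered devices to subscribed services so updates can be fanned out to all of them, might look like this; class and method names are hypothetical, not from the dissertation:

```python
from collections import defaultdict

class CloudServicesBroker:
    """Tracks device registrations and service subscriptions per user."""
    def __init__(self):
        self.devices = defaultdict(set)        # user -> device ids
        self.subscriptions = defaultdict(set)  # user -> service names

    def register_device(self, user, device_id):
        self.devices[user].add(device_id)

    def subscribe(self, user, service):
        self.subscriptions[user].add(service)

    def sync_targets(self, user, service):
        """Devices that should receive updates for a subscribed service."""
        if service in self.subscriptions[user]:
            return sorted(self.devices[user])
        return []

broker = CloudServicesBroker()
broker.register_device("alice", "phone-1")
broker.register_device("alice", "tablet-1")
broker.subscribe("alice", "notes")
print(broker.sync_targets("alice", "notes"))  # ['phone-1', 'tablet-1']
```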
|
7 |
A Framework for Secure Logging and Analytics in Precision Healthcare Cloud-based Services. Moghaddam, Parisa. 12 July 2022
Precision medicine is an emerging approach for disease treatment and prevention that delivers personalized care to individual patients by considering their genetic make-ups, medical histories, environments, and lifestyles. Despite the rapid advancement of precision medicine and its considerable promise, several underlying technological challenges remain unsolved. One such challenge of great importance is the security and privacy of precision health-related data, such as genomic data and electronic health records, which stifle collaboration and hamper the full potential of machine-learning (ML) algorithms. To preserve data privacy while providing ML solutions, this thesis explores the feasibility of machine learning with encryption for precision healthcare datasets. Moreover, to ensure audit logs' integrity, we introduce a blockchain-based secure logging architecture for precision healthcare transactions. We consider a scenario that lets us send sensitive healthcare data into the cloud while preserving privacy by using homomorphic encryption, and develop a secure logging framework for this precision healthcare service using Hyperledger Fabric. We test the architecture by generating a considerable volume of logs and show that our system is tamper-resistant and can ensure integrity.
|
8 |
CURARE: curating and managing big data collections on the cloud / CURARE : curation et gestion de collections de données volumineuses sur le cloud. Kemp, Gavin. 26 September 2018 (has links)
The emergence of new platforms for decentralized data creation, such as sensor and mobile platforms, and the increasing availability of open data on the Web are adding to the number of data sources inside organizations and bring unprecedented Big Data to be explored. The notion of data curation has emerged to refer to the maintenance of data collections and the preparation and integration of datasets, combining them to perform analytics.
Curation tasks include extracting explicit and implicit metadata, and matching and enriching semantic metadata to add quality to the data. Next-generation data management engines should promote techniques with a new philosophy to cope with the deluge of data: they should aid the user in understanding a data collection's content and provide guidance for exploring the data. A scientist can explore data collections stepwise and stop when their content and quality reach a satisfactory point. Our work adopts this philosophy, and the main contribution is a data curation approach and exploration environment named CURARE. CURARE is a service-based system for curating and exploring Big Data. CURARE implements a data collection model that we propose, used for representing a collection's content in terms of structural and statistical metadata organized under the concept of a view. A view is a data structure that provides an aggregated perspective of the content of a data collection and its several associated releases. CURARE provides tools for computing and extracting views using data analytics methods, as well as functions for exploring (querying) metadata. Exploiting Big Data requires data analysts to make a substantial number of decisions to determine the best way to store, share, and process data collections so as to get the maximum benefit and knowledge from them. Instead of having analysts explore data collections manually, CURARE provides tools, integrated in an environment, for assisting them in determining which collections can best be used for achieving a given analytics objective. We implemented CURARE and explain how to deploy it on the cloud using data science services on top of which CURARE services are plugged. We conducted experiments measuring the cost of computing views over datasets from Grand Lyon and Twitter, to provide insight into the value of our data curation approach and environment.
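A minimal sketch of what computing a CURARE-style view could look like, aggregating structural and statistical metadata for one release of a collection with pandas; the view schema here is an assumption based on the abstract, not CURARE's actual data model:

```python
import pandas as pd

def compute_view(df: pd.DataFrame, collection_name: str, release: str) -> dict:
    """Build an aggregated metadata 'view' over one release of a collection."""
    return {
        "collection": collection_name,
        "release": release,
        "structure": {col: str(dtype) for col, dtype in df.dtypes.items()},
        "statistics": {
            "rows": len(df),
            "null_ratio": df.isna().mean().round(3).to_dict(),
            "numeric_summary": df.describe().to_dict(),
        },
    }

df = pd.DataFrame({"station": ["A", "B", "A"], "pm10": [21.0, None, 17.5]})
view = compute_view(df, collection_name="grand-lyon-air", release="2018-09")
print(view["statistics"]["rows"], view["structure"])
```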
|
9 |
Virtualizacijos technologijų pritaikymas debesyje (Cloud) / Virtualization in the cloud. Mardosas, Jonas. 09 July 2011 (has links)
This work describes the technologies used in cloud computing platforms and fully analyzes Eucalyptus, a free and open cloud platform. On this platform it attempts to build a web-page hosting service in the cloud (a PaaS service) that many users could use, and it lays out a plan for how similar services could be migrated to cloud infrastructures. After examining which software is needed to provide such a service, sample installation scripts were prepared and schemes were drawn showing how the service could operate, what functions it offers, and what benefits the end user gains from using it. A system was designed to automatically handle the management and monitoring of the service, and code examples from this automated system are presented.
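Because Eucalyptus exposes an EC2-compatible API, an automation script like those described could plausibly provision a web-hosting instance with boto3. This is a sketch under that assumption, not the thesis's actual scripts; the endpoint URL, image id, and credentials are placeholders:

```python
import boto3

# Point the EC2 client at a Eucalyptus endpoint (placeholder URL).
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://eucalyptus.example.com:8773/services/compute",
    region_name="eucalyptus",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

response = ec2.run_instances(
    ImageId="emi-12345678",   # Eucalyptus machine image (EMI), placeholder
    InstanceType="m1.small",
    MinCount=1,
    MaxCount=1,
    # Bootstrap a web server on first boot.
    UserData="#!/bin/bash\napt-get install -y apache2",
)
print(response["Instances"][0]["InstanceId"])
```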
|
10 |
Modelo de evaluación de riesgos de seguridad de la información basado en la ISO/IEC 27005 para analizar la viabilidad de adoptar un servicio en la nube / Information security risk assessment model based on ISO/IEC 27005 to assess the feasibility of adopting a cloud service. Quispe Loarte, Javier Esai; Pacheco Pedemonte, Diego Ludwing. 01 September 2018
The purpose of this project is to propose an information security risk assessment model based on ISO/IEC 27005 to determine the feasibility of adopting a cloud service. Every organization needs to know the information security risks it currently assumes under its implemented security controls, as well as the risks it would assume by acquiring a new cloud service, so that it can decide whether to opt for the service.
The model was built in three phases. First, relevant research into good practices in information security was carried out. This research drew on ISO/IEC 27001, which gives an overview of an information security management system, and ISO/IEC 27005 was likewise chosen for its orientation toward managing information security risks in an organization.
Second, the proposed model is presented and its phases are described: establishing the organizational context, risk identification, risk evaluation, and risk treatment.
Finally, the model was deployed in the midterm and final examinations process of the academic records area of the Universidad Peruana de Ciencias Aplicadas.
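As a hedged illustration of the risk evaluation phase, here is a likelihood-times-impact scoring with treatment thresholds in the spirit of ISO/IEC 27005; the scales, thresholds, and example risks are assumptions, not the thesis's calibrated model:

```python
RISKS = [
    # (risk, likelihood 1-5, impact 1-5) -- illustrative values only
    ("Exam data exposed by cloud misconfiguration", 3, 5),
    ("Service outage during exam period", 2, 4),
    ("Insider modifies grades", 1, 5),
]

def treatment(score: int) -> str:
    """Map a likelihood*impact score to a treatment bucket."""
    if score >= 15:
        return "mitigate/avoid"
    if score >= 8:
        return "mitigate or transfer"
    return "accept (monitor)"

for name, likelihood, impact in RISKS:
    score = likelihood * impact
    print(f"{name}: score={score} -> {treatment(score)}")
```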
|