51 |
Sicheres Cloud Computing in der Praxis
Reinhold, Paul 11 April 2017 (has links) (PDF)
This dissertation examines various requirements for secure cloud computing. In particular, it analyzes existing research and solution approaches for protecting data and processes in cloud environments and assesses their practical suitability. The basis for comparability is a set of specified criteria against which the examined technologies are evaluated.
The main goal of this work is to show how technical research approaches can be compared in order to enable an assessment of their suitability for practice. To this end, relevant subareas of cloud computing security are first identified, their solution strategies are discussed in the context of this work, and state-of-the-art methods are evaluated. The verdict on practical suitability follows from the ratio of the potential benefit to the expected costs associated with a technology. The potential benefit is defined as the combination of the performance, security and functionality offered by the technology under examination. For an objective assessment, these three quantities are composed of specified criteria whose values are taken directly from the research papers under study. The expected costs are derived from cost keys for technology, operation and development. The work also explains and evaluates in detail the specified evaluation criteria and the interplay of the terms introduced above.
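The benefit-to-cost ratio described above can be read as a simple scoring model. The following Python sketch illustrates one possible interpretation; the criterion names, scales and cost keys are purely illustrative assumptions, since the dissertation's actual criteria catalogue is not reproduced in this abstract.

```python
# Hypothetical sketch of the practicality score described above:
# potential benefit (performance, security, functionality) divided
# by expected costs (technology, operation, development).
# All criterion names, scales and weights are illustrative assumptions.

def practicality(benefit_scores: dict, cost_keys: dict) -> float:
    """Return a benefit/cost ratio; higher means more practical."""
    benefit = (benefit_scores["performance"]
               + benefit_scores["security"]
               + benefit_scores["functionality"])
    cost = (cost_keys["technology"]
            + cost_keys["operation"]
            + cost_keys["development"])
    return benefit / cost

# Example: a technology rated on a 0-10 scale per criterion.
score = practicality(
    {"performance": 7, "security": 9, "functionality": 5},
    {"technology": 4, "operation": 3, "development": 6},
)
print(f"practicality score: {score:.2f}")  # 21/13, roughly 1.62
```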
To better estimate suitability for practice, an adapted SWOT analysis is carried out for the identified relevant subareas. Alongside the definition of the practicality statement, this constitutes the second innovation of this work. The concrete goal of this analysis is to increase comparability between the subareas and thus to improve strategic planning for the development of secure cloud computing solutions.
|
52 |
User Experience-Based Provisioning Services in Vehicular Clouds
Aloqaily, Moayad January 2016 (has links)
Today, the increasing number of applications based on the Internet of Things, as well as advances in wireless communication, information and communication technology, and mobile cloud computing, has allowed users to access a wide range of resources while mobile. Vehicular clouds are considered key elements of today's intelligent transportation systems. They are outfitted with equipment that enables applications and services for vehicle drivers, surrounding vehicles, pedestrians and third parties.
As vehicular cloud computing has become more popular, owing to its ability to improve driver and vehicle safety and to provide provisioning services and applications, researchers and industry have taken a growing interest in the design and development of vehicular networks for emerging applications. Though vehicle drivers can now access a variety of on-demand resources en route via vehicular network service providers, the development of vehicular cloud provisioning services faces many challenges. In this dissertation, we examine the most critical provisioning service challenges drivers face, including cost, privacy and latency. To this point, very little research has addressed these issues from the driver's perspective. Privacy and service latency are certainly emerging challenges for drivers, as is service cost, since paying for such services is a relatively new financial concept.
Motivated by the Quality of Experience paradigm and the concept of the Trusted Third Party, we identify and investigate these challenges and examine the limitations and requirements of a vehicular environment. We found no research that addressed these challenges simultaneously or investigated their effect on one another. We have developed a Quality of Experience framework that provides scalability and reduces congestion overhead for users. Furthermore, we propose two theory-based frameworks to manage on-demand service provision in vehicular clouds: Auction-driven Multi-objective Provisioning and a Multi-agent/Multi-objective Interaction Game System. We present different approaches to these and show, through analytical and simulation results, that our proposed schemes help drivers minimize costs and latency, and maximize privacy.
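The abstract does not detail the auction mechanism, so the sketch below is only a hedged illustration of what auction-driven, multi-objective provisioning could look like: providers bid to serve a driver's request, bids are scored on cost, latency and privacy, and a second-price rule sets the winner's charge. All weights, names and values are hypothetical.

```python
# Illustrative sketch (not the dissertation's actual algorithm):
# providers bid to serve a driver's request; a broker scores each
# bid on cost, latency and a privacy rating, then applies a
# second-price-style rule to the winner's charge.

from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    cost: float       # monetary price of the service
    latency_ms: float
    privacy: float    # 0..1, higher = stronger privacy guarantees

def score(bid: Bid, w_cost=0.4, w_lat=0.3, w_priv=0.3) -> float:
    """Combined objective; lower is better. Weights are assumptions."""
    return w_cost * bid.cost + w_lat * bid.latency_ms / 100 - w_priv * bid.privacy

def select(bids: list[Bid]) -> Bid:
    ranked = sorted(bids, key=score)
    winner = ranked[0]
    # Second-price flavour: the winner is charged the runner-up's
    # cost, which discourages overbidding.
    if len(ranked) > 1:
        winner.cost = ranked[1].cost
    return winner

bids = [Bid("A", 5.0, 120, 0.9), Bid("B", 4.0, 300, 0.4), Bid("C", 6.0, 80, 0.8)]
print(select(bids))  # provider A wins, charged B's cost of 4.0
```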
|
53 |
Security and Privacy of Sensitive Data in Cloud Computing
Gholami, Ali January 2016 (has links)
Cloud computing offers the prospect of on-demand, elastic computing, provided as a utility service, and it is revolutionizing many domains of computing. Compared with earlier methods of processing data, cloud computing environments provide significant benefits, such as the availability of automated tools to assemble, connect, configure and reconfigure virtualized resources on demand. These make it much easier to meet organizational goals, as organizations can easily deploy cloud services. However, the shift in paradigm that accompanies the adoption of cloud computing is increasingly giving rise to security and privacy considerations relating to facets of cloud computing such as multi-tenancy, trust, loss of control and accountability. Consequently, cloud platforms that handle sensitive information are required to deploy technical measures and organizational safeguards to avoid data protection breakdowns that might result in enormous and costly damages. Sensitive information in the context of cloud computing encompasses data from a wide range of different areas and domains. Data concerning health is a typical example of the type of sensitive information handled in cloud computing environments, and it is obvious that most individuals will want information related to their health to be secure. Hence, with the growth of cloud computing in recent times, privacy and data protection requirements have been evolving to protect individuals against surveillance and data disclosure. Some examples of such protective legislation are the EU Data Protection Directive (DPD) and the US Health Insurance Portability and Accountability Act (HIPAA), both of which demand privacy preservation for handling personally identifiable information. Great efforts have been made to employ a wide range of mechanisms to enhance the privacy of data and to make cloud platforms more secure. Techniques that have been used include encryption, trusted platform modules, secure multi-party computation, homomorphic encryption, anonymization, and container and sandboxing technologies. However, how to correctly build usable privacy-preserving cloud systems that handle sensitive data securely remains an open problem, owing to two research challenges. First, existing privacy and data protection legislation demands strong security, transparency and auditability of data usage. Second, there is a lack of familiarity with the broad range of emerging and existing security solutions needed to build efficient cloud systems. This dissertation focuses on the design and development of several systems and methodologies for handling sensitive data appropriately in cloud computing environments. The key idea behind the proposed solutions is enforcing the privacy requirements mandated by existing legislation that aims to protect the privacy of individuals in cloud computing platforms. The thesis begins with an overview of the main concepts of cloud computing, followed by an identification of the problems that need to be solved for secure data management in cloud environments. It then continues with a description of background material and a review of existing security and privacy solutions used in the area of cloud computing. Our first main contribution is a new method for modeling threats to privacy in cloud environments, which can be used to identify privacy requirements in accordance with data protection legislation. This method is then used to propose a framework that meets the privacy requirements for handling data in the area of genomics.
That is, health data concerning the genome (DNA) of individuals. Our second contribution is a system for preserving privacy when publishing sample availability data. This system is noteworthy because it is capable of cross-linking over multiple datasets. The thesis continues by proposing a system called ScaBIA for privacy-preserving brain image analysis in the cloud. The final section of the dissertation describes a new approach for quantifying and minimizing the risk of operating system kernel exploitation, in addition to the development of a system call interposition reference monitor for Lind, a dual sandbox.
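As a rough illustration of one of the anonymization techniques listed above, and of what publishing data about individuals safely involves, the sketch below checks k-anonymity over a toy dataset. It is not the thesis's actual publishing system, and the quasi-identifier columns are hypothetical.

```python
# Rough illustration of one anonymization technique named above
# (k-anonymity); NOT the thesis's actual system. The quasi-identifier
# columns (age_band, zip3) are hypothetical.

from collections import Counter

def is_k_anonymous(rows: list[dict], quasi_ids: list[str], k: int) -> bool:
    """Every combination of quasi-identifier values must occur >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(count >= k for count in groups.values())

records = [
    {"age_band": "30-39", "zip3": "114", "diagnosis": "A"},
    {"age_band": "30-39", "zip3": "114", "diagnosis": "B"},
    {"age_band": "40-49", "zip3": "114", "diagnosis": "A"},
]
# The third record is unique on (age_band, zip3), so k=2 fails.
print(is_k_anonymous(records, ["age_band", "zip3"], k=2))  # False
```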
|
54 |
Security audit compliance for cloud computing
Doelitzscher, Frank January 2014 (has links)
Cloud computing has grown rapidly over the past three years and is widely popular across today's IT landscape. In a comparative study of 250 IT decision makers at UK companies, respondents said that they already use cloud services for 61% of their systems. Cloud vendors promise "infinite scalability and resources" combined with on-demand access from everywhere. This lets cloud users quickly forget that there is still a real IT infrastructure behind a cloud. Due to virtualization and multi-tenancy, the complexity of these infrastructures is even higher than in traditional data centres, while it is hidden from the user and outside of his control. This makes management of service provisioning, monitoring, backup, disaster recovery and especially security more complicated. Because of this, and a number of severe security incidents at commercial providers in recent years, there is a growing lack of trust in cloud infrastructures. This thesis presents research on cloud security challenges and how they can be addressed by cloud security audits. Security requirements of an Infrastructure as a Service (IaaS) cloud are identified, and it is shown how they differ from those of traditional data centres. To address cloud-specific security challenges, a new cloud audit criteria catalogue is developed. Subsequently, a novel cloud security audit system is developed, which provides a flexible audit architecture for frequently changing cloud infrastructures. It is based on lightweight software agents, which monitor key events in a cloud and trigger specific, targeted security audits on demand, from both a customer and a cloud provider perspective. To enable these concurrent cloud audits, a Cloud Audit Policy Language is developed and integrated into the audit architecture. Furthermore, to address advanced cloud-specific security challenges, an anomaly detection system based on machine learning technology is developed. By creating cloud usage profiles, a continuous evaluation of events - both customer-specific and spanning multiple customers - helps to detect anomalies within an IaaS cloud. The feasibility of the research is presented as a prototype, and its functionality is shown in three demonstrations. The results prove that the developed cloud audit architecture is able to mitigate cloud-specific security challenges.
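The abstract does not specify the agents' event model or the Cloud Audit Policy Language itself, so the following Python sketch is only a hedged illustration of the agent idea: a lightweight monitor maps key cloud events to targeted, on-demand audit tasks. All event and task names are assumptions.

```python
# Hedged sketch of the agent idea described above: a lightweight
# monitor watches key cloud events and schedules a targeted audit
# when a policy matches. Event names and the policy format are
# illustrative assumptions, not the thesis's actual audit language.

AUDIT_POLICIES = {
    # event type              -> audit task to run on demand
    "vm.migrated":            "verify_geolocation_compliance",
    "volume.attached":        "verify_encryption_at_rest",
    "security_group.changed": "rescan_exposed_ports",
}

def on_cloud_event(event: dict) -> None:
    """Trigger a targeted audit if the event matches a policy."""
    task = AUDIT_POLICIES.get(event["type"])
    if task:
        print(f"[agent] event {event['type']} on {event['resource']}"
              f" -> scheduling audit '{task}'")

on_cloud_event({"type": "vm.migrated", "resource": "vm-42"})
```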
|
55 |
Semi-supervised and Self-evolving Learning Algorithms with Application to Anomaly Detection in Cloud Computing
Pannu, Husanbir Singh 12 1900 (has links)
Semi-supervised learning (SSL) is among the most practical approaches to classification in machine learning. It resembles the human way of learning and thus has great applications in text/image classification, bioinformatics, artificial intelligence, robotics, etc. Labeled data is hard to obtain in real-life experiments and may require human experts with experimental equipment to mark the labels, which can be slow and expensive. Unlabeled data, however, is easily available in the form of web pages, data logs, images, audio, video files and DNA/RNA sequences. SSL uses large amounts of unlabeled data together with a small amount of labeled data to build better classification functions, achieving higher accuracy with less human effort. It is therefore of great empirical and theoretical interest. We contribute two SSL algorithms: (i) adaptive anomaly detection (AAD) and (ii) hybrid anomaly detection (HAD), which are self-evolving and highly efficient at detecting anomalies in large-scale, complex data distributions. Our algorithms are capable of modifying an existing classifier by both retiring old data and adding new data. This characteristic enables the proposed algorithms to handle massive and streaming datasets where other existing algorithms fail and run out of memory. As an application of semi-supervised anomaly detection, and for experimental illustration, we have implemented a prototype of the AAD and HAD systems and conducted experiments in an on-campus cloud computing environment. Experimental results show that the detection accuracy of both algorithms improves as they evolve; they achieve 92.1% detection sensitivity and 83.8% detection specificity, which makes them well suited for anomaly detection in large and streaming datasets. We compared our algorithms with two popular SSL methods: (i) subspace regularization and (ii) an ensemble of Bayesian sub-models and decision tree classifiers. Our contributed algorithms are easy to implement and significantly better in terms of space, time complexity and accuracy than these two methods for semi-supervised anomaly detection.
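The abstract does not describe AAD or HAD in detail; the following is only a minimal sketch of the general pattern named above: a detector seeded with a few labeled normal points that evolves by absorbing confidently classified unlabeled data while retiring the oldest points. Thresholds and window sizes are illustrative assumptions.

```python
# Minimal self-evolving, semi-supervised sketch in the spirit of the
# AAD/HAD idea above, NOT the dissertation's actual algorithms.
# A detector trained on a few labeled normal points keeps updating
# itself from confidently classified unlabeled data, while a sliding
# window retires the oldest points (bounded memory for streams).

from collections import deque
import math

class EvolvingDetector:
    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)  # old data retires automatically
        self.threshold = threshold          # z-score cutoff for anomalies

    def fit_labeled(self, normal_points):
        self.window.extend(normal_points)

    def _stats(self):
        n = len(self.window)
        mean = sum(self.window) / n
        var = sum((x - mean) ** 2 for x in self.window) / n
        return mean, math.sqrt(var) or 1e-9  # avoid division by zero

    def observe(self, x):
        """Classify x; absorb it into the model only if it looks normal."""
        mean, std = self._stats()
        is_anomaly = abs(x - mean) / std > self.threshold
        if not is_anomaly:           # self-training on confident normals
            self.window.append(x)    # adding new data evolves the model
        return is_anomaly

det = EvolvingDetector(window=50)
det.fit_labeled([10.1, 9.8, 10.3, 9.9, 10.0])  # few labeled normals
stream = [10.2, 9.7, 10.4, 25.0, 10.1]          # unlabeled stream
print([det.observe(x) for x in stream])         # [False, False, False, True, False]
```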
|
56 |
Implementar un sistema de infraestructura como servicio (IaaS) en cloud computing que sirva de alojamiento al ERP en una empresa comercial
Campos Andia, Oscar Keyvin, Correa Lerzundi, Jose Manuel, Zevallos Duran, Gonzalo 02 January 2016
This work takes as its starting point the growth of the company ST S.A. For the next five years, ST S.A. plans to import a greater quantity of machinery, given the strong demand for machinery purchases experienced in our country in recent years; to do so, it will have to extend its credit line with its parent manufacturer, New Holland. Two of the manufacturer's main requirements are that ST S.A.'s financial statements be audited by an international firm and that they be issued monthly. To meet these requirements, ST S.A. has decided to implement the ERP system SAP Business One. This tool will mainly allow the company to have timely and reliable information for issuing financial statements and querying stock.
As a consequence of this growth, ST S.A. must make important decisions that allow it to:
- Sustain greater business growth and market share
- Improve the focus of its resources on the core of its business
- Have affordable technological tools that improve its competitiveness in the market and provide timely information
- Implement real-time connectivity and communication platforms that serve as a competitive advantage for the company
Accordingly, the thesis first presents the company ST S.A. at an initial stage of growth with its current resources; the proposal of this thesis plan is to justify the implementation of an ERP system through cloud computing, which supports improved transmission of information, highlighting the competitive advantages and cost savings that ST S.A. will achieve. Hosting the system in the cloud will provide greater security, information backup, cost efficiency, and the ability to concentrate on the core of the business.
|
57 |
An Anonymous and Distributed Approach to Improving Privacy in Cloud Computing: An Analysis of Privacy-Preserving Tools & Applications
Peters, Emmanuel Sean January 2017 (has links)
The seemingly limitless computing resources and power of the cloud have made it ubiquitous. However, despite its utility and widespread adoption in several everyday applications, the cloud still suffers from several trust and privacy concerns. Many of these concerns are validated by the endless reports of cyber-attacks that compromise the private information of large numbers of users.
A review of the literature reveals the following challenges with privacy in cloud computing: (1) Although there is a wealth of approaches that attempt to prevent cyber-attacks, these approaches ignore the reality that system compromises are inevitable; every system can and will be compromised. (2) There are a handful of metrics for the security of systems; however, the current literature lacks privacy metrics that can be used to compare privacy across various systems. (3) One of the difficulties with addressing privacy in cloud computing is the inevitable trade-off between privacy and utility; many privacy-preserving techniques sacrifice more utility than needed in an attempt to achieve the unattainable: perfect privacy.
In this dissertation we present our contributions, which address the aforementioned privacy challenges identified in the literature. We base our approach on the assumption that every system can and will be compromised; we focus on mitigating the adverse effects of a cyber-attack by limiting the amount of information that is compromised during an attack. Our contribution is twofold and includes (1) a set of tools for designing privacy-mitigating applications and measuring privacy and (2) two applications designed using the aforementioned tools.
We first describe the three tools that we used to design the two applications. These tools are: (1) the processing graph and its collection of creation protocols; the processing graph is the mechanism we use to partition data across multiple units of cloud-based storage and processing, and it also manages the flow of processed information between components and is customizable based on the specific needs of the user; (2) a privacy metric based on information theory, which we use to compare the amount of information compromised when centralized and distributed systems are attacked; (3) the extension of the double-locked box protocol to the cloud environment; the double-locked box protocol facilitates anonymous communication between two entities via an intermediary.
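The metric itself is not reproduced in this abstract; as a hedged, information-theoretic illustration of the comparison it enables, the sketch below totals the entropy (in bits) of the attributes exposed when a single storage node is compromised, for a centralized store versus a partitioned one. All attribute entropies are hypothetical.

```python
# Hypothetical sketch of an information-theoretic privacy comparison:
# total entropy (bits) of the user attributes exposed when a single
# node is compromised. Centralized storage exposes every attribute;
# a partitioned (distributed) design exposes only one shard.
# All per-attribute entropies below are invented for illustration.

ATTRIBUTE_BITS = {"name": 20, "ssn": 30, "income": 20, "employer": 10}

def exposed_bits(attrs: list[str]) -> int:
    return sum(ATTRIBUTE_BITS[a] for a in attrs)

centralized = exposed_bits(list(ATTRIBUTE_BITS))  # all attributes leak
shard = exposed_bits(["income"])                  # only one partition leaks
reduction = 1 - shard / centralized

print(f"centralized leak: {centralized} bits, shard leak: {shard} bits")
print(f"reduction: {reduction:.0%}")  # 75% fewer bits compromised
```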
We then present two applications that utilize the aforementioned tools to improve the privacy of storing and processing a user's data. These applications are (1) the anonymous tax preparation application and (2) the distributed insurance clearinghouse and distributed electronic health record. We show how the creation protocols are used to establish processing graphs that privately complete a user's tax form and process a patient's insurance claim form. We also highlight future work in medical research that is made possible by our contributions; our approach allows medical research to be conducted on data without risking the identity of patients.
For each application we perform a privacy analysis that employs the privacy metric; in these privacy analyses, we compare both applications to their centralized counterparts and show the reduction in the amount of information revealed during an attack. Based on our analysis, the anonymous tax preparation application reduces the amount of compromised information in the event of an attack by up to 64%. Similarly, the distributed insurance clearinghouse reduces the amount of patient data revealed during an attack by up to 79%.
|
58 |
Improving energy efficiency of virtualized datacenters
Nitu, Vlad-Tiberiu 28 September 2018 (has links) (PDF)
Nowadays, many organizations increasingly choose to adopt the cloud computing approach. More specifically, as customers, these organizations outsource the management of their physical infrastructure to data centers (or cloud computing platforms). Energy consumption is a primary concern for datacenter (DC) management. Its cost represents about 80% of the total cost of ownership, and it is estimated that in 2020 the US DCs alone will spend about $13 billion on energy bills. Generally, datacenter servers are manufactured in such a way that they achieve high energy efficiency at high utilizations. Thereby, for a low cost per computation, all datacenter servers should push utilization as high as possible. To fight the historically low utilization, cloud computing adopted server virtualization, which allows a physical server to execute multiple virtual servers (called virtual machines) in an isolated way. With virtualization, the cloud provider can pack (consolidate) the entire set of virtual machines (VMs) onto a small set of physical servers and thereby reduce the number of active servers. Even so, datacenter servers rarely reach utilizations higher than 50%, which means that they operate with sets of long-term unused resources (called 'holes').
My first contribution is a cloud management system that dynamically splits/fuses VMs so that they can better fill the holes. This solution is effective only for elastic applications, i.e. applications that can be executed and reconfigured over an arbitrary number of VMs. However, datacenter resource fragmentation stems from a more fundamental problem: over time, cloud applications demand more and more memory, while physical servers provide more and more CPU. In today's datacenters, the two resources are strongly coupled, since they are bound to a physical server. My second contribution is a practical way to decouple the CPU-memory tuple that can simply be applied to a commodity server. Thereby, the two resources can vary independently, depending on demand.
My third and fourth contributions present practical systems that exploit the second contribution. The underutilization observed on physical servers also holds for virtual machines: it has been shown that VMs consume only a small fraction of their allocated resources, because cloud customers are not able to correctly estimate the amount of resources their applications need. My third contribution is a system that estimates the memory consumption (i.e. the working set size) of a VM with low overhead and high accuracy. Thereby, VMs can now be consolidated based on their working set size (not their booked memory). The drawback of this approach, however, is the risk of memory starvation: if one or more VMs experience a sharp increase in memory demand, the physical server may run out of memory. This event is undesirable because the cloud platform is then unable to provide the client with the booked memory. My fourth contribution is a system that allows a VM to use remote memory provided by a different rack server. Thereby, in the case of peak memory demand, my system allows the VM to allocate memory on a remote physical server.
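The abstract does not explain how the working set size is estimated. One classic approach, shown below purely as an assumed illustration rather than the thesis's system, is access-bit sampling: clear the accessed bit on a random sample of the VM's pages, let the VM run for an interval, then scale the touched fraction up to the whole address space.

```python
# Illustrative sketch (an assumption, not the thesis's system) of
# working-set-size estimation by access-bit sampling: clear the
# "accessed" bit on a random sample of a VM's pages, let the VM run,
# then scale up the fraction of sampled pages that were touched.

import random

TOTAL_PAGES = 100_000  # hypothetical size of the VM's address space

def estimate_wss(touched_pages: set[int], sample_size: int = 1_000) -> int:
    """Estimate how many of TOTAL_PAGES are in the working set."""
    sample = random.sample(range(TOTAL_PAGES), sample_size)
    hits = sum(1 for p in sample if p in touched_pages)
    return hits * TOTAL_PAGES // sample_size

# Simulated VM that actually touches 30% of its pages this interval.
random.seed(7)
truly_touched = set(random.sample(range(TOTAL_PAGES), 30_000))
print(estimate_wss(truly_touched))  # close to 30_000
```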
|
59 |
Ontologiebasiertes Cloud Computing / Ontology-based Cloud Computing
Fehrmann, Sven January 2015 (has links) (PDF)
The thesis "Ontology-based Cloud Computing", in the field of Information Systems Research, discusses cloud computing and illustrates the possibilities of theoretical and practical use of an ontology for cloud computing.
In addition to private, public and hybrid clouds, sophisticated virtualization technology in particular will shape the future of the IT sector. The variety and number of services offered continue to grow strongly in the public cloud sector, while attractive hybrid solutions are yet to come. Using a cloud service is usually simple, and falling prices make it increasingly attractive. A number of issues, such as IT security, data protection and costs, need to be considered and determined precisely in advance; together, this enables an economical and legally sound use of cloud services. Prior to using a service, an organization must also know the value, the frequency of use and the confidentiality level of its own data in order to decide reliably whether all of the information, or only part of it, is suitable for outsourcing. This requires a clear definition of the contractual framework and rules on damages in the event of an outage. Active change management should create acceptance for the changing areas of responsibility in the IT environment even before a service is introduced.
Finding comparable alternatives was the objective of the broad survey of 15 service providers presented here, combined with the construction of an ontology. In a highly dynamic cloud computing market, such a study can of course only represent a snapshot: new providers emerge, and long-established ones change and improve their offerings. So that this snapshot does not remain in a static final state, the ontology was constructed to allow consistent incorporation of changed circumstances. Ideally, whenever new information becomes known, it is also fed into the ontology. The provider survey shows that cloud services already have high potential today. An ontology is particularly suitable for gaining an overall view of the different services and their offerings.
The cloud ontology that was built includes a selection of services based on the literature and provider research. Similar to a search engine, it helps users learn about existing offers on the market. It also simplifies selection, clearly defines known technical details, eases the search for, e.g., required supplementary services via standardized interfaces, and tries to establish transparency and traceability in the billing models so that comparisons become possible in the first place. The biggest advantage is the time saved: the search for suitable cloud services is shortened by formalized and thus comparable criteria. When several suitable providers exist, further queries or cost comparisons allow the best provider for a given user to be found in a targeted way. Likewise, services with significant exclusion criteria can be removed from the selection early on. By prohibiting certain assignments or requiring minimum conditions within the ontology, the entry of false facts is prevented, which makes the ontology considerably more robust than many programs. The task in building the model was to arrive at generally valid statements from the modeled dependencies. Moreover, the cloud ontology fulfills the four typical requirements for an ontology: it is described exclusively in the standardized language OWL, can be evaluated by an inference algorithm (e.g. Pellet), clearly distinguishes between 80 classes and 342 individuals, and captures a wealth of information in 2657 relations. The ontology can also be turned into a program with an appealing user interface with little effort, as the programmed prototype proves.
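As a hedged sketch of how such an OWL ontology could be queried in practice, the following uses rdflib and SPARQL to filter services by a price criterion. The class and property names (CloudService, offersInterface, hasPricePerHour), the namespace and the file name are hypothetical; the abstract does not reproduce the ontology's actual vocabulary.

```python
# Hedged sketch of querying an OWL ontology with SPARQL via rdflib.
# The vocabulary (CloudService, offersInterface, hasPricePerHour) and
# the file name are hypothetical assumptions, not the thesis's terms.

from rdflib import Graph

g = Graph()
g.parse("cloud-ontology.owl")  # assumed local copy of the ontology

query = """
PREFIX co: <http://example.org/cloud-ontology#>
SELECT ?service ?price
WHERE {
    ?service a co:CloudService ;
             co:offersInterface co:S3CompatibleAPI ;
             co:hasPricePerHour ?price .
    FILTER (?price < 0.10)
}
ORDER BY ?price
"""

# List cheap services exposing the (assumed) standardized interface.
for service, price in g.query(query):
    print(service, price)
```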
In practice, companies need more aids of this kind, such as cloud ontologies, which facilitate the selection of services, make comparisons possible in the first place, shorten the search and ultimately lead to results that meet the expectations of the future user.
|
60 |
台灣資通訊業者因應雲端運算之發展策略 / Development strategy for Taiwanese ICT firms in the presence of cloud computing
孫烱煌 Unknown Date (has links)
Cloud computing has recently become a development trend and a key concern in the IT and telecommunication sectors worldwide. Taiwanese enterprises, long important hardware manufacturers in these value chains via the OEM/ODM business model, remain critical to all segments of worldwide resource integration.
Cloud computing takes advantage of all types of terminal devices, including PCs, smartphones and tablet computers, to access information in cyberspace via the integrated services of remote data centers.
Expected demand for the setup of IT rooms and for server and PC upgrades is decreasing. Accordingly, Taiwanese IT OEM/ODM companies, already operating on thin gross margins, will suffer from shrinking orders and declining profits.
This research studies the strategies that Taiwanese ICT firms in different subsectors, given the government's current policies and network infrastructure and before a cloud computing industry cluster has formed, can adopt to enter and respond to the development of cloud computing.
|