1. Teknik för en flerskiktad webbapplikation (Technology for a multi-tier web application). Pettersson, Jonnie, January 2008 (has links)
The report analyses whether some common problems can be avoided by using modern technology. As a reference system, “Fartygsrapporteringssystemet” (the ship reporting system) is used: an n-tier web application built with technology that was modern at the time, 2003-2004. The aim is to examine whether ASP.Net MVC, Windows Communication Foundation, Workflow Foundation and SQL Server 2005 Service Broker can be used to create an n-tier web application that also communicates with other systems and facilitates automated testing. The report describes the construction of a prototype in which the presentation layer uses ASP.Net MVC to separate presentation and business logic. Communication with the business layer is done through Windows Communication Foundation. Hard-coded processes are broken out and handled by Workflow Foundation. Asynchronous communication with other systems is done using Microsoft SQL Server 2005 Service Broker. The result of the analysis is that these techniques can be used to create an n-tier web application, but that ASP.Net MVC, which at present is only available in a preview release, is not yet sufficiently developed.
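The prototype itself is .NET-based and no code is given in the abstract. Purely as a language-neutral illustration of the asynchronous, queue-based messaging pattern that Service Broker provides (function and message names below are invented for this sketch), a minimal Python example:

```python
import queue
import threading

# Hypothetical stand-in for a Service Broker queue: the web application
# enqueues a message and returns immediately; a background worker
# (the "other system") processes the message asynchronously.
report_queue = queue.Queue()

def external_system_worker():
    """Consumes ship reports asynchronously, decoupled from the web request."""
    while True:
        message = report_queue.get()
        if message is None:          # sentinel to stop the worker
            break
        print(f"external system processed: {message}")
        report_queue.task_done()

def handle_web_request(ship_id: str, report: str) -> str:
    """Simulates the web/business layer: enqueue and respond at once."""
    report_queue.put({"ship_id": ship_id, "report": report})
    return "report accepted"         # the caller never waits for the other system

worker = threading.Thread(target=external_system_worker, daemon=True)
worker.start()
print(handle_web_request("SE-1234", "arrival at port"))
report_queue.join()                  # in this demo, wait for asynchronous processing
report_queue.put(None)
```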
2. A verified and optimized Stream X-Machine testing method, with application to cloud service certification. Simons, A.J.H., Lefticaru, Raluca, 15 January 2020 (has links)
The Stream X-Machine (SXM) testing method provides strong and repeatable guarantees of functional correctness, up to a specification. These qualities make the method attractive for software certification, especially in the domain of brokered cloud services, where arbitrage seeks to substitute functionally equivalent services from alternative providers. However, practical obstacles include: the difficulty in providing a correct specification, the translation of abstract paths into feasible concrete tests, and the large size of generated test suites. We describe a novel SXM verification and testing method, which automatically checks specifications for completeness and determinism, prior to generating complete test suites with full grounding information. Three optimisation steps achieve up to a ten-fold reduction in the size of the test suite, removing infeasible and redundant tests. The method is backed by a set of tools to validate and verify the SXM specification, generate technology-agnostic test suites and ground these in SOAP, REST or rich-client service implementations. The method was initially validated using seven specifications, three cloud platforms and five grounding strategies. / European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 328392, the Broker@Cloud project [11].
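No specification or tooling is reproduced here. As a toy illustration of the ideas behind the approach (a state machine whose transitions carry processing functions, a determinism check over sample inputs, and generation of abstract test paths), a Python sketch with invented names follows; it uses a simple transition cover rather than the paper's complete test-generation and optimisation steps:

```python
# Minimal, illustrative stream X-machine-style specification (not the authors'
# notation or tools): states, a memory value, and transitions labelled with
# processing functions mapping (memory, input) -> (output, new memory),
# or None when the function does not accept the input.

def deposit(mem, inp):
    if inp[0] == "deposit":
        return ("ok", mem + inp[1])
    return None

def withdraw(mem, inp):
    if inp[0] == "withdraw" and mem >= inp[1]:
        return ("ok", mem - inp[1])
    return None

SPEC = {
    "states": {"idle", "active"},
    "initial": "idle",
    "transitions": {                  # (state, function) -> next state
        ("idle", deposit): "active",
        ("active", deposit): "active",
        ("active", withdraw): "active",
    },
}

def check_determinism(spec, sample_inputs, memories):
    """Flag states where two outgoing functions accept the same (memory, input)."""
    problems = []
    for state in spec["states"]:
        funcs = [f for (s, f) in spec["transitions"] if s == state]
        for mem in memories:
            for inp in sample_inputs:
                accepting = [f.__name__ for f in funcs if f(mem, inp) is not None]
                if len(accepting) > 1:
                    problems.append((state, mem, inp, accepting))
    return problems

def transition_cover(spec):
    """Breadth-first abstract paths from the initial state covering every transition once."""
    paths, frontier, covered = [], [(spec["initial"], [])], set()
    while frontier:
        state, path = frontier.pop(0)
        for (s, f), nxt in spec["transitions"].items():
            if s == state and (s, f) not in covered:
                covered.add((s, f))
                paths.append(path + [f.__name__])
                frontier.append((nxt, path + [f.__name__]))
    return paths

print(check_determinism(SPEC, [("deposit", 10), ("withdraw", 5)], [0, 20]))  # -> []
print(transition_cover(SPEC))   # abstract paths, still to be grounded in concrete tests
```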
3. Evaluation and Implementation of Machine Learning Methods for an Optimized Web Service Selection in a Future Service Market. Karg, Philipp, January 2014 (has links)
In future service markets, a selection among functionally equal services will be omnipresent. The evolving challenge, finding the best-fit service, requires a distinction between the non-functional service characteristics (e.g., response time, price, availability). Service providers commonly capture those quality characteristics in so-called Service Level Agreements (SLAs). However, a service selection based on SLAs is inadequate, because the static SLAs generally do not reflect the dynamic service behaviors and quality changes in a service-oriented environment. Furthermore, profit-oriented service providers tend to embellish their SLAs by handling their correctness flexibly. Within the SOC (Service Oriented Computing) research project of the Karlsruhe University of Applied Sciences and the Linnaeus University of Sweden, a service broker framework for an optimized web service selection is introduced. Instead of relying on the providers’ quality assertions, distributed knowledge is built up by automatically monitoring and measuring the service quality during each service consumption. The broker aims at optimizing the service selection based on past real service performance and the defined quality preferences of an individual consumer.

This thesis work concerns the design, implementation and evaluation of appropriate machine learning methods, with a focus on the broker’s best-fit web service selection. Within the time-critical service optimization, the performance and scalability of the broker’s machine learning play an important role. Therefore, high-performance algorithms for predicting the future non-functional service characteristics within a continuous machine learning process were implemented. The introduced foreground-/background-model makes it possible to separate the real-time request for a best-fit service selection from the time-consuming machine learning. The best-fit services for certain consumer call contexts (e.g., call location and time, quality preferences) are continuously pre-determined within the asynchronous background model. This eliminates any performance issues on the critical path from the service request to the best-fit service recommendation. To evaluate the implemented best-fit service selection, a sophisticated test data scenario with real-world characteristics was created, containing services with different volatile performances, cyclic performance behaviors and performance changes over time. Besides the significantly improved performance, the new implementation achieved an overall high selection accuracy: in 70% of all service optimizations the actual best-fit service was determined, and in 94% of all service optimizations the actual two best-fit services.
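As an illustration of the foreground-/background-model described above (the class and method names, the simple smoothing-based predictor and the data are assumptions for this sketch, not the thesis implementation), a minimal Python example:

```python
import random
from collections import defaultdict

class Broker:
    """Learns per-context service quality in the background; foreground lookup is O(1)."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha                        # smoothing factor for the estimate
        self.predicted_rt = defaultdict(dict)     # context -> {service: estimated response time}
        self.best_fit = {}                        # context -> pre-determined best-fit service

    def record_consumption(self, context, service, response_time):
        """Monitoring feed: update the running estimate after each real consumption."""
        est = self.predicted_rt[context].get(service, response_time)
        self.predicted_rt[context][service] = (
            self.alpha * response_time + (1 - self.alpha) * est
        )

    def background_update(self):
        """Asynchronous 'background model': pre-compute the best-fit service per context."""
        for context, estimates in self.predicted_rt.items():
            self.best_fit[context] = min(estimates, key=estimates.get)

    def select(self, context):
        """Time-critical 'foreground model': constant-time lookup, no learning involved."""
        return self.best_fit.get(context)

broker = Broker()
for _ in range(200):                              # simulated monitoring data
    broker.record_consumption("EU/afternoon", "service_A", random.gauss(120, 15))
    broker.record_consumption("EU/afternoon", "service_B", random.gauss(90, 30))
broker.background_update()
print(broker.select("EU/afternoon"))              # typically 'service_B'
```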
4. Analysis of 5G Edge Computing solutions and APIs from an E2E perspective addressing the developer experience. Manocha, Jitendra, January 2021 (has links)
Edge Computing is considered one of the key capabilities in next generation (5G) networks, which will enable an inundation of latency-, throughput-, and data-sensitive edge-native applications. Edge application developers require infrastructure at the edge to host the application workload and network connectivity procedures to connect the application users to the nearest edge where the application workload is hosted. The distributed nature of edge infrastructure and the requirement on network connectivity make it attractive for communication service providers (CSPs) to become edge service providers (ESPs); similarly, hyper-scale cloud providers (HCPs) are also planning to expand as ESPs, building on their cloud presence and targeting edge application developers. CSPs across the globe follow a standard approach for building interoperable networks and infrastructure, while HCPs do not participate in telecom standardization bodies. Standards development organizations (SDOs) such as the European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP) are working to provide a standard architecture for edge computing solutions for service providers. However, the current focus of SDOs is more on architecture, with little attention paid to the application developer experience and the Application Programming Interfaces (APIs). For the architecture itself, different standards and approaches are available which overlap with each other. APIs proposed by different SDOs are not easily consumable by edge application developers and require simplification. On the other hand, there are few widely known standards in the hyper-scale and public cloud industry for integrating with each other, apart from the public APIs offered by cloud providers. To scale and succeed, edge service providers need to focus on interoperability, not only from a cloud infrastructure perspective but from a network connectivity perspective as well. This work analyzes standards defined by different standardization bodies in the 5G edge computing area and the overlaps between those standards. The work then highlights the requirements from an edge application developer perspective, investigates the deficiencies of the standards, and proposes an end-to-end edge solution architecture and a method to simplify the APIs so that they fulfil the needs of edge-native applications. The proposed solution considers CSPs providing multi-cloud infrastructure for edge computing by integrating with HCPs' infrastructure. In addition, the work investigates existing standards for integrating cloud capabilities into network platforms and elaborates how network and cloud computing capabilities can be integrated to provide a complete edge service to edge application developers or enterprises. It proposes an alternative way to dynamically integrate edge application developers with cloud service providers by offering a catalog of services. / Edge Computing is considered one of the key capabilities in next generation (5G) networks, enabling reduced latency, increased throughput, and data-sensitive, edge-native applications. Edge application developers depend on edge infrastructure that hosts the application, and on network connectivity to connect the application users to the nearest edge where the application is located. Although edge applications can be hosted on any infrastructure, communication service providers (CSPs) plan to offer distributed edge infrastructure and connectivity. Similarly, hyper-scale cloud providers (HCPs) also plan to offer edge infrastructure. CSPs follow a standard approach to building networks and infrastructure, while HCPs do not participate in standardization bodies. Standards development organizations (SDOs) such as the European Telecommunications Standards Institute (ETSI) and the 3rd Generation Partnership Project (3GPP) are working to provide a standard architecture for edge computing for service providers. However, the current focus is more on architecture, and little focus is directed toward the application developer experience and the APIs. Within the architecture itself, there are different standards and approaches that overlap with each other. APIs proposed by different SDOs are not easily accessible to edge application developers and need to be simplified. On the other hand, there are not many widely known standards in the hyper-scale cloud and public cloud industry for integrating with each other, apart from the public application programming interfaces (APIs) offered by cloud providers. To serve application developers at scale, CSPs must offer multi-cloud capabilities and thereby complement their own infrastructure with the capacity of HCPs. Similarly, HCPs will need to integrate connectivity services beyond infrastructure in order to offer edge capabilities. This work describes different standards defined by different standardization bodies in the 5G edge computing area and analyses the overlaps between the standards. The work then highlights the requirements from an edge application developer perspective, examines the shortcomings of the standards, and proposes a solution architecture that fulfils the needs of edge-native applications. The proposed solution considers CSPs providing multi-cloud infrastructure for edge computing by integrating with HCPs' infrastructure. The work further investigates existing standards for integrating cloud capabilities into network platforms and elaborates how network and cloud services can be integrated to offer complete services to edge application developers. It proposes an alternative way to dynamically integrate edge application developers with cloud service providers by offering a catalog of services.
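As a conceptual illustration of the catalog-driven selection the thesis proposes (the catalog entries, field names and distance heuristic below are invented for this sketch and do not correspond to any SDO or provider API), a short Python example that picks the nearest edge site satisfying a workload requirement:

```python
import math

CATALOG = [   # what a simplified edge-service catalog entry might carry
    {"site": "stockholm-edge-1", "lat": 59.33, "lon": 18.07, "gpu": True},
    {"site": "malmo-edge-1",     "lat": 55.60, "lon": 13.00, "gpu": False},
]

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance, used here as a crude stand-in for network proximity."""
    to_rad = math.radians
    dlat, dlon = to_rad(lat2 - lat1), to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2)) * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def nearest_edge(user_lat, user_lon, need_gpu=False):
    """Pick the closest catalog entry that satisfies the workload requirements."""
    candidates = [e for e in CATALOG if e["gpu"] or not need_gpu]
    return min(candidates,
               key=lambda e: distance_km(user_lat, user_lon, e["lat"], e["lon"]))

print(nearest_edge(59.85, 17.64)["site"])   # a user near Uppsala -> stockholm-edge-1
```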