1 |
Providing Context in WS-BPEL Processes. George, Allen Ajit. January 2008.
Business processes are increasingly used by organizations to automate their activities. Written in languages like WS-BPEL, they allow an institution to describe its internal operations precisely. As the pace of change increases, however, both organizations and their internal processes are required to be more flexible; they have to account for an increasing amount of externally-driven environment state, or context, and modify their behavior accordingly. This puts a significant burden on business-process programmers, who now have to source, track, and update context from multiple entities, in addition to implementing and maintaining core business logic. Implementing this state-maintenance logic in a WS-BPEL business process is nontrivial, because WS-BPEL business processes are modeled as if they were the only thing operating in, and making changes to, the business environment. This mental model does not reflect the real world, where organizations and entities depend on state that is outside their control – state that is modified independently of, and concurrently with, the organization’s activities. This makes it hard for business-process programmers to write context-dependent processes concisely.
This thesis presents a solution to this problem based on the notion of a context variable for WS-BPEL business processes. It describes how context variables are designed using the WS-BPEL language-extension mechanism, and how these variables can be used in business processes. It also outlines an architecture for offering context in the web-services environment that uses constructs from the WS-Resource Framework specification. It shows how changes in context can be propagated from these context sources to WS-BPEL context variables using WS-Notification-based publish/subscribe. The design also includes a standards-compliant method for extending web-service responses with references to context sources. Finally, a prototype validating the overall system is described, and enhancements for increasing the utility of context variables are proposed.
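The abstract does not include the extension syntax itself, but the propagation pattern it describes can be sketched independently of WS-BPEL: a context source pushes notifications to subscribers so that a process reads an always-current context variable instead of polling. The Python sketch below uses entirely hypothetical names (ContextSource, BusinessProcess, fuel_price) as a conceptual stand-in for the WS-Notification-based mechanism, not the thesis's actual design.

```python
# Conceptual sketch only: hypothetical names, not the thesis's WS-BPEL syntax.
# A context source notifies subscribers when its state changes, so a process
# reads an always-current "context variable" instead of polling the source.

from typing import Callable, Dict, List


class ContextSource:
    """Stands in for a WS-Resource-style context source with notifications."""

    def __init__(self, name: str, value: str) -> None:
        self.name = name
        self.value = value
        self._subscribers: List[Callable[[str, str], None]] = []

    def subscribe(self, callback: Callable[[str, str], None]) -> None:
        self._subscribers.append(callback)

    def update(self, new_value: str) -> None:
        self.value = new_value
        for notify in self._subscribers:        # push-based propagation
            notify(self.name, new_value)


class BusinessProcess:
    """Keeps context variables current via notifications rather than polling."""

    def __init__(self) -> None:
        self.context: Dict[str, str] = {}

    def bind(self, source: ContextSource) -> None:
        self.context[source.name] = source.value
        source.subscribe(self.on_context_change)

    def on_context_change(self, name: str, value: str) -> None:
        self.context[name] = value


if __name__ == "__main__":
    fuel_price = ContextSource("fuel_price", "1.42")   # hypothetical context item
    process = BusinessProcess()
    process.bind(fuel_price)
    fuel_price.update("1.57")
    print(process.context["fuel_price"])               # -> 1.57
```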
This solution offers significant advantages: it builds on established practices and well-understood message-exchange patterns, leverages widely used languages, frameworks, and specifications, is standards compliant, and has a low barrier to entry for business-process programmers. Moreover, when compared to existing alternatives, this solution requires significantly less process logic and fewer interface changes to maintain constantly changing environment state.
|
2 |
Towards reducing bandwidth consumption in publish/subscribe systems. Ye, Yifan. January 2020.
Efficient data collection is one of the key research areas for 5G and beyond, since it can reduce the network burden of transferring massive amounts of data for various data-analytics and machine-learning applications. Specifically, 5G offers strong support for massive deployment of IoT devices, and the number of IoT devices is exploding. There are two main, complementary ways of achieving efficient data collection: one is integrating data processing into the collection process via, e.g., data filtering and aggregation; the other is reducing the amount of data that needs to be transferred via, e.g., data compression or approximation. In this thesis, efficient data collection is studied from these two perspectives. In particular, we introduce enhanced syntax and functionality to the Message Queuing Telemetry Transport (MQTT) protocol, such as data filtering and data aggregation. Furthermore, we enhance the flexibility of MQTT by supporting customized or user-defined functions executed in the MQTT broker, so that data processing in the broker is not constrained to the predefined processing functions. Lastly, dual prediction is studied for reducing data transmissions by maintaining the same learning model on both the sender and receiver sides. In particular, we study and prototype least mean squares (LMS) as the dual-prediction algorithm. Our implementations are based on MQTT, and the benefits are shown and evaluated via experiments using real IoT data.
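The dual-prediction idea lends itself to a compact illustration. The sketch below is a minimal Python model of it, assuming a short adaptive filter and an error threshold chosen purely for illustration; for numerical stability in this toy setting it uses the normalized LMS update rather than plain LMS, and none of the parameters are taken from the thesis.

```python
# Behavioral sketch of dual prediction: sender and receiver keep identical
# predictors; a sample is transmitted only when the sender's prediction error
# exceeds a threshold, otherwise the receiver substitutes its own prediction.
# The update rule is normalized LMS to keep the toy example stable; filter
# order, step size, and threshold are illustrative values only.

import random
from typing import List, Tuple


class LmsPredictor:
    def __init__(self, order: int = 4, mu: float = 0.5) -> None:
        self.mu = mu
        self.w = [0.0] * order          # adaptive filter weights
        self.history = [0.0] * order    # most recent `order` samples

    def predict(self) -> float:
        return sum(w * x for w, x in zip(self.w, self.history))

    def update(self, value: float) -> None:
        error = value - self.predict()
        power = sum(x * x for x in self.history) + 1e-9
        self.w = [w + self.mu * error * x / power
                  for w, x in zip(self.w, self.history)]
        self.history = [value] + self.history[:-1]


def dual_prediction(samples: List[float],
                    threshold: float = 0.5) -> Tuple[int, List[float]]:
    sender, receiver = LmsPredictor(), LmsPredictor()
    transmitted, reconstructed = 0, []
    for x in samples:
        if abs(x - sender.predict()) > threshold:
            transmitted += 1
            value = x                    # actual sample is sent over the network
        else:
            value = receiver.predict()   # receiver fills the gap locally
        sender.update(value)             # both sides update with the same value
        receiver.update(value)           # so the two models never diverge
        reconstructed.append(value)
    return transmitted, reconstructed


if __name__ == "__main__":
    data = [20.0 + 0.05 * t + random.uniform(-0.1, 0.1) for t in range(200)]
    sent, _ = dual_prediction(data)
    print(f"transmitted {sent} of {len(data)} samples")
```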
|
3 |
Evaluating a publish/subscribe proxy for HTTP. Zhang, Yuanhui. January 2013.
With the increasingly high speed of the Internet and its widespread usage, the current Internet architecture exhibits some problems. The publish/subscribe paradigm has been developed to support one of the most common patterns of communication. It makes “information” the center of communication and removes the “location-identity split” (i.e., that objects reside at specific locations with which you must communicate in order to access them). In this thesis project a publish/subscribe network is built and then used in the design, implementation, and evaluation of a publish/subscribe proxy for today’s HTTP-based communication. By using this proxy, users are able to use their existing web browser to send both HTTP requests and Publish/Subscribe Internet Routing Paradigm (PSIRP) requests. A publish/subscribe overlay is responsible for maintaining PSIRP content. The proxy enables web-browser clients to benefit from the publish/subscribe network without requiring them to change their behavior or even be aware that the content they want to access is being provided via the publish/subscribe overlay. The use of the overlay enables a user’s request to be satisfied by any copy of the content, potentially decreasing latency, reducing backbone network traffic, and reducing the load on the original content server. One of the aims of this thesis is to make more PSIRP content available. This is done by introducing a proxy that handles both HTTP and PSIRP requests and, having received content in an HTTP response, publishes this data as PSIRP-accessible content. The purpose is to foster the introduction and spread of content-based access.
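PSIRP specifics are beyond the abstract, but the proxy's core behavior can be sketched generically: try to satisfy a request from content already published under an identifier, and otherwise fetch over HTTP and publish the response so later requests can be served from the overlay. In the rough Python sketch below, an in-memory dictionary stands in for the publish/subscribe overlay and a SHA-256 of the URL stands in for a content identifier; both are illustrative assumptions, not how PSIRP names or transports content.

```python
# Illustrative stand-ins only: an in-memory dict plays the role of the
# publish/subscribe overlay, and a SHA-256 of the URL plays the role of a
# content identifier; neither reflects PSIRP's actual naming or transport.

import hashlib
import urllib.request
from typing import Dict, Optional


class OverlayStub:
    """Any published copy can satisfy later requests for the same identifier."""

    def __init__(self) -> None:
        self._store: Dict[str, bytes] = {}

    def publish(self, rid: str, data: bytes) -> None:
        self._store[rid] = data

    def fetch(self, rid: str) -> Optional[bytes]:
        return self._store.get(rid)


class HttpPubSubProxy:
    def __init__(self, overlay: OverlayStub) -> None:
        self.overlay = overlay

    def handle(self, url: str) -> bytes:
        rid = hashlib.sha256(url.encode("utf-8")).hexdigest()  # stand-in identifier
        cached = self.overlay.fetch(rid)
        if cached is not None:
            return cached                                      # served from the overlay
        with urllib.request.urlopen(url) as response:          # fall back to plain HTTP
            data = response.read()
        self.overlay.publish(rid, data)                        # make it overlay-accessible
        return data


if __name__ == "__main__":
    proxy = HttpPubSubProxy(OverlayStub())
    first = proxy.handle("http://example.com/")    # fetched over HTTP, then published
    second = proxy.handle("http://example.com/")   # served from the overlay copy
    print(len(first), first == second)
```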
|
4 |
PHOENIX: A Premise to Reinforce Heterogeneous and Evolving Internet Architectures with Exemplary Applications. Adhatarao, Sripriya Srikant. 11 September 2020.
No description available.
|
5 |
A CoAP Publish-Subscribe Broker for More Resource-Efficient Wireless Sensor Networks. Oudishu, Ramcin; Gärdborn, Pethrus. January 2018.
With the rapid development of the Internet of Things, Wireless Sensor Networks (WSNs) are increasingly deployed all over the world, providing data that can help increase sustainable development. Currently in Uppsala, Sweden, the GreenIoT project monitors air pollution using WSNs. The resource-constrained nature of WSNs demands that special care be taken in the design of communication models and communication protocols. The publish-subscribe (pub/sub) model suits WSNs very well since it puts an intermediary, the broker server, between sensor nodes and clients, thus alleviating the workload of the sensor nodes. The IETF (Internet Engineering Task Force) is currently in the process of standardizing a pub/sub extension to the Constrained Application Protocol (CoAP). Since the extension is such a recent addition to CoAP and not yet standardized, there are very few actual implementations of it and little is known about how it would work in practice. The GreenIoT project is considering replacing its current pub/sub broker with the CoAP pub/sub broker, since the underlying implementation is likely to be more energy efficient and the standardizing organization behind CoAP is the well-esteemed IETF. On a general level, this report offers an investigation of the problems and challenges faced when implementing the CoAP pub/sub extension with respect to design choices, implementation, and protocol ambiguities. More specifically, a CoAP pub/sub broker is implemented for the GreenIoT project. After carefully analyzing the CoAP protocol and the CoAP pub/sub draft, as well as other necessary protocols, and then deciding which programming language and which existing CoAP library to use, a broker server was implemented and tested iteratively as the work proceeded. The implementation gave rise to several questions regarding the pub/sub draft, which are also discussed in the report.
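For readers unfamiliar with the draft, its broker-side operations (creating a topic, publishing to it, and subscribing for observe-style notifications) can be modeled without a CoAP stack at all. The Python sketch below is a behavioral stand-in only: the method names loosely mirror the draft's CREATE/PUBLISH/SUBSCRIBE interactions, and details such as content formats, observe cancellation, and CoAP response codes are deliberately omitted.

```python
# In-memory model of the broker operations described in the CoAP pub/sub draft
# (CREATE/PUBLISH/SUBSCRIBE); this is a behavioral sketch, not a CoAP stack.

from typing import Callable, Dict, List


class BrokerSketch:
    def __init__(self) -> None:
        self.topics: Dict[str, bytes] = {}
        self.observers: Dict[str, List[Callable[[bytes], None]]] = {}

    def create(self, topic: str) -> None:
        """CREATE: roughly a POST to the broker's topic collection."""
        self.topics.setdefault(topic, b"")
        self.observers.setdefault(topic, [])

    def publish(self, topic: str, payload: bytes) -> None:
        """PUBLISH: roughly a PUT on the topic resource; notifies all observers."""
        if topic not in self.topics:
            raise KeyError("topic not created")
        self.topics[topic] = payload
        for notify in self.observers[topic]:
            notify(payload)

    def subscribe(self, topic: str, callback: Callable[[bytes], None]) -> None:
        """SUBSCRIBE: roughly a GET with the Observe option."""
        self.observers.setdefault(topic, []).append(callback)


if __name__ == "__main__":
    broker = BrokerSketch()
    broker.create("sensors/pm10")
    broker.subscribe("sensors/pm10", lambda p: print("notified:", p))
    broker.publish("sensors/pm10", b"23.1")   # a sensor node publishes a reading
```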
|
6 |
A Framework for Interoperability on the United States Electric Grid Infrastructure. Laval, Stuart. 01 January 2015.
Historically, the United States (US) electric grid has been a stable, one-way power-delivery infrastructure that supplies centrally generated electricity to predictable consumer demand. However, the US electric grid is now undergoing a huge transformation from a simple, static system to a complex, dynamic network that is starting to interconnect intermittent distributed energy resources (DERs), portable electric vehicles (EVs), and load-altering home automation devices, which create bidirectional power flow or stochastic load behavior. In order for this grid of the future to effectively embrace the high penetration of these disruptive and fast-responding digital technologies without compromising its safety, reliability, and affordability, plug-and-play interoperability within the field area network must be enabled between operational technology (OT), information technology (IT), and telecommunication assets, so that they integrate seamlessly and securely into the electric utility's operations and planning systems in a modular, flexible, and scalable fashion. This research proposes a potential approach to simplifying the translation and contextualization of operational data on the electric grid without routing it to the utility datacenter for a control decision. This methodology integrates modern software technology from other industries, along with utility industry-standard semantic models, to overcome information silos and enable interoperability. By leveraging industrial-engineering tools, a framework is also developed to help devise a reference architecture and a use-case application process that is applied and validated at a US electric utility.
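As a loose illustration of the kind of edge-side translation the abstract refers to, the Python sketch below maps a vendor-specific reading into a common, semantic-model-style representation before any round trip to the datacenter; the field names and mapping are hypothetical and are not drawn from the thesis or from any particular semantic model.

```python
# Illustrative only: the field names and the "common model" below are
# hypothetical stand-ins for a utility semantic model; the point is that
# translation and contextualization happen at the edge, before any round
# trip to the utility datacenter.

from datetime import datetime, timezone
from typing import Any, Dict

VENDOR_FIELD_MAP = {            # vendor-specific tag -> common-model name
    "V_A": "phaseA_voltage",
    "I_A": "phaseA_current",
    "TSTAMP": "timestamp",
}


def translate_reading(raw: Dict[str, Any], device_id: str) -> Dict[str, Any]:
    """Map a vendor-specific reading into a common semantic model at the edge."""
    normalized = {VENDOR_FIELD_MAP.get(k, k): v for k, v in raw.items()}
    normalized["device"] = device_id                      # add local context
    normalized.setdefault("timestamp",
                          datetime.now(timezone.utc).isoformat())
    return normalized


if __name__ == "__main__":
    raw = {"V_A": 7210.4, "I_A": 102.3}                   # hypothetical field data
    print(translate_reading(raw, device_id="feeder-12/recloser-3"))
```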
|