321
Economic issues in distributed computing
Huang, Yun, 28 August 2008
On the Internet, an essential characteristic of electronic commerce is the integration of large-scale computer networks with business practices. Commercial servers are connected through open and complex communication technologies, and online consumers access their services with largely unpredictable behavior. Both, along with the e-commerce infrastructure itself, are vulnerable to cyber attacks. Among the many network security problems, the Distributed Denial-of-Service (DDoS) attack is a singular illustration of the risks facing commercial network applications: with a flood of junk traffic, virtually anyone on the Internet can shut down an e-commerce website. Cooperative technological solutions for DDoS attacks are already available, yet the organizations best positioned to implement them lack the incentive to do so, and the victims of DDoS attacks have no effective way to motivate those organizations. Chapter 1 discusses two technological solutions to DDoS attacks, cooperative filtering and cooperative traffic smoothing by caching, and analyzes the broken incentive chain in each. As a remedy, I propose usage-based pricing and Capacity Provision Networks, which enable victims to disseminate enough incentive along attack paths to stimulate cooperation against DDoS attacks. Chapter 2 addresses possible DDoS attacks on the wireless Internet, including the Wireless Extended Internet, the Wireless Portal Network, and the Wireless Ad Hoc Network. I propose a conceptual model for defending against DDoS attacks on the wireless Internet that incorporates both cooperative technological solutions and economic incentive mechanisms built on usage-based fees. Cost-effectiveness is addressed through an illustrative implementation scheme using Policy-Based Networking (PBN). By investigating both the technological and the economic difficulties in defending against the DDoS attacks that have plagued the wired Internet, the aim is to foster the development of the wireless Internet infrastructure as a more secure and efficient platform for mobile commerce. To avoid centralized resources and performance bottlenecks, online peer-to-peer communities and online social networks have become increasingly popular. In particular, the recent surge of online peer-to-peer communities has led to exponential growth in the sharing of user-contributed content, bringing profound changes to business and economic practices. Understanding the dynamics and sustainability of such peer-to-peer communities has important implications for business managers. In Chapter 3, I explore the structure of online sharing communities from a dynamic-process perspective and build an evolutionary game model to capture the dynamics of online peer-to-peer communities. Using online music-sharing data collected from an IRC channel over more than five years, I empirically investigate the model underlying the dynamics of the music-sharing community. The empirical results show strong support for the evolutionary process of the community: the two major parties, sharers and downloaders, influence each other's evolution within the community. These dynamics reveal the mechanism through which peer-to-peer communities sustain themselves and thrive in a constantly changing environment.
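The evolutionary game model of Chapter 3 can be illustrated with a minimal replicator-dynamics sketch in which sharers and downloaders influence each other's growth. The two-strategy setup and all payoff values below are illustrative assumptions, not the model fitted to the IRC data.

```python
import numpy as np

# Minimal replicator-dynamics sketch of a sharing community.
# Strategies: 0 = sharer, 1 = downloader (free-rider).
# The payoff matrix is an illustrative assumption, not the fitted model.
PAYOFF = np.array([
    [2.0, 0.5],   # sharer's payoff against (sharer, downloader)
    [3.0, 0.1],   # downloader's payoff against (sharer, downloader)
])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator equation dx_i = x_i (f_i - f_bar)."""
    fitness = PAYOFF @ x            # expected payoff of each strategy
    avg = x @ fitness               # population-average payoff
    return x + dt * x * (fitness - avg)

x = np.array([0.5, 0.5])            # initial population shares
for _ in range(5000):
    x = replicator_step(x)
print("long-run shares (sharers, downloaders):", x.round(3))
```

With these assumed payoffs the population settles at an interior equilibrium (roughly 29% sharers), a simple way of seeing how the two parties' dynamics are coupled rather than one side driving the other out.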
322
Distributed processing techniques for parameter estimation and efficient data-gathering in wireless communication and sensor networks
Bogdanovic, Nikola, 07 May 2015
This dissertation deals with distributed processing techniques for parameter estimation and efficient data-gathering in wireless communication and sensor networks.
With the aim of enabling energy-aware, low-complexity distributed implementations of the estimation task, several useful optimization techniques, generally yielding linear estimators, have been derived in the literature. To date, most works have assumed that all nodes are interested in estimating the same vector of global parameters. This scenario can be viewed as a special case of a more general problem in which the nodes of the network have overlapping but different estimation interests.
Motivated by this fact, this dissertation introduces a new Node-Specific Parameter Estimation (NSPE) formulation in which nodes are interested in estimating parameters of local, common, and/or global interest. We consider a setting where the NSPE interests partially overlap, while the non-overlapping parts may be arbitrarily different. This setting can model several applications, such as cooperative spectrum sensing in cognitive radio networks and power-system state estimation in smart grids. Unsurprisingly, the effectiveness of any distributed adaptive implementation depends on how cooperation is established at the network level, as well as on the processing strategies adopted at the node level.
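As a hedged illustration of the NSPE setting, each node k can be thought of as minimizing a mean-square cost over a node-specific vector that stacks blocks of global, common, and local interest; the notation below is an assumed reconstruction, not necessarily the dissertation's exact formulation.

```latex
% Node k observes d_k(i) = u_{k,i} w_k^o + v_k(i) and minimizes
J_k(w_k) = \mathbb{E}\,\bigl| d_k(i) - u_{k,i}\, w_k \bigr|^2,
\qquad
w_k = \operatorname{col}\bigl\{ w^{g},\; w^{c_k},\; w^{\ell_k} \bigr\},
% where w^g is of global interest (all nodes), w^{c_k} is common to a
% subset of nodes containing k, and w^{\ell_k} is of purely local interest.
```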
At the network level, this dissertation is concerned with incremental and diffusion cooperation schemes in NSPE settings. In the incremental mode, each node communicates with only one neighbor, and the data are processed in a cyclic manner throughout the network at each time instant. In the diffusion mode, each node cooperates with a set of neighboring nodes at each time step.
Based on Least-Mean-Squares (LMS) and Recursive-Least-Squares (RLS) learning rules employed at the node level, we derive novel distributed estimation algorithms that undertake distinct but coupled optimization processes in order to obtain adaptive solutions to the considered NSPE setting.
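For the diffusion mode with LMS learning, an adapt-then-combine sketch is shown below. The network size, step size, and uniform combination weights are illustrative assumptions, and the node-specific coupling of the NSPE algorithms is omitted, so this is plain diffusion LMS for a shared parameter vector rather than the dissertation's full scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 6, 4                       # nodes, filter length (assumed)
w_true = rng.standard_normal(M)   # common unknown parameter vector
A = np.ones((N, N)) / N           # uniform combination weights (an assumption)
mu = 0.01                         # LMS step size

w = np.zeros((N, M))              # one estimate per node
for i in range(2000):
    # Adapt: each node runs one LMS update on its own streaming data.
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                      # regressor at node k
        d = u @ w_true + 0.05 * rng.standard_normal()   # noisy measurement
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # Combine: each node averages its neighbors' intermediate estimates.
    w = A @ psi

print("mean estimation error:", np.linalg.norm(w - w_true, axis=1).mean())
```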
Detailed analyses of the mean convergence and the steady-state mean-square performance are provided. Finally, the resulting performance gains are illustrated in the context of cooperative spectrum sensing in cognitive radio networks. Another fundamental problem considered in this dissertation is the data-gathering problem, sometimes called sensor reachback, which arises in Wireless Sensor Networks (WSNs). In particular, the problem concerns the transmission of the acquired observations to a data-collecting node, often referred to as the sink node, which has greater processing capability and more available power than the other nodes. Here, we focus on WSNs deployed for structural health monitoring.
In general, the sensor reachback problem in such a network presents several difficulties. First, the amount of data generated by the sensor nodes may be immense, because structural monitoring applications must transfer relatively large amounts of dynamic-response measurement data. Furthermore, the assumption that every sensor has a direct, line-of-sight link to the sink does not hold for these structures.
To reduce the amount of data that must be transmitted to the sink node, the correlation among measurements of neighboring nodes can be exploited. One approach to exploiting spatial data correlation is Distributed Source Coding (DSC); a DSC technique can achieve lossless compression of multiple correlated sensor outputs without establishing any communication links between the nodes. Other approaches employ lossy techniques that take advantage of temporal correlations in the data and/or suitable stochastic modeling of the underlying processes. In this dissertation, we present a channel-aware lossless extension of sequential decoding based on cooperation between the nodes. We also present a cooperative communication protocol based on adaptive spatio-temporal prediction; as a more practical approach, it allows lossy reconstruction of the transmitted data while offering considerable energy savings in transmissions toward the sink.
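The lossy, prediction-based protocol can be sketched as follows: sensor and sink run identical adaptive predictors, and the sensor transmits only when its prediction error exceeds a tolerance. Everything here (the LMS predictor, the tolerance, the test signal) is an illustrative assumption rather than the dissertation's exact protocol.

```python
import numpy as np

def run_node(signal, order=4, mu=0.05, tol=0.1):
    """Transmit a sample only when the shared LMS predictor misses by > tol."""
    w = np.zeros(order)                  # predictor mirrored at sensor and sink
    history = np.zeros(order)
    sent, reconstruction = 0, []
    for x in signal:
        x_hat = w @ history              # both sides can compute this
        if abs(x - x_hat) > tol:         # sensor transmits the true sample
            sent += 1
            x_rec = x
        else:                            # sink falls back on the prediction
            x_rec = x_hat
        reconstruction.append(x_rec)
        # Identical update on both sides, using the *reconstructed* sample
        # so the sensor and sink predictors never diverge.
        w += mu * (x_rec - x_hat) * history
        history = np.roll(history, 1)
        history[0] = x_rec
    return sent, np.array(reconstruction)

t = np.linspace(0, 20, 2000)
signal = np.sin(t) + 0.02 * np.random.default_rng(1).standard_normal(t.size)
sent, rec = run_node(signal)
print(f"transmitted {sent}/{signal.size} samples, "
      f"RMS error {np.sqrt(np.mean((signal - rec) ** 2)):.4f}")
```

The design point this illustrates is the lossy trade-off: reconstruction error stays bounded by the tolerance while the number of transmissions toward the sink drops sharply for slowly varying measurements.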
323
Collaborative design in electromagnetics
Almaghrawi, Ahmed Almaamoun, January 2007
We present a system architecture and a set of control techniques that allow heterogeneous software design tools to collaborate intelligently and automatically. One of the architecture's distinguishing features is the ability to perform concurrent processing. Systems based on this architecture can effectively solve large electromagnetic analysis problems, particularly those that involve loose coupling between several areas of physics. The architecture can accept any existing software analysis tool without requiring modification or customization of the tool. This characteristic stems in part from our use of a neutral virtual representation for storing problem data, including geometry and material definitions. We construct a system based on this architecture, using several circuit and finite-element analysis tools, and use it to perform electromagnetic analyses of several different devices. Our results show that the architecture and techniques do allow practical problems to be solved effectively by heterogeneous tools.
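The neutral virtual representation can be sketched with an adapter pattern: each wrapper translates a shared, tool-independent problem description into its tool's native input, so the tools themselves need no modification. All class and field names here are illustrative assumptions, not the system's actual interfaces.

```python
from dataclasses import dataclass

# Sketch of a neutral problem representation consumed by heterogeneous
# tool adapters. Names and fields are illustrative assumptions.
@dataclass
class NeutralProblem:
    geometry: dict      # e.g. {"shape": "coil", "turns": 120}
    materials: dict     # e.g. {"core": "iron", "mu_r": 4000}

class ToolAdapter:
    """Wraps one existing analysis tool without modifying it."""
    def translate(self, problem: NeutralProblem) -> str:
        raise NotImplementedError
    def run(self, problem: NeutralProblem) -> None:
        native_input = self.translate(problem)
        print(f"{type(self).__name__} running on: {native_input}")

class CircuitToolAdapter(ToolAdapter):
    def translate(self, p):
        return f"netlist(turns={p.geometry['turns']})"

class FemToolAdapter(ToolAdapter):
    def translate(self, p):
        return f"mesh(shape={p.geometry['shape']}, mu_r={p.materials['mu_r']})"

problem = NeutralProblem({"shape": "coil", "turns": 120},
                         {"core": "iron", "mu_r": 4000})
for tool in (CircuitToolAdapter(), FemToolAdapter()):
    tool.run(problem)   # loosely coupled tools could run concurrently
```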
324
Conceptual design methodology of distributed intelligence large scale systems
Nairouz, Bassem R., 20 September 2013
Distributed intelligence systems are gaining dominance in the field of large-scale complex systems. These systems are characterized by nonlinear behavior patterns that can be predicted only through simulation-based engineering. In addition, the autonomy, intelligence, and reconfiguration capabilities required by certain systems introduce obstacles that add another layer of complexity. Yet no standard process exists for the design of such systems. This research presents a design methodology that focuses on distributed control architectures while concurrently considering the systems design process. The methodology has two major components. First, it introduces a hybrid design process based on the fusion of the control-architecture and conceptual-system design processes. Second, it develops a control-architecture metamodel that draws a distinction between control configurations and control methods, enabling a standard representation of a wide spectrum of control-architecture frameworks.
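A minimal sketch of how such a metamodel might separate the control configuration (which controllers exist and how they are wired) from the control methods (what law each controller runs); the classes and fields are illustrative assumptions, not the research's actual metamodel.

```python
from dataclasses import dataclass, field

# Hedged sketch of the configuration/method distinction; all names are
# illustrative assumptions.
@dataclass
class ControlMethod:
    name: str                      # e.g. "PID", "consensus", "market-based"
    parameters: dict = field(default_factory=dict)

@dataclass
class ControllerNode:
    node_id: str
    method: ControlMethod          # the law this controller runs

@dataclass
class ControlConfiguration:
    topology: str                  # e.g. "centralized", "hierarchical"
    nodes: list = field(default_factory=list)
    links: list = field(default_factory=list)   # (sender, receiver) pairs

config = ControlConfiguration(
    topology="hierarchical",
    nodes=[ControllerNode("supervisor", ControlMethod("market-based")),
           ControllerNode("agent-1", ControlMethod("PID", {"kp": 1.2}))],
    links=[("supervisor", "agent-1")],
)
print(config.topology, [n.method.name for n in config.nodes])
```

Separating the two lets the same wiring be re-evaluated under different control laws (and vice versa), which is what makes a standard representation across architecture frameworks possible.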
325
Building a multi-tier enterprise system utilizing Visual Basic, MTS, ASP, and MS SQL
Poti Allison, Tamara S., January 2001
Multi-tier enterprise systems consist of more than two distributed tiers, and their design is considerably more involved than that of two-tier systems. Not all systems should be designed as multi-tier, but when the decision to build one is made, this type of design brings benefits. CSCources is a system that tracks computer science course information; its requirements indicate that it should be a multi-tier system with three tiers: client, business, and data. Microsoft tools are used throughout. Visual Basic (VB) was used to build the client tier, which physically resides on the client machine. VB was also used to create the business tier, which consists of a business layer and a data layer: the business layer contains most of the business logic for the system, and the data layer communicates with the data tier. Microsoft SQL Server (MS SQL) is used for the data store. The database contains several tables and stored procedures; the stored procedures add, edit, update, and delete records in the database. Microsoft Transaction Server (MTS) controls modifications to the database, using the transaction and security features available in the MTS environment. The business tier and data tier may or may not reside on the same physical computer or server. Active Server Pages (ASP) were built that access the business tier to retrieve the information displayed on a web page. The costs of designing and building a distributed system, upgrading the system, and handling errors are examined. / Ball State University, Muncie, IN 47306. Department of Computer Science.
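As a language-neutral sketch of the three-tier call chain (the actual system used VB, MTS, ASP, and MS SQL, so this Python outline is only an analogy with assumed names): the client calls only the business layer, which enforces business rules and delegates persistence to a data layer standing in for the stored procedures.

```python
# Illustrative three-tier call chain; classes and method names are assumed,
# not the actual CSCources code.
class DataLayer:
    """Would wrap stored-procedure calls (add/edit/update/delete)."""
    def __init__(self):
        self.rows = {}
    def sp_add_course(self, cid, title):    # stands in for a stored procedure
        self.rows[cid] = title

class BusinessLayer:
    """Holds business logic; the only layer the client may call."""
    def __init__(self, data: DataLayer):
        self.data = data
    def add_course(self, cid, title):
        if not cid or not title:             # a business rule enforced here
            raise ValueError("course id and title are required")
        self.data.sp_add_course(cid, title)  # would run in an MTS transaction

class Client:
    def __init__(self, business: BusinessLayer):
        self.business = business
    def submit(self, cid, title):
        self.business.add_course(cid, title)

Client(BusinessLayer(DataLayer())).submit("CS101", "Intro to Programming")
```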
326
Contech: a shared memory parallel program analysis framework
Vassenkov, Phillip, 13 January 2014
We are in the era of multicore machines, where thread-level parallelism must be exploited for programs to run better, smarter, faster, and more efficiently. To increase instruction-level parallelism, processors and compilers perform heavy dataflow analyses between instructions. However, little work has been done on inter-thread dataflow analysis. To pave the way toward new ways of conserving resources across a variety of domains (execution speed, chip die area, power efficiency, and computational throughput), we propose a novel framework, termed Contech, for analyzing multithreaded programs in terms of their communication and execution patterns. We focus on shared-memory programs rather than message-passing programs, since their communication and execution patterns are more difficult to analyze. Discovering the patterns of shared-memory programs has the potential to let general-purpose computing machines turn architectural tricks on or off according to application-specific features. Contech's design is modular, so a large variety of information can be gleaned from an architecturally independent representation of the program under examination.
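A toy version of the kind of inter-thread communication analysis such a framework enables: given a trace of (thread, op, address) events, count which threads read values written by other threads. The trace format is an illustrative assumption, not Contech's actual representation.

```python
from collections import Counter

# Toy inter-thread communication analysis over a shared-memory trace.
# The (thread_id, op, address) format is an illustrative assumption.
trace = [
    (0, "w", 0x10), (1, "r", 0x10),   # thread 1 reads thread 0's write
    (1, "w", 0x20), (0, "r", 0x20),   # and vice versa
    (0, "w", 0x10), (0, "r", 0x10),   # same-thread reuse: not communication
]

last_writer = {}
edges = Counter()                     # (producer, consumer) -> count
for tid, op, addr in trace:
    if op == "w":
        last_writer[addr] = tid
    elif op == "r" and addr in last_writer and last_writer[addr] != tid:
        edges[(last_writer[addr], tid)] += 1

print("communication edges:", dict(edges))
# -> {(0, 1): 1, (1, 0): 1}
```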
327
The functionality of spatial and time domain artificial neural models
Capanni, Niccolo Francesco, January 2006
This thesis investigates the functionality of the units used in connectionist Artificial Intelligence systems. Artificial Neural Networks form the foundation of the research, and their units, artificial neurons, are first compared with alternative models. This initial work is mainly in the spatial domain and introduces a new neural model, termed the Taylor Series neuron, designed to be flexible enough to assume most mathematical functions. The unit is based on power-series theory, and a specific implementation of the Taylor Series neuron is demonstrated. These neurons are particularly useful in evolutionary networks, as they allow complexity to increase without adding units. Training is achieved via various traditional and derived methods based on the Delta Rule, backpropagation, genetic algorithms, and associated evolutionary techniques. This new neural unit is presented as a controllable and more highly functional alternative to previous models. The work on the Taylor Series neuron then moved into time-domain behaviour and, through the investigation of neural oscillators, led to an examination of single-celled intelligence, from which the later work developed. Connectionist approaches to Artificial Intelligence are almost always based on Artificial Neural Networks; however, another route towards Parallel Distributed Processing was introduced here, inspired by the intelligence displayed by single-celled creatures called Protoctists (Protists). A new system based on networks of interacting proteins was introduced. These networks were tested on pattern-recognition and control tasks in the time domain and proved more flexible than most neuron models; they were trained using a Genetic Algorithm and a derived backpropagation algorithm. Termed "Artificial BioChemical Networks" (ABNs), they are presented as an alternative approach to connectionist systems.
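A single-input Taylor Series neuron can be sketched as a truncated power series whose coefficients are trained with the delta rule; the series order, learning rate, and target function below are illustrative assumptions rather than the thesis's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

class TaylorNeuron:
    """Single-input neuron computing y = sum_k w_k * x**k (truncated power series)."""
    def __init__(self, order=5):
        self.w = np.zeros(order + 1)
    def forward(self, x):
        self.powers = x ** np.arange(self.w.size)   # [1, x, x^2, ...]
        return self.w @ self.powers
    def delta_rule(self, target, output, lr=0.01):
        # Delta rule: move each coefficient along its input feature.
        self.w += lr * (target - output) * self.powers

neuron = TaylorNeuron(order=5)
for _ in range(20000):
    x = rng.uniform(-1, 1)
    y = neuron.forward(x)
    neuron.delta_rule(np.sin(2 * x), y)        # learn sin(2x) on [-1, 1]

xs = np.linspace(-1, 1, 5)
print([round(float(neuron.forward(x) - np.sin(2 * x)), 3) for x in xs])
```

Raising the order increases the neuron's functional capacity without adding units, which is the property that makes the model attractive for evolutionary networks.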
328
A distributed Monte Carlo method for initializing state vector distributions in heterogeneous smart sensor networks
Borkar, Milind, 08 January 2008
The objective of this research is to demonstrate how an underlying system's state-vector distribution can be determined in a distributed heterogeneous sensor network with reduced subspace observability at the individual nodes. We show how the network as a whole is capable of observing the target state vector even if the individual nodes cannot observe it locally. The initialization algorithm presented in this work can generate the initial state-vector distribution for networks with a variety of sensor types, as long as the measurements at the individual nodes are known functions of the target state vector. Initialization is accomplished through a novel distributed implementation of the particle filter involving serial particle-proposal and weighting strategies, which can be carried out without sharing raw data between individual nodes in the network. The algorithm can handle missed detections and clutter, and can compensate for delays introduced by processing, communication, and finite signal-propagation velocities. If multiple events of interest occur, their individual states can be initialized simultaneously without requiring explicit data association across nodes. The resulting distributions can be used to initialize a variety of distributed joint-tracking algorithms; in such applications, the algorithm can also initialize additional target tracks as targets come and go while the system operates with multiple targets under track.
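The serial proposal-and-weighting idea can be sketched for two heterogeneous nodes, one measuring range and one measuring bearing: neither can observe the 2-D position alone, but passing the particle cloud from node to node and multiplying in each local likelihood recovers it without sharing raw measurements. The sensor models and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.array([3.0, 4.0])                 # unknown 2-D position
particles = rng.uniform(-10, 10, size=(5000, 2))
weights = np.ones(len(particles)) / len(particles)

def range_likelihood(p, z, sigma=0.3):        # node A: range-only sensor
    r = np.linalg.norm(p, axis=1)
    return np.exp(-0.5 * ((r - z) / sigma) ** 2)

def bearing_likelihood(p, z, sigma=0.05):     # node B: bearing-only sensor
    b = np.arctan2(p[:, 1], p[:, 0])
    return np.exp(-0.5 * ((b - z) / sigma) ** 2)

# Each node observes locally (neither measurement fixes x and y alone)...
z_range = np.linalg.norm(target) + 0.3 * rng.standard_normal()
z_bearing = np.arctan2(target[1], target[0]) + 0.05 * rng.standard_normal()

# ...and weights the shared particle cloud in series; only weighted
# particles travel between nodes, never the raw measurements.
weights *= range_likelihood(particles, z_range)     # node A's contribution
weights *= bearing_likelihood(particles, z_bearing) # node B's contribution
weights /= weights.sum()

print("posterior mean:", (weights[:, None] * particles).sum(axis=0).round(2))
```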
329
An empirical investigation of SSDL
Fornasier, Patric, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2007
The SOAP Service Description Language (SSDL) is a SOAP-centric language for describing Web Service contracts. SSDL focuses on message abstraction as the building block for creating service-oriented applications and provides an extensible range of protocol frameworks that can be used to describe and formally model Web Service interactions. SSDL's natural alignment with service-oriented design principles intuitively suggests that it encourages the creation of applications that adhere to this architectural paradigm. Given the lack of tools and empirical data for using SSDL in Web Services-based SOAs, we identified the need to investigate its practicability and usefulness through empirical work. To that end we developed Soya, a programming model and runtime environment for creating and executing SSDL-based Web Services. On the one hand, Soya provides straightforward programming abstractions that foster message-oriented thinking; on the other, it augments contemporary tooling (i.e., Windows Communication Foundation) with SSDL-related runtime functionality and semantics. In this thesis, we describe the design and architecture of Soya and show how it makes it possible to use SSDL as an alternative and powerful metadata language without imposing unrealistic burdens on application developers. In addition, we use Soya and SSDL in a case study that provides a set of initial empirical results on SSDL's strengths and drawbacks. In summary, our work serves as a knowledge framework for better understanding message-oriented Web Service development and demonstrates SSDL's practicability in terms of implementation and usability.
330
Eidolon: adapting distributed applications to their environment
Potts, Daniel Paul, Computer Science & Engineering, Faculty of Engineering, UNSW, January 2008
Grids, multi-clusters, NUMA systems, and ad-hoc collections of distributed computing devices all present diverse environments in which distributed applications can run. Because of the diversity of features these environments provide, a distributed application must be specifically designed and optimised for the environment in which it is deployed if it is to perform well. Such optimisations generally affect the application's communication structure, its consistency protocols, and its communication protocols. This thesis explores approaches to improving the ability of distributed applications to share consistent data efficiently, and with improved functionality, over wide-area and diverse environments. We identify a fundamental separation of concerns for distributed applications and use it to propose a new model, called the view model, which is a hybrid, cost-conscious approach to remote data sharing. The view model provides the mechanisms and interconnects needed to improve the flexibility and functionality of data sharing without defining new programming models or protocols. We employ the view model to adapt distributed applications to their run-time environment without modifying the application or inventing new consistency or communication protocols, and we explore the use of view-model properties in several programming models and their consistency protocols. In particular, we focus on programming models used in distributed-shared-memory middleware and applications, as these can benefit significantly from the properties of the view model. Our evaluation demonstrates the benefits, side effects, and potential shortcomings of the view model by comparing it with traditional models when running distributed applications across several multi-cluster scenarios. In particular, we show that the view model improves the performance of distributed applications while reducing resource usage and communication overheads.