About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Interface conversion between CCITT recommendations X.21 and V.24

Van der Harst, Hubert January 1983 (has links)
The subject of this thesis is conversion between the interfaces specified by CCITT recommendations X.21 and V.24. The evolution of public data networks against the background of data communications using the telephone network is outlined. The DTE/DCE interface is identified as being of particular importance and is explained in terms of the ISO model for Open Systems Interconnection (OSI). CCITT recommendation X.21 is described in detail using the OSI layered approach. Finite state machine (FSM) terminology is defined and the concept of an interface machine introduced. CCITT recommendation V.24 is described in terms of the physical layer of the OSI model; only those aspects of V.24 relevant to the subject of this thesis are examined. Interface conversion between X.21 and V.24 is discussed in detail and the design of devices to perform the conversion described. A microprocessor-based translator performing interface conversion between a V.24 DTE and an X.21 DCE for switched-circuit use is designed using the FSM approach. A preliminary model of such a translator, implemented on a development system, is described; its hardware and software are outlined and areas for further work identified.
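The FSM approach described above lends itself to a compact illustration. The sketch below models an interface machine as a transition table mapping events seen on one interface to actions driven on the other; the state and event names are hypothetical simplifications for illustration, not the thesis's actual design or the CCITT-defined interchange circuits.

```python
# A minimal finite-state-machine sketch of an interface translator in the
# spirit of the thesis's FSM approach. States and events are invented
# simplifications, not the real X.21/V.24 signal sets.

TRANSITIONS = {
    # (current state, event from the V.24 DTE) -> (next state, action toward the X.21 DCE)
    ("idle", "dtr_on"): ("call_request", "send_call_request"),
    ("call_request", "connect"): ("data_transfer", "enter_data_phase"),
    ("data_transfer", "dtr_off"): ("clearing", "send_clear_request"),
    ("clearing", "clear_confirm"): ("idle", "none"),
}

class InterfaceTranslator:
    """Maps events on a V.24-style interface onto an X.21-style call sequence."""

    def __init__(self):
        self.state = "idle"

    def on_event(self, event: str) -> str:
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} not valid in state {self.state!r}")
        self.state, action = TRANSITIONS[key]
        return action

if __name__ == "__main__":
    t = InterfaceTranslator()
    for ev in ["dtr_on", "connect", "dtr_off", "clear_confirm"]:
        print(ev, "->", t.on_event(ev), "| state:", t.state)
```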
2

Bitrate smoothing: a study on traffic shaping and analysis in data networks / Utjämning av datatakt: en studie av trafikformning och analys i datanät

Gratorp, Christina January 2007 (has links)
The thesis work behind this report is an exploratory study of how the transmission of media data over networks can be made more efficient. This can be achieved by adding, to the real-time protocol used for streaming media (the Real Time Protocol), certain extra information intended to even out the data rate. By attempting to send equal amounts of data during all consecutive time intervals of a session, the data rate at an arbitrary point in time is more likely to be the same as at earlier moments. A streaming server can interpret, handle, and forward data according to the instructions in the protocol header. The data rate is smoothed by sending later data in the stream ahead of time, during intervals that contain less data. The result is a smoothed data-rate curve, which in turn leads to more even use of the network capacity. The work includes a high-level analysis of the behaviour of streaming media, background theory on file construction and network technologies, and a proposal for how media files can be modified to fulfil the purpose of the thesis. The results and discussion can hopefully serve as a basis for a future implementation of an application intended to improve traffic flows over networks.
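As a rough illustration of the smoothing principle described in the abstract (pre-send later data during light intervals so every interval carries roughly the average load), here is a minimal sketch; it models only the arithmetic of pulling data forward, not the RTP-header mechanism the thesis actually proposes.

```python
# Toy bitrate smoother: flatten a per-interval byte schedule by sending
# later data early. Totals are preserved; the tail intervals may end up
# lighter, since data can only be moved earlier in time, never later.

def smooth(interval_bytes):
    target = sum(interval_bytes) / len(interval_bytes)  # ideal per-interval load
    remaining = list(interval_bytes)  # bytes still scheduled in each interval
    sent = []
    for i in range(len(remaining)):
        load = remaining[i]
        j = i + 1
        # If this interval is light, pull data forward from later intervals.
        while load < target and j < len(remaining):
            take = min(target - load, remaining[j])
            remaining[j] -= take
            load += take
            j += 1
        sent.append(load)
    return sent

if __name__ == "__main__":
    bursty = [90, 10, 50, 150, 20, 80]   # bytes per interval, bursty source
    print(smooth(bursty))                # roughly flat schedule, same total
```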
3

Analysis and optimisation of stable matching in combined input and output queued switches

Schweizer, Andreas January 2009 (has links)
Output queues in network switches are known to provide a suitable architecture for scheduling disciplines that need to provide quality of service (QoS) guarantees. However, today's memory technology is incapable of meeting the speed requirements. Combined input and output queued (CIOQ) switches have emerged as one alternative to address the problem of memory speed. When a switch of this architecture uses a stable matching algorithm to transfer packets across the switch fabric, an output queued (OQ) switch can be mimicked exactly with a speedup of only two. The use of a stable matching algorithm typically requires complex and time-consuming calculations to ensure the behaviour of an OQ switch is maintained. Stable matching algorithms are well studied in the area in which they originally appeared. However, little is presently known about how the stable matching algorithm performs in CIOQ switches and how key parameters are affected by switch size, traffic type and traffic load. Knowledge of how these conditions affect performance is essential to judge the practicability of an architecture and to provide useful information on how to design such switches. Until now, CIOQ switches were likely to be dismissed because of the high complexity the stable matching algorithm exhibits in its other applications; the characteristics of a stable matching algorithm in a CIOQ switch have not been thoroughly analysed. The principal goal of this thesis is to identify the conditions the stable matching algorithm encounters in a CIOQ switch under realistic operational scenarios. This thesis provides accurate mathematical models based on Markov chains to predict the values of key parameters that affect the complexity and runtime of a stable matching algorithm in CIOQ switches. The applicability of the models is then backed up by simulations. The results of the analysis quantify critical operational parameters, such as the size and number of preference lists and runtime complexity, providing detailed insights into switch behaviour and useful information for switch designs. Major conclusions to be drawn from this analysis include that the average values of the key parameters of the stable matching algorithm are feasibly small and do not strongly correlate with switch size, which is contrary to the behaviour of the stable matching algorithm in its original application. Furthermore, although these parameters have wide theoretical ranges, the mean values and standard deviations are found to be small under operational conditions. The results also suggest that the implementation becomes very versatile, as the completion time of the stable matching algorithm is not strongly correlated with the network traffic type; that is, the runtime is minimally affected by the nature of the traffic.
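The stable matching algorithm referred to here is, in its classic form, the Gale-Shapley algorithm from the original two-sided matching literature. A compact sketch follows, phrased as input ports proposing to output ports; the preference lists are invented for illustration, whereas a real CIOQ switch would derive them from queue state.

```python
# Classic Gale-Shapley stable matching, with input ports "proposing" to
# output ports. Preference lists here are made-up examples.

def gale_shapley(input_prefs, output_prefs):
    """Return a stable matching {input: output}."""
    rank = {o: {i: r for r, i in enumerate(prefs)} for o, prefs in output_prefs.items()}
    free = list(input_prefs)           # inputs not yet matched
    next_choice = {i: 0 for i in input_prefs}
    engaged = {}                       # output -> input
    while free:
        i = free.pop()
        o = input_prefs[i][next_choice[i]]
        next_choice[i] += 1
        if o not in engaged:
            engaged[o] = i
        elif rank[o][i] < rank[o][engaged[o]]:
            free.append(engaged[o])    # o prefers i; the previous match is free again
            engaged[o] = i
        else:
            free.append(i)             # o rejects i; i proposes to its next choice later
    return {i: o for o, i in engaged.items()}

if __name__ == "__main__":
    inputs = {"in0": ["out0", "out1"], "in1": ["out0", "out1"]}
    outputs = {"out0": ["in1", "in0"], "out1": ["in0", "in1"]}
    print(gale_shapley(inputs, outputs))   # {'in1': 'out0', 'in0': 'out1'}
```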
4

MySpeedTest: active and passive measurements of cellular data networks

Muckaden, Sachit 09 April 2013 (has links)
As the number and diversity of applications available to mobile users increase, there is a growing need for developers, network service providers, and users to understand how users perceive the network performance of these applications. MySpeedTest is a measurement tool that actively probes the network to determine not only TCP throughput and round-trip time, but also the proximity to popular content providers, IP packet delay variation, and loss. It also records other metadata that could affect user experience, such as signal strength, service provider, connection type, battery state, device type, manufacturer, time of day, and location. The tool also takes passive measurements of the applications installed on the device and their network usage. MySpeedTest is available on the Google Play Store and currently has 1300+ active users. This thesis presents the design and implementation of MySpeedTest, as well as the effect of metrics such as latency and IP packet delay variation on performance.
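For a flavour of what an active probe of this kind involves, the sketch below estimates round-trip time from repeated TCP connection setups. This is an illustrative stand-in, not MySpeedTest's actual methodology; the target host and the use of TCP connect time as an RTT proxy are assumptions made here.

```python
# Minimal active RTT probe: time repeated TCP handshakes to a host and
# report best-case and mean latency. A real tool would also measure
# throughput, delay variation, and loss.

import socket
import time

def tcp_connect_rtt(host: str, port: int = 443, samples: int = 5):
    """Estimate RTT (seconds) from repeated TCP connection setups."""
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=3):
            rtts.append(time.monotonic() - start)
    return min(rtts), sum(rtts) / len(rtts)   # best-case and mean RTT

if __name__ == "__main__":
    best, mean = tcp_connect_rtt("example.com")   # hypothetical target
    print(f"RTT: min {best*1000:.1f} ms, mean {mean*1000:.1f} ms")
```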
5

Hit and Bandwidth Optimal Caching for Wireless Data Access Networks

Akon, Mursalin January 2011 (has links)
For many data access applications, the availability of the most up-to-date information is a fundamental and rigid requirement. In spite of many technological improvements, in wireless networks the wireless channels (or bandwidth) are the scarcest, and hence most expensive, resource. Data access from remote sites depends heavily on these expensive resources. Owing to affordable smart mobile devices and the tremendous popularity of various Internet-based services, demand for data from these mobile devices is growing very fast. In many cases, it is becoming impossible for wireless data service providers to satisfy the demand using current network infrastructures. An efficient caching scheme at the client side can ease the problem by reducing the amount of data transferred over the wireless channels. However, an update event makes the associated cached data objects obsolete and useless for the applications. Frequencies of data update, as well as of data access, play essential roles in cache access and replacement policies. Intuitively, frequently accessed and infrequently updated objects should be given higher preference for preservation in the cache. However, modeling this intuition is challenging, particularly in a network environment where updates are injected by both the server and the clients, distributed all over the network. In this thesis, we make three inter-related contributions. Firstly, we propose two enhanced cache access policies. The access policies ensure strong consistency of the cached data objects through proactive or reactive interactions with the data server. At the same time, these policies collect information about the access and update frequencies of hosted objects to facilitate efficient deployment of the cache replacement policy. Secondly, we design a replacement policy which plays the decision-maker role when there is a new object to accommodate in a fully occupied cache. The statistical information collected by the access policies enables the decision-making process, which is modeled around the idea of preserving frequently accessed but less frequently updated objects in the cache. Thirdly, we show analytically that a cache management scheme combining the proposed replacement policy with either of the cache access policies guarantees an optimal amount of data transmission by increasing the number of effective hits in the cache system. Results from both the analysis and our extensive simulations demonstrate that the proposed policies outperform the popular Least Frequently Used (LFU) policy in terms of both effective hits and bandwidth consumption. Moreover, our flexible system model makes the proposed policies equally applicable to applications on existing 3G, as well as upcoming LTE, LTE Advanced and WiMAX, wireless data access networks.
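One simple way to model the stated intuition (prefer objects that are accessed often but updated rarely) is to score each cached object by the ratio of its access count to its update count and evict the lowest-scoring object when the cache is full. The sketch below is such a heuristic for illustration only; it is not the thesis's actual access or replacement policy.

```python
# Illustrative frequency-ratio cache: evict the object with the lowest
# accesses/(1 + updates) score when a new object must be accommodated.

class FrequencyRatioCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}     # key -> (value, accesses, updates)

    def _score(self, key):
        _, accesses, updates = self.store[key]
        return accesses / (1 + updates)   # +1 avoids division by zero

    def get(self, key):
        value, accesses, updates = self.store[key]
        self.store[key] = (value, accesses + 1, updates)
        return value

    def invalidate(self, key, new_value):
        """An update event: the cached copy was stale; count it against the object."""
        _, accesses, updates = self.store[key]
        self.store[key] = (new_value, accesses, updates + 1)

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self._score)   # evict lowest ratio
            del self.store[victim]
        self.store[key] = (value, 0, 0)

if __name__ == "__main__":
    cache = FrequencyRatioCache(capacity=2)
    cache.put("a", 1)
    cache.put("b", 2)
    for _ in range(5):
        cache.get("a")               # "a" becomes frequently accessed
    cache.invalidate("b", 3)         # "b" is updated, lowering its score
    cache.put("c", 4)                # evicts "b", the lowest-scoring object
    print(sorted(cache.store))       # ['a', 'c']
```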
6

The virtual time function and rate-based schedulers for real-time communications over packet networks

Devadason, Tarith Navendran January 2007 (has links)
[Truncated abstract] The accelerating pace of convergence of communications from disparate application types onto common packet networks has made quality of service an increasingly important and problematic issue. Applications of different classes have diverse service requirements at distinct levels of importance. Also, these applications offer traffic to the network with widely variant characteristics. Yet a common network is expected at all times to meet the individual communication requirements of each flow from all of these application types. One group of applications that has particularly critical service requirements is the class of real-time applications, such as packet telephony. They require both the reproduction of a specified timing sequence at the destination, and nearly instantaneous interaction between the users at the endpoints. The associated delay limits (in terms of upper bound and variation) must be consistently met; at every point where these are violated, the network transfer becomes worthless, as the data cannot be used at all. In contrast, other types of applications may suffer appreciable deterioration in quality of service as a result of slower transfer, but the goal of the transfer can still largely be met. The goal of this thesis is to evaluate the potential effectiveness of a class of packet scheduling algorithms in meeting the specific service requirements of real-time applications in a converged network environment. Since the proposal of Weighted Fair Queueing, several schedulers have been suggested as capable of meeting the divergent service requirements of both real-time and other data applications. ... This simulation study also sheds light on false assumptions that can be made about the isolation produced by start-time and finish-time schedulers based on the deterministic bounds obtained. The key contributions of this work are as follows. We clearly show how the definition of the virtual time function affects both delay bounds and delay distributions for a real-time flow in a converged network, and how optimality is achieved. Despite apparent indications to the contrary from delay bounds, the simulation analysis demonstrates that start-time rate-based schedulers possess useful characteristics for real-time flows that the traditional finish-time schedulers do not. Finally, it is shown that all the virtual time rate-based schedulers considered can produce isolation problems over multiple hops in networks with high loading. It becomes apparent that the benchmark First-Come-First-Served scheduler, with spacing and call admission control at the network ingresses, is a preferred arrangement for real-time flows (although lower priority levels would also need to be implemented for dealing with other data flows).
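The virtual-time bookkeeping at the heart of these schedulers can be illustrated in a few lines: each arriving packet receives a virtual finish time F = max(V(arrival), F_prev) + L/w, and packets are served in increasing F. The sketch below is a simplification that takes virtual arrival times as given; how the virtual time V(t) itself is defined is precisely the design choice the thesis examines, and the flows and packet sizes here are invented.

```python
# WFQ-style virtual finish tags: F = max(V(arrival), last finish of the
# same flow) + length / weight; serve packets in increasing F.

def schedule(packets, weights):
    """packets: list of (virtual_arrival, flow, length); returns service order."""
    last_finish = {f: 0.0 for f in weights}
    tagged = []
    for seq, (v_arrival, flow, length) in enumerate(packets):
        start = max(v_arrival, last_finish[flow])   # no start before prior pkt of same flow
        finish = start + length / weights[flow]     # weighted virtual service time
        last_finish[flow] = finish
        tagged.append((finish, seq, flow))
    return [(flow, round(f, 2)) for f, _, flow in sorted(tagged)]

if __name__ == "__main__":
    weights = {"voice": 2.0, "bulk": 1.0}
    pkts = [(0.0, "voice", 200), (0.0, "bulk", 1500), (0.1, "voice", 200)]
    print(schedule(pkts, weights))
    # the light, high-weight voice packets finish (virtually) before the bulk one
```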
7

True unmanned telemetry collection using OC-12 network data forwarding

Bullers, Bill October 2003 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The cost of telemetry collection is significantly reduced by unmanned store-and-forward systems made possible by 622 Mb/s OC-12 networks. Networks are readily available to telemetry system architects. The in-band control of remote unmanned collection platforms is handled through a Java browser interface. Data from many telemetry channels are collected and temporarily stored on a digital disk system designed around the OC-12 network. The I/O, storage, and network components are configured, set, and initialized remotely. Recordings are started and stopped on command and can be made round-the-clock. Files of stored, time-stamped data are delivered at the OC-12 rate to a distribution center.
8

Scheduling and routing in grid networks and data networks / Χρονοπρογραμματισμός και δρομολόγηση σε δίκτυα πλέγματος και δίκτυα δεδομένων

Κόκκινος, Παναγιώτης 05 January 2011 (has links)
Grid networks consist of several high-capacity computational, storage and other resources, which are geographically distributed and may belong to different administrative domains. These resources are usually connected through high-capacity optical networks. The evolution of grid networks follows the current trend of distributed computation and storage. This trend offers several new possibilities to scientists, researchers and ordinary users around the world, who can use the shared resources to execute their tasks and run their applications, operations that are not always possible on local, limited-capacity resources. In this thesis we study issues related to the scheduling of tasks and the routing of their datasets. We study these issues both separately and jointly, along with their interactions.

Initially, we present a Quality of Service (QoS) framework for grids that guarantees to users an upper bound on the execution delay of their submitted tasks. Such delay guarantees imply that a user can choose, with absolute certainty, a resource that will execute a task before its deadline expires. Our framework is not based on the advance reservation of resources; instead, the users follow a self-constrained task generation pattern, which is agreed separately with each resource during a registration phase. We validate the proposed framework experimentally, verifying that it satisfies the delay guarantees promised to users. In addition, when the proposed extensions are used, the framework provides delay guarantees even without exact a priori knowledge of the task workloads.

Next, we examine a task scheduling and data migration problem for grid networks, which we refer to as the Data Consolidation (DC) problem. Data Consolidation arises when a task concurrently requests multiple pieces of data, possibly scattered throughout the grid network, that have to be present at a selected site before the task's execution starts. In such a case, the scheduler must select the data replicas to be used, the site where these data will be gathered for the task to be executed, and the routing paths to be followed. We propose and experimentally evaluate several Data Consolidation schemes. Some consider only the computational or only the communication requirements of the tasks, while others consider both kinds of requirements. We also propose DC schemes based on Minimum Spanning Trees (MST) that route the datasets concurrently so as to reduce the congestion these transfers may cause. Our simulation experiments show that if the Data Consolidation operation is performed efficiently, significant benefits can be achieved in terms of resource utilization and task delay.

We also consider the use of resource information aggregation in grid networks. We propose a number of aggregation schemes and operators for reducing the information exchanged in a grid network and used by the resource manager to make efficient scheduling decisions. These schemes can be integrated with the schemes utilized in hierarchical data networks for data routing, providing interoperability between different grid networks while keeping the sensitive or detailed information of resource providers private. We perform a large number of experiments to evaluate the proposed aggregation schemes and operators. As a metric of the quality of the aggregated information we introduce the Stretch Factor (SF), defined as the ratio of the task delay when the task is scheduled using complete resource information to the task delay when an aggregation scheme is used. We also measure the number of resource information updates triggered by each aggregation scheme and the amount of resource information transferred.

In addition, we examine the difficulties encountered, and the solutions provided, in deploying scheduling policies initially implemented in a simulation environment in the gLite grid middleware. We identify two important implementation issues: the inaccuracy of the information provided to the scheduler by the information system, and the inflexibility in sharing a resource among different jobs. Our study indicates that simple changes in gLite's scheduling procedures can solve these and other similar issues, yielding significant performance gains. We also investigate the use of virtualization technology in the gLite middleware, and we implement and evaluate the proposed mechanisms in a small gLite testbed.

Finally, we propose a multicost impairment-aware routing and wavelength assignment (IA-RWA) algorithm for optical networks, the main networking technology used today for interconnecting the grid's computational and storage resources, in which physical impairments tend to degrade the quality of the optical signal. The main characteristic of the proposed algorithm is that it calculates the quality of transmission (QoT) of a candidate lightpath by measuring several impairment-generating source parameters, rather than using complex formulas to account directly for the effects of the physical impairments. This approach is more generic and more easily applicable to different conditions (modulation formats, bit rates). Our results indicate that the proposed IA-RWA algorithm can efficiently serve online traffic in an optical network and guarantee the transmission quality of the found lightpaths, with low running times.

In general, this thesis presents several novel mechanisms and algorithms for grid networks. At the same time, it reveals the variety of issues, and to some degree their interdependencies, that relate to the efficient operation of grid networks; handling them requires the cooperation of researchers, scientists and engineers from various fields.
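The Stretch Factor metric defined in the abstract has a direct computational reading: the mean task delay under scheduling with complete resource information divided by the mean task delay under an aggregation scheme. The sketch below shows just that arithmetic; the delay figures are invented purely for illustration.

```python
# Stretch Factor: mean task delay with complete resource information
# divided by mean task delay with aggregated information. SF near 1
# means aggregation costs little scheduling quality.

from statistics import mean

def stretch_factor(delays_complete_info, delays_aggregated):
    """SF = mean delay (complete information) / mean delay (aggregated information)."""
    return mean(delays_complete_info) / mean(delays_aggregated)

if __name__ == "__main__":
    exact = [1.0, 1.2, 0.9, 1.1]        # task delays, complete information (invented)
    aggregated = [1.1, 1.4, 1.0, 1.3]   # task delays, aggregated information (invented)
    print(f"SF = {stretch_factor(exact, aggregated):.2f}")  # 0.88: aggregation adds some delay
```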
