1

Media Access Control for Wireless Sensor and Actuator Networks

Nabi, Muaz Un January 2012 (has links)
In a wireless network, the medium is a shared resource. The nodes in the network negotiate access to this shared resource using a Medium Access Control (MAC) protocol. The design of a MAC protocol for a sensor node is not the same as that for a general-purpose wireless transceiver: due to the transceiver characteristics, the MAC protocol design is limited in terms of medium access methods. In most cases, therefore, the protocols rely on simple access methods, namely Time Division Multiple Access (TDMA) or Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Control and monitoring applications running over a wireless network are typical examples of industrial Wireless Sensor and Actuator Network (WSAN) applications. In an industrial network, message deliveries must be time-bounded; otherwise they are of no use. This report presents the thesis work carried out at ABB AB, Västerås. The purpose of this thesis was to compare the performance of WLAN and WirelessHART for control applications. For WLAN, the medium access schemes are analyzed in terms of deadline misses. Other metrics exist for performance evaluation, but our focus was on latency, since it is very important in industrial automation. NS-2 was used for the MAC-layer analysis, and it is shown that PCF performs better than DCF in terms of deadline misses. Finally, WLAN is shown to accommodate more control loops than WirelessHART for a given scenario.
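For readers who want to reproduce this kind of deadline-miss analysis, the sketch below shows one way to post-process an ns-2 trace in Python. It assumes the standard ns-2 wired trace column layout (event, time, ..., packet id in the last column) and a hypothetical 10 ms control-loop deadline; both would need to be adapted to the actual simulation scripts used in the thesis.

```python
#!/usr/bin/env python3
"""Count deadline misses in an ns-2 trace file.

A minimal post-processing sketch: it pairs the first enqueue ('+') event of
each packet with its receive ('r') event, computes the end-to-end latency,
and counts how many packets exceed a given deadline. The column layout
assumed here is the standard ns-2 wired trace format; adjust the indices for
wireless or new-format traces. The 10 ms deadline is only an illustrative value.
"""
import sys

DEADLINE = 0.010  # seconds; hypothetical control-loop deadline

def deadline_misses(trace_path, deadline=DEADLINE):
    send_time = {}
    recv_time = {}
    with open(trace_path) as trace:
        for line in trace:
            fields = line.split()
            if len(fields) < 12:
                continue
            event, time, pkt_id = fields[0], float(fields[1]), fields[11]
            if event == '+' and pkt_id not in send_time:
                send_time[pkt_id] = time       # first enqueue = send instant
            elif event == 'r':
                recv_time[pkt_id] = time       # last receive = delivery instant
    latencies = [recv_time[p] - send_time[p]
                 for p in recv_time if p in send_time]
    misses = sum(1 for d in latencies if d > deadline)
    return misses, len(latencies)

if __name__ == '__main__':
    misses, total = deadline_misses(sys.argv[1])
    print(f"{misses}/{total} packets missed the {DEADLINE*1000:.1f} ms deadline")
```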
2

A Modified SCTP with Load Balancing

Tseng, Cheng-Liang 26 August 2003 (has links)
To support the transmission of real-time multimedia streams, the Stream Control Transmission Protocol (SCTP) developed by the IETF is considered more efficient because of its high degree of expandability and compatibility; instead of TCP and UDP, SCTP may become the transport protocol of the next-generation IP network. In this thesis, we propose a mechanism that exploits the multi-homing feature of SCTP to ensure that multiple paths can exist between two SCTP endpoints. Not only can the primary path continue to function, but the secondary paths carry part of the data packets once network congestion occurs. Considering the dynamic nature of the Internet, the proposed mechanism can enhance the effectiveness of SCTP data transmission and increase overall network utilization. Because SCTP cuts user data into chunks, we can analyze the transmission performance of each individual path by measuring the transmission delay from the sender to the receiver. By modifying the NS-2 simulator, we set up different topologies in our experiments to analyze the performance of the mechanism. We compare the modified SCTP with the original SCTP, adjusting the background traffic on the paths, to show that the proposed mechanism increases throughput and network utilization.
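The abstract does not give the exact load-balancing algorithm, so the Python sketch below only illustrates the general idea it describes: once congestion is detected, part of the outgoing SCTP data chunks is spilled onto secondary paths in proportion to the measured per-path delay. All names and parameter values here are hypothetical.

```python
"""Illustrative chunk scheduler for a multi-homed SCTP association.

Not the thesis's exact algorithm: a sketch of shifting chunks from the
primary path to secondary paths under congestion, weighted by the measured
sender-to-receiver delay of each path (lower delay -> larger share).
"""
from dataclasses import dataclass, field

@dataclass
class Path:
    name: str
    measured_delay: float          # seconds, from recent measurements
    queued_chunks: list = field(default_factory=list)

def schedule_chunks(chunks, primary, secondaries, congested):
    """Assign chunks to paths; spill to secondaries only under congestion."""
    if not congested or not secondaries:
        primary.queued_chunks.extend(chunks)
        return
    paths = [primary] + secondaries
    # Each path's target share is proportional to the inverse of its delay.
    weights = [1.0 / p.measured_delay for p in paths]
    shares = [w / sum(weights) for w in weights]
    for chunk in chunks:
        # Send the chunk on the path currently furthest below its target share.
        assigned = sum(len(p.queued_chunks) for p in paths) + 1
        deficits = [shares[i] * assigned - len(paths[i].queued_chunks)
                    for i in range(len(paths))]
        paths[deficits.index(max(deficits))].queued_chunks.append(chunk)

# Example: 100 chunks, congested primary (80 ms) and one secondary (120 ms).
primary = Path("primary", measured_delay=0.080)
secondary = Path("secondary", measured_delay=0.120)
schedule_chunks(list(range(100)), primary, [secondary], congested=True)
print(len(primary.queued_chunks), len(secondary.queued_chunks))  # roughly 60 / 40
```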
3

Analysis of communication issues related to smart grids

Petenel, Fernando Henrique Jacyntho 06 December 2013 (has links)
This study is an analysis of automation interfaces and protocols with the potential to be adopted as standards for smart grids in the near future. In order to verify the feasibility of implementing IEC 61850 in a typical application of such grids, a simulation is performed using the NS-2 software. The results of this work serve as guidance for dimensioning automation networks based on IEC 61850.
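As a rough illustration of how such simulation results can guide dimensioning, the sketch below checks per-class latency samples (as produced by an NS-2 run) against transfer-time budgets. The limit values are indicative figures commonly associated with IEC 61850-5 message classes, not values taken from this work; confirm them against the standard for the performance class in use.

```python
"""Check simulated end-to-end latencies against IEC 61850 transfer-time limits.

A dimensioning aid in the spirit of the abstract: given per-message latency
samples from an NS-2 run, report whether each traffic class stays within its
transfer-time budget. The limits below are indicative only (e.g. 3 ms for
fast trip messages); verify against IEC 61850-5.
"""
TRANSFER_TIME_LIMITS = {           # seconds, indicative only
    "trip_goose": 0.003,
    "medium_speed": 0.100,
    "low_speed": 0.500,
}

def check_latencies(samples_by_class):
    """samples_by_class: dict mapping class name -> list of latencies (s)."""
    report = {}
    for cls, limit in TRANSFER_TIME_LIMITS.items():
        samples = samples_by_class.get(cls, [])
        if not samples:
            continue
        worst = max(samples)
        report[cls] = {
            "worst_case_s": worst,
            "limit_s": limit,
            "within_budget": worst <= limit,
        }
    return report

# Example with made-up latency samples (seconds):
print(check_latencies({
    "trip_goose": [0.0012, 0.0019, 0.0028],
    "medium_speed": [0.045, 0.081],
}))
```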
4

Power characterisation of a Zigbee wireless network in a real time monitoring application

Prince-Pike, Arrian January 2009 (has links)
Zigbee is a relatively new wireless mesh networking standard with an emphasis on low cost and energy conservation. It is intended to be used in wireless monitoring and control applications, such as sensors and remotely operated switches, where the end devices are battery powered. Because it is a recent technology, there is not yet a sufficient understanding of how network architecture and configuration affect the power consumption of the battery-powered devices. This research investigates the power consumption and delivery ratio of Zigbee mesh and star networks for a single-sink real-time monitoring system at varying traffic rates, as well as the beacon and non-beacon modes of its underlying standard, IEEE 802.15.4, in the star network architecture. To evaluate the performance of Zigbee, the network operation was simulated using the simulation tool NS-2, which is capable of simulating the entire network operation, including traffic generation and the energy consumption of each node. After first running the simulation, it was obvious that there were problems in the configuration of the simulator as well as some unexpected behaviour; after several modifications to the simulator, the results improved significantly. To validate the operation of the simulator and to give insight into the operation of Zigbee, a real Zigbee wireless network was constructed and the experiments conducted on the simulator were repeated on this network. The research showed that the modified simulator produced results close to the experimental results. The non-beacon mode of operation had the lowest power consumption and the best delivery ratio at all tested traffic rates. The operation of Zigbee mesh and star networks was compared to the results for IEEE 802.15.4 star networks in non-beacon mode, which revealed that the extra routing traffic sent by the Zigbee networking layers does contribute significantly to the power consumption. However, even with the extra routing traffic, power consumption is still so low that the battery life of a device would be limited by the shelf life of the battery rather than by the energy consumption of the device. This research has successfully achieved its objectives and identified areas for future development. The simulator model for NS-2 could be improved to further increase the accuracy of the results and to include the Zigbee routing layers, and the experimental results could be improved by a more accurate power-consumption data acquisition method.
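The conclusion that shelf life, not energy consumption, limits battery life can be sanity-checked with a simple duty-cycle calculation like the one below. The per-state currents and timings are placeholder values for a generic IEEE 802.15.4 radio, not measurements from this thesis.

```python
"""Estimate average current draw and battery life for a duty-cycled node.

A back-of-the-envelope sketch of the reasoning behind the abstract's
conclusion. All current and timing figures are placeholder values; substitute
the radio's datasheet numbers and the simulated per-state durations.
"""
STATE_CURRENT_MA = {   # placeholder currents for a typical 802.15.4 radio
    "tx": 35.0,
    "rx": 38.0,
    "sleep": 0.001,
}

def average_current_ma(time_in_state_s):
    """time_in_state_s: dict state -> seconds spent per reporting interval."""
    total = sum(time_in_state_s.values())
    return sum(STATE_CURRENT_MA[s] * t for s, t in time_in_state_s.items()) / total

def battery_life_years(avg_current_ma, capacity_mah=2400.0):
    hours = capacity_mah / avg_current_ma
    return hours / (24 * 365)

# Example: one 5 ms transmission and 10 ms of listening every 10 s interval.
interval = {"tx": 0.005, "rx": 0.010, "sleep": 9.985}
i_avg = average_current_ma(interval)
print(f"average current: {i_avg*1000:.1f} uA, "
      f"battery life: {battery_life_years(i_avg):.1f} years (2400 mAh cell)")
```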
5

An MPLS-based Quality of Service Architecture for Heterogeneous Networks

Raghavan, Srihari 26 November 2001 (has links)
This thesis proposes a multi-protocol label switching (MPLS)-based architecture to provide quality of service (QoS) for both internet service provider (ISP) networks and backbone Internet Protocol (IP) networks that are heterogeneous in nature. Heterogeneous networks are present due to the use of different link-layer mechanisms in the current Internet. Copper-based links, fiber-based links, and wireless links are some examples of different physical media that lead to different link-layer mechanisms. The proposed architecture uses generalized MPLS and other MPLS features to combat heterogeneity. It leverages the QoS capabilities of asynchronous transfer mode (ATM) and the scalability advantages of the IP differentiated services (DiffServ) architecture, and is constructed in such a way that MPLS interacts with DiffServ in the backbone networks while performing ATM-like QoS enforcement at the periphery of the networks. The architecture supports traffic engineering through MPLS explicit paths. MPLS network management, bandwidth broker capabilities, and customizability are handled through domain-specific MPLS management entities that use the Common Open Policy Service (COPS) protocol to interact with other MPLS entities such as MPLS label switch routers and label edge routers. The thesis provides a description of MPLS and QoS, followed by a discussion of the motivation for a new architecture. The MPLS-based architecture is then discussed and compared against similar architectures. To integrate the ATM and DiffServ QoS attributes into this architecture, MPLS signaling protocols are used. There are two common MPLS signaling protocols: Resource Reservation Protocol with traffic engineering extensions (RSVP-TE) and Constraint-Routed Label Distribution Protocol (CR-LDP). Both protocols offer comparable MPLS features for constraint-routed label switched path construction, maintenance, and termination. RSVP-TE uses UDP and IP, while CR-LDP uses TCP. This architecture proposes a multi-level domain of operation where CR-LDP operates in internet service provider (ISP) networks and RSVP-TE operates in backbone networks along with DiffServ. A qualitative analysis of this choice of operating domains for the signaling protocols is then presented. Quantitative analysis through simulation demonstrates the advantages of combining DiffServ and MPLS in the backbone. The simulation setup compares the network performance in handling mixed ill-behaved and well-behaved traffic on the same link, with different levels of DiffServ and MPLS integration in the network. The simulation results demonstrate the advantages of integrating the QoS features of DiffServ, ATM functionality, and MPLS into a single architecture. / Master of Science
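The comparison of well-behaved and ill-behaved traffic rests on DiffServ edge conditioning. As a minimal illustration (not the thesis's simulation setup), the sketch below implements a token-bucket policer that marks packets in- or out-of-profile; out-of-profile packets from an ill-behaved flow are the ones a DiffServ/MPLS core can drop or demote first. Rates and bucket depth are illustrative values.

```python
"""Token-bucket policer sketch for DiffServ edge marking.

Marks each arriving packet 'in' or 'out' of a configured traffic profile.
All parameters are illustrative, not values from the thesis.
"""
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # bytes per second of credit
        self.depth = float(burst_bytes)   # maximum accumulated credit
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def mark(self, arrival_time_s, pkt_bytes):
        """Return 'in' if the packet conforms to the profile, else 'out'."""
        self.tokens = min(self.depth,
                          self.tokens + (arrival_time_s - self.last) * self.rate)
        self.last = arrival_time_s
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return "in"
        return "out"

# Example: a 1 Mb/s profile with a 3000-byte burst; a flow sending 1500-byte
# packets every 5 ms (2.4 Mb/s) is ill-behaved and gets packets marked 'out'.
policer = TokenBucket(rate_bps=1_000_000, burst_bytes=3000)
marks = [policer.mark(t * 0.005, 1500) for t in range(10)]
print(marks)
```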
6

FEASIBILITY OF NS-2 MODELS IN SIMULATING THE CUSTODY TRANSFER MECHANISM

Kaniganti, Madhuri Choudary January 2005 (has links)
No description available.
7

Adaptive real-time protocols for multimedia transmission over best-effort networks

Κιουμουρτζής, Γεώργιος 11 January 2011 (has links)
Multimedia applications have gained in recent years an increasing demand from Internet users, as they offer new and diverse multimedia services. These applications, however, are subject to restrictions which mainly have to do with their nature: they are characterized by high transmission-rate requirements (bandwidth-consuming applications) and by their sensitivity to delays in the delivery of packets from the sender to the receiver (delay-sensitive applications). On the other hand, these applications are relatively tolerant of packet losses (packet-loss-tolerant applications). The issue with multimedia applications, beyond the range of services they offer, is the Quality of Service (QoS) delivered to the end user, which is directly linked to the characteristics above. The approach so far by the research community and by Internet Service Providers (ISPs), as regards ensuring quality of service, has focused either on individually optimizing the efficiency of transmission protocols or on installing additional equipment (servers) to build Content Distribution Networks (CDNs), normally positioned close to the final user. In addition, the growing effort of the research community to increase the quality of service has produced innovative service architectures such as Integrated Services (IntServ) and Differentiated Services (DiffServ), which aspire to offer quality-of-service guarantees to specific user groups. Both architectures, however, have so far failed to become a complete solution for providing quality-of-service guarantees to the end user, owing to the difficulties of deploying them, which involve both financial criteria and the structure of the Internet itself. Therefore, despite the progress made so far in network technology, end-to-end quality of service across the Internet is still not feasible, with the result that multimedia services on the Internet (for example, YouTube) are significantly affected by changes in network conditions. To this end, the research community has turned to the study of mechanisms that can adjust the transmission rate of multimedia data to the current network conditions, so as to offer the best possible quality of service to the end user.
This effort can be classified into two broad categories, according to the way the multimedia information is routed: • Adaptation mechanisms for unicast transmission: the adaptation mechanism regulates the transmission rate between the sender and the receiver of a unicast connection. • Adaptation mechanisms for multicast transmission: the adaptation mechanism regulates the transmission rate between the sender and a group of receivers. For unicast transmission, the predominant proposal is the congestion control mechanism termed TCP-Friendly Rate Control (TFRC), which has been accepted as a standard by the Internet Engineering Task Force (IETF). For multicast transmission, TCP-Friendly Multicast Congestion Control (TFMCC) has also been accepted as an experimental standard by the IETF. Nevertheless, studies and experiments have shown that neither TFRC nor TFMCC is the most suitable adaptation mechanism for multimedia transmission. The main problems concern their friendliness towards the Transmission Control Protocol (TCP) and the sudden fluctuations of the transmission rate; such sharp rate variations are undesirable for multimedia applications, particularly real-time ones. In wireless networks, the problems with multimedia transmission are not directly linked to network congestion (which mainly occurs in wired networks); packet losses there are largely a result of the propagation medium. The approach so far has aimed at individually optimizing the protocols of the various OSI layers so as to reduce propagation problems and minimize packet losses and sender-to-receiver delays. In recent years, however, a different approach, internationally termed cross-layer optimization and adaptation, has gained ground. Under this approach, the quality delivered to multimedia applications could be optimized by adaptation mechanisms that involve more than one OSI layer in responding to the current network conditions. The methodology, the challenges, the restrictions, and the applications of cross-layer adaptation constitute an open and active research area. The aim of this dissertation is, first, to study the existing congestion control mechanisms for best-effort wired networks such as the Internet. In this direction we evaluate the existing congestion and flow control mechanisms and record the main problems related to quality of service. The evaluation is based on criteria relating both to TCP-friendliness and to the quality of service of multimedia applications, and it leads us to the design of new protocols which promise greater TCP-friendliness and better quality of service. An important element that distinguishes these protocols from other approaches is their smooth behaviour, which minimizes the sharp oscillations of the transmission rate that multimedia applications find undesirable, while maintaining a fast response to sudden changes in network conditions. A second important element of this dissertation is the additions we have made to the libraries of the ns-2 simulator, which are already being used by other researchers.
For this purpose the new protocols are fully specified and incorporated into the ns-2 libraries, so that they are available to the research community as part of the simulator for further study and evaluation. At the same time, we extend existing simulation research tools to enable the analysis and evaluation of existing and future adaptation mechanisms based on quality criteria specific to multimedia applications, in addition to the classical network-centric criteria. Regarding wireless networks, we study cross-layer adaptation and whether it is possible to increase the quality of the delivered service by applying such a design. We examine the various ways and methodologies of designing a cross-layer adaptation and propose a new framework with which it is possible to increase the quality of service in hybrid networks consisting of both wired and wireless users.
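For context on the rate-control mechanism discussed above, the sketch below evaluates the TCP throughput equation that a TFRC sender uses (RFC 3448 / RFC 5348) to compute its allowed sending rate from the loss event rate and round-trip time; the example parameter values are illustrative only.

```python
"""TCP throughput equation used by TFRC (RFC 3448 / RFC 5348).

A TFRC sender computes an allowed sending rate X from the measured loss
event rate p and round-trip time R, so its long-term throughput matches
that of a conforming TCP flow.
"""
from math import sqrt

def tfrc_rate(s, R, p, b=1, t_RTO=None):
    """Allowed sending rate in bytes/s.

    s: segment size in bytes, R: round-trip time in seconds,
    p: loss event rate (0 < p <= 1), b: packets acknowledged per ACK,
    t_RTO: retransmission timeout, defaulting to 4*R as in the RFC.
    """
    if t_RTO is None:
        t_RTO = 4 * R
    denom = (R * sqrt(2 * b * p / 3)
             + t_RTO * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# Example: 1460-byte segments, 100 ms RTT, 1% loss event rate.
x = tfrc_rate(s=1460, R=0.100, p=0.01)
print(f"allowed rate: {x * 8 / 1e6:.2f} Mb/s")
```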
8

City Mobility Model with Google Earth Visualization

Andersson, Henrik, Oreland, Peter January 2007 (has links)
Mobile Ad Hoc Networks are flexible, self-configuring networks that do not need a fixed infrastructure. When these networks are simulated, mobility models can be used to specify node movements. The work in this thesis focuses on designing an extension of the random trip mobility model on a city section from EPFL (the Swiss Federal Institute of Technology). Road data is extracted from the US Census TIGER database, displayed in Google Earth, and used as input for the model. The model produces output that can be used in the open-source network simulator ns-2.

We created utilities that take output from the TIGER database of US counties and convert it to KML, an XML-based format used by Google Earth to store geographical data, so that it can be viewed in Google Earth. This data is then used as input to the modified mobility model and finally run through the ns-2 simulator. We present some traces for NAM, a network animator that shows node movements over time.

We managed to complete most of the goals we set out, apart from being able to modify node positions in Google Earth. This was skipped because the model we modified had an initialization phase that made node positions random regardless of initial position. We were also asked to add the ability to set stationary nodes in Google Earth; this was not added due to time constraints.
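As an illustration of the TIGER-to-KML step, the sketch below writes road segments as KML LineStrings that Google Earth can display. It assumes the road coordinates have already been parsed into longitude/latitude lists; the segment data in the example is made up.

```python
"""Write road segments as KML LineStrings for viewing in Google Earth.

A minimal sketch of the conversion step described in the abstract: each road
segment (a list of longitude/latitude pairs) becomes a Placemark with a
LineString. Real TIGER records need a separate parsing step.
"""
KML_HEADER = ('<?xml version="1.0" encoding="UTF-8"?>\n'
              '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n')
KML_FOOTER = '</Document>\n</kml>\n'

def segments_to_kml(segments):
    """segments: dict mapping road name -> list of (lon, lat) tuples."""
    parts = [KML_HEADER]
    for name, points in segments.items():
        coords = ' '.join(f'{lon},{lat},0' for lon, lat in points)
        parts.append('  <Placemark>\n'
                     f'    <name>{name}</name>\n'
                     '    <LineString>\n'
                     f'      <coordinates>{coords}</coordinates>\n'
                     '    </LineString>\n'
                     '  </Placemark>\n')
    parts.append(KML_FOOTER)
    return ''.join(parts)

# Example with two made-up road segments.
kml = segments_to_kml({
    "Main St": [(-122.4194, 37.7749), (-122.4180, 37.7760)],
    "2nd Ave": [(-122.4210, 37.7740), (-122.4194, 37.7749)],
})
with open("roads.kml", "w") as f:
    f.write(kml)
```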
