31

Scheduling Algorithms For Wireless CDMA Networks

Hakyemez, Serkan Ender 01 December 2007 (has links) (PDF)
In recent years the need for multimedia packet data services in wireless networks has grown rapidly, and third generation (3G) mobile services have been proposed to meet it. The fast-growing demand for multimedia services in 3G systems has brought the need for higher capacity, making improvements in throughput and traffic-serving performance necessary. Code division multiple access (CDMA) is one of the most important 3G wireless mobile techniques that has been defined, and the scheduling mechanisms used in CDMA play an important role in the efficiency of the system. The power, rate and capacity parameters are variable and dependent on each other when designing a scheduling mechanism: the scheduler decides which user will use the frequency band, in which time interval, and with what power and rate. In this thesis, different types of algorithms used in time-slotted CDMA are studied and a new algorithm that supports Quality of Service (QoS) is proposed. The performance of the proposed algorithm is analysed via simulation in comparison to selected CDMA schedulers.
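The abstract above describes the basic decision a time-slotted CDMA scheduler makes each slot: which users transmit, at what rate and power, under a shared power budget. The sketch below is a generic, greedy illustration of that kind of decision; the priority rule, the power-per-bit model and all parameter names are assumptions made for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: each slot, pick users in order of a QoS-weighted
# priority and grant rates while a shared power budget lasts.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    queue_bits: int       # backlog waiting to be sent
    delay_slots: int      # how long the head packet has waited
    qos_weight: float     # larger = stricter QoS class (assumed)
    power_per_bit: float  # assumed power cost of delivering one bit this slot

def schedule_slot(users, power_budget, max_rate_bits):
    """Return {user: bits granted this slot} under a total power budget."""
    # QoS-aware priority: weight * waiting time * backlog (illustrative choice)
    ranked = sorted(users,
                    key=lambda u: u.qos_weight * u.delay_slots * u.queue_bits,
                    reverse=True)
    grants, remaining = {}, power_budget
    for u in ranked:
        rate = min(u.queue_bits, max_rate_bits, int(remaining / u.power_per_bit))
        if rate > 0:
            grants[u.name] = rate
            remaining -= rate * u.power_per_bit
    return grants

if __name__ == "__main__":
    users = [User("voice", 800, 3, 4.0, 0.02), User("data", 5000, 10, 1.0, 0.01)]
    print(schedule_slot(users, power_budget=60.0, max_rate_bits=2000))
```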
32

Connectionless Traffic And Variable Packet Size Support In High Speed Network Switches: Improvements For The Delay-limiter Switch

Akcasoy, Alican 01 June 2008 (has links) (PDF)
Quality of Service (QoS) support for real-time traffic is a critical issue in high speed networks. The previously proposed Delay-Limiter Switch, working with the Framed-Deadline Scheduler (FDS), is a combined input-output queuing (CIOQ) packet switch that can provide end-to-end bandwidth and delay guarantees for connection-oriented traffic. The Delay-Limiter Switch works with fixed-size packets; it has a scalable architecture and can provide QoS support for connection-oriented real-time traffic in a low-complexity fashion. It serves connectionless traffic by using the resources left over from the connection-oriented traffic, so efficient management of these residual resources plays an important role in the performance of the connectionless traffic. This thesis integrates new methods into the Delay-Limiter Switch that improve the performance of the connectionless traffic while still serving the connection-oriented traffic with the promised QoS guarantees. A new method that enables the Delay-Limiter Switch to support variable-sized packets is also proposed.
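The residual-resource idea described above can be pictured as a per-frame, two-pass service loop: slots promised to connection-oriented flows are served first, and the leftover slots are offered to connectionless queues. The sketch below is only an illustration of that idea under assumed names and a simple round-robin leftover policy; it is not the Delay-Limiter Switch or the FDS itself.

```python
# Minimal sketch: serve per-frame guarantees first, then give residual slots
# to connectionless traffic (names and policy are illustrative assumptions).

from collections import deque

def serve_frame(frame_slots, reserved, co_queues, cl_queues):
    """reserved: {flow: slots guaranteed per frame}; queues hold packet ids."""
    sent = []
    # 1) honour per-frame guarantees of connection-oriented flows
    for flow, quota in reserved.items():
        q = co_queues[flow]
        for _ in range(min(quota, len(q))):
            sent.append(q.popleft())
    # 2) offer the residual slots to connectionless queues, round-robin
    residual = frame_slots - len(sent)
    order = deque(cl_queues)
    while residual > 0 and any(cl_queues[name] for name in cl_queues):
        name = order[0]
        order.rotate(-1)
        if cl_queues[name]:
            sent.append(cl_queues[name].popleft())
            residual -= 1
    return sent

if __name__ == "__main__":
    co = {"A": deque(["a1", "a2"]), "B": deque(["b1"])}
    cl = {"best_effort": deque(["x1", "x2", "x3"])}
    print(serve_frame(frame_slots=5, reserved={"A": 2, "B": 2},
                      co_queues=co, cl_queues=cl))
```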
33

Design And Implementation Of Scheduling And Switching Architectures For High Speed Networks

Sanli, Mustafa 01 October 2011 (has links) (PDF)
Quality of Service (QoS) schedulers are among the most important components for end-to-end QoS support in the Internet. The focus of this thesis is the hardware design and implementation of QoS schedulers that scale to high line speeds and large numbers of traffic flows; FPGA is the selected hardware platform. Previous work on the hardware design and implementation of QoS schedulers is mostly algorithm-specific. In this thesis, a general architecture for the design of the class of Packet Fair Queuing (PFQ) schedulers is proposed, and the Worst Case Fair Weighted Fair Queuing Plus (WF2Q+) scheduler is implemented and tested in hardware to demonstrate the proposed architecture and design enhancements. The maximum line speed at which PFQ algorithms can operate decreases as the number of scheduled flows increases; for this reason, this thesis proposes aggregating flows to scale the PFQ architecture to high line speeds. The Window Based Fair Aggregator (WBFA) algorithm suggested for flow aggregation provides a tunable trade-off between efficient use of the available bandwidth and fairness among the constituent flows, and WBFA is also integrated into the hardware PFQ architecture. The QoS support provided by the proposed PFQ architecture and WBFA is measured through hardware experiments on a custom-built high speed network testbed consisting of three data processing cards and a backplane. In these experiments, the input traffic is provided by a hardware traffic generator designed within the scope of this thesis.
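WF2Q+ is a published packet fair queuing algorithm: among the head-of-line packets whose virtual start time has been reached by the system virtual time, it serves the one with the smallest virtual finish time. The Python model below sketches that selection rule with a deliberately simplified virtual-time update; it is a behavioural illustration only, not the thesis's FPGA architecture or its WBFA flow-aggregation logic.

```python
# Behavioural sketch of the WF2Q+ selection rule (simplified V update).

class Flow:
    def __init__(self, name, weight):
        self.name, self.weight = name, weight
        self.last_finish = 0.0
        self.queue = []            # head-of-line first, packet lengths in bits

class WF2QPlus:
    def __init__(self, flows):
        self.flows = flows
        self.V = 0.0               # system virtual time

    def enqueue(self, flow, length):
        flow.queue.append(length)

    def dequeue(self):
        heads = []
        for f in self.flows:
            if f.queue:
                S = max(self.V, f.last_finish)     # virtual start of head packet
                F = S + f.queue[0] / f.weight      # virtual finish time
                heads.append((f, S, F))
        if not heads:
            return None
        eligible = [h for h in heads if h[1] <= self.V + 1e-12]
        if not eligible:                           # jump V to the next start time
            self.V = min(h[1] for h in heads)
            eligible = [h for h in heads if h[1] <= self.V + 1e-12]
        f, S, F = min(eligible, key=lambda h: h[2])  # smallest finish wins
        length = f.queue.pop(0)
        f.last_finish = F
        self.V += length / sum(fl.weight for fl in self.flows)  # simplified update
        return f.name

if __name__ == "__main__":
    a, b = Flow("a", weight=3.0), Flow("b", weight=1.0)
    sched = WF2QPlus([a, b])
    for _ in range(4):
        sched.enqueue(a, 1000)
        sched.enqueue(b, 1000)
    # flow "a" (weight 3) receives roughly 3x the service of "b" while backlogged
    print([sched.dequeue() for _ in range(8)])
```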
34

A dynamic regulation scheme with scheduler feedback information for multimedia network

Shih, Hsiang-Ren 11 July 2001 (has links)
Most proposed regulation methods do not take advantage of the state information of the underlying scheduler, resulting in a waste of resources. We propose a dynamic regulation approach in which the regulation function is modulated by both the tagged stream's characteristics and the state information fed back from the scheduler. The transmission speed of a regulator is accelerated when too much traffic has been sent to the scheduler by the other regulators or when the scheduler's queue is empty. As a result, the mean delay of the traffic can be reduced and the scheduler's throughput can be increased. Since no complicated computation is involved, our approach is suitable for use in high-speed networks.
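The feedback idea described above can be illustrated with a token-bucket-style regulator whose fill rate is temporarily raised while the scheduler reports an empty (or near-empty) queue. The boost factor and threshold below are illustrative assumptions, not the parameters used in the thesis.

```python
# Toy model of a regulator modulated by scheduler feedback.

class FeedbackRegulator:
    def __init__(self, base_rate, bucket_size, boost=1.5, low_queue=0):
        self.base_rate, self.bucket = base_rate, bucket_size
        self.tokens, self.boost, self.low_queue = 0.0, boost, low_queue

    def tick(self, scheduler_queue_len):
        """Called once per time unit with the queue length fed back by the scheduler."""
        rate = self.base_rate * (self.boost
                                 if scheduler_queue_len <= self.low_queue else 1.0)
        self.tokens = min(self.bucket, self.tokens + rate)

    def try_send(self, packet_size):
        """Send a packet only if enough tokens have accumulated."""
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False

if __name__ == "__main__":
    reg = FeedbackRegulator(base_rate=100.0, bucket_size=500.0)
    reg.tick(scheduler_queue_len=0)     # empty scheduler queue -> boosted rate
    print(reg.try_send(packet_size=120))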
35

A Dynamic Queue Adjustment Based on Packet Loss Ratio in Wireless Networks

Chu, Tsuh-Feng 13 August 2003 (has links)
Traditional TCP, when applied in wireless networks, faces two limitations. The first is the higher bit error rate (BER) due to noise, fading, and multipath interference: because traditional TCP is designed for wired, reliable networks in which packet loss is mainly caused by network congestion, TCP may decrease its congestion window inappropriately upon detecting a packet loss. The second limitation concerns packet scheduling, which mostly does not consider wireless characteristics. In this thesis, we propose a local retransmission mechanism to improve TCP throughput in wireless networks with higher BER. In addition, we measure the packet loss ratio (PLR) to adjust the queue weight so that the available bandwidth for each queue can be changed accordingly. In our mechanism, the queue length is used to determine whether there is congestion in the wireless network: when the queue length exceeds a threshold, the network is very likely congested. We not only propose the dynamic weight-adjustment mechanism but also solve the packet out-of-sequence problem that results when a TCP flow is moved to a new queue. For the purpose of demonstration, we implement the proposed weight-adjustment mechanisms on the Linux platform. Through measurements and discussions, we show that the proposed mechanisms can effectively improve TCP throughput in wireless networks.
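The two mechanisms in the abstract — treating a long queue as the congestion signal, and adjusting per-queue service weights from the measured packet loss ratio (PLR) — can be sketched as follows. The update rule and constants are illustrative assumptions, not the exact mechanism implemented on the Linux platform in the thesis.

```python
# Illustrative sketch: congestion detection by queue length, and PLR-driven
# queue-weight adjustment (constants are assumed, not the thesis's values).

def classify_loss(queue_len, threshold):
    """Loss with a long queue is treated as congestion; otherwise as wireless error."""
    return "congestion" if queue_len > threshold else "wireless"

def adjust_weights(weights, plr, step=0.1, w_min=0.05):
    """Shift service weight toward queues suffering higher loss, keeping the sum = 1."""
    adjusted = {q: max(w_min, w * (1.0 + step * plr[q])) for q, w in weights.items()}
    total = sum(adjusted.values())
    return {q: w / total for q, w in adjusted.items()}

if __name__ == "__main__":
    print(classify_loss(queue_len=42, threshold=30))                   # 'congestion'
    print(adjust_weights({"q1": 0.5, "q2": 0.5}, {"q1": 0.2, "q2": 0.01}))
```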
36

Mitigating DRAM complexities through coordinated scheduling policies

Stuecheli, Jeffrey Adam 04 June 2012 (has links)
Contemporary DRAM systems have maintained impressive scaling by managing a careful balance between performance, power, and storage density. In achieving these goals, a significant sacrifice has been made in terms of DRAM's operational complexity. To realize good performance, systems must properly manage the significant number of structural and timing restrictions of the DRAM devices. DRAM's efficient use is further complicated in many-core systems where the memory interface has to be shared among multiple cores/threads competing for memory bandwidth. In computer architecture, caches have primarily been viewed as a means to hide memory latency from the CPU. Cache policies have focused on anticipating the CPU's data needs, and are mostly oblivious to the main memory. This work demonstrates that the era of many-core architectures has created new main memory bottlenecks, and mandates a new approach: coordination of cache policy with main memory characteristics. Using the cache for memory optimization purposes dramatically expands the memory controller's visibility of processor behavior, at low implementation overhead. Through memory-centric modification of existing policies, such as scheduled writebacks, this work demonstrates that the performance-limiting effects of highly-threaded architectures combined with complex DRAM operation can be overcome. This work shows that, with an awareness of the physical main memory layout and by focusing on writes, both average read and write latency can be shortened, memory power reduced, and overall system performance improved. The use of the "Page-Mode" feature of DRAM devices can mitigate many DRAM constraints. Current open-page policies attempt to garner the highest level of page hits. In an effort to achieve this, such greedy schemes map sequential address sequences to a single DRAM resource. This non-uniform resource usage pattern introduces high levels of conflict when multiple workloads in a many-core system map to the same set of resources. This work presents a scheme that provides a careful balance between the benefits (increased performance and decreased power) and the detractors (unfairness) of page-mode accesses. In the proposed Minimalist approach, the system targets "just enough" page-mode accesses to garner page-mode benefits while avoiding system unfairness. This is accomplished with the use of a fair memory hashing scheme to control the maximum number of page-mode hits. High-density memory is becoming ever more important as many execution streams are consolidated onto single-chip many-core processors. DRAM is ubiquitous as a main memory technology, but while DRAM's per-chip density and frequency continue to scale, the time required to refresh its dynamic cells has grown at an alarming rate. This work shows how currently-employed methods to schedule refresh operations are ineffective in mitigating the significant performance degradation caused by longer refresh times. Current approaches are deficient: they do not effectively exploit the flexibility of DRAMs to postpone refresh operations. This work proposes dynamically reconfigurable predictive mechanisms that exploit the full dynamic range allowed in the industry-standard DRAM memory specifications. The proposed mechanisms are shown to mitigate much of the penalty seen with dense DRAM devices. In summary, this work presents a significant improvement in the ability to exploit the capabilities of high-density, high-frequency DRAM devices in a many-core environment. This is accomplished through coordination of previously disparate system components, exploiting their integration into highly integrated system designs.
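One concrete way to picture the "just enough" page-mode idea described above is a per-bank hit budget: a row stays open only for a few consecutive hits and is then closed, so a single streaming workload cannot monopolise a bank. The sketch below illustrates that idea; the budget value and the simplified command model are assumptions, not the Minimalist controller's actual policy (which, per the abstract, relies on a fair memory hashing scheme to bound page-mode hits).

```python
# Sketch of a bounded open-page policy for one DRAM bank.

class Bank:
    def __init__(self, hit_budget=4):
        self.open_row = None
        self.hits_left = 0
        self.hit_budget = hit_budget     # assumed cap on consecutive page hits

    def access(self, row):
        """Return the (simplified) command sequence issued for a request to `row`."""
        if self.open_row == row and self.hits_left > 0:
            self.hits_left -= 1
            cmds = ["CAS"]               # page hit: column access only
            if self.hits_left == 0:
                cmds.append("PRE")       # budget spent: close the row
                self.open_row = None
            return cmds
        cmds = []
        if self.open_row is not None:
            cmds.append("PRE")           # close the currently open row
        cmds += ["ACT", "CAS"]           # activate the new row, then access
        self.open_row, self.hits_left = row, self.hit_budget - 1
        return cmds

if __name__ == "__main__":
    bank = Bank(hit_budget=2)
    for r in [7, 7, 7, 7, 3]:
        print(r, bank.access(r))
```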
37

Development of a ROLM scheduler for embedded ATM switches

Στούμπου, Κωνσταντίνα 07 September 2009 (has links)
This thesis presents the study and implementation of a scheduling algorithm for an ATM switch from the class of ranking algorithms, which uses memory organised into multiple input queues to store packets before they are forwarded. The ROLM (Randomized On-Line Matching) algorithm achieves a high degree of input-output matching thanks to the permutation of the inputs performed before requests are submitted. It also aims to reduce the latency of the hardware implementation (through the computation of the random permutation) and to achieve high fairness and throughput. The ROLM algorithm is implemented in two ways: a) in hardware (FPGA) and b) in software (C code for the AVR). The FPSLIC platform allows the hardware and software implementations of the algorithm to be evaluated and compared in a realistic way, since both the AVR microcontroller and the FPGA programmable logic are built with exactly the same technology and integrated on a monolithic device. Measurements of the scheduler's speed and area are reported, the performance of the two implementations is compared for different switch sizes, and the results of the ROLM algorithm are also compared with results for the FIRM algorithm obtained from related work.
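The matching step described in the abstract — randomly permute the inputs, then match each input on-line to a free output it has requested — can be pictured with the short sketch below. It is a behavioural illustration of that idea only; the ROLM hardware details and the FIRM comparison are not modelled here.

```python
# Illustrative randomized on-line matching for an input-queued switch.

import random

def rolm_like_match(requests, num_outputs, rng=random):
    """requests: {input_port: set of requested output ports} -> {input: output}."""
    order = list(requests.keys())
    rng.shuffle(order)                  # random permutation of the inputs
    free_outputs = set(range(num_outputs))
    match = {}
    for inp in order:                   # on-line: each input matched greedily
        wanted = requests[inp] & free_outputs
        if wanted:
            out = min(wanted)           # any deterministic tie-break works here
            match[inp] = out
            free_outputs.discard(out)
    return match

if __name__ == "__main__":
    reqs = {0: {1, 2}, 1: {2}, 2: {0, 2}}
    print(rolm_like_match(reqs, num_outputs=4))
```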
38

Development of a mutual priority scheduler for embedded ATM switches

Χρόνης, Ανδρέας 16 May 2007 (has links)
ATM is a network transmission technology that supports the transfer of heterogeneous traffic, that is, real-time traffic such as voice and video and non-real-time traffic such as computer data, using a mechanism that forwards fixed-size data units called cells. The performance of an ATM network depends to a large extent on the characteristics of its packet switches; to build efficient switches, we need to develop efficient high-speed schedulers that are simple to implement. This thesis presents the study and implementation of a new distributed scheduling algorithm for an ATM switch that uses memory organised into multiple input queues to store packets before they are forwarded. The Mutual Priority algorithm can achieve high throughput and an optimal service guarantee, equal to N cycles. It also offers very high performance even with a single iteration, outperforming the other algorithms. The Mutual Priority algorithm is implemented in two ways: a) in hardware (FPGA) and b) in software (C code for the AVR). The FPSLIC platform allows the hardware and software implementations to be evaluated and compared in a realistic way, since both the AVR microcontroller and the FPGA programmable logic are built with exactly the same technology and integrated on a monolithic device. Finally, measurements of the scheduler's speed and area are presented, the performance of the two implementations is compared for different switch sizes, and the results are compared with results for the FIRM algorithm obtained from related work. We observe that the Mutual Priority algorithm clearly outperforms the other algorithm, in both the hardware and the software implementation.
39

Models of time in audio processing environments

Burroughs, Ivan Neil 06 August 2008 (has links)
Time has always been a parameter to minimize in computer programs. It is the stuff that measures our patience as we wait for results. However, for a number of problems, we seek to model a notion of time that can be used to regulate the rate at which things happen. Audio processing is one of these problem areas. It has seen the development of many languages and environments with each one having to adopt a suitable notion of time to support such things as accurately timed events and interactivity while remaining efficient. In this thesis I will investigate the forms of simulated time within audio processing environments. To this end, I will define a set of properties that shape the construction of a model of time simulated on a computer. We can see these properties in the design of languages and environments that support the scheduling of events. With that in mind, I will provide a survey of the use of time in a number of computer languages and paradigms. The reach of this survey will not be exhaustive but will instead try to investigate different ideas with an emphasis on languages for audio processing. I will also put some of these ideas into practice by presenting two separate audio processing frameworks each with their own model of time.
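As one small illustration of a simulated notion of time of the kind surveyed here, the sketch below orders events on a logical, sample-based clock rather than on wall-clock time, so event timing is exact regardless of how fast the host renders audio blocks. This generic example is not drawn from the thesis itself; it only illustrates one of the models of time such a survey covers.

```python
# A minimal logical-time event scheduler for block-based audio rendering.

import heapq

class LogicalScheduler:
    def __init__(self, sample_rate=48000):
        self.sample_rate = sample_rate
        self.now = 0                       # current logical time in samples
        self.events = []                   # heap of (time_in_samples, seq, fn)
        self._seq = 0

    def at(self, seconds, fn):
        """Schedule fn to run at an absolute logical time given in seconds."""
        t = int(seconds * self.sample_rate)
        heapq.heappush(self.events, (t, self._seq, fn))
        self._seq += 1

    def render(self, block_samples):
        """Advance logical time by one audio block, firing due events in order."""
        end = self.now + block_samples
        while self.events and self.events[0][0] < end:
            t, _, fn = heapq.heappop(self.events)
            self.now = t
            fn(t / self.sample_rate)
        self.now = end

if __name__ == "__main__":
    s = LogicalScheduler()
    s.at(0.010, lambda t: print(f"note on at {t:.3f}s"))
    s.at(0.020, lambda t: print(f"note off at {t:.3f}s"))
    for _ in range(3):                     # three 512-sample blocks (~32 ms)
        s.render(512)
```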
40

Optimising application performance with QoS support in Ad Hoc networks

Marchang, Jims January 2016 (has links)
The popularity of wireless communication has increased substantially over the last decade, due to mobility support, flexibility and ease of deployment. Among next-generation mobile communication technologies, Ad Hoc networking plays an important role, since it can stand alone as a private network or become part of a public network, either for general use or as part of disaster-management scenarios. The performance of multihop Ad Hoc networks is heavily affected by interference, mobility, limited shared bandwidth, battery life, the error rate of the wireless medium, and the presence of hidden and exposed terminals. The scheduler and the Medium Access Control (MAC) play a vital role in providing Quality of Service (QoS) and policing delay, end-to-end throughput, jitter, and fairness for user application services. This project aims to optimise the use of the limited available resources, in terms of battery life and bandwidth, in order to reduce packet delivery time and interference, enhance fairness, and increase the end-to-end throughput and overall network performance. The end-to-end throughput of an Ad Hoc network decays rapidly as the hop count between the source-destination pair increases, and additional flows injected along the path of an existing flow affect the flows arriving from further away. To address this problem, the thesis proposes a Hop Based Dynamic Fair Scheduler that prioritises flows according to the hop count of frames, leading to a 10% increase in fairness compared to IEEE 802.11b with a single queue. Another mechanism to improve network performance in high-congestion scenarios is network-aware queuing, which reduces loss and improves the end-to-end throughput of the communicating nodes using a medium access control method named Dynamic Queue Utilisation Based Medium Access Control (DQUB-MAC). This MAC provides a higher access probability to nodes with congested queues, so that data generated at a high rate can be forwarded more effectively. Finally, the DQUB-MAC is modified to take account of hop count, and a new MAC called Queue Utilisation with Hop Based Enhanced Arbitrary Inter Frame Spacing (QU-EAIFS) is also designed in this thesis. Validation tests in a long-chain topology demonstrate that DQUB-MAC and QU-EAIFS increase the performance of the network during saturation by 35% and 40% respectively, compared to IEEE 802.11b. High transmission power leads to greater interference and represents a significant challenge for Ad Hoc networks, particularly in the context of shared bandwidth and limited battery life. The thesis proposes two power control mechanisms that also employ a random backoff value directly proportional to the number of active contending neighbours. The first mechanism, named Location Based Transmission using a Neighbour Aware with Optimised EIFS for Ad Hoc Networks (LBT-NA with Optimised EIFS MAC), controls the transmission power by exchanging location information between the communicating nodes and provides better fairness through a dynamic EIFS based on the overheard packet length. In a random topology with randomly placed source and destination nodes, the performance gain of the proposed MAC over IEEE 802.11b ranges from approximately 3% to above 90%, and the fairness index improves significantly. Further, the transmission power is directly proportional to the communication distance, so when the communicating distance is short, performance is higher and node durability increases compared to a fixed-transmission-power MAC such as IEEE 802.11b. However, this mechanism requires positional information, which is typically unavailable, so a more practical cross-layer power control scheme called Dynamic Neighbour Aware Power-controlled MAC (Dynamic NA-PMAC) is designed to adjust the transmission power by estimating the communication distance from the overheard signal strength. In summary, the thesis proposes a number of mechanisms that improve fairness amongst competing flows, increase the end-to-end throughput, decrease the delay, reduce the transmission power in Ad Hoc environments, and substantially increase the overall performance of the network.
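The queue-utilisation idea behind DQUB-MAC summarised above — giving nodes with fuller forwarding queues a higher chance of winning the channel — can be sketched as a backoff window that shrinks with queue occupancy. The window bounds and the linear scaling below are illustrative assumptions, not the parameters used in the thesis.

```python
# Illustrative queue-utilisation-based backoff: fuller queue -> smaller window.

import random

def backoff_slots(queue_len, queue_cap, cw_min=8, cw_max=64, rng=random):
    """Return a random backoff, shrinking the window as queue utilisation rises."""
    utilisation = min(1.0, queue_len / float(queue_cap))
    window = max(cw_min, int(cw_max - utilisation * (cw_max - cw_min)))
    return rng.randint(0, window - 1)

if __name__ == "__main__":
    random.seed(1)
    for q in (2, 30, 58):                 # lightly, moderately, heavily loaded
        print(f"queue={q:2d}  backoff={backoff_slots(q, queue_cap=60)}")
```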
