1

A Study of Scheduling Algorithm for GPRS

Lee, Hsusn-Chang 24 July 2003 (has links)
GPRS is a popular topic in mobile communication. To satisfy the Quality of Service (QoS) requirements of multimedia transmission, QoS is divided into four classes: conversational, interactive, streaming, and background. When a mobile communication network transmits multimedia data, it needs a proper scheduling algorithm to assign the radio resource so that every data flow meets its QoS requirement in the GPRS system. In this thesis, we discuss the properties of each traffic class and propose a transmission method for each one. The proposed methods are integrated with link adaptation to develop a scheduling algorithm that suits the QoS requirements of the GPRS system. In addition, we introduce the FIFO scheduling algorithm and a priority-based scheduling algorithm, and compare our methods with these existing algorithms. We use the OPNET simulation system to study the FIFO scheduling algorithm, the priority scheduling algorithm, and our new scheduling algorithm, and analyze and compare their delay time, packet loss ratio, and throughput. The simulations show that the new scheduling algorithm satisfies all QoS requirements and outperforms the other scheduling algorithms in the interactive class.
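The abstract does not spell out the scheduling rule itself. As a rough, hypothetical illustration of class-aware scheduling over the four QoS classes it names (class priorities, packet fields, and deadlines below are invented, not taken from the thesis), a Python sketch might look like:

```python
import heapq

# Hypothetical class-aware scheduling over the four 3GPP QoS classes named in
# the abstract; the priorities and example deadlines are made-up values.
CLASS_PRIORITY = {"conversational": 0, "streaming": 1, "interactive": 2, "background": 3}

class Packet:
    def __init__(self, traffic_class, deadline_ms, size_bytes):
        self.traffic_class = traffic_class
        self.deadline_ms = deadline_ms
        self.size_bytes = size_bytes

def schedule(packets):
    """Serve packets by QoS class first, then by earliest deadline within a class."""
    heap = [(CLASS_PRIORITY[p.traffic_class], p.deadline_ms, i, p)
            for i, p in enumerate(packets)]
    heapq.heapify(heap)
    while heap:
        _, _, _, packet = heapq.heappop(heap)
        yield packet

if __name__ == "__main__":
    queue = [Packet("background", 5000, 1500),
             Packet("conversational", 20, 160),
             Packet("interactive", 250, 512)]
    for p in schedule(queue):
        print(p.traffic_class, p.deadline_ms)
```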
2

Achieving predictable timing and fairness through cooperative polling

Sinha, Anirban 05 1900 (has links)
Time-sensitive applications that are also CPU-intensive, such as video games, video playback, and eye-candy desktops, are increasingly common. These applications run on commodity operating systems targeted at diverse hardware, and hence cannot assume that sufficient CPU is always available. Increasingly, these applications are designed to be adaptive. When executing multiple such applications, the operating system must not only provide good timeliness but also (optionally) allow their adaptations to be coordinated so that applications can deliver uniform fidelity. In this work, we present a starvation-free, fair process scheduling algorithm that provides predictable, low-latency execution without the use of reservations and helps adaptive time-sensitive tasks achieve consistent quality through cooperation. We combine an event-driven application model called cooperative polling with a fair-share scheduler. Cooperative polling allows timing or priority information to be shared across applications via the kernel, providing good timeliness, while the fair-share scheduler provides fairness and full utilization. Our experiments show that cooperative polling leverages the inherent efficiency advantages of voluntary context switching over involuntary pre-emption. Under CPU saturation, the scheduling responsiveness of cooperative polling is five times better than that of a well-tuned fair-share scheduler, and orders of magnitude better than that of the best-effort scheduler used in the mainstream Linux kernel. / Science, Faculty of / Computer Science, Department of / Graduate
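As a minimal sketch of the idea of mixing announced event deadlines with fair-share virtual time (the field names and the tie-breaking rule below are assumptions for illustration, not the thesis's actual cooperative-polling interface):

```python
# Illustrative policy: prefer a cooperative task whose announced event time is
# due, otherwise fall back to fair-share selection by smallest virtual runtime.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    name: str
    vruntime: float                      # weighted CPU time consumed so far
    next_event: Optional[float] = None   # deadline announced via cooperative polling

def pick_next(tasks, now):
    due = [t for t in tasks if t.next_event is not None and t.next_event <= now]
    if due:
        return min(due, key=lambda t: t.next_event)   # earliest announced event first
    return min(tasks, key=lambda t: t.vruntime)       # fair-share fallback

tasks = [Task("video", vruntime=12.0, next_event=100.2),
         Task("compile", vruntime=8.5)]
print(pick_next(tasks, now=100.5).name)   # -> video
```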
3

WORKFLOW SCHEDULING ALGORITHMS IN THE GRID

Dong, FANGPENG 25 April 2009 (has links)
The development of wide-area networks and the availability of powerful computers as low-cost commodity components are changing the face of computation. This progress makes it possible to utilize geographically distributed resources in multiple owner domains to solve large-scale problems in science, engineering, and commerce. Research on this topic has led to the emergence of Grid computing. To realize the promising potential of the Grid's vast distributed resources, effective and efficient scheduling algorithms are fundamentally important. However, scheduling problems are well known for their intractability, and many instances are in fact NP-complete. The situation becomes even more challenging in the Grid because of its unique characteristics: scheduling algorithms designed for traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources, do not work well in this new environment. This work focuses on workflow scheduling algorithms in the Grid. New challenges are discussed, previous research in this area is surveyed, and novel heuristic algorithms addressing the challenges are proposed and tested. The proposed algorithms contribute to the literature by taking the following factors into account when producing a schedule for a DAG-based workflow: predictable performance fluctuation and the non-deterministic performance model of Grid resources, co-scheduling of computation and data staging, the clustered distribution of Grid resources, and the ability to reschedule in response to performance changes after the initial schedule is made. The performance of the proposed algorithms is tested and analyzed by simulation under different workflow and resource configurations. / Thesis (Ph.D, Computing) -- Queen's University, 2009-04-23 22:30:09.646
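The thesis's heuristics are not reproduced in the abstract; as generic background, a simple list-scheduling step for a DAG workflow ranks tasks by the length of their longest downstream path (an upward-rank-style priority). The task graph and costs below are invented for illustration:

```python
# Rank each task by its own cost plus the heaviest chain of successors; sorting
# by this rank in descending order yields a valid priority order in which every
# parent precedes its children (assuming positive costs).
def upward_rank(dag, cost, task, memo=None):
    memo = {} if memo is None else memo
    if task not in memo:
        succ = dag.get(task, [])
        memo[task] = cost[task] + max((upward_rank(dag, cost, s, memo) for s in succ), default=0)
    return memo[task]

dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # edges: parent -> children
cost = {"A": 4, "B": 3, "C": 6, "D": 2}
order = sorted(dag, key=lambda t: upward_rank(dag, cost, t), reverse=True)
print(order)   # ['A', 'C', 'B', 'D']
```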
4

Batch Scheduling in Optical Burst Switching Networks

Wang, Yichuan 21 April 2009 (has links)
Optical Burst Switching (OBS) is an emerging technology for carrying bursty IP traffic directly over Wavelength Division Multiplexing (WDM) links. In an OBS network, a key challenge is to reduce the data loss rate with efficient scheduling algorithms. In this work, we first propose a novel traffic aggregation algorithm, Tree-based Burst Aggregation (TBA), which aggregates bursts that are routed within a common tree topology into a composite burst and switches them as a single unit whenever possible. We then propose a set of batch scheduling algorithms based on interval graphs for the core nodes. These algorithms explicitly consider the strong correlations among multiple bursts, and employ the proposed interval graphs and min-cost circular flow techniques to optimize network performance in terms of data loss rate in OBS networks.
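The interval-graph and min-cost circular-flow formulation is not given in the abstract; as a simplified stand-in, the sketch below treats a batch of bursts as time intervals and greedily packs non-overlapping bursts onto wavelength channels (all numbers are illustrative):

```python
# Simplified stand-in for batch burst scheduling: pack bursts (time intervals)
# onto channels so that bursts on the same channel never overlap; bursts that
# fit nowhere are counted as dropped. The thesis's interval-graph / min-cost
# circular-flow algorithms are not reproduced here.
def schedule_batch(bursts, num_channels):
    """bursts: list of (start, end); returns (assignments, dropped)."""
    channels = [[] for _ in range(num_channels)]   # accepted intervals per channel
    assignments, dropped = {}, []
    for start, end in sorted(bursts, key=lambda b: b[1]):   # earliest end first
        for ch, accepted in enumerate(channels):
            if all(end <= s or start >= e for s, e in accepted):
                accepted.append((start, end))
                assignments[(start, end)] = ch
                break
        else:
            dropped.append((start, end))
    return assignments, dropped

print(schedule_batch([(0, 4), (1, 3), (2, 5), (6, 8)], num_channels=2))
```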
5

Real-Time Scheduling methods for High Performance Signal Processing Applications on Multicore platform

Manoharan, Jegadish, Chandrakumar, Somanathan, Ramachandran, Ajit January 2012 (has links)
High-performance signal processing applications are computationally intensive and complex, and large amounts of data must be processed at every instant. These complex algorithms, combined with real-time requirements, demand that tasks be executed in parallel within specified time constraints. A high-performance computing platform such as a multicore system is therefore needed to fulfill these requirements; the problem then lies in scheduling these real-time tasks on the multicore system. In this thesis, we study and compare the different scheduling algorithms available for multicore platforms with a hierarchical memory architecture. We evaluate their performance by comparing their schedulability using tasks from the HPEC benchmark suite for radar signal processing applications. In addition, we propose a new algorithm based on the PD2 scheduling algorithm, called Hybrid PD2, for multicore platforms with a hierarchical shared cache. We compare the Hybrid PD2 algorithm with the other scheduling algorithms using four randomly generated task sets.
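For background on the PD2 family the thesis builds on, the sketch below computes the standard Pfair subtask windows that PD2-style schedulers prioritize by earliest pseudo-deadline; it does not include PD2's tie-breaking rules or the proposed cache-aware Hybrid PD2:

```python
import math

# A task with utilization w = e/p is split into unit-time subtasks; the i-th
# subtask (i = 1, 2, ...) must execute inside [pseudo_release, pseudo_deadline).
def subtask_window(exec_time, period, i):
    w = exec_time / period                      # task weight (utilization)
    pseudo_release = math.floor((i - 1) / w)
    pseudo_deadline = math.ceil(i / w)
    return pseudo_release, pseudo_deadline

# A task needing 3 time units every 5 (weight 0.6): windows of its first subtasks.
for i in range(1, 4):
    print(i, subtask_window(3, 5, i))   # (0, 2), (1, 4), (3, 5)
```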
6

Lifetime-Based Scheduling Algorithms for P2P Collaborative File Distribution

Liu, Yun-Chi 06 August 2008 (has links)
Prior research in P2P file sharing mostly focuses on topics such as overlay topology, content searching, peer discovery, sharing fairness, and incentive mechanisms, rather than on scheduling algorithms for peer-to-peer collaborative file distribution. The scheduling algorithm specifies how file pieces are distributed among peers. When the peer holding the rarest piece leaves, the other peers may be left with an incomplete file. Our algorithms take the lifetime of peers in the P2P network into account. We first use the distribution of peer lifetimes and the demand of each peer to decide which peers send the rarest pieces, and then use the same information to decide which peers receive those pieces. Our goals are to maximize the number of peers that download the entire file before they leave, to increase the availability of different file pieces, and to minimize the transmission time of the last completion. Finally, we compare the performance of the RPF, MDNF, Lifetime-based RPF, and Lifetime-based MDNF algorithms.
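The exact Lifetime-based RPF/MDNF rules are not in the abstract; the sketch below uses an invented scoring formula that merely illustrates the idea of weighting piece rarity by how soon its holders are expected to leave:

```python
# Illustrative scoring in the spirit of lifetime-based rarest-piece-first:
# prefer pieces held by few peers, and more urgently when the holders are
# expected to depart soon. The formula is invented, not the thesis's rule.
def piece_scores(holders, remaining_lifetime):
    """holders: peer -> set of pieces it holds; remaining_lifetime: peer -> seconds left."""
    pieces = {}
    for peer, held in holders.items():
        for piece in held:
            pieces.setdefault(piece, set()).add(peer)
    scores = {}
    for piece, peers in pieces.items():
        rarity = 1.0 / len(peers)                                    # fewer holders -> rarer
        urgency = 1.0 / min(remaining_lifetime[p] for p in peers)    # soonest departure
        scores[piece] = rarity * urgency
    return scores

holders = {"peer1": {"A", "B", "C"}, "peer2": {"B", "C"}}
lifetime = {"peer1": 30.0, "peer2": 300.0}
scores = piece_scores(holders, lifetime)
print(max(scores, key=scores.get))   # -> "A": rare, and its only holder leaves soon
```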
7

Application-aware Scheduling in Multichannel Wireless Networks with Power Control

Nguyen, Minh Duc January 2012 (has links)
A scheduling algorithm allocates system resources among processes and data flows. The joint channel-assignment and workload-based scheduling (CAWS) algorithm is a recently developed algorithm for scheduling in the downlink of multi-channel wireless systems such as OFDM. Compared to well-known algorithms, the CAWS algorithm has been proved to be throughput-optimal under flow-level dynamics. In this master's thesis project, we design a system that accounts for power control and for the characteristics of common radio channels. We evaluate the efficiency of the algorithm under a diverse set of conditions, and we also analyze the CAWS algorithm under different traffic densities.
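The CAWS weights and power-control details are not specified in the abstract; as a generic stand-in for multichannel downlink scheduling, the sketch below applies a MaxWeight-style rule that serves, on each channel, the flow with the largest backlog-times-rate product:

```python
# Generic MaxWeight-style channel assignment; this is not the CAWS algorithm
# itself, only an illustration of queue- and rate-aware scheduling.
def assign_channels(backlog, rate):
    """backlog: flow -> queued bits; rate: (flow, channel) -> achievable bits/slot."""
    channels = {ch for (_, ch) in rate}
    assignment = {}
    for ch in sorted(channels):
        best = max(backlog, key=lambda f: backlog[f] * rate.get((f, ch), 0.0))
        assignment[ch] = best
    return assignment

backlog = {"flow1": 8000, "flow2": 1500}
rate = {("flow1", 0): 1.0, ("flow2", 0): 2.5, ("flow1", 1): 3.0, ("flow2", 1): 0.5}
print(assign_channels(backlog, rate))   # {0: 'flow1', 1: 'flow1'}
```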
8

Capturing Successive Interference Cancellation in A Joint Routing and Scheduling Algorithm for Wireless Communication Networks

Rakhshan, Ali 01 January 2013 (has links) (PDF)
Interference limits the throughput of modern wireless communication networks, and thus successful interference mitigation can have a significant impact on network performance. Successive interference cancellation (SIC) has emerged as a promising physical-layer method: multiple packets received simultaneously need not be treated as a "collision" requiring retransmission; rather, under certain conditions, all of the packets can be decoded. SIC can thus serve as an important design element that provides higher performance for the network. However, it also requires rethinking the way traditional routing and scheduling algorithms, which are designed for a traditional physical layer, are developed. To consider routing and scheduling over a physical layer employing SIC, tools such as the oft-employed conflict graph need to be modified. In particular, a notion of links interfering with other links "indirectly" is required, an issue that has been ignored in many past works. Therefore, taking the dependencies and interference between links into account, we present a joint routing and scheduling algorithm that incorporates an understanding of the SIC employed at the physical layer, and show that it surpasses previous algorithms. The maximum-throughput scheduling problem is known to be NP-hard; moreover, even when maximum-throughput scheduling is achievable, it can produce highly unfair rates among users despite being throughput-efficient. Hence, proportional fairness is incorporated in the proposed algorithm.
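As background on the proportional-fairness objective mentioned above (not the thesis's SIC-aware joint algorithm), the classic proportional-fair rule serves, in each slot, the link whose current achievable rate is largest relative to its averaged throughput:

```python
# Classic proportional-fair selection plus an exponential average update;
# generic PF scheduling shown only to illustrate the fairness objective.
def pf_pick(inst_rate, avg_rate):
    return max(inst_rate, key=lambda link: inst_rate[link] / avg_rate[link])

def update_avg(avg_rate, served, inst_rate, alpha=0.1):
    return {link: (1 - alpha) * avg_rate[link] + alpha * (inst_rate[link] if link == served else 0.0)
            for link in avg_rate}

inst = {"link_a": 5.0, "link_b": 2.0}
avg = {"link_a": 4.0, "link_b": 0.5}
served = pf_pick(inst, avg)            # link_b: 2.0/0.5 = 4.0 > 5.0/4.0 = 1.25
print(served, update_avg(avg, served, inst))
```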
