261 |
System architecture and hardware implementations for a reconfigurable MPLS router
Li, Sha, 30 September 2003
With their extremely wide bandwidth and good channel properties, optical fibers have brought fast and reliable data transmission to today's data communications. However, to handle the heavy traffic flowing through optical physical links, much faster processing speed is required, or congestion can take place at network nodes. Also, to provide voice, data, and all categories of multimedia services, distinguishing between different data flows is a requirement. To address these issues of router performance, Quality of Service/Class of Service, and traffic engineering, Multi-Protocol Label Switching (MPLS) was proposed for IP-based internetworks. In addition, routers whose hardware architecture is flexible enough to support ever-evolving protocols and services without requiring major infrastructure modification or replacement are also desirable. Therefore, a reconfigurable hardware implementation of MPLS was proposed in this project to obtain fast overall processing speed at network nodes.
The long-term goal of this project is to develop a reconfigurable MPLS router that uniquely integrates the best features of operations conducted in software and in run-time-reconfigurable hardware. The scope of this thesis includes system architecture and service algorithm considerations, and Verilog coding and testing for an actual device. A hardware/software co-design technique was used to partition and schedule the protocol code for execution on both a general-purpose processor and stream-based hardware. A novel RPS scheme, which is practically easy to build and can realize pipelined packet-by-packet data transfer at each output, was proposed to take the place of traditional crossbar switching. In RPS, packets of variable length can be switched intelligently without performing packet segmentation and reassembly. A preliminary theoretical analysis of queuing issues was presented, and an improved multiple-queue service scheduling policy, UD-WRR, was proposed, which can reduce packet waiting time without sacrificing performance. In order to carry out the tests appropriately, dedicated circuitry for the MPLS functional block to interface with a specific MAC chip was implemented as well. The hardware designs for all functions were realized with a single Field Programmable Gate Array (FPGA) device.
The main result presented in this thesis is the MPLS function implementation, which realizes a major part of layer-three routing at the reconfigurable hardware level and is a significant step toward the goal of building a router that is both fast and flexible.
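The abstract does not spell out the UD-WRR rules, but the weighted round-robin baseline that such a multiple-queue policy refines can be sketched as follows (queue contents and weights are illustrative, not from the thesis):

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Serve multiple output queues in proportion to their weights:
    in each round, queue i may transmit up to weights[i] packets."""
    order = []
    while any(queues):
        for i, (q, w) in enumerate(zip(queues, weights)):
            for _ in range(w):
                if q:
                    order.append((i, q.popleft()))
    return order

# Two flows: flow 0 has weight 2, flow 1 has weight 1.
queues = [deque(["a1", "a2", "a3"]), deque(["b1"])]
print(weighted_round_robin(queues, [2, 1]))
# → [(0, 'a1'), (0, 'a2'), (1, 'b1'), (0, 'a3')]
```

A policy like UD-WRR would modify the per-round service order or quanta to reduce packet waiting time; the skeleton above only shows the plain weighted service that it improves on.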
|
263 |
Timed Petri Net Based Scheduling for Mechanical Assembly: Integration of Planning and Scheduling
OKUMA, Shigeru; SUZUKI, Tatsuya; FUJIWARA, Fumiharu; INABA, Akio, 20 April 1998
No description available.
|
264 |
A Task Selection Based Power-aware Scheduling Algorithm for Applying DVS
Mori, Yuichiro; Asakura, Koichi; Watanabe, Toyohide, 08 November 2009
No description available.
|
265 |
Scheduling for Interactive and Parallel Applications on Grids
Fernández del Castillo, Enol, 07 November 2008
Grid computing constitutes one of the most promising fields in computer systems. The next generation of scientific applications can profit from a large-scale, multi-organizational infrastructure that offers more computing power than any single institution can afford. Grids need high-level schedulers to manage resources spanning different organizations. These Grid Resource Management Systems (GRMSs) must make scheduling decisions without actually owning the grid resources or having full control over the jobs running on them, which introduces new challenges into the scheduling process. Although grids consist of many resources, and jobs submitted to a grid may benefit from using them in a coordinated way, most GRMSs have focused on the execution of sequential jobs, treating the grid as a large multi-site environment where jobs run in a batch-like way.
In this work, however, we concentrate on a kind of job that has received little attention to date: interactive and parallel jobs. Interactive jobs require the possibility of starting in the immediate future and need mechanisms to establish a communication channel with the user. Parallel applications introduce the need for co-allocation, guaranteeing the simultaneous availability of resources when they are accessed by the application. We address the challenges of executing such jobs with a new architecture for a GRMS and an implementation of that architecture: the CrossBroker resource manager.
Our architecture includes mechanisms that allow the co-allocation of parallel jobs and the interaction of users with running applications. Additionally, with the introduction of a multiprogramming mechanism, it provides fast job startup even in high-occupancy scenarios.
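As a concrete illustration of the co-allocation requirement, the following sketch (not the actual CrossBroker code; the interval data is invented) searches for the earliest start time at which every required resource is simultaneously free:

```python
def earliest_coallocation(windows, duration):
    """windows: one list of free (start, end) intervals per resource.
    Return the earliest t such that every resource is free during
    [t, t + duration), or None if no common slot exists."""
    # The earliest feasible start, if any, coincides with the start of
    # some free interval, so those are the only candidates to test.
    candidates = sorted(s for res in windows for (s, e) in res)
    for t in candidates:
        if all(any(s <= t and t + duration <= e for (s, e) in res)
               for res in windows):
            return t
    return None

# Resource 0 is free on [0,5) and [8,20); resource 1 on [3,12).
print(earliest_coallocation([[(0, 5), (8, 20)], [(3, 12)]], 4))  # → 8
```

A real co-allocator must additionally hold (or reserve) the chosen slot on every resource until the job starts, since the free windows can change between the decision and the launch.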
|
266 |
Reward Scheduling for QoS in Cloud Applications
Elnably, Ahmed, 06 September 2012
The growing popularity of multi-tenant, cloud-based computing platforms is increasing interest in resource allocation models that permit flexible sharing of the underlying infrastructure. This thesis introduces a novel IO resource allocation model that better captures the requirements of paying tenants sharing a physical infrastructure. The model addresses a major concern regarding application performance stability when clients migrate from a dedicated to a shared platform. Specifically, while clients would like their applications to behave similarly in both situations, traditional models of fairness, like proportional share allocation, do not exhibit this behavior in the context of modern multi-tiered storage architectures.
We also present a scheduling algorithm, the Reward Scheduler, that implements the new allocation policy by rewarding clients with better runtime characteristics, resulting in benefits to both the clients and the service provider. Moreover, the Reward Scheduler also supports weight-based capacity allocation subject to a minimum reservation and a maximum limit on the IO allocation of each task. Experimental results indicate that the proposed algorithm allocates system capacity in proportion to each client's entitlement.
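The weight-based allocation with reservations and limits described above can be sketched as a clamped proportional-share computation. This is an illustrative reconstruction, not the Reward Scheduler's actual algorithm, and the client names and numbers are made up:

```python
def allocate(capacity, clients):
    """clients: dict name -> (weight, reservation, limit).
    Share capacity by weight, clamp each share to [reservation, limit],
    and redistribute any surplus among the unclamped clients by weight."""
    alloc = {}
    active = dict(clients)
    remaining = capacity
    while active:
        total_w = sum(w for w, r, l in active.values())
        share = {n: remaining * w / total_w for n, (w, r, l) in active.items()}
        clamped = {n: max(r, min(l, share[n])) for n, (w, r, l) in active.items()}
        pinned = {n for n in active if clamped[n] != share[n]}
        if not pinned:
            alloc.update(share)   # everyone fits within their bounds
            break
        for n in pinned:          # fix clients stuck at a bound, re-share the rest
            alloc[n] = clamped[n]
            remaining -= clamped[n]
            del active[n]
    return alloc

# A is capped at 30 of 100 units, so B absorbs the surplus.
print(allocate(100.0, {"A": (1, 0.0, 30.0), "B": (1, 0.0, 100.0)}))
```

Each pass either terminates or removes at least one client pinned at its reservation or limit, so the loop runs at most once per client.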
|
267 |
University Timetabling using Genetic Algorithm
Murugan, Anandaraj Soundarya Raja, January 2009
Automated timetabling and scheduling that meets all the requirements, which we call constraints, is always a difficult task and has been proven NP-complete. The idea behind my research is to implement a Genetic Algorithm on a general scheduling problem under predefined constraints and check the validity of the results; I then explain the possible use of other approaches such as expert systems, direct heuristics, network flows, simulated annealing, and others. It is observed that the Genetic Algorithm is a good solution technique for solving such problems. The program is written in C++, and the analysis is done using various tools explained in detail later.
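The C++ program itself is not reproduced here, but the genetic-algorithm approach can be sketched in a few lines: a chromosome assigns one timeslot per event, fitness counts violated clash constraints, and the usual selection, crossover, and mutation operators evolve the population. All parameters below are illustrative:

```python
import random

def ga_timetable(events, slots, clashes, pop=50, gens=200, seed=1):
    """Assign each event a timeslot so that clashing event pairs
    (e.g. sharing students or rooms) get different slots."""
    rng = random.Random(seed)

    def conflicts(c):
        return sum(1 for a, b in clashes if c[a] == c[b])

    popn = [[rng.randrange(slots) for _ in range(events)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=conflicts)          # best (fewest clashes) first
        if conflicts(popn[0]) == 0:
            break
        elite = popn[: pop // 2]          # survival of the fittest half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(events)
            child = p1[:cut] + p2[cut:]   # one-point crossover
            if rng.random() < 0.3:        # random mutation
                child[rng.randrange(events)] = rng.randrange(slots)
            children.append(child)
        popn = elite + children
    best = min(popn, key=conflicts)
    return best, conflicts(best)

best, cost = ga_timetable(events=6, slots=3, clashes=[(0, 1), (1, 2), (3, 4)])
print(cost)  # remaining clash count; 0 means a valid timetable was found
```

Real timetabling adds soft constraints (preferences) as weighted penalty terms in the fitness function rather than hard clash counts.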
|
268 |
Fairness-Aware Uplink Packet Scheduling Based on User Reciprocity for Long Term Evolution
Wu, Hsuan-Cheng, 03 August 2011
No description available.
|
269 |
Real-Time Task Scheduling under Thermal Constraints
Ahn, Youngwoo, August 2010
As the speed of integrated circuits increases, so does their power consumption. Most of this power is turned into heat, which must be dissipated effectively for the circuit to avoid thermal damage. Thermal control has therefore emerged as an important issue in the design and management of circuits and systems. Dynamic speed scaling, where the input power is temporarily reduced by appropriately slowing down the circuit, is one of the major techniques for managing power so as to maintain safe temperature levels.
In this study, we focus on thermally-constrained hard real-time systems, where timing guarantees must be met without exceeding safe temperature levels within the microprocessor. Speed scaling mechanisms provided in many of today's processors offer opportunities to temporarily increase the processor speed beyond levels that would be safe over extended time periods. This dissertation addresses the problem of safely controlling the processor speed when scheduling mixed workloads with both hard real-time periodic tasks and non-real-time, but latency-sensitive, aperiodic jobs.
We first introduce the Transient Overclocking Server, which safely reduces the response time of aperiodic jobs in the presence of hard real-time periodic tasks and thermal constraints. We then propose a design-time (off-line) execution-budget allocation scheme for the application of the Transient Overclocking Server. We show that there is an optimal budget allocation which depends on the temporal characteristics of the aperiodic workload. In order to provide a quantitative framework for the allocation of budget during system design, we present a queuing model and validate the model with results from a discrete-event simulator.
Next, we describe an on-line thermally-aware transient overclocking method to reduce the response time of aperiodic jobs efficiently at run-time. We describe a modified Slack-Stealing algorithm that considers the thermal constraints of the system together with the deadline constraints of periodic tasks. With the thermal model and temperature data provided by embedded thermal sensors, we compute slack for the aperiodic workload at run-time that satisfies both thermal and temporal constraints. We show that the proposed Thermally-Aware Slack-Stealing algorithm minimizes the response times of aperiodic jobs while guaranteeing both the thermal safety of the system and the schedulability of the real-time tasks. The two proposed speed control algorithms are examples of so-called proactive schemes, since they rely on a prediction of the thermal trajectory to control the temperature before safe levels are exceeded.
In practice, the effectiveness of proactive speed control for the thermal management of a system relies on the accuracy of the thermal model that underlies the prediction of the effects of speed scaling and task execution on the temperature of the processor. Due to variances in the manufacturing of the circuit and in the environment in which it operates, an accurate thermal model can be gathered only at deployment time. The absence of power data makes a straightforward derivation of a model impossible.
We therefore study and describe a methodology to efficiently infer the thermal model based on the monitoring of system temperatures and the number of instructions used for task executions.
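The proactive schemes above hinge on predicting the thermal trajectory. A minimal sketch, assuming a first-order (Newtonian cooling) thermal model with power growing cubically in speed; the coefficients below are invented, not the dissertation's calibrated values:

```python
def overclock_budget(T0, s_high, T_max, a, b, T_amb=25.0, dt=0.1, horizon=1000):
    """Number of dt-steps the processor can run at speed s_high before the
    predicted temperature exceeds T_max, under the first-order model
    dT/dt = a * s**3 - b * (T - T_amb)   (dynamic power ~ speed cubed)."""
    T = T0
    for k in range(horizon):
        T += dt * (a * s_high ** 3 - b * (T - T_amb))  # forward-Euler step
        if T > T_max:
            return k
    return horizon  # stays below T_max for the whole horizon

# Starting at ambient, how long can an overclocked speed of 2.0 be sustained
# before hitting a 35-degree cap? (a=1.0, b=0.5 are illustrative.)
print(overclock_budget(T0=25.0, s_high=2.0, T_max=35.0, a=1.0, b=0.5))
```

A proactive scheduler would run such a check before granting transient-overclocking budget to an aperiodic job, falling back to the nominal speed once the predicted trajectory approaches the thermal limit.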
|
270 |
Efficient Scheduling In Distributed Computing On Grid
Kaya, Ozgur, 01 December 2006
Today, many geographically distributed computing resources are idle much of the time. The aim of grid computing is to collect these resources into a single system, which helps solve problems that are too complex for a single PC. Scheduling plays a critical role in the efficient and effective management of resources to achieve high performance in a grid computing environment. Due to the heterogeneity and highly dynamic nature of the grid, developing scheduling algorithms for grid computing involves some challenges. In this work, we concentrate on the efficient scheduling of distributed tasks on a grid. We propose a novel scheduling heuristic for bag-of-tasks applications. The proposed algorithm primarily makes use of history-based runtime estimation: the history stores information about applications whose runtimes and other specific properties were recorded during previous executions, and scheduling decisions are made according to the similarity between applications. Defining similarity is an important aspect of this approach, apart from the best resource allocation. The aim of this scheduling algorithm, HISA (History Injected Scheduling Algorithm), is to define and find the similarity and to assign each job to the most suitable resource by making use of that similarity. In our evaluation, we use the grid simulation tool GridSim. A number of intensive experiments with various simulation settings have been conducted. Based on the experimental results, the effectiveness of the HISA scheduling heuristic is studied and compared to the other scheduling algorithms embedded in GridSim. The results show that history injection improves the performance of future job submissions on a grid.
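The history-based estimation idea can be sketched as follows; the feature set and the inverse-distance similarity below are invented for illustration (the thesis defines its own similarity metric, which is not reproduced here):

```python
def estimate_runtime(job, history, k=3):
    """Predict a job's runtime as the average runtime of the k most
    similar past executions recorded in the history."""
    def similarity(a, b):
        # Inverse distance over shared numeric features: identical
        # feature vectors score 1.0, distant ones approach 0.
        return 1.0 / (1.0 + sum(abs(a[f] - b[f]) for f in a))

    scored = sorted(history, key=lambda h: -similarity(job, h["features"]))
    top = scored[:k]
    return sum(h["runtime"] for h in top) / len(top)

history = [
    {"features": {"input_mb": 100, "params": 4}, "runtime": 50.0},
    {"features": {"input_mb": 110, "params": 4}, "runtime": 55.0},
    {"features": {"input_mb": 500, "params": 8}, "runtime": 240.0},
]
print(estimate_runtime({"input_mb": 105, "params": 4}, history, k=2))  # → 52.5
```

A scheduler would compute such an estimate per candidate resource (keeping a separate history for each) and assign the job where the predicted completion time is smallest.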
|