
Dynamic Control in Stochastic Processing Networks

A stochastic processing network is a system that takes materials of various kinds as inputs and uses processing resources to produce other materials as outputs. Such a network provides a powerful abstraction of a wide range of complex real-world systems, including semiconductor wafer fabrication facilities, networks of data switches, and large-scale call centers. Key performance measures of a stochastic processing network include throughput, cycle time, and holding cost. Network performance can be dramatically affected by the choice of operational policies.

We propose a family of operational policies called maximum pressure policies. Maximum pressure policies are attractive in that their implementation uses minimal state information about the network. The deployment of a resource (server) is decided based on the queue lengths in its serviceable buffers and the queue lengths in their immediate downstream buffers. In particular, the decision does not use arrival rate information, which is often difficult or impossible to estimate reliably. We prove that a maximum pressure policy can maximize throughput for a general class of stochastic processing networks. We also establish the asymptotic optimality of maximum pressure policies for stochastic processing networks with a unique bottleneck, where optimality is in terms of minimizing the workload process. A key step in the proof of the asymptotic optimality is to show that the network processes under maximum pressure policies exhibit state space collapse.
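The decision rule described above can be sketched in code. The following is a minimal illustration, not the dissertation's formal policy: the buffer layout, service rates, and tie-breaking are hypothetical, and the "pressure" of each activity is taken as its service rate times the queue-length differential between the serviced buffer and its immediate downstream buffer.

```python
def max_pressure_activity(queues, activities):
    """Pick the serviceable activity with the largest positive pressure.

    queues     -- dict mapping buffer name -> current queue length
    activities -- list of (buffer, service_rate, downstream) tuples,
                  where downstream is None if jobs leave the network
    Returns the chosen activity tuple, or None (idle) if no activity
    has strictly positive pressure.
    """
    best, best_pressure = None, 0.0
    for buf, rate, downstream in activities:
        down_len = queues.get(downstream, 0) if downstream is not None else 0
        # Pressure: rate-weighted queue-length differential between the
        # serviced buffer and its immediate downstream buffer. Note that
        # only local queue lengths are used -- no arrival rates.
        pressure = rate * (queues[buf] - down_len)
        if pressure > best_pressure:
            best, best_pressure = (buf, rate, downstream), pressure
    return best

# Example: one server can serve buffer "a" (which feeds "b")
# or buffer "b" (whose jobs exit the network).
queues = {"a": 6, "b": 2}
activities = [("a", 1.0, "b"), ("b", 1.5, None)]
print(max_pressure_activity(queues, activities))  # -> ('a', 1.0, 'b')
```

Here serving "a" has pressure 1.0 × (6 − 2) = 4, while serving "b" has pressure 1.5 × (2 − 0) = 3, so the server works on buffer "a". The rule consults only the queue lengths visible to the server, which is the minimal-state-information property highlighted above.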

Identifier: oai:union.ndltd.org:GATECH/oai:smartech.gatech.edu:1853/7105
Date: 05 May 2005
Creators: Lin, Wuqin
Publisher: Georgia Institute of Technology
Source Sets: Georgia Tech Electronic Thesis and Dissertation Archive
Language: en_US
Detected Language: English
Type: Dissertation
Format: 521808 bytes, application/pdf
