
Cross-layer adaptive transmission scheduling in wireless networks

A promising new approach to wireless network optimization is cross-layer design. This thesis focuses on exploiting channel state information (CSI) from the physical layer for optimal transmission scheduling at the medium access control (MAC) layer. The first part of the thesis considers exploiting CSI via a distributed channel-aware MAC protocol. The MAC protocol is analysed using a centralized design approach and a non-cooperative game-theoretic approach. Structural results are obtained, and provably convergent stochastic approximation algorithms that can estimate the optimal transmission policies are proposed. In particular, in the game-theoretic MAC formulation, it is proved that the best-response transmission policies are threshold in the channel state and that there exists a Nash equilibrium at which every user deploys a threshold transmission policy. This threshold result leads to a particularly efficient stochastic-approximation-based adaptive learning algorithm and a simple distributed implementation of the MAC protocol. Simulations show that the channel-aware MAC protocols achieve system throughputs that increase with the number of users.
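To illustrate the kind of policy structure described above, the following minimal Python sketch combines a transmission rule that is threshold in the channel state with a Robbins-Monro stochastic-approximation update of the threshold. The Rayleigh fading model, the target transmission rate, and the step-size rule are assumptions made for this example only; the thesis's actual protocol, payoff structure, and learning recursion are not specified in this abstract.

```python
import numpy as np

# Minimal sketch (assumptions only): a threshold transmission policy in the
# channel state, with the threshold adapted by a Robbins-Monro style
# stochastic-approximation recursion. The fading model and the target
# transmission rate are illustrative placeholders, not the thesis's.

rng = np.random.default_rng(0)

def transmit(channel_gain: float, threshold: float) -> bool:
    """Threshold policy: transmit only when the channel is good enough."""
    return channel_gain >= threshold

target_rate = 0.3   # assumed target fraction of slots in which the user transmits
threshold = 1.0     # arbitrary initial threshold

for n in range(1, 10_001):
    h = rng.rayleigh(scale=1.0)      # assumed block-fading channel gain
    decision = transmit(h, threshold)
    # Robbins-Monro update with decreasing step size 1/n: the threshold is
    # raised when the user transmits too often and lowered otherwise, so the
    # empirical transmission rate converges to the target.
    threshold += (1.0 / n) * (float(decision) - target_rate)

print(f"learned threshold = {threshold:.3f}")
```

The point of the sketch is only that a threshold policy reduces the learning problem to tracking a single scalar per user, which is what makes a simple distributed implementation plausible.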
The thesis also considers opportunistic transmission scheduling from the perspective of a single user, using Markov Decision Process (MDP) approaches. Both channel state information and channel memory are exploited for opportunistic transmission. First, a finite horizon MDP transmission scheduling problem is considered; the finite horizon formulation is suitable for short-term delay constraints. It is proved that the optimal transmission policy for this problem is threshold in both the buffer occupancy state and the transmission time. This two-dimensional threshold structure substantially reduces the computational complexity of computing and implementing the optimal policy. Second, the opportunistic transmission scheduling problem is formulated as an infinite horizon average cost MDP with a constraint on the average waiting cost. An advantage of the infinite horizon formulation is that the optimal policy is stationary. Using Lagrangian dynamic programming theory and the supermodularity method, it is proved that the stationary optimal transmission scheduling policy is a randomized mixture of two policies that are threshold in the buffer occupancy state. A stochastic approximation algorithm and a Q-learning-based algorithm that can adaptively estimate the optimal transmission scheduling policies are then proposed.
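Both structural results lend themselves to very compact policy representations. The sketch below is purely illustrative: the horizon length, the threshold values, and the mixing probability are assumptions, not values from the thesis. It shows a finite-horizon rule that is threshold in both buffer occupancy and time, and a stationary randomized mixture of two buffer-threshold policies of the kind obtained for the constrained average-cost MDP.

```python
import numpy as np

rng = np.random.default_rng(1)

def finite_horizon_policy(buffer_occupancy: int, t: int,
                          thresholds: np.ndarray) -> bool:
    """Two-dimensional threshold rule (assumed form): at slot t, transmit
    iff the buffer occupancy is at or above a time-dependent threshold."""
    return buffer_occupancy >= thresholds[t]

def randomized_threshold_mixture(buffer_occupancy: int, b_low: int,
                                 b_high: int, mix_prob: float) -> bool:
    """Stationary policy for the constrained average-cost formulation
    (assumed form): randomize between two pure buffer-threshold policies."""
    b = b_low if rng.random() < mix_prob else b_high
    return buffer_occupancy >= b

# Illustrative parameters only (not from the thesis).
T = 10
# Here the buffer threshold is assumed to decrease as the deadline nears,
# making transmission more likely late in the horizon.
time_thresholds = np.linspace(8, 2, T).round().astype(int)

print(finite_horizon_policy(buffer_occupancy=5, t=8, thresholds=time_thresholds))
print(randomized_threshold_mixture(buffer_occupancy=5, b_low=3, b_high=7, mix_prob=0.4))
```

Because each policy is described by a few scalars (thresholds and a mixing probability) rather than a full state-action table, stochastic approximation or Q-learning style schemes only need to estimate these few parameters online.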

Identifier: oai:union.ndltd.org:LACETR/oai:collectionscanada.gc.ca:BVAU./1626
Date: 05 1900
Creators: Ngo, Minh Hanh
Publisher: University of British Columbia
Source Sets: Library and Archives Canada ETDs Repository / Centre d'archives des thèses électroniques de Bibliothèque et Archives Canada
Language: English
Detected Language: English
Type: Electronic Thesis or Dissertation
Format: 10249140 bytes, application/pdf
