We develop efficient coordination techniques that support inelastic traffic in large-scale distributed dynamic spectrum access (DSA) networks. Combined with any learning algorithm, the proposed techniques enable DSA users to locate and exploit spectrum opportunities effectively, thereby increasing their achieved throughput (or, more generally, their "rewards"). Learning algorithms allow DSA users to learn by interacting with the environment and to use the acquired knowledge to select actions that maximize their own objectives, thereby, ideally, maximizing their long-term cumulative received reward/throughput. However, when DSA users' objectives are not carefully coordinated, learning algorithms can lead to poor overall system performance, resulting in lower per-user average achieved rewards.
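As a purely illustrative sketch of this learn-by-interaction process (not the specific algorithm studied in the thesis), the Python snippet below shows a single DSA user running an epsilon-greedy multi-armed bandit over a set of channels. The channel count, epsilon value, and reward model are assumptions introduced only for illustration.

```python
import random

class DSAUserAgent:
    """Minimal epsilon-greedy learner: one DSA user choosing among channels.

    NOTE: an illustrative sketch; the parameters and reward model are
    assumptions, not the coordination technique developed in the thesis.
    """

    def __init__(self, num_channels, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * num_channels       # times each channel was tried
        self.estimates = [0.0] * num_channels  # running mean reward per channel

    def select_channel(self):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if random.random() < self.epsilon:
            return random.randrange(len(self.estimates))
        return max(range(len(self.estimates)), key=lambda c: self.estimates[c])

    def update(self, channel, reward):
        # Incremental update of the mean reward observed on this channel.
        self.counts[channel] += 1
        n = self.counts[channel]
        self.estimates[channel] += (reward - self.estimates[channel]) / n


# Hypothetical usage: the reward stands in for the throughput measured
# on the selected channel during one sensing/transmission round.
agent = DSAUserAgent(num_channels=5)
for _ in range(1000):
    ch = agent.select_channel()
    reward = random.random()  # placeholder for measured per-round throughput
    agent.update(ch, reward)
```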
In this thesis, we derive efficient objective functions that DSA users can aim to maximize, such that the users' collective behavior also leads to good overall system performance, thereby maximizing each user's long-term cumulative received reward. We show that the proposed techniques are: (i) efficient, by enabling users to achieve high rewards; (ii) scalable, by performing well in systems with small as well as large numbers of users; (iii) learnable, by allowing users to reach high rewards very quickly; and (iv) distributed, by being implementable in a decentralized manner. / Graduation date: 2013
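To make the idea of aligning per-user objectives with system performance concrete, the sketch below uses a difference-reward-style objective, in which each user is credited with its marginal contribution to a global reward. This is one common alignment technique, not necessarily the objective functions derived in the thesis; the global reward model and all names are assumptions for illustration.

```python
def global_reward(channel_choices, num_channels):
    """Hypothetical system objective: number of channels occupied by at least
    one user (a stand-in model, not the thesis's actual formulation)."""
    loads = [channel_choices.count(c) for c in range(num_channels)]
    return sum(1.0 for load in loads if load > 0)

def difference_objective(user, channel_choices, num_channels):
    """Credit a user with its marginal contribution to the global reward:
    G(all users) minus G(all users except this one)."""
    without_user = channel_choices[:user] + channel_choices[user + 1:]
    return (global_reward(channel_choices, num_channels)
            - global_reward(without_user, num_channels))

# Hypothetical usage with three users and three channels.
choices = [0, 0, 2]
print(difference_objective(0, choices, num_channels=3))  # 0.0: channel 0 stays occupied without user 0
print(difference_objective(2, choices, num_channels=3))  # 1.0: user 2 uniquely occupies channel 2
```

Under this kind of objective, a user maximizing its own credited reward is pushed toward choices (here, less crowded channels) that also raise the global reward, which is the coordination effect the abstract describes.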
Identifier | oai:union.ndltd.org:ORGSU/oai:ir.library.oregonstate.edu:1957/37972 |
Date | 06 March 2013 |
Creators | NoroozOliaee, MohammadJavad |
Contributors | Hamdaoui, Bechir |
Source Sets | Oregon State University |
Language | en_US |
Detected Language | English |
Type | Thesis/Dissertation |