371 |
Predictive Radio Access Networks for Vehicular Content Delivery
Abou-zeid, Hatem, 01 May 2014
An unprecedented era of “connected vehicles” is becoming an imminent reality. This
is driven by advances in vehicular communications, and the development of in-vehicle
telematics systems supporting a plethora of applications. The diversity and multitude
of such developments will, however, introduce excessive congestion across wireless
infrastructure, compelling operators to expand their networks. An alternative to
network expansions is to develop more efficient content delivery paradigms. In particular,
alleviating Radio Access Network (RAN) congestion is important to operators
as it postpones costly investments in radio equipment installations and new spectrum.
Efficient RAN frameworks are therefore paramount to enabling this era of vehicular connectivity.
Fortunately, the predictability of human mobility patterns, particularly that of vehicles
traversing road networks, offers unique opportunities to pursue proactive RAN
transmission schemes. Knowing the routes vehicles are going to traverse enables the
network to forecast spatio-temporal demands and predict service outages that specific
users may face. This can be accomplished by coupling the mobility trajectories with
network coverage maps to provide estimates of the future rates users will encounter
along a trip.
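As a rough illustration of this idea (not taken from the thesis; the class, function names, and numbers below are invented), a rate forecast along a predicted route can be sketched as a coverage-map lookup at each predicted position:

```python
# Hypothetical sketch: forecast per-user data rates along a predicted route by
# looking up a coverage map, then flag intervals where an outage is likely.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Waypoint:
    t: float                # seconds from trip start
    cell: Tuple[int, int]   # grid cell the vehicle is predicted to occupy

def forecast_rates(route: List[Waypoint],
                   coverage_map: Dict[Tuple[int, int], float],  # cell -> Mbit/s
                   outage_threshold: float = 1.0) -> List[Tuple[float, float, bool]]:
    """Return (time, predicted rate, outage flag) for each predicted waypoint."""
    profile = []
    for wp in route:
        rate = coverage_map.get(wp.cell, 0.0)   # unmapped cells: assume no coverage
        profile.append((wp.t, rate, rate < outage_threshold))
    return profile

route = [Waypoint(0, (0, 0)), Waypoint(30, (0, 1)), Waypoint(60, (1, 1))]
coverage = {(0, 0): 12.0, (0, 1): 0.4, (1, 1): 6.0}
print(forecast_rates(route, coverage))          # the (0, 1) cell is a predicted outage
```

A proactive scheduler could then, for instance, prebuffer video segments ahead of the flagged low-rate interval, which is the kind of predictive transmission this thesis pursues.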
In this thesis, we investigate how this valuable contextual information can enable RANs to improve both service quality and operational efficiency. We develop a collection
of methods that leverage mobility predictions to jointly optimize 1) long-term
wireless resource allocation, 2) adaptive video streaming delivery, and 3) energy efficiency in RANs. Extensive simulation results indicate that our approaches provide
significant user experience gains in addition to large energy savings. We emphasize
the applicability of such predictive RAN mechanisms to video streaming delivery, as
it is the predominant source of traffic in mobile networks, with projections of further
growth. Although we focus on exploiting mobility information at the radio access
level, our framework is a direction towards pursuing a predictive end-to-end content
delivery architecture. / Thesis (Ph.D., Electrical & Computer Engineering), Queen's University, 2014.
|
372 |
Game theoretic models for multiple access and resource allocation in wireless networks
Akkarajitsakul, Khajonpong, 13 December 2012
We first present a non-cooperative auction game to solve the bandwidth allocation problem for non-cooperative channel access in a wireless network. The Nash equilibrium is obtained as a solution of the game. To address the problem of bandwidth sharing under unknown information, we further develop a Bayesian auction game model and obtain its Bayesian Nash equilibrium. Next, we present a framework based on a coalitional game for cooperative channel access with carry-and-forward-based data delivery, in which each mobile node helps others to carry and then forward their data. A coalitional game is proposed to find a stable coalition structure for this cooperative data delivery. We next present static and dynamic coalitional games for carry-and-forward-based data delivery when the behavior of each mobile node is unknown to the others. In the dynamic game, each mobile node can update its beliefs about the other nodes' types as the static coalitional game is played repeatedly.
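As a toy illustration of the kind of equilibrium reasoning involved (this is not the auction or coalitional model from the thesis; the game and payoffs below are invented), a pure-strategy Nash equilibrium of a two-user channel-access game can be found by checking every action profile for profitable deviations:

```python
# Toy example: pure-strategy Nash equilibria of a 2-player channel-access game,
# found by enumeration. Each player either transmits aggressively (A) or backs
# off (B); the payoff numbers are illustrative only.
import itertools

actions = ["A", "B"]
# payoff[(a1, a2)] = (utility of player 1, utility of player 2)
payoff = {
    ("A", "A"): (1, 1),   # both aggressive: heavy collisions
    ("A", "B"): (4, 2),
    ("B", "A"): (2, 4),
    ("B", "B"): (3, 3),
}

def is_nash(a1, a2):
    u1, u2 = payoff[(a1, a2)]
    best1 = all(payoff[(d, a2)][0] <= u1 for d in actions)  # no profitable deviation for 1
    best2 = all(payoff[(a1, d)][1] <= u2 for d in actions)  # no profitable deviation for 2
    return best1 and best2

equilibria = [p for p in itertools.product(actions, actions) if is_nash(*p)]
print(equilibria)   # [('A', 'B'), ('B', 'A')] for these payoffs
```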
|
373 |
Consequences of architecture and resource allocation for growth dynamics of bunchgrass clones
Tomlinson, Kyle Warwick, January 2005
In order to understand how bunchgrasses achieve dominance over other plant growth forms
and how they achieve dominance over one another in different environments, it is first
necessary to develop a detailed understanding of how their growth strategy interacts with
the resource limits of their environment. Two properties that have so far been studied only separately and in limited detail are architecture and disproportionate resource allocation. Architecture is the
structural layout of organs and objects at different hierarchical levels. Disproportionate
resource allocation is the manner in which resources are allocated across objects at each
level of hierarchy. Clonal architecture and disproportionate resource allocation may interact
significantly to determine the growth ability of clonal plants. These interactions have not
been researched in bunchgrasses.
This thesis employs a novel simulation technique, functional-structural plant
modelling, to investigate how bunchgrasses interact with the resource constraints imposed
in humid grasslands. An appropriate functional-structural plant model, the TILLERTREE model, is developed that integrates the architectural growth of bunchgrasses with environmental resource capture and disproportionate resource allocation. Simulations are
conducted using a chosen model species Themeda triandra, and the environment is
parameterised using characteristics of the Southern Tall Grassveld, a humid grassland type
found in South Africa. Behaviour is considered at two levels, namely growth of single
ramets and growth of multiple ramets on single bunchgrass clones.
In environments with distinct growing and non-growing seasons, bunchgrasses are
subjected to severe light depletion during regrowth at the start of each growing season because of the accumulation of dead material in the canopy caused by the upright, densely packed manner in which they grow. Simulations conducted here indicate that bunchgrass
tillers overcome this resource bottleneck through structural adaptations (etiolation, nonlinear
blade mass accretion, residual live photosynthetic surface) and disproportionate
resource allocation between roots and shoots of individual ramets that together increase the
temporal resource efficiency of ramets by directing more resources to shoot growth and
promoting extension of new leaves through the overlying dead canopy.
The architectural arrangement of bunchgrasses as collections of tillers and ramets
directly leads to consideration of a critical property of clonal bunchgrasses: tiller
recruitment. Tiller recruitment is a fundamental discrete process limiting the vegetative growth of bunchgrass clones. Tiller recruitment occurs when lateral buds on parent tillers
are activated to grow. The mechanism that controls bud outgrowth has not been elucidated.
Based on a literature review, it is here proposed that lateral bud outgrowth requires suitable
signals for both carbohydrate and nitrogen sufficiency. Subsequent simulations with the
model provide corroborative evidence, in that greatest clonal productivity is achieved when both signals are present. Resource allocation between live structures on clones may be distributed
proportionately in response to sink demand or disproportionately in response to relative
photosynthetic productivity. Model simulations indicate that there is a trade-off between
total clonal growth and individual tiller growth as the level of disproportionate allocation
between ramets on ramet groups and between tillers on ramets increases, because
disproportionate allocation reduces tiller population size and clonal biomass, but increases
individual tiller performance. Consequently it is proposed that different life strategies
employed by bunchgrasses, especially annual versus perennial life strategies, may follow
more proportionate and less proportionate allocation strategies respectively, because the
former favours maximal resource capture and seed production while the latter favours individual competitive ability.
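As a schematic sketch of the proportionate-versus-disproportionate contrast described above (this is not the TILLERTREE model; the function and numbers are invented), the degree of disproportionality can be treated as an exponent that shifts a shared assimilate pool toward the more productive tillers:

```python
# Schematic sketch: share a daily assimilate pool among tillers either in
# proportion to sink demand ("proportionate") or weighted by relative
# photosynthetic productivity ("disproportionate").
def allocate(pool, demands, productivities, disproportionality=0.0):
    """disproportionality = 0 gives purely demand-driven shares;
    larger values shift resources toward the more productive tillers."""
    weights = [d * (p ** disproportionality)
               for d, p in zip(demands, productivities)]
    total = sum(weights)
    return [pool * w / total for w in weights]

demands = [1.0, 1.0, 1.0]            # equal sink demand per tiller
productivities = [3.0, 1.0, 0.2]     # shaded tillers photosynthesise less

print(allocate(10.0, demands, productivities, 0.0))  # equal shares
print(allocate(10.0, demands, productivities, 2.0))  # productive tillers strongly favoured
```

With the exponent at zero every tiller receives an equal, demand-driven share; raising it concentrates resources in a few tillers, improving their individual performance at the cost of tiller population size, which is the trade-off the simulations report.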
Structural disintegration of clones into smaller physiologically integrated units (here termed ramet groups) that compete with one another for resources is a documented property
of bunchgrasses. In model simulations in which complete clonal integration is enforced, clones are unable to survive for long periods because resource bottlenecks compromise all structures
equally, preventing them from effectively overcoming resource deficits during periods when
light is restrictive to growth. Productivity during the period of survival is also reduced on
bunchgrass clones with full integration relative to clones that disintegrate because of the
inefficient allocation of resources that arises from clonal integration. This evidence
indicates that clonal disintegration allows bunchgrass clones both to increase growth
efficiency and pre-empt potential death, by promoting the survival of larger ramet groups
and removing smaller ramet groups from the system.
The discrete nature of growth in bunchgrasses, and the complex population dynamics that arise from their architectural growth and the temporal resource dynamics of the environment, may explain why different bunchgrass species dominate in different environments. In the final section this idea is explored by manipulating two tiller traits that have been shown to be associated with species distributions across non-selective defoliation regimes, namely leaf organ growth rate and tiller size (mass or height). Simulations with these properties indicate that organ growth rate affects daily nutrient demands and therefore the rate at which tillers are terminated, but has only a small effect on
seasonal resource capture. Tiller mass size affects the size of the live tiller population where
smaller tiller clones maintain greater numbers of live tillers, which allows them to
sustain greater biomass over winter and therefore to store more reserves for spring
regrowth, suggesting that size may affect seasonal nitrogen capture. The greatest differences
in clonal behaviour are caused by tiller height, where clones with shorter tillers accumulate
substantially more resources than clones with taller tillers. This provides strong evidence
that there is a trade-off for bunchgrasses between the ability to compete for light and the ability to
compete for nitrogen, which arises from their growth architecture.
Using this evidence it is proposed that bunchgrass species will be distributed across
environments in response to nitrogen productivity. Shorter species will dominate at low nitrogen productivity, while taller species dominate at high nitrogen productivity. Empirical evidence is provided in support of this proposal. / Thesis (Ph.D.), University of KwaZulu-Natal, Pietermaritzburg, 2005.
|
374 |
Study of knapsack-based admission and allocation techniques
Parra Hernández, Rafael, 02 December 2009
The allocation of resources among various projects, units, or users is accomplished through a systematic mechanism called resource allocation. The types of resources vary, depending upon the system under consideration. For instance, frequency spectrum and transmitter power might be the resources that need to be allocated efficiently in a cellular network, so that the number of mobile users served is maximized. In a Grid computing system, one needs to allocate resources such as processors, memory, disk space, and so on, so that computational tasks run in the most efficient manner.
To evaluate the performance of resource allocation techniques, we first need to determine whether a particular resource allocation problem can be cast in the mathematical formulation we are exploring. We also need to decide which mechanism will be used, or whether a new one needs to be constructed, to solve the particular formulation. Finally, we need to evaluate whether the solution obtained is better than those obtained from other techniques that might express and solve the allocation problem in different ways.
In this dissertation, we propose a new resource allocation technique for a system described only by a formulation known as the Multichoice Multidimensional Knapsack Problem, or MMKP. We also propose and evaluate resource allocation techniques on two other systems: a cellular network and a Grid computing system; in this regard, the resource allocation problem is not expressed as an MMKP, although the formulations used are particular cases of the MMKP. The MMKP formulation is not applied because its use would not have allowed us to make a fair performance comparison with other more commonly used allocation techniques. However, we believe that as more complex tasks are demanded from systems where resource allocation mechanisms are needed, an MMKP formulation could more suitably represent the allocation problem.
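As a small illustration of what an MMKP instance looks like (this is not the dissertation's technique; the data and the brute-force search below are invented for clarity), each group must contribute exactly one item, and the chosen items' multidimensional costs must fit within a capacity vector:

```python
# Illustrative sketch: a tiny Multichoice Multidimensional Knapsack Problem
# (MMKP) solved by brute force. Each group contributes exactly one item; an
# item has a value and a cost in every resource dimension, and the total cost
# must stay within the capacity vector.
from itertools import product

# groups[g] is a list of (value, cost_vector) choices; all data are made up.
groups = [
    [(10, (2, 3)), (14, (4, 4))],   # e.g. two quality levels for session 1
    [(6,  (1, 2)), (11, (3, 5))],   # session 2
    [(8,  (2, 1)), (12, (5, 2))],   # session 3
]
capacity = (8, 8)

def feasible(picks):
    used = [sum(c[d] for _, c in picks) for d in range(len(capacity))]
    return all(u <= cap for u, cap in zip(used, capacity))

best = max((p for p in product(*groups) if feasible(p)),
           key=lambda p: sum(v for v, _ in p))
print(best, sum(v for v, _ in best))
```

Real MMKP instances are tackled with heuristics rather than enumeration, since the problem is NP-hard, but the structure of the formulation is the same.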
Numerical results indicate that the resource allocation techniques explored in this work perform better than previous techniques. Numerical results also indicate that the proposed techniques, combined with suitable optimization criteria, can be used to achieve a number of resource allocation goals.
|
375 |
Coordinated Transmission for Wireless Interference Networks
Farhadi, Hamed, January 2014
Wireless interference networks refer to communication systems in which multiple source–destination pairs share the same transmission medium, and each source's transmission interferes with the reception at non-intended destinations. Optimizing the transmission of each source–destination pair is interrelated with that of the other pairs, and characterizing the performance limits of these networks is a challenging task. Solving the problem of managing interference and data communications in these networks would benefit several existing and emerging communication systems.
Wireless devices can carefully coordinate the use of scarce radio resources in order to deal effectively with interference and establish successful communications. To enable coordinated transmission, terminals must usually have a certain level of knowledge about the propagation environment, that is, channel state information (CSI). In practice, however, no CSI is a priori available at the terminals (transmitters and receivers), and proper channel training mechanisms (such as pilot-based channel training and channel state feedback) must be employed to acquire it. This requires each terminal to share the available radio resources between channel training and data transmission. Allocating more resources to channel training yields more accurate CSI estimation, and consequently more precise coordination, but leaves fewer resources for data transmission. This creates the need to investigate optimum resource allocation.
This thesis takes an information-theoretic approach to the performance analysis of interference networks and employs signal processing techniques to design transmission schemes that approach these limits in the following scenarios. First, the smallest interference network, with two single-input single-output (SISO) source–destination pairs, is considered. A fixed-rate transmission is desired between each source–destination pair, and transmission schemes based on point-to-point codes are developed. The transmissions may not always succeed, which means that outage events may be declared. The outage probability is quantified and the ε-outage achievable rate region is characterized.
Next, a multi-user SISO interference network is studied. A pilot-assisted ergodic interference alignment (PAEIA) scheme is proposed to conduct channel training, channel state feedback, and data communications. The performance limits are evaluated, and optimum radio resource allocation problems are investigated. The analysis is then extended to multi-cell wireless interference networks, for which a low-complexity pilot-assisted opportunistic user scheduling (PAOUS) scheme is proposed. The scheme includes channel training, one-bit feedback transmission, user scheduling, and data transmission. The achievable rate region is computed, and the optimum number of cells that should be active simultaneously is determined.
Finally, a multi-user MIMO interference network is studied, in which each source sends multiple data streams, specifically as many as the degrees of freedom of the network. Distributed transceiver design and power control algorithms are proposed that require only local CSI at the terminals.
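As a rough sketch of the training-versus-data trade-off discussed above (a toy model, not the scheme analysed in the thesis; the estimation-error expression, function names, and numbers are invented), one can sweep the fraction of a coherence block spent on pilots and keep the value that maximizes the effective data rate:

```python
# Toy model of the pilot/data trade-off: a fraction tau of each coherence block
# is spent on training; more training gives a better channel estimate but
# leaves less time for payload data.
import math

def effective_rate(tau, snr, block_len):
    if tau <= 0.0 or tau >= 1.0:
        return 0.0
    pilot_energy = tau * block_len * snr
    est_error = 1.0 / (1.0 + pilot_energy)          # residual estimation error (toy model)
    eff_snr = snr * (1.0 - est_error) / (1.0 + snr * est_error)
    return (1.0 - tau) * math.log2(1.0 + eff_snr)   # bits/s/Hz actually carrying data

snr, block_len = 10.0, 100
best_tau = max((t / 100 for t in range(1, 100)),
               key=lambda t: effective_rate(t, snr, block_len))
print(best_tau, effective_rate(best_tau, snr, block_len))
```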
|
376 |
Do Regional Models Matter? Resource Allocation to Home Care in the Canadian Provinces of Prince Edward Island, Nova Scotia & New Brunswick
Conrad, Patricia, 30 July 2008
Proponents of Canadian health reform in the 1990s argued for regional structures, which enable budget silos to be broken down and integrated budgets to be formed. Although regionalization has been justified on the basis of its potential to increase home care resources, political science draws upon the scope of conflict theory, which instead suggests that marginalized actors, such as home care, may be at risk of being cannibalized in order to safeguard the interests of more powerful actors, such as hospitals.
Prince Edward Island, Nova Scotia, and New Brunswick constitute a natural policy experiment. Each has made different decisions about the regionalization model implemented to restructure health care delivery. The policy question underpinning this research is: What are the implications of the different regional models chosen for the allocation of resources to home care?
Provincial governments are at liberty to fund home care within the limits of their fiscal capacity and there are no federal terms and conditions which must be complied with. This policy analysis used a case comparison research design with mixed methods to collect quantitative and qualitative data. Two financial outcomes were measured: 1) per capita provincial government home care expenditures and 2) the home care share of provincial government health expenditures. Hospital data was used as a comparator. Qualitative data collected from face-to-face, semi-structured interviews with regional elite key informants supplemented the expenditure data.
The findings align with the scope of conflict theory. The trade-off between central control and local autonomy has implications for these findings: 1) home care in Prince Edward Island increased its share of provincial government health spending from 1.6% to 2.2%; 2) maintaining central control over home care in Nova Scotia resulted in an increase in its share from 1.4% to 5.4%; and 3) in New Brunswick, the home care share grew from 4.1% to 7.6%. Inertia and entrenchment of spending patterns were strong. Health regions did not appear to undertake resource reallocation to any great extent in either Prince Edward Island or New Brunswick. Resource reallocation did occur in Nova Scotia, where the hospital share of government spending went down and was reallocated to home care and nursing homes. But Nova Scotia is the only province of the three in which home care was not regionalized. Regional interests in maintaining existing levels of in-patient hospital beds were clearly a source of tension between the overarching policy goals formulated for health reform by the provincial governments and the local health regions.
|
377 |
Effective Resource Allocation for Non-cooperative Spectrum Sharing
Jacob-David, Dany D., 13 October 2011
Spectrum access protocols have been proposed recently to provide flexible and efficient use
of the available bandwidth. Game theory has been applied to the analysis of the problem
to determine the most effective allocation of the users’ power over the bandwidth. However,
prior analysis has focussed on Shannon capacity as the utility function, even though it is
known that real signals do not, in general, meet the Gaussian distribution assumptions of that metric. In a non-cooperative spectrum sharing environment, the Shannon capacity utility function results in a water-filling solution. In this thesis, the suitability of the water-filling solution is evaluated when using non-Gaussian signalling, first in a frequency non-selective environment to focus on the resource allocation problem and its outcomes. The evaluation is then extended to a frequency-selective environment to examine the proposed algorithm in a more realistic wireless setting. It is shown in both scenarios that more effective resource allocation can be achieved when the utility function takes into account the actual signal characteristics.
Further, it is demonstrated that higher rates can be achieved with lower transmitted power,
resulting in a smaller spectral footprint, which allows more efficient use of the spectrum
overall. Finally, future spectrum management is discussed, in which waveform adaptation is examined as an option in addition to the well-known techniques of spectrum agility, rate adaptation, and transmit-power adaptation when performing spectrum sharing.
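For background, the water-filling allocation that the Shannon-capacity utility leads to can be sketched as follows (an illustration of the standard algorithm, not the thesis' proposed method; the noise levels and power budget are invented):

```python
# Standard water-filling sketch: split a power budget across sub-bands with
# different noise-plus-interference levels so that sum log2(1 + p_k / n_k)
# is maximised.
def waterfill(noise, power_budget, iters=60):
    lo, hi = 0.0, max(noise) + power_budget       # bracket the water level
    for _ in range(iters):                        # bisection on the water level mu
        mu = (lo + hi) / 2
        used = sum(max(mu - n, 0.0) for n in noise)
        if used > power_budget:
            hi = mu
        else:
            lo = mu
    return [max(lo - n, 0.0) for n in noise]

noise = [0.2, 0.5, 1.0, 3.0]                      # per-band noise + interference (made up)
print(waterfill(noise, power_budget=2.0))         # least-noisy bands get the most power
```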
|
378 |
Randomized Resource Allocation in Decentralized Wireless Networks
Moshksar, Kamyar, January 2011
Ad hoc networks and Bluetooth systems operating over the unlicensed ISM band are instances of decentralized wireless networks. By definition, a decentralized network is composed of separate transmitter-receiver pairs where there is no central controller to assign the resources to the users. As such, resource allocation must be performed locally at each node. Users are anonymous to each other, i.e., they are not aware of each other's codebooks. This implies that multiuser detection is not possible and users treat each other as noise. Multiuser interference is known to be the main factor that limits the achievable rates in such networks, particularly in the high Signal-to-Noise Ratio (SNR) regime. Therefore, all users must follow a distributed signaling scheme such that the destructive effect of interference on each user is minimized, while the resources are fairly shared.
In chapter 2 we consider a decentralized wireless communication network with a fixed number of frequency sub-bands to be shared among several transmitter-receiver pairs. It is assumed that the number of active users is a realization of a random variable with a given probability mass function. Moreover, users are unaware of each other's codebooks and hence, no multiuser detection is possible. We propose a randomized Frequency Hopping (FH) scheme in which each transmitter randomly hops over a subset of sub-bands from transmission slot to transmission slot. Assuming all users transmit Gaussian signals, the distribution of the noise plus interference is mixed Gaussian, which makes calculation of the mutual information between the transmitted and received signals of each user intractable. We derive lower and upper bounds on the mutual information of each user and demonstrate that, for large SNR values, the two bounds coincide. This observation enables us to compute the sum multiplexing gain of the system and obtain the optimum hopping strategy for maximizing this quantity. We compare the performance of the FH system with that of the Frequency Division (FD) system in terms of the following performance measures: average sum multiplexing gain and average minimum multiplexing gain per user. We show that (depending on the probability mass function of the number of active users) the FH system can offer a significant improvement in terms of the aforementioned measures. In the sequel, we consider a scenario where the transmitters are unaware of the number of active users in the network as well as the channel gains. Developing a new upper bound on the differential entropy of a mixed Gaussian random vector and using entropy power inequality, we obtain lower bounds on the maximum transmission rate per user to ensure a specified outage probability at a given SNR level. We demonstrate that the so-called outage capacity can be considerably higher in the FH scheme than in the FD scenario for reasonable distributions on the number of active users. This guarantees a higher spectral efficiency in FH compared to FD.
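As a toy Monte Carlo sketch of the hopping idea (not the analysis in the thesis; the probability mass function and numbers are invented), one can estimate how often a user's randomly chosen sub-band is free of interference when the number of active users is itself random:

```python
# Toy Monte Carlo: with random frequency hopping, each active user picks one of
# K sub-bands per slot; user 0's slot is useful only if no other user landed on
# the same sub-band. The number of active users is drawn from a given pmf.
import random

def collision_free_prob(K, user_count_pmf, trials=100_000):
    counts = list(user_count_pmf)
    weights = list(user_count_pmf.values())
    hits = 0
    for _ in range(trials):
        n = random.choices(counts, weights=weights)[0]
        bands = [random.randrange(K) for _ in range(n)]
        if bands.count(bands[0]) == 1:        # user 0's band not reused by others
            hits += 1
    return hits / trials

pmf = {1: 0.2, 2: 0.5, 4: 0.3}                # distribution of active users (made up)
print(collision_free_prob(K=4, user_count_pmf=pmf))
```

Comparing such estimates against a fixed frequency-division split gives an intuition for why hopping can win when the number of active users fluctuates.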
Chapter 3 addresses spectral efficiency in decentralized wireless networks of separate transmitter-receiver pairs by generalizing the ideas developed in chapter 2. Motivated by random spreading in Code Division Multiple Access (CDMA), a signaling scheme is introduced where each user's code-book consists of two groups of codewords, referred to as signal codewords and signature codewords. Each signal codeword is a sequence of independent Gaussian random variables and each signature codeword is a sequence of independent random vectors constructed over a globally known alphabet. Using a conditional entropy power inequality and a key upper bound on the differential entropy of a mixed Gaussian random vector, we develop an inner bound on the capacity region of the decentralized network. To guarantee consistency and fairness, each user designs its signature codewords based on maximizing the average (with respect to a globally known distribution on the channel gains) of the achievable rate per user. It is demonstrated how the Sum Multiplexing Gain (SMG) in the network (regardless of the number of users) can be made arbitrarily close to the SMG of a centralized network with an orthogonal scheme such as Time Division (TD). An interesting observation is that in general the elements of the vectors in a signature codeword must not be equiprobable over the underlying alphabet in contrast to the use of binary Pseudo-random Noise (PN) signatures in randomly spread CDMA where the chip elements are +1 or -1 with equal probability. The main reason for this phenomenon is the interplay between two factors appearing in the expression of the achievable rate, i.e., multiplexing gain and the so-called interference entropy factor. In the sequel, invoking an information theoretic extremal inequality, we present an optimality result by showing that in randomized frequency hopping which is the main idea in the prevailing bluetooth devices in decentralized networks, transmission of independent signals in consecutive transmission slots is in general suboptimal regardless of the distribution of the signals.
Finally, chapter 4 addresses a decentralized Gaussian interference channel consisting of two block-asynchronous transmitter-receiver pairs. We consider a scenario where the rate of data arrival at the encoders is considerably low and codewords of each user are transmitted at random instants depending on the availability of enough data for transmission. This makes the transmitted signals by each user look like scattered bursts along the time axis. Users are block-asynchronous meaning there exists a delay between their transmitted signal bursts. The proposed model for asynchrony assumes the starting point of an interference burst is uniformly distributed along the transmitted codeword of any user. There is also the possibility that each user does not experience interference on a transmitted codeword at all. Due to the randomness of delay, the channels are non-ergodic in the sense that the transmitters are unaware of the location of interference bursts along their transmitted codewords. In the proposed scheme, upon availability of enough data in its queue, each user follows a locally Randomized Masking (RM) strategy where the transmitter quits transmitting the Gaussian symbols in its codeword independently from symbol interval to symbol interval. An upper bound on the probability of outage per user is developed using entropy power inequality and a key upper bound on the differential entropy of a mixed Gaussian random variable. It is shown that by adopting the RM scheme, the probability of outage is considerably less than the case where both users transmit the Gaussian symbols in their codewords in consecutive symbol intervals, referred to as Continuous Transmission (CT).
|
379 |
Stochastic Mechanisms for Truthfulness and Budget Balance in Computational Social Choice
Dufton, Lachlan Thomas, January 2013
In this thesis, we examine stochastic techniques for overcoming game theoretic and computational issues in the collective decision making process of self-interested individuals. In particular, we examine truthful, stochastic mechanisms for settings with a strong budget balance constraint (i.e. there is no net flow of money into or away from the agents). Building on past results in AI and computational social choice, we characterise affine-maximising social choice functions that are implementable in truthful mechanisms for the setting of heterogeneous item allocation with unit-demand agents. We further provide a characterisation of affine maximisers with the strong budget balance constraint. These mechanisms reveal impossibility results and poor worst-case performance that motivate us to examine stochastic solutions.
To adequately compare stochastic mechanisms, we introduce and discuss measures that capture the behaviour of stochastic mechanisms, based on techniques used in stochastic algorithm design. When applied to deterministic mechanisms, these measures correspond directly to existing deterministic measures. While these approaches have more general applicability, in this work we assess mechanisms based on overall agent utility (efficiency and social surplus ratio) as well as fairness (envy and envy-freeness).
We observe that mechanisms can (and typically must) achieve truthfulness and strong budget balance using one of two techniques: labelling a subset of agents as ``auctioneers'' who cannot affect the outcome, but collect any surplus; and partitioning agents into disjoint groups, such that each partition solves a subproblem of the overall decision making process. Worst-case analysis of random-auctioneer and random-partition stochastic mechanisms show large improvements over deterministic mechanisms for heterogeneous item allocation. In addition to this allocation problem, we apply our techniques to envy-freeness in the room assignment-rent division problem, for which no truthful deterministic mechanism is possible. We show how stochastic mechanisms give an improved probability of envy-freeness and low expected level of envy for a truthful mechanism. The random-auctioneer technique also improves the worst-case performance of the public good (or public project) problem.
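As a simplified illustration of the random-auctioneer idea in the single-item case (the thesis treats the more general heterogeneous-item setting; the code and valuations below are invented), one agent is drawn at random, barred from winning, and receives the payment, so no money enters or leaves the group:

```python
# Simplified single-item sketch of the "random auctioneer" technique: the
# auctioneer cannot affect the outcome but collects the payment, giving
# truthfulness for the bidders and strong budget balance.
import random

def random_auctioneer_auction(valuations):
    """valuations: dict agent -> reported value for the single item."""
    auctioneer = random.choice(list(valuations))
    bidders = {a: v for a, v in valuations.items() if a != auctioneer}
    winner = max(bidders, key=bidders.get)
    others = [v for a, v in bidders.items() if a != winner]
    price = max(others) if others else 0.0        # second-price among the bidders
    transfers = {a: 0.0 for a in valuations}
    transfers[winner] -= price
    transfers[auctioneer] += price                # surplus stays inside the group
    return winner, transfers

print(random_auctioneer_auction({"a": 7.0, "b": 4.0, "c": 9.0}))
```

Because the auctioneer cannot affect the outcome and the remaining bidders face a second-price rule, truthful reporting is a dominant strategy for the bidders; the cost is that the highest-value agent is sometimes excluded, which is why worst-case performance rather than full efficiency is the natural yardstick.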
Communication and computational complexity are two other important concerns of computational social choice. Both the random-auctioneer and random-partition approaches offer a flexible trade-off between low complexity of the mechanism, and high overall outcome quality measured, for example, by total agent utility. They enable truthful and feasible solutions to be incrementally improved on as the mechanism receives more information and is allowed more processing time.
The majority of our results are based on optimising worst-case performance, since this provides guarantees on how a mechanism will perform, regardless of the agents that use it. To complement these results, we perform empirical, average-case analyses on our mechanisms. Finally, while strong budget balance is a fixed constraint in our particular social choice problems, we show empirically that this can improve the overall utility of agents compared to a utility-maximising assignment that requires a budget-imbalanced mechanism.
|
380 |
Social values and their role in allocating resources for new health technologies
Stafinski, Tania 11 1100
Every healthcare system faces unlimited demands and limited resources, creating a need to make decisions that may limit access to some new, potentially effective technologies. It has become increasingly clear that such decisions are more than technical ones. They require social value judgements: statements of the public's distributive preferences for healthcare across the population. However, these value judgements largely remain ill-defined. The purpose of this thesis was to explicate distributive preferences of the public to inform funding/coverage decisions on new health technologies. It contains six papers. The first comprises a systematic review of current coverage processes around the world, including value assumptions embedded within them. The second paper presents findings from an expert workshop and key-informant interviews with senior-level healthcare decision-makers in Canada. A technology funding decision-making framework, informed by the results of the first paper and the experiences of these decision-makers, was developed. Their input also highlighted the lack of and need for information on values that reflect those of the Canadian public. The third paper provides a systematic review of empirical studies attempting to explicate distributive preferences of the public. It also includes an analysis of social value arguments found in appeals to negative coverage decisions. From the results of both components, possible approaches to eliciting social values from the public and a list of factors around which distributive preferences may be sought were compiled. Such factors represented characteristics of unique, competing patient populations. Building on findings from the third paper, the fourth paper describes a citizens' jury held to explicate distributive preferences for new health technologies in Alberta, Canada. The jury involved a broadly representative sample of the public, who participated in decision simulation exercises involving trade-offs between patient populations characterized by different combinations of factors. A list of preference statements, demonstrating interactions among such factors, emerged. The fifth and sixth papers address methodological issues related to citizens' juries, including the comparability of findings from those carried out in the same way but with different samples of the public, and the extent to which they changed the views of individuals who participated in them.
|