1.
Network performance evaluation for M2M WSN and SDN based on IoT applications. Twayej, Wasan Adnan. January 2018.
This thesis introduces different mechanisms for energy efficiency in Wireless Sensor Networks (WSNs) while maintaining high levels of Network Performance (N.P) with reduced complexity. Firstly, a Machine-to-Machine (M2M) WSN is arranged hierarchically in a fixed infrastructure to support a routing protocol for energy-efficient data transmission among terminal nodes and sink nodes via cluster heads (CHs). A Multi-Level Clustering Multiple Sinks (MLCMS) routing protocol with IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN) is proposed to prolong network lifetime. The simulation results show 93% and 147% enhancements in energy efficiency and system lifespan compared to M-LEACH and LEACH, respectively. By utilising 6LoWPAN in the proposed system, the number of packets delivered increases by 7%, accessibility to the M2M nodes improves, and a substantial extension of the network is enabled. Secondly, an adaptive sleep mode with MLCMS for an efficient M2M WSN lifetime is introduced. The durations of the active and sleep modes for the CHs are set according to a mathematical function. The evaluations of the proposed scheme show that the lifetime of the system is doubled and the end-to-end delay is halved. Thirdly, enhanced N.P is achieved through linear integer-based optimisation. A Self-Organising Cluster Head to Sink Algorithm (SOCHSA) is proposed, hosting Discrete Particle Swarm Optimisation (DPSO) and a Genetic Algorithm (GA) as Evolutionary Algorithms (EAs) to solve the N.P optimisation problem. N.P is measured based on load fairness and the average ratio of residual network energy. DPSO and GA are compared with the Exhaustive Search (ES) algorithm to analyse their performance on each benchmark problem. Computational results show that DPSO outperforms GA in complexity and convergence, and it is thus best suited for a proactive IoT network.
Both algorithms achieved optimum N.P evaluation values of 0.306287 and 0.307731 in the benchmark problems P1 and P2, respectively, for two and three sinks. The proposed mechanism satisfies the different N.P requirements of M2M traffic through instant identification and dynamic rerouting to achieve optimum performance. Finally, a Power Model (PM) is essential for investigating the power efficiency of a system. Hence, a Power Consumption (PC) profile for the IoT-based SDN-WISE is developed. The outcomes of the study offer flexibility in managing the structure of an M2M system in IoT. They enable precise control of the provided Network Quality of Service (NQoS) through the achieved physical-layer throughput, and they provide a schematic framework for the Application Quality of Service (AQoS), specifically the IoT data-stream payload size, from the PC point of view. The profile is composed of two essential parts, the control-signalling and data-traffic PCs; the results show that the data plane accounts for 98% of the total system power and the control plane for only 2%, with a minimum Transmission Time Interval (TTI) of 5 s and a maximum payload size of 92 bytes.
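The SOCHSA idea of assigning cluster heads to sinks can be illustrated with a small discrete-PSO sketch. The fitness below combines load fairness (Jain's index over per-sink loads) with an average residual-energy ratio that this toy treats as a network-wide constant; all names, weights, and update probabilities are illustrative assumptions, not the thesis model.

```python
import random

def fitness(assignment, loads, energy_ratio, n_sinks):
    # Jain's fairness index over the per-sink traffic loads, blended with
    # a residual-energy term (assumed constant here for simplicity).
    per_sink = [0.0] * n_sinks
    for ch, sink in enumerate(assignment):
        per_sink[sink] += loads[ch]
    total = sum(per_sink)
    fairness = total * total / (n_sinks * sum(x * x for x in per_sink))
    return 0.5 * fairness + 0.5 * energy_ratio   # assumed equal weighting

def dpso(loads, energy_ratio, n_sinks, n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    n_ch = len(loads)
    score = lambda a: fitness(a, loads, energy_ratio, n_sinks)
    swarm = [[rng.randrange(n_sinks) for _ in range(n_ch)]
             for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]                # personal bests
    gbest = max(swarm, key=score)[:]             # global best
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n_ch):
                r = rng.random()
                # discrete "velocity": copy from personal or global best,
                # with a small mutation probability for exploration
                if r < 0.5:
                    p[d] = pbest[i][d]
                elif r < 0.9:
                    p[d] = gbest[d]
                else:
                    p[d] = rng.randrange(n_sinks)
            if score(p) > score(pbest[i]):
                pbest[i] = p[:]
        gbest = max(pbest, key=score)[:]
    return gbest
```

On a toy instance with four equally loaded CHs and two sinks, the search settles on a balanced two-and-two assignment, which maximises the fairness term.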
2.
Energy Constrained Link Adaptation for Multi-hop Relay Networks. Zhao, Xiao. 09 February 2011.
A Wireless Sensor Network (WSN) is a widely researched technology with applications in a broad variety of fields, ranging from medical, industrial, automotive and pharmaceutical settings to office and home environments. It is composed of a network of self-organizing sensor nodes that operate in complex environments without human intervention for long periods of time. The energy available to these nodes, usually in the form of a battery, is very limited. Consequently, energy-saving algorithms that maximize the network lifetime are sought after. Link adaptation policies can significantly increase the data rate and effectively reduce energy consumption, and in this sense they have been studied for power optimization in WSNs in recent research proposals.
In this thesis, we first examine the Adaptive Modulation (AM) schemes for flat-fading channels, with data rate and transmit power varied to achieve minimum energy consumption. Its variant, Adaptive Modulation with Idle mode (AMI), is also investigated. An Adaptive Sleep with Adaptive Modulation (ASAM) algorithm is then proposed to dynamically adjust the operating durations of both the transmission and sleep stages based on channel conditions in order to minimize energy consumption. Furthermore, adaptive power allocation schemes are developed to improve energy efficiency for multi-hop relay networks.
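The core trade-off behind adaptive modulation can be sketched briefly: for a given channel gain, pick the constellation size (bits per symbol) that minimizes energy per bit, balancing the higher transmit power a denser constellation needs against the shorter time the radio must stay on. The power model and every constant below are assumptions for illustration only, not the thesis's model.

```python
def energy_per_bit(bits, gain, p_circuit=10e-3, noise=1e-9, symbol_rate=250e3):
    # Required receive SNR grows roughly like 2^b - 1 for M-QAM at fixed BER
    p_tx = (2 ** bits - 1) * noise / gain        # transmit power (W)
    t_per_bit = 1.0 / (symbol_rate * bits)       # seconds to send one bit
    return (p_tx + p_circuit) * t_per_bit        # joules per bit

def best_modulation(gain, max_bits=6):
    # Good channels favour dense constellations (circuit power dominates);
    # poor channels favour sparse ones (transmit power dominates).
    return min(range(1, max_bits + 1), key=lambda b: energy_per_bit(b, gain))
```

ASAM goes a step further than this sketch by also adapting the durations of the transmission and sleep stages to the channel conditions.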
Experiments indicate that a notable reduction in energy consumption can be achieved by jointly considering the data rate and the transmit power in WSNs. The proposed ASAM algorithm considerably improves node lifetime relative to AM and AMI. Channel conditions play an important role in energy consumption for both the AM and ASAM protocols. In addition, the number of modulation stages is also found to substantially affect energy consumption for ASAM. Node lifetime under different profiles of traffic intensity is also investigated. The optimal power control values and optimal power allocation factors are further derived for single-hop networks and multi-hop relay networks, respectively. Results suggest that both policies are more suitable for ASAM than for AM. Finally, the link adaptation techniques are evaluated based on the power levels of commercial IEEE 802.15.4-compliant transceivers; ASAM consistently outperforms AM and AMI in terms of energy saving, resulting in substantially longer node lifetime. Thesis (Master, Electrical & Computer Engineering), Queen's University, 2011.
3.
Resource Allocation for Sequential Decision Making Under Uncertainty: Studies in Vehicular Traffic Control, Service Systems, Sensor Networks and Mechanism Design. Prashanth, L. A. January 2013.
A fundamental question in a sequential decision-making setting under uncertainty is “how to allocate resources amongst competing entities so as to maximize the rewards accumulated in the long run?”. The resources allocated may be either abstract quantities, such as time, or concrete quantities, such as manpower. The sequential decision-making setting involves one or more agents interacting with an environment to procure rewards at every time instant, and the goal is to find an optimal policy for choosing actions. Most of these problems involve multiple (possibly infinitely many) stages, and the objective function is usually a long-run performance objective. The problem is further complicated by the uncertainties in the system, for instance, the stochastic noise and partial observability in a single-agent setting or the private information of the agents in a multi-agent setting. The dimensionality of the problem also plays an important role in the solution methodology adopted. Most real-world problems involve high-dimensional state and action spaces, and an important design aspect of the solution is the choice of knowledge representation.
The aim of this thesis is to answer important resource allocation related questions in different real-world application contexts and in the process contribute novel algorithms to the theory as well. The resource allocation algorithms considered include those from stochastic optimization, stochastic control and reinforcement learning. A number of new algorithms are developed as well. The application contexts selected encompass both single and multi-agent systems, abstract and concrete resources and contain high-dimensional state and control spaces. The empirical results from the various studies performed indicate that the algorithms presented here perform significantly better than those previously proposed in the literature. Further, the algorithms presented here are also shown to theoretically converge, hence guaranteeing optimal performance.
We now briefly describe the various studies conducted here to investigate problems of resource allocation under uncertainties of different kinds:
Vehicular Traffic Control: The aim here is to optimize the ‘green time’ resource of the individual lanes in road networks so as to maximize a certain long-term performance objective. We develop several reinforcement learning based algorithms for solving this problem. In the infinite-horizon discounted Markov decision process setting, a Q-learning based traffic light control (TLC) algorithm that incorporates feature-based representations and function approximation to handle large road networks is proposed; see Prashanth and Bhatnagar [2011b]. This TLC algorithm works with coarse information, obtained via graded thresholds, about the congestion level on the lanes of the road network. However, the graded threshold values used in the above Q-learning based TLC algorithm, as well as in several other graded threshold-based TLC algorithms that we propose, may not be optimal for all traffic conditions. We therefore also develop a new algorithm based on SPSA to tune the associated thresholds to the ‘optimal’ values (Prashanth and Bhatnagar [2012]). Our threshold tuning algorithm is online and incremental, with proven convergence to the optimal threshold values. Further, we also study average-cost traffic signal control and develop two novel reinforcement learning based TLC algorithms with function approximation (Prashanth and Bhatnagar [2011c]). Lastly, we also develop a feature adaptation method for ‘optimal’ feature selection (Bhatnagar et al. [2012a]). This algorithm adapts the features so as to converge to an optimal set of features, which can then be used in the algorithm.
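The appeal of SPSA for threshold tuning is that every component of the threshold vector is perturbed simultaneously with random ±1 signs, so a gradient estimate costs only two (simulated) evaluations regardless of dimension. The quadratic cost below is a stand-in for the simulated traffic cost; the step sizes, decay exponent, and names are assumptions, not the thesis's tuned values.

```python
import random

def spsa(cost, theta, iters=1000, a=0.5, c=0.1, seed=0):
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                       # decaying step size
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + c * d for t, d in zip(theta, delta)]
        minus = [t - c * d for t, d in zip(theta, delta)]
        g = (cost(plus) - cost(minus)) / (2 * c)  # two-sided estimate
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Stand-in cost whose minimum sits at thresholds (2, 5)
tuned = spsa(lambda th: (th[0] - 2) ** 2 + (th[1] - 5) ** 2, [0.0, 0.0])
```

With a simulated traffic cost in place of the quadratic, the same two-evaluation loop tunes all lane thresholds at once, which is what makes the scheme practical online.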
Service Systems: The aim here is to optimize the ‘workforce’, the critical resource of any service system. However, adapting the staffing levels to the workloads in such systems is nontrivial, as the queue-stability and aggregate service level agreement (SLA) constraints have to be complied with. We formulate this problem as a constrained hidden Markov process with a (discrete) worker parameter and propose simultaneous perturbation based simulation optimization algorithms (termed SASOC) for this purpose. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient estimates in the primal, with dual ascent for the Lagrange multipliers. All the algorithms that we propose are online, incremental and easy to implement. Further, they involve a certain generalized smooth projection operator, which is essential for projecting the continuous-valued worker parameter updates obtained from the SASOC algorithms onto the discrete set. We validate our algorithms on five real-life service systems and compare their performance with that of a state-of-the-art optimization toolkit, OptQuest. Being considerably faster than OptQuest, our scheme is particularly suitable for adaptive labor staffing. Also, we observe that it guarantees convergence and finds better solutions than OptQuest in many cases.
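The discrete-projection step above can be sketched in a few lines: the continuous-valued staffing iterate is first kept inside the hull of the admissible worker levels and then mapped to the nearest feasible level. This nearest-level rule is a simplification of the thesis's generalized smooth projection operator, and the level set in the example is purely illustrative.

```python
def project(theta, levels):
    # Clip the continuous iterate into the hull of admissible levels,
    # then snap it to the nearest discrete staffing level.
    lo, hi = min(levels), max(levels)
    clipped = max(lo, min(hi, theta))
    return min(levels, key=lambda w: abs(w - clipped))
```

For example, an SPSA update that proposes 7.6 workers would be projected to the feasible level 8 when the admissible set is {5, 8, 10, 12}.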
Wireless Sensor Networks: The aim here is to allocate the ‘sleep time’ (resource) of the individual sensors in an intrusion detection application such that the energy consumption of the sensors is reduced while keeping the tracking error to a minimum. We model this sleep-wake scheduling problem as a partially observed Markov decision process (POMDP) and propose novel RL-based algorithms, with both long-run discounted and average cost objectives, for solving this problem. All our algorithms incorporate function approximation and feature-based representations to handle the curse of dimensionality. Further, the feature selection scheme used in each of the proposed algorithms intelligently manages the energy-cost and tracking-cost factors, which in turn assists the search for the optimal sleeping policy. The results from the simulation experiments suggest that our proposed algorithms perform better than a recently proposed algorithm from Fuemmeler and Veeravalli [2008], Fuemmeler et al. [2011].
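The feature-based value update such RL schemes rely on can be sketched as follows: the value of a state-action pair is approximated as the dot product of a feature vector (assumed here to encode an energy-cost and a tracking-cost component) with a learned weight vector, updated by a TD(0)-style rule. All names and the cost-minimizing sign convention are illustrative assumptions.

```python
def td_update(w, phi, cost, phi_next, alpha=0.1, gamma=0.95):
    # Linear function approximation: value = w . phi
    q = sum(wi * fi for wi, fi in zip(w, phi))            # current estimate
    q_next = sum(wi * fi for wi, fi in zip(w, phi_next))  # successor estimate
    delta = cost + gamma * q_next - q                     # TD error (cost view)
    return [wi + alpha * delta * fi for wi, fi in zip(w, phi)]
```

Because only the weight vector is learned, the update cost scales with the number of features rather than the size of the underlying POMDP state space.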
Mechanism Design: The setting here is one of multiple self-interested agents with limited capacities, attempting to maximize their individual utilities, which often comes at the expense of the group’s utility. The aim of the resource allocator here is to allocate the contended-for resource efficiently and also to maximize the social welfare via the ‘right’ transfer of payments. In other words, the problem is to find an incentive-compatible transfer scheme following a socially efficient allocation. We present two novel mechanisms, with progressively realistic assumptions about agent types, aimed at economic scenarios where agents have limited capacities. For the simplest case, where agent types consist of a unit cost of production and a capacity that does not change with time, we provide an enhancement to the static mechanism of Dash et al. [2007] that effectively deters an agent from misreporting the capacity element of its type to receive an allocation beyond its capacity, which thereby damages other agents. Our model incorporates an agent’s preference to harm other agents through an additive factor in the agent’s utility function, and the mechanism we propose achieves strategy-proofness by means of a novel penalty scheme. Next, we consider a dynamic setting where agent types evolve, and the individual agents here again have a preference to harm others via capacity misreports. We show via a counterexample that the dynamic pivot mechanism of Bergemann and Välimäki [2010] cannot be directly applied in our setting with capacity-limited agents. We propose an enhancement to the mechanism of Bergemann and Välimäki [2010] that ensures truth-telling with respect to the capacity type element through a variable penalty scheme (in the spirit of the static mechanism). We show that each of our mechanisms is ex-post incentive compatible, ex-post individually rational, and socially efficient.