171 |
Performance Analysis of Opportunistic Selection and Rate Adaptation in Time Varying Channels. Kona, Rupesh Kumar. January 2016 (has links) (PDF)
Opportunistic selection and rate adaptation play a vital role in improving the spectral and power efficiency of current multi-node wireless systems. However, time variations in wireless channels affect the performance of opportunistic selection and rate adaptation in the following ways. Firstly, the selected node can become sub-optimal by the time data transmission commences. Secondly, the choice of transmission parameters, such as rate and power, for the selected node becomes sub-optimal. Lastly, the channel changes during data transmission.
In this thesis, we develop a comprehensive and tractable analytical framework that accurately accounts for these effects. It differs from the extensive existing literature, which primarily focuses on time variations up to the point at which data transmission starts. Firstly, we develop the novel concept of a time-invariant effective signal-to-noise ratio (TIESNR), which tractably and accurately captures the time variations during the data transmission phase with partial channel state information available at the receiver. Secondly, we model the joint distribution of the signal-to-noise ratio at the time of selection and the TIESNR during data transmission using the generalized bivariate gamma distribution.
The above analytical steps facilitate the analysis of the outage probability and average packet error rate (PER) for a given modulation and coding scheme, as well as the average throughput with rate adaptation. We also present extensive numerical results to verify the accuracy of each step of our approach, and show that ignoring the correlated time variations during the data transmission phase can significantly underestimate the outage probability and average PER, and overestimate the average throughput, even for packet durations as low as 1 ms.
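The selection/transmission mismatch described above can be illustrated with a small Monte Carlo sketch (this is not the thesis's analytical framework; the number of nodes, correlation, SNRs and threshold are all illustrative values). The SNRs at selection time and during data transmission are correlated gamma variates, and evaluating outage on the outdated selection-time SNR underestimates the true outage:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, rho = 4, 20000, 0.9          # nodes, Monte Carlo runs, time correlation
gamma_bar = 10**(10.0/10)               # 10 dB average SNR
thresh = 10**(5.0/10)                   # 5 dB outage threshold

# Rayleigh channels at selection time, evolved to the data phase (Gauss-Markov)
h1 = (rng.standard_normal((trials, N)) + 1j*rng.standard_normal((trials, N)))/np.sqrt(2)
w  = (rng.standard_normal((trials, N)) + 1j*rng.standard_normal((trials, N)))/np.sqrt(2)
h2 = rho*h1 + np.sqrt(1 - rho**2)*w

snr1 = gamma_bar*np.abs(h1)**2          # SNR at selection time (gamma-distributed)
snr2 = gamma_bar*np.abs(h2)**2          # SNR during data transmission (correlated)
sel = np.argmax(snr1, axis=1)           # opportunistic selection on outdated SNR
rows = np.arange(trials)
out_ideal = np.mean(snr1[rows, sel] < thresh)   # ignores time variation
out_real  = np.mean(snr2[rows, sel] < thresh)   # actual SNR during the data phase
print(out_ideal, out_real)
```

With these parameters the outage measured on the actual data-phase SNR is noticeably larger than the one predicted from the selection-time SNR, which is exactly the gap the TIESNR-based analysis is designed to capture.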
|
172 |
Algorithms for Homogeneous Quadratic Minimization and Applications in Wireless Networks. Gaurav, Dinesh Dileep. January 2016 (has links) (PDF)
The massive proliferation of wireless devices throughout the world in the past decade comes with a host of tough and demanding design problems. Noise at the receivers and wireless interference are the two major issues which severely limit the received signal quality and the number of users that can be simultaneously served. Traditional approaches to these problems are known as Power Control (PC), SINR Balancing (SINRB) and User Selection (US) in wireless networks. Interestingly, for a large class of wireless system models, these problems have a generic form; thus, any approach to this generic optimization problem benefits the transceiver design of all the underlying wireless models. In this thesis, we propose an eigen approach, based on the Joint Numerical Range (JNR) of Hermitian matrices, for the PC, SINRB and US problems for a class of wireless models.
In the beginning of the thesis, we address the PC and SINRB problems. PC problems can be expressed as Homogeneous Quadratically Constrained Quadratic Optimization Problems (HQCQP), which are known to be NP-hard in general. Leveraging their connection to the JNR, we show that when the constraints are few, HQCQP problems admit iterative schemes which are considerably faster than the state of the art and have guarantees of global convergence. In the general case of any number of constraints, we show that the true solution can be bounded above and below by two convex optimization problems. Our numerical simulations suggest that the bounds are tight in almost all scenarios, indicating that the true solution is attained. Further, the SINRB problems are shown to be intimately related to the PC problems, and thus admit the same approach. We then comment on the convexity of the PC and SINRB problems in the general case of any number of constraints, and show that their convexity is intimately related to the convexity of the joint numerical range. Based on this connection, we derive results on the attainability of the solution and comment on the same for the state-of-the-art technique of Semi-Definite Relaxation (SDR).
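As a concrete illustration of the PC problem itself (rather than of the JNR-based algorithms proposed in the thesis), the classic Foschini-Miljanic fixed-point iteration finds the minimum powers meeting per-link SINR targets whenever the targets are feasible; the gains, targets and noise level below are made-up values:

```python
import numpy as np

# G[i][j]: channel gain from transmitter j to receiver i (illustrative values)
G = np.array([[1.0, 0.1, 0.2],
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])
targets = np.array([3.0, 3.0, 3.0])   # required SINR per link
sigma2 = 0.01                         # receiver noise power

p = np.ones(3)
for _ in range(200):
    # interference + noise seen by each receiver, excluding its own signal
    interference = G @ p - np.diag(G)*p + sigma2
    # standard interference-function update: scale power to exactly hit the target
    p = targets * interference / np.diag(G)

sinr = np.diag(G)*p / (G @ p - np.diag(G)*p + sigma2)
print(p, sinr)
```

When the targets are feasible (spectral radius condition satisfied, as here), the iteration converges monotonically to the minimal power vector and the achieved SINRs equal the targets.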
In the subsequent part of the thesis, we address the US problem. We show that the US problem can be formulated as the combinatorial problem of selecting a feasible subset of quadratic constraints. We propose two approaches to the US problem. The first is based on the JNR viewpoint, which allows us to propose a heuristic approach; the heuristic is then shown to be equivalent to a convex optimization problem. In the second approach, we show that the US problem is equivalent to another non-convex optimization problem, and we propose a convex approximation to the latter. Both approaches are shown to have near-optimal performance in simulations.
We conclude the thesis with a discussion on the applicability and extension of these methods to other classes of optimization problems, along with some open problems which have come out of this work.
|
173 |
Design and Characterization of SRAMs for Ultra Dynamic Voltage Scalable (U-DVS) Systems. Viveka, K R. January 2016 (has links) (PDF)
The ever expanding range of applications for embedded systems continues to offer new challenges (and opportunities) to chip manufacturers. Applications ranging from exciting high-resolution gaming to routine tasks like temperature control need to be supported on increasingly small devices with shrinking dimensions and tighter energy budgets. These systems benefit greatly from the capability to operate over a wide range of supply voltages, known as ultra dynamic voltage scaling (U-DVS). This refers to systems capable of operating from nominal voltages down to sub-threshold voltages. Memories play an important role in these systems, with future chips estimated to have over 80% of their area occupied by memories.
This thesis presents the design and characterization of an ultra dynamic voltage scalable memory (SRAM) that functions from nominal voltages down to sub-threshold voltages without the need for external support. The key contributions of the thesis are as follows:
1) A variation-tolerant reference generator for single-ended sensing: We present a reference generator for U-DVS memories that tracks the memory over a wide range of voltages and is tunable to allow functioning down to sub-threshold voltages. Replica columns are used to generate the reference voltage, which allows the technique to track slow changes such as temperature and aging. A few configurable cells in the replica column are found to be sufficient to cover the whole range of voltages of interest. The use of a tunable delay line to generate timing is shown to help in overcoming the effects of process variations.
2) Random-sampling based tuning algorithm: Tuning is necessary to overcome the increased effects of variation at lower voltages. We present a random-sampling based BIST tuning algorithm that significantly speeds up tuning, ensuring that the time required to tune is comparable to that of a single MBIST run. Further, the use of redundancy after delay tuning enables maximum utilization of the redundancy infrastructure to reduce power consumption and enhance performance.
3) Testing and characterization for U-DVS systems: Testing and characterization is an important challenge in U-DVS systems that has remained largely unexplored. We propose an iterative technique that allows the realization of an on-chip oscilloscope with minimal area overhead. The all-digital nature of the technique makes it simple to design and implement across technology nodes.
Combining the proposed techniques allows the designed 4 Kb SRAM array to function from 1.2 V down to 310 mV, with reads functioning down to 190 mV. This contributes towards moving ultra-wide voltage operation a step closer to implementation in commercial designs.
|
174 |
Photonic Crystal Ring Resonators for Optical Networking and Sensing Applications. Tupakula, Sreenivasulu. January 2016 (has links) (PDF)
Photonic bandgap structures have provided a promising platform for the miniaturization of modern integrated optical devices. In this thesis, a photonic crystal based ring resonator (PCRR) is proposed and optimized to exhibit a high quality factor. The force sensing application of the optimized PC ring resonator and the Dense Wavelength Division Multiplexing (DWDM) application of the PCRR are also discussed. Finally, the fabrication and characterization of the PCRR are presented.
A photonic crystal ring resonator is designed in a hexagonal lattice of air holes on a silicon slab. A novel approach is used to optimize the PCRR to achieve a high quality factor. The numerical analysis of the optimized photonic crystal ring resonator is presented in detail. The Finite Difference Time Domain (FDTD) method is used for all electromagnetic computations.
The improvement in Q factor is explained using the physical phenomenon of multipole cancellation of the radiation field of the PCRR cavity. The corresponding mathematical framework has been included. The forced cancellation of lower-order radiation components is verified by plotting the far-field radiation pattern of the PCRR cavity.
Next, the force sensing application of the optimized PCRR is presented: a highly sensitive force sensor based on a photonic crystal ring resonator integrated with a silicon micro-cantilever. The design and modelling of the device are described, including the mechanics of the cantilever and FEM (Finite Element Method) analysis of the cantilever beam with and without the PC integrated on it. The force sensing characteristics are presented for forces in the range of 0 to 1 µN. For forces in the range of a few tens of µN, a force sensor with a bilayer cantilever is considered. A PC ring resonator on the bilayer of 220 nm thick silicon and 600 nm thick SiO2 plays the role of the sensing element. Force sensing characteristics of the bilayer cantilever for forces in the range of 0 to 10 µN are presented.
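For intuition about the scale of forces and deflections involved, Euler-Bernoulli beam theory gives the tip deflection of a rectangular cantilever under a point load. This is a back-of-the-envelope sketch only: the 220 nm thickness comes from the text, while the length, width, modulus and load are assumed values, and linear beam theory holds only for deflections small compared to the thickness:

```python
# Tip deflection of a rectangular cantilever under a point load at the tip:
# delta = F * L^3 / (3 * E * I), with second moment of area I = w * t^3 / 12.
E = 169e9          # Young's modulus of Si along <110>, Pa (approximate)
L, w, t = 20e-6, 10e-6, 220e-9   # length and width assumed; t from the text
I = w * t**3 / 12
F = 1e-8           # 10 nN point load (assumed)
delta = F * L**3 / (3 * E * I)
print(delta)       # tip deflection in metres
```

For these assumed dimensions a 10 nN load already deflects the tip by roughly tens of nanometres, which is why such thin photonic-crystal cantilevers make sensitive force transducers.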
The fabrication and characterization of the PCRR are also carried out. This experimental work is done mainly to understand practical issues in the study of photonic crystal ring resonators. It is shown that the Q factor of the PCRR can be significantly improved by varying the PCRR parameters according to the proposed method.
The Dense Wavelength Division Multiplexing (DWDM) application of the PC ring resonator is included. A novel 4-channel PC based demultiplexer is proposed and optimized to tolerate fabrication errors while exhibiting low cross-talk and good coupling efficiency between the resonator and the various channels of the device. Since the intention of this design is to achieve device performance that is independent of unavoidable fabrication errors, tolerance studies are carried out on the performance of the device with respect to fabrication errors in the dimensions of the various related parameters.
In conclusion, we summarize the major results and applications of this work, including computations and practical measurements, and suggest future work that may be carried out.
|
175 |
Capacity Bounds for Small-World and Dual Radio Networks. Costa, Rui Filipe Mendes Alves da; Barros, João. January 2007 (has links)
Recent results from statistical physics show that large classes of complex networks,
both man-made and of natural origin, are characterized by high clustering properties yet strikingly short path lengths between pairs of nodes. Such networks are said to have a small-world topology. In the context of communication networks, navigable small-world topologies, i.e., those which admit efficient distributed routing algorithms, are deemed particularly effective, for example, in resource discovery tasks and peer-to-peer applications. Breaking with the traditional approach to small-world topologies, which privileges graph parameters pertaining to connectivity, and intrigued by the fundamental limits of communication in networks that exploit this type of topology, in the first part of this thesis we investigate the capacity of these networks from the perspective of network information flow. Our contributions include upper and lower bounds for the capacity of standard and navigable small-world models, and the somewhat surprising result that, with high probability, random rewiring does not alter the capacity of a small-world network.
Motivated by the proliferation of dual-radio devices, we consider, in the second part of this thesis, communication networks in which the devices have two radio interfaces. With the goal of studying the performance gains in these networks when the two radio interfaces are used in a combined manner, we define a wireless network model in which all devices have short-range transmission capability, but a subset of the nodes has a secondary long-range wireless interface. For the resulting class of random graph models, we present analytical bounds for both the connectivity and the max-flow min-cut capacity. The most striking conclusion to be drawn from our results is that the capacity of this class of networks grows quadratically with the fraction of dual-radio devices, thus indicating that a small percentage of such devices is sufficient to improve significantly the capacity of the network.
|
176 |
Topics in Modeling, Analysis and Optimisation of Wireless Networks. Ramaiyan, Venkatesh. 01 1900 (has links)
The work in this thesis is concerned with two complementary aspects of wireless networks research: performance analysis and resource optimization. The first part of the thesis focuses on the performance analysis of IEEE 802.11(e) wireless local area networks. We study the distributed coordination function (DCF) and the enhanced distributed channel access (EDCA) MAC of the IEEE 802.11(e) standard. We consider n IEEE 802.11(e) DCF (EDCA) nodes operating as a single cell; by single cell, we mean that every packet transmission can be heard by every other node. Packet loss is attributed only to simultaneous transmissions by the nodes (i.e., collisions). Using the well-known decoupling approximation [19], we characterize the collision behaviour and the throughput performance of the WLAN with a set of fixed-point equations involving the backoff parameters of the nodes. We observe that the fixed-point equations can have multiple solutions, and in such cases the system exhibits multistability and short-term unfairness of throughput; moreover, the fixed-point analysis then fails to characterize the average system behaviour. We then obtain sufficient conditions (in terms of the backoff parameters of the nodes) under which the fixed-point equations have a unique solution. For such cases, using simulations, we observe that the fixed-point analysis predicts the long-term time-average throughput behaviour accurately. Then, using the fixed-point analysis, we study the throughput differentiation provided by the different backoff parameters, including the minimum contention window (CWmin), persistence factor and arbitration interframe space (AIFS) of the IEEE 802.11e standard. Finally, we extend the above results to the case where the receiver supports physical layer capture.
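The fixed-point structure referred to above can be sketched in a few lines (a simplified Bianchi/Kumar-style saturation model under the decoupling approximation; the attempt-rate formula and the parameters W, m, n are illustrative, not the thesis's exact equations). Since the collision probability γ = 1 − (1 − β(γ))^(n−1) decreases as the attempt rate β(γ) decreases, the fixed point can be found by bisection:

```python
def beta(gamma, W=16, m=6):
    # Mean attempt rate of a node given the collision probability gamma it sees:
    # (mean attempts per packet) / (mean total backoff), with binary exponential
    # backoff whose window doubles at each of the m retry stages.
    num = sum(gamma**k for k in range(m + 1))
    den = sum(gamma**k * (2**k * W + 1) / 2 for k in range(m + 1))
    return num / den

n = 10                 # number of saturated nodes in the single cell
lo, hi = 0.0, 1.0
for _ in range(60):    # bisection on f(g) = g - (1-(1-beta(g))**(n-1)), increasing in g
    mid = (lo + hi) / 2
    if mid - (1 - (1 - beta(mid)) ** (n - 1)) < 0:
        lo = mid
    else:
        hi = mid
g = (lo + hi) / 2
print(g)               # fixed point: per-attempt collision probability
```

In this toy instance the map has a unique fixed point; the multiple-solution regimes discussed in the thesis arise for other choices of the backoff parameters.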
In the second part of the thesis, we study resource allocation and optimization problems for a variety of wireless network scenarios. For a dense wireless network, deployed over a small area and with a network average power constraint, we show that single cell operation (in which the channel supports only one successful transmission at any time) is throughput efficient in the asymptotic regime in which the network average power is made large. We show that, for a realistic path loss model and a physical interference model (SINR based), the maximum aggregate bit rate among arbitrary transmitter-receiver pairs scales only as Θ(log(P̄)), where P̄ is the network average power. Spatial reuse is ineffective, and direct transmission between source-destination pairs is the throughput-optimal strategy. Then, operating the network with only a single successful transmission permitted at a time, and with CSMA being used to select the successful transmitter-receiver pair, we consider the situation in which there is stationary spatiotemporal channel fading. We study the optimal hop length (routing strategy) and power control (for a fading channel) that maximize the network aggregate throughput for a given network power constraint. For a fixed transmission time scheme, we study the throughput-maximizing schedule under homogeneous traffic and MAC assumptions. We also characterize the optimal operating point (hop length and power control) in terms of the network power constraint and the channel fade distribution.
It is now well understood that in a multihop network, performance can be enhanced if, instead of just forwarding packets, the network nodes create output packets by judiciously combining their input packets, a strategy that is called “network coding.” For a two-link slotted wireless network employing a network coding strategy over fading channels, we study the optimal power control and the optimal exploitation of network coding opportunities that minimize the average power required to support a given arrival rate. We also study the optimal power-delay tradeoff for the network.
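The coding opportunity in such a two-link (two-way relay) setting is the classic XOR exchange; a toy sketch, illustrative of the combining idea rather than of the thesis's power-control formulation:

```python
# Nodes A and B exchange packets through relay R. Forwarding each packet
# separately costs the relay two transmissions; broadcasting the XOR costs
# one, which is the network coding opportunity a scheduler can exploit.
a, b = 0b10110100, 0b01101001   # packets held by A and B (toy 8-bit payloads)
x = a ^ b                        # relay broadcasts the combined packet
a_decoded_at_B = x ^ b           # B recovers A's packet using its own copy
b_decoded_at_A = x ^ a           # A recovers B's packet likewise
print(a_decoded_at_B == a, b_decoded_at_A == b)
```

Both endpoints decode correctly because XOR-ing the broadcast with the locally known packet cancels it out; the saved slot is what the optimal policy trades off against power and delay.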
Finally, we study a vehicular network problem, where vehicles are used as relays to transfer data between a pair of stationary source and destination nodes. The source node has a file to transfer to the destination node, and we are interested in the delay-minimizing schedule for the vehicular network. We characterize the average queueing delay (at the source node) and the average transit delay of the packets (at the relay vehicles) in terms of the vehicular speeds and their interarrival times, and study the asymptotically optimal tradeoff achievable between them.
|
177 |
A Dynamic Security and Authentication System for Mobile Transactions: A Cognitive Agents Based Approach. Babu, B Sathish. 05 1900 (links)
In a world of high mobility, there is a growing need for people to communicate with each other and to have timely access to information regardless of the location of the individuals or the information. This need is supported by advances in the technologies of networking, wireless communications and portable computing devices; these advances, together with the reduction in the physical size of computers, have led to the rapid development of mobile communication infrastructure. Mobile and wireless networks therefore present many challenges to application, hardware, software and network designers and implementers. One of the biggest challenges is to provide a secure mobile environment. Security plays a more important role in mobile communication systems than in systems that use wired communication, mainly because the ubiquitous nature of the wireless medium makes it more susceptible to security attacks than wired communication.
The aim of this thesis is to develop an integrated dynamic security and authentication system for mobile transactions. The proposed system operates at the transaction level of a mobile application, intelligently selecting the suitable security technique and authentication protocol for the ongoing transaction. To do this, we have designed two schemes: the transactions-based security selection scheme and the transactions-based authentication selection scheme. These schemes use transaction sensitivity levels and the usage context, which includes user behaviors, the network used, the device used, and so on, to decide the required security and authentication levels. Based on this analysis, the requisite security technique and authentication protocol are applied for the transaction in process. The Behaviors-Observations-Beliefs (BOB) model is developed using cognitive agents to supplement the working of the security and authentication selection schemes. A transaction classification model is proposed to classify transactions into various sensitivity levels.
The BOB model
The BOB model is a cognitive-theory based model that generates beliefs about a user by observing the various behaviors exhibited by the user during transactions. The BOB model uses two types of Cognitive Agents (CAs): mobile CAs (MCAs) and static CAs (SCAs). The MCAs are deployed on client devices to formulate beliefs by observing the various behaviors of a user during transaction execution. The SCA performs belief analysis and identifies belief deviations with respect to established beliefs. We have developed four constructs to implement the BOB model, namely: the behaviors identifier, the observations generator, the beliefs formulator and the beliefs analyser. The BOB model is developed with an emphasis on minimum computation and minimum code size, keeping in mind the resource restrictiveness of mobile devices and infrastructure. The knowledge organisation using cognitive factors helps in selecting a rational approach for deciding the legitimacy of a user or a session. It also reduces the solution search space by consolidating user behaviors into high-level data such as beliefs; as a result, the decision-making time is reduced considerably.
The transactions classification model
This model is proposed to classify the given set of transactions of an application service into four sensitivity levels. The grouping of transactions is based on the operations they perform and the amount of risk/loss involved if they are misused. The four levels comprise transactions whose execution may cause no damage (level-0), minor damage (level-1), significant damage (level-2) and substantial damage (level-3). A policy-based transaction classifier is developed and incorporated in the SCA to decide the sensitivity level of a given transaction.
Transactions-based security selection scheme (TBSS-Scheme)
Traditional security schemes at the application level are session, transaction or event based: they secure the application data with prefixed security techniques on mobile transactions or events. Mobile transactions generally possess different security risk profiles, so there is a need for various levels of data security schemes in the mobile communications environment, which faces resource insufficiency in terms of bandwidth, energy and computation capabilities.
We have proposed an intelligent security technique selection scheme at the application level, which dynamically decides the security technique to be used for a given transaction in real time. The TBSS-Scheme uses the BOB model and the transaction classification model while deciding the required security technique; the selection is based purely on the transaction sensitivity level and user behaviors. The scheme uses a security techniques repository, organised into three levels based on the complexity of the security techniques. The complexities are decided based on time and space complexities, and on the strength of the security technique against some of the latest security attacks. Credibility factors, computed by the credibility module over the transaction network and the transaction device, are also used while choosing the security technique from a particular level of the repository. Analytical models are presented for belief analysis, security threat analysis, and the average security cost incurred during a transaction session. The results of this scheme are compared with regular schemes, and the advantages and limitations of the proposed scheme are discussed. A case study applying the proposed security selection scheme to a mobile banking application is conducted, and results are presented.
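A toy sketch of the kind of policy such a selection scheme might encode follows; all technique names, levels and thresholds here are hypothetical illustrations, not the thesis's actual repository or rules:

```python
# Hypothetical TBSS-style selector: the transaction sensitivity level (0-3)
# and a belief-deviation score pick a technique from a three-level repository.
REPOSITORY = {
    1: ["RC4-128"],             # low-complexity level (illustrative names)
    2: ["AES-128"],             # medium complexity
    3: ["AES-256", "ECC-256"],  # high complexity
}

def select_technique(sensitivity, belief_deviation):
    # Higher sensitivity maps to a stronger repository level...
    level = 1 if sensitivity <= 1 else (2 if sensitivity == 2 else 3)
    # ...and a large deviation from established beliefs escalates it further.
    if belief_deviation > 0.5 and level < 3:
        level += 1
    return REPOSITORY[level][0]

print(select_technique(0, 0.1))   # low-risk transaction, trusted behaviour
print(select_technique(2, 0.7))   # sensitive transaction, suspicious behaviour
```

The point of the design is visible even in this sketch: the cost of strong cryptography is paid only when the transaction sensitivity or the observed behaviour warrants it.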
Transactions-based authentication selection scheme (TBAS-Scheme)
Authentication protocols/schemes are used at the application level to authenticate genuine users/parties and the devices used in the application. Most of these protocols challenge the user/device to obtain the authentication information, rather than deploying methods to identify the validity of a user/device. Therefore, there is a need for an authentication scheme which intelligently authenticates a user by continuously monitoring the genuineness of the activities/events/behaviors/transactions throughout the session.
The transactions-based authentication selection scheme provides a new dimension in authenticating users of services. It enables strong authentication at the transaction level, based on the sensitivity level of the given transaction and on user behaviors. The proposed approach intensifies the authentication procedure by selecting authentication schemes using the BOB model and the transaction classification model. It provides an effective authentication solution by relieving conventional authentication systems from being dependent only on the strength of authentication identifiers. We present a performance comparison between the transactions-based authentication selection scheme and a session-based authentication scheme in terms of the identification of various active attacks, and analyse the average authentication delay and average authentication cost. We also show the working of the proposed scheme in inter-domain and intra-domain hand-off scenarios, and discuss the merits of the scheme in comparison with the Mobile IP authentication scheme. A case study applying the proposed authentication selection scheme to the authentication of personalized multimedia services is presented.
Implementation of the TBSS and TBAS schemes for a mobile commerce application
We have implemented the integrated working of both the TBSS and TBAS schemes for a mobile commerce application. Details are given on identifying vendor selection, day of purchase, time of purchase, transaction value and frequency of purchase behaviors. A sample list of mobile commerce transactions is presented, along with their classification into the various sensitivity levels. The working of the system is discussed using three purchase cases, and results on transaction distribution, deviation factor generation, security technique selection and authentication challenge generation are presented.
In summary, we have developed an integrated dynamic security and authentication system using the above-mentioned selection schemes for mobile transactions, incorporating the BOB model, the transaction classification model and the credibility modules. We have successfully implemented the proposed schemes using cognitive-agents based middleware. The results of the experiments suggest that incorporating user behaviors and transaction sensitivity levels brings dynamism and adaptiveness to a security and authentication system, through which mobile communication security can be made more robust to attacks, and resource savvy in terms of reduced bandwidth and computation requirements, by using an appropriate security and authentication technique/protocol.
|
178 |
Performance Analysis of Multiuser/Cooperative OFDM Systems with Carrier Frequency and Timing Offsets. Raghunath, K. 12 1900 (links)
Multiuser and cooperative orthogonal frequency division multiplexing (OFDM) systems are being actively researched and adopted in wireless standards, owing to their advantages of robustness to multipath fading, modularity, and ability to achieve high data rates. In OFDM based systems, perfect frequency and timing synchronization is essential to maintain orthogonality among the subcarriers at the receiver. In multiuser OFDM on the uplink, timing offsets (TOs) and/or carrier frequency offsets (CFOs) of different users, caused by path delay differences between users, Doppler, and/or poor oscillator alignment, can destroy orthogonality among subcarriers at the receiver. This results in multiuser interference (MUI) and consequent performance degradation. In this thesis, we are concerned with the analysis and mitigation of the effects of large CFOs and TOs in multiuser OFDM systems, including uplink orthogonal frequency division multiple access (OFDMA), uplink single-carrier frequency division multiple access (SC-FDMA), and cooperative OFDM.
Uplink OFDMA: In the first part of this thesis, we analytically quantify the effect of large CFOs and TOs on the signal-to-interference-plus-noise ratio (SINR) and uncoded bit error rate (BER) performance of uplink OFDMA on Rayleigh and Rician fading channels, and show the analytical results to closely match simulation results. Such an SINR/BER analysis for uplink OFDMA in the presence of both large CFOs and TOs has not been reported before. We also propose interference cancelling (IC) receivers to mitigate the performance degradation caused by the large CFOs and TOs of different users.
SC-FDMA versus OFDMA: An issue with uplink OFDMA is its high peak-to-average power ratio (PAPR). Uplink SC-FDMA is proposed in the standards as a good low-PAPR alternative to uplink OFDMA; e.g., SC-FDMA has been adopted in the uplink of 3GPP LTE. A comparative investigation of uplink SC-FDMA and OFDMA from the viewpoint of sensitivity to large CFOs and TOs has not been reported in the literature. Consequently, in the second part of the thesis, we carry out a comparative study of the sensitivity of the SC-FDMA and OFDMA schemes to large CFOs and TOs of different users on the uplink. Our results show that while SC-FDMA achieves better performance than OFDMA under perfect synchronization, due to its inherent frequency diversity advantage, its performance can become worse than that of OFDMA in the presence of large CFOs and TOs. We further show that the use of low-complexity multistage IC techniques, with knowledge of the CFOs and TOs of the different users at the receiver, can restore the performance advantage of SC-FDMA over OFDMA.
Cooperative OFDM: Cooperative OFDM is becoming popular because of its ability to provide spatial diversity in systems where each node has only one antenna. Most studies on cooperative communications assume perfect time synchronization among the cooperating nodes, i.e., that the transmissions from different cooperating nodes reach the destination receiver in orthogonal time slots. In practice, however, due to imperfect time synchronization, orthogonality among different nodes' signals at the destination receiver can be lost, causing inter-symbol interference (ISI). In the third part of the thesis, we investigate cooperative OFDM using the amplify-and-forward (AF) protocol at the relay, in the presence of imperfect timing synchronization. We derive analytical expressions for the ISI as a function of the timing offset for cooperative OFDM with the AF protocol, and propose an IC receiver to mitigate the effects of timing-offset-induced ISI.
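The mechanism by which a timing offset creates ISI in OFDM can be demonstrated directly: as long as the receiver's FFT window slips by less than the cyclic prefix, the demodulated subcarriers suffer only a phase rotation, but once the window slips past the prefix it swallows samples of the adjacent symbol. This is a generic single-link sketch, not the thesis's relay analysis; the FFT size, prefix length, and the two offsets (-4 within the CP, -12 beyond it) are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
N, cp = 64, 8   # FFT size and cyclic-prefix length (assumed)

def ofdm_symbol(X):
    x = np.fft.ifft(X)
    return np.concatenate([x[-cp:], x])   # prepend the cyclic prefix

X1 = rng.choice([-1.0, 1.0], N) + 0j      # BPSK on all subcarriers
X2 = rng.choice([-1.0, 1.0], N) + 0j
stream = np.concatenate([ofdm_symbol(X1), ofdm_symbol(X2)])

def demod_symbol2(offset):
    # FFT window for symbol 2, misplaced by `offset` samples.
    start = (N + cp) + cp + offset
    return np.fft.fft(stream[start:start + N])

# Offset within the CP: a cyclic shift, so only a per-subcarrier phase
# rotation; subcarrier magnitudes are preserved (no ISI).
no_isi = np.allclose(np.abs(demod_symbol2(-4)), np.abs(X2))
# Offset beyond the CP: the window swallows symbol 1's tail, causing ISI.
isi = not np.allclose(np.abs(demod_symbol2(-12)), np.abs(X2))
print(no_isi, isi)
```

In the cooperative setting of the thesis, the relay's path delay plays the role of this window misplacement, so an AF relay arriving outside the destination's CP budget produces exactly this kind of timing-offset-induced ISI.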
|
179 |
Rate-Distortion Performance And Complexity Optimized Structured Vector Quantization - Chatterjee, Saikat 07 1900 (has links)
Although vector quantization (VQ) is an established topic in communication, its practical utility has been limited by (i) prohibitive complexity at higher quality and bit-rates, (ii) structured VQ methods that are not analyzed for optimum performance, and (iii) the difficulty of mapping the theoretical performance in mean square error (MSE) to perceptual measures. However, the ever increasing demand for compression of various source signals points to VQ as the inevitable choice for high efficiency. This thesis addresses all three issues, utilizing the power of parametric stochastic modeling of the signal source via the Gaussian mixture model (GMM), and proposes new solutions. Addressing some of the new requirements of source coding in network applications, the thesis also presents solutions for scalable bit-rate, rate-independent complexity, and decoder scalability.
While structured VQ is a necessity to reduce the complexity, we have developed, analyzed and compared three different schemes of compensation for the loss due to structured VQ. Focusing on the widely used methods of split VQ (SVQ) and KLT based transform domain scalar quantization (TrSQ), we develop expressions for their optimum performance using high rate quantization theory. We propose the use of conditional PDF based SVQ (CSVQ) to compensate for the split loss in SVQ and analytically show that it achieves coding gain over SVQ. Using the analytical expressions of complexity, an algorithm to choose the optimum splits is proposed. We analyze these techniques for their complexity as well as perceptual distortion measure, considering the specific case of quantizing the wide band speech line spectrum frequency (LSF) parameters. Using natural speech data, it is shown that the new conditional PDF based methods provide better perceptual distortion performance than the traditional methods.
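The complexity trade-off that makes split VQ attractive can be illustrated numerically. The sketch below is not the thesis's CSVQ scheme; it is plain k-means (LBG-style) training of a full 8-bit codebook versus two independent 4-bit codebooks over the vector halves, at the same total rate, with all sizes and the i.i.d. Gaussian source assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_codebook(data, size, iters=15):
    # Plain k-means (LBG-style) codebook training.
    cb = data[rng.choice(len(data), size, replace=False)].copy()
    for _ in range(iters):
        idx = np.argmin(((data[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
        for j in range(size):
            if np.any(idx == j):
                cb[j] = data[idx == j].mean(0)
    return cb

def quantize(v, cb):
    return cb[np.argmin(((cb - v) ** 2).sum(-1))]

dim, bits = 8, 8
train = rng.standard_normal((4000, dim))
test = rng.standard_normal((500, dim))

# Full VQ at 8 bits: one 256-entry codebook, 256 distance computations/vector.
cb_full = train_codebook(train, 2 ** bits)
# Split VQ at the same rate: two 16-entry codebooks over the 4-dim halves,
# only 16 + 16 distance computations per vector.
cb_a = train_codebook(train[:, :4], 2 ** (bits // 2))
cb_b = train_codebook(train[:, 4:], 2 ** (bits // 2))

mse_full = np.mean([((quantize(v, cb_full) - v) ** 2).mean() for v in test])
mse_split = np.mean([((np.concatenate([quantize(v[:4], cb_a),
                                       quantize(v[4:], cb_b)]) - v) ** 2).mean()
                     for v in test])
print(f"MSE full VQ: {mse_full:.3f}, split VQ: {mse_split:.3f}")
```

The split scheme cuts the search cost by an order of magnitude while paying a distortion penalty, the "split loss"; compensating for that loss, e.g. with the conditional-PDF-based quantization of the sub-vectors analyzed above, is precisely the gap this part of the thesis addresses.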
Exploring the use of GMMs for the source, we take the approach of separately estimating the GMM parameters and then use the high rate quantization theory in a simplified manner to derive closed form expressions for optimum MSE performance. This has led to the development of non-linear prediction for compensating the split loss (in contrast to the linear prediction using a Gaussian model). We show that the GMM approach can improve the recently proposed adaptive VQ scheme of switched SVQ (SSVQ). We derive the optimum performance expressions for SSVQ, in both variable bit rate and fixed bit rate formats, using the simplified approach of GMM in high rate theory.
As a third scheme for recovering the split loss in SVQ and reducing the complexity, we propose a two-stage SVQ (TsSVQ), which is analyzed for minimum complexity as well as perceptual distortion. Utilizing the low complexity of transform domain SVQ (TrSVQ) as well as the two-stage approach in a universal coding framework, it is shown that we can achieve both lower complexity and better performance than SSVQ. Further, the combination of GMM and universal coding leads to a highly scalable coder which provides bit-rate scalability, decoder scalability, and rate-independent low complexity, with perceptual distortion performance comparable to that of SSVQ.
Since GMM is a generic source model, we develop a new method of predicting the performance bound for perceptual distortion using VQ. Applying this method to LSF quantization, the minimum bit rates for quantizing telephone band LSF (TB-LSF) and wideband LSF (WB-LSF) are derived.
|
180 |
Optimum Event Detection In Wireless Sensor Networks - Karumbu, Premkumar 11 1900 (has links) (PDF)
We investigate sequential event detection problems arising in Wireless Sensor Networks (WSNs). A number of battery-powered sensor nodes of the same sensing modality are deployed in a region of interest (ROI). By an event we mean a random time (and, for spatial events, a random location) after which the random process being observed by the sensor field experiences a change in its probability law. The sensors make measurements at periodic time instants, perform some computations, and then communicate the results of their computations to the fusion centre. The fusion centre employs a decision making procedure that decides whether the event has occurred based on the information received up to the current decision instant. We seek event detection algorithms, in various scenarios, that are optimal in the sense that the mean detection delay (the delay between the event occurrence time and the alarm time) is minimum under certain detection error constraints.
In the first part of the thesis, we study event detection problems in a small extent network, where the sensing coverage of any sensor includes the entire ROI. In particular, we are interested in the following problems: 1) quickest event detection with optimal control of the number of sensors that make observations (while the others sleep), 2) quickest event detection over wireless ad hoc networks, and 3) optimal transient change detection. In the second part of the thesis, we study the problem of quickest detection and isolation of an event in a large extent sensor network, where the sensing coverage of any sensor is only a small portion of the ROI.
One of the major applications envisioned for WSNs is detecting abnormal activity or intrusions in the ROI. An intrusion is typically a rare event, and hence much of the sensors' energy gets drained away in the pre-intrusion period. Keeping all the sensors awake is therefore wasteful of resources and reduces the lifetime of the WSN. This motivates us to consider the problem of sleep-wake scheduling of sensors along with quickest event detection. We formulate the Bayesian quickest event detection problem with the objective of minimising the expected total cost due to i) the detection delay and ii) the usage of sensors, subject to the constraint that the probability of false alarm is upper bounded by a prescribed value. We obtain optimal event detection procedures, along with optimal closed loop and open loop control for the sleep-wake scheduling of sensors.
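The core of Bayesian quickest detection is a recursive posterior update followed by a threshold stopping rule (the Shiryaev procedure). The sketch below shows only that core for a single observation stream with an assumed geometric change-time prior and Gaussian observations; it omits the sensor-usage costs and sleep-wake control that distinguish the formulation in this thesis, and every parameter value (rho, mu, the 0.95 threshold, the change point) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
rho, mu, thresh = 0.05, 1.5, 0.95   # geometric prior, post-change mean, posterior threshold
change = 40                          # true change point (unknown to the detector)
x = np.concatenate([rng.standard_normal(change),
                    mu + rng.standard_normal(100)])

def lik(v, m):
    # N(m, 1) density up to a constant that cancels in the Bayes ratio.
    return np.exp(-0.5 * (v - m) ** 2)

p, alarm = 0.0, None   # p: posterior probability that the change has occurred
for k, xk in enumerate(x):
    prior = p + (1 - p) * rho                     # geometric change-time prior update
    num = prior * lik(xk, mu)
    p = num / (num + (1 - prior) * lik(xk, 0.0))  # Bayes update with the new sample
    if p > thresh:                                # Shiryaev rule: stop at threshold
        alarm = k
        break

print(f"alarm at sample {alarm}, true change at {change}")
```

Raising the threshold lowers the false alarm probability at the cost of a longer mean detection delay, which is exactly the trade-off the optimal procedures above balance, with sensor energy usage added as a further cost term.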
In the classical change detection problem, at each sampling instant, a batch of n samples (where n is the number of sensors deployed in the ROI) is generated at the sensors and reaches the fusion centre instantaneously. In practice, however, the communication between the sensors and the fusion centre is carried over a wireless ad hoc network based on a random access mechanism, such as in IEEE 802.11 or IEEE 802.15.4. Because of the medium access control (MAC) protocol of the wireless network, different samples of the same batch reach the fusion centre after random delays. The problem is to detect the occurrence of an event as early as possible, subject to a false alarm constraint.
In this more realistic situation, we consider a design in which the fusion centre comprises a sequencer followed by a decision maker. In earlier work from our research group, a Network Oblivious Decision Making (NODM) setting was considered: the decision maker in the fusion centre is presented with complete batches of observations, as if the network were not present, and makes a decision only at the instants at which these batches are presented. In this thesis, we consider the design in which the decision maker makes a decision at every time instant, based on the samples of all the complete batches received thus far and the samples, if any, received from the next (partial) batch. We show that optimal decision making requires the network state at the decision maker; hence, we call this setting Network Aware Decision Making (NADM). We also obtain a mean-delay-optimal NADM procedure, and show that it is a network-state-dependent threshold rule on the a posteriori probability of change.
In the classical change detection problem, the change is persistent, i.e., after the change point, the state of nature remains in the in-change state forever. However, in applications like intrusion detection, the event which causes the change disappears after a finite time, and the system moves to an out-of-change state. The distribution of the observations in the out-of-change state is the same as that in the pre-change state. We call this short-lived change a transient change. We are interested in detecting that a change has occurred, even if the change has already disappeared by the time of detection.
We model the transient change and formulate the problem of quickest transient change detection under the constraint that the probability of false alarm is bounded by a prescribed value. We also formulate a change detection problem which maximizes the probability of detection (i.e., the probability of stopping in the in-change state) subject to the same false alarm bound. We obtain optimal detection rules and show that they are threshold rules on the a posteriori probability of pre-change, where the threshold depends on the a posteriori probabilities of the pre-change, in-change, and out-of-change states.
Finally, we consider the problem of detecting an event in a large extent WSN, where the event influences the observations of only those sensors in the vicinity of where it occurs. Thus, in addition to the detection problem, we face the problem of locating the event, also called the isolation problem. Since the distance of a sensor from the event affects the mean signal level it senses, we consider a realistic signal propagation model in which the signal strength decays with distance. Thus, the post-change mean of the distribution of observations varies across sensors and is unknown, as the location of the event is unknown, making the problem highly challenging. Also, for a large extent WSN, a distributed solution is desirable. We are therefore interested in distributed detection/isolation procedures that are detection delay optimal subject to false alarm and false isolation constraints.
For this problem, we propose three local decision rules, MAX, HALL, and ALL, each based on the CUSUM statistic computed at the sensor nodes. We identify corroborating sets of sensor nodes for event location, and propose a global detection/isolation rule based on the local decisions of the sensors in the corroborating sets. We also show the minimax detection delay optimality of the procedures HALL and ALL.
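A minimal sketch of the local-plus-global structure, under simplifying assumptions: each sensor runs a CUSUM for a known mean shift in unit-variance Gaussian noise (the thesis's setting has distance-dependent, unknown means), and the global rule shown is an ALL-style rule that fires once every sensor in a corroborating set has crossed its local threshold. The shift size, threshold, change point, and three-sensor corroborating set are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, h = 1.0, 8.0    # assumed post-change mean (unit-variance noise) and CUSUM threshold
change = 60

def cusum_path(x):
    # CUSUM recursion for a 0 -> mu mean shift: the per-sample
    # log-likelihood ratio is mu*x - mu^2/2, clipped below at 0.
    s, path = 0.0, []
    for xk in x:
        s = max(0.0, s + mu * xk - mu ** 2 / 2)
        path.append(s)
    return np.array(path)

# Three sensors in the event's vicinity (a corroborating set) all observe
# the mean shift after the change point.
obs = [np.concatenate([rng.standard_normal(change),
                       mu + rng.standard_normal(80)]) for _ in range(3)]
local_alarms = [int(np.argmax(cusum_path(x) > h)) for x in obs]  # first crossings
# ALL-style global rule: declare the event only after every sensor in the
# corroborating set has raised its local alarm.
global_alarm = max(local_alarms)
print(f"local alarms: {local_alarms}, global (ALL) alarm: {global_alarm}")
```

Requiring agreement across the whole corroborating set suppresses isolated false alarms and simultaneously localizes the event to that set's neighbourhood, at the price of waiting for the slowest sensor; the thesis establishes minimax delay optimality for its HALL and ALL procedures in this regime.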
|