71. Hybrid Compressed-and-Forward Relaying Based on Compressive Sensing and Distributed LDPC Codes. Lin, Yu-Liang. 26 July 2012.
Cooperative communication has been shown to be an effective way to combat outages caused by channel fading; that is, it provides spatial diversity for communication. Besides amplify-and-forward (AF) and decode-and-forward (DF), compressed-and-forward (CF) is another efficient forwarding strategy. In this thesis, we propose a new CF scheme. In the existing CF protocol, the relay switches to the DF mode when it can completely recover the signal transmitted by the source; no further compression is made in that case. In our proposed scheme, the relay estimates whether the codeword in a block has been decoded successfully and chooses the corresponding LDPC-coded forwarding method, based on either joint source-channel coding or compressive sensing. At the destination, a joint decoder with side information runs the sum-product algorithm (SPA) to decode the source message. Simulation results show that the proposed CF scheme achieves spatial diversity and outperforms the AF and DF schemes.
72. Digitally-Assisted Mixed-Signal Wideband Compressive Sensing. Yu, Zhuizhuan. 2011 May 1900.
Digitizing wideband signals requires very demanding analog-to-digital conversion (ADC) speed and resolution specifications. In this dissertation, a mixed-signal parallel compressive sensing system is proposed to sense wideband sparse signals at a sub-Nyquist rate by exploiting signal sparsity. The mixed-signal compressive sensing is realized with a parallel segmented compressive sensing (PSCS) front-end, which not only filters out the harmonic spurs that leak from the local random generator, but also provides a tradeoff between the sampling rate and the system complexity such that a practical hardware implementation is possible. Moreover, the signal randomization in the system spreads the spurious energy due to ADC nonlinearity across the signal bandwidth rather than concentrating it at a few frequencies, as is the case for a conventional ADC. This important new property relaxes the ADC SFDR requirement when sensing frequency-domain sparse signals.
The performance of the mixed-signal compressive sensing system is greatly impacted by the accuracy of analog circuit components, especially with the scaling of CMOS technology. In this dissertation, the effects of circuit imperfections in the PSCS front-end, such as finite settling time and timing uncertainty, are investigated in detail. An iterative background calibration algorithm based on least mean squares (LMS) is proposed and shown to effectively calibrate the errors caused by these nonideal circuit factors.
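The core of an LMS-based background calibration loop can be illustrated with a toy sketch. All names and numbers here (the gain error, the step size, the training signal) are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

# Sketch: adaptively estimate an unknown analog gain error from training
# samples so it can be corrected in the digital domain.
rng = np.random.default_rng(0)
true_gain = 0.93          # unknown front-end gain error to be calibrated out
mu = 0.05                 # LMS step size
w = 1.0                   # digital estimate, initialized to the ideal gain
for _ in range(2000):
    x = rng.standard_normal()      # known training input sample
    d = true_gain * x              # observed front-end output
    e = d - w * x                  # error between observation and model
    w += mu * e * x                # LMS update
print(round(w, 3))                 # converges toward true_gain, i.e. 0.93
```

The same update runs in the background during normal operation, which is what makes the calibration iterative and transparent to the signal path.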
A low-speed prototype built with off-the-shelf components is presented. The prototype is able to sense sparse analog signals with up to 4 percent sparsity at 32 percent of the Nyquist rate. Many practical constraints that arose while building the prototype, such as circuit nonidealities, are addressed in detail, which provides good insights for a future high-frequency integrated circuit implementation. Based on that, a high-frequency sub-Nyquist-rate receiver exploiting parallel compressive sensing is designed and fabricated in IBM 90 nm CMOS technology, and measurement results demonstrate wideband compressive sensing at a sub-Nyquist rate. To the best of our knowledge, this prototype is the first reported integrated chip for wideband mixed-signal compressive sensing. In simulation, assuming a state-of-the-art jitter variance of 0.5 ps, the prototype achieves 7 bits ENOB at a 3 GS/s equivalent sampling rate, a figure of merit 2-3 times better than that of state-of-the-art high-speed Nyquist ADCs.
The proposed mixed-signal compressive sensing system can be applied in various fields. In particular, its applications for wideband spectrum sensing for cognitive radios and spectrum analysis in RF tests are discussed in this work.
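The acquisition-and-recovery idea behind such sub-Nyquist front-ends can be sketched in a few lines: take random linear measurements of a sparse signal, then reconstruct greedily. This is a generic illustration using orthogonal matching pursuit (OMP), not the dissertation's PSCS hardware or its specific recovery algorithm; all dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                            # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x = np.zeros(n)
x[[5, 20, 41]] = [1.0, -2.0, 1.5]              # sparse signal
y = A @ x                                      # sub-Nyquist measurements (m < n)

# OMP: greedily pick the atom most correlated with the residual,
# then re-fit by least squares on the chosen support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef                          # recovered sparse signal
```

With far fewer measurements than Nyquist samples, the sparse signal is recovered exactly in the noiseless case, which is the property the hardware front-end exploits.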
73. Mobile localization: approach and applications. Rallapalli, Swati. 09 February 2015.
Localization is critical to a number of wireless network applications, and in many situations GPS is not suitable. This dissertation (i) develops novel localization schemes for wireless networks by explicitly incorporating mobility information and (ii) applies localization to physical analytics, i.e., understanding shoppers' behavior within retail spaces by leveraging inertial sensors, Wi-Fi and vision enabled by smart glasses. More specifically, we first focus on multi-hop mobile networks, analyze real mobility traces and observe that they exhibit temporal stability and low-rank structure. Motivated by these observations, we develop novel localization algorithms that effectively capture and adapt to different degrees of these properties. Using extensive simulations and testbed experiments, we demonstrate the accuracy and robustness of the new schemes. Second, we focus on localizing a single mobile node that may not be connected to multiple nodes (e.g., without network connectivity or connected only to an access point). We propose trajectory-based localization using Wi-Fi or magnetic field measurements and show that these measurements have the potential to uniquely identify a trajectory. We then develop a novel approach that leverages multi-level wavelet coefficients to first identify the trajectory and then localize to a point on it. Indoor and outdoor experiments show that this approach is highly accurate and power efficient. Finally, localization is a critical step in enabling many applications; an important one is physical analytics, which has the potential to provide deep insight into shoppers' interests and activities and therefore better advertisements, recommendations and a better shopping experience.
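The trajectory-identification step can be illustrated with a small sketch: decompose a measured signal profile into multi-level (here Haar) wavelet coefficients and match it against a database of known trajectories. The path names, signal shapes and noise level are invented for illustration; the dissertation's actual fingerprints come from Wi-Fi and magnetic field measurements:

```python
import numpy as np

def haar_levels(sig, levels=3):
    """Multi-level Haar wavelet decomposition (details plus final approximation)."""
    coeffs, a = [], np.asarray(sig, float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    coeffs.append(a)
    return np.concatenate(coeffs[::-1])

# Toy fingerprint database: signal-strength profiles along known paths
paths = {"corridor-A": np.sin(np.linspace(0, 3, 64)),
         "corridor-B": np.cos(np.linspace(0, 5, 64))}
db = {name: haar_levels(p) for name, p in paths.items()}

# A noisy measurement taken while walking corridor-B
measured = paths["corridor-B"] + 0.05 * np.random.default_rng(2).standard_normal(64)
query = haar_levels(measured)
best = min(db, key=lambda k: np.linalg.norm(db[k] - query))
print(best)                          # identifies "corridor-B"
```

Because the Haar transform is orthogonal, distances between coefficient vectors equal distances between the raw profiles, while the multi-level structure lets coarse levels be compared cheaply first.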
To enable physical analytics, we build the ThirdEye system, which first achieves zero-effort localization by leveraging emergent devices like Google Glass: its AutoLayout component fuses video, Wi-Fi and inertial sensor data to simultaneously localize shoppers while constructing and updating the product layout in a virtual coordinate space. Further, ThirdEye comprises a range of schemes that use a combination of vision and inertial sensing to study mobile users' behavior while shopping, namely walking, dwelling, gazing and reaching out. We show the effectiveness of ThirdEye through an evaluation in two large retail stores in the United States.
74. Coding-Based System Primitives for Airborne Cloud Computing. Lin, Chit-Kwan. January 2011.
The recent proliferation of sensors in inhospitable environments such as disaster or battle zones has not been matched by in situ data processing capabilities due to a lack of computing infrastructure in the field. We envision a solution based on small, low-altitude unmanned aerial vehicles (UAVs) that can deploy elastically-scalable computing infrastructure anywhere, at any time. This airborne compute cloud—essentially, micro-data centers hosted on UAVs—would communicate with terrestrial assets over a bandwidth-constrained wireless network with variable, unpredictable link qualities. Achieving high performance over this ground-to-air mobile radio channel thus requires making full and efficient use of every single transmission opportunity. To this end, this dissertation presents two system primitives that improve throughput and reduce network overhead by using recent distributed coding methods to exploit natural properties of the airborne environment (i.e., antenna beam diversity and anomaly sparsity). We first built and deployed a UAV wireless networking testbed and used it to characterize the ground-to-UAV wireless channel. Our flight experiments revealed that antenna beam diversity from using multiple SISO radios boosts reception range and aggregate throughput. This observation led us to develop our first primitive: ground-to-UAV bulk data transport. We designed and implemented FlowCode, a reliable link layer for uplink data transport that uses network coding to harness antenna beam diversity gains. Via flight experiments, we show that FlowCode can boost reception range and TCP throughput as much as 4.5-fold. Our second primitive permits low-overhead cloud status monitoring. We designed CloudSense, a network switch that compresses cloud status streams in-network via compressive sensing.
CloudSense is particularly useful for anomaly detection tasks requiring global relative comparisons (e.g., MapReduce straggler detection) and can achieve up to 16.3-fold compression as well as early detection of the worst anomalies. Our efforts have also shed light on the close relationship between network coding and compressive sensing. Thus, we offer FlowCode and CloudSense not only as first steps toward the airborne compute cloud, but also as exemplars of two classes of applications—approximation intolerant and tolerant—to which network coding and compressive sensing should be judiciously and selectively applied. / Engineering and Applied Sciences
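The network-coding idea behind a reliable coded uplink can be shown in miniature: the receiver recovers a generation of packets from any set of linearly independent random combinations, regardless of which individual transmissions got through. This generic GF(2) sketch is illustrative only (FlowCode's actual design and coding parameters are in the dissertation), and the coefficient vectors are fixed here for reproducibility:

```python
import numpy as np

def gf2_solve(C, Y):
    """Solve C X = Y over GF(2) by Gauss-Jordan elimination (C square, invertible)."""
    A = np.concatenate([C, Y], axis=1) % 2
    n = C.shape[0]
    for col in range(n):
        piv = col + int(np.argmax(A[col:, col]))   # find a row with a 1 in this column
        A[[col, piv]] = A[[piv, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]                     # XOR-eliminate over GF(2)
    return A[:, n:]

rng = np.random.default_rng(3)
k = 4                                              # packets per generation
packets = rng.integers(0, 2, (k, 16), dtype=np.uint8)   # payload bits

# Coefficient vectors of k linearly independent combinations that arrived
C = np.array([[1, 0, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [1, 1, 1, 1]], dtype=np.uint8)
coded = C @ packets % 2                            # what the receiver collects

decoded = gf2_solve(C, coded)
print(bool((decoded == packets).all()))            # True: generation recovered
```

Any k independent combinations suffice, which is why coded transport tolerates loss on individual antenna beams without per-packet retransmission.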
75. Efficient, provably secure code constructions. Agrawal, Shweta Prem. 31 May 2011.
The importance of constructing reliable and efficient methods for securing digital information in the modern world cannot be overstated. The urgency of this need is reflected in mainstream media--newspapers and websites are full of news about critical user information, be it credit card numbers, medical data, or social security information, being compromised and used illegitimately. According to news reports, hackers probe government computer networks millions of times a day, about 9 million Americans have their identities stolen each year and cybercrime costs large American businesses 3.8 million dollars a year. More than 1 trillion dollars' worth of intellectual property has already been stolen from American businesses. It is this ever-growing problem of securing valuable information that our thesis attempts to address (in part). In this thesis, we study methods to secure information that are fast, convenient and reliable. Our overall contribution has four distinct threads. First, we construct efficient, "expressive" Public Key Encryption systems (specifically, Identity Based Encryption systems) based on the hardness of lattice problems. In Identity Based Encryption (IBE), any arbitrary string such as the user's email address or name can be her public key. IBE systems are powerful and address several problems faced by the deployment of Public Key Encryption. Our constructions are secure in the standard model. Next, we study secure communication over the two-user interference channel with an eavesdropper. We show that using lattice codes helps enhance the secrecy rate of this channel in the presence of an eavesdropper. Thirdly, we analyze the security requirements of network coding. Network coding is an elegant method of data transmission which not only helps achieve capacity in several networks, but also has a host of other benefits. However, network coding is vulnerable to "pollution attacks" when there are malicious users in the system.
We design mechanisms to prevent pollution attacks. In this setting, we provide two constructions -- a homomorphic Message Authentication Code (HMAC) and a Digital Signature -- to secure information that is transmitted over such networks. Finally, we study the benefits of using Compressive Sensing for secure communication over the Wyner wiretap channel. Compressive Sensing has seen an explosion of interest in the last few years owing to its elegant mathematics and plethora of applications. So far, however, Compressive Sensing had not found application in the domain of secrecy. Given its inherent asymmetry, we ask (and answer in the affirmative) whether it can be deployed to enable secure communication. Our results allow linear encoding and efficient decoding (via LASSO) at the legitimate receiver, along with infeasibility of message recovery (established via an information-theoretic analysis) at the eavesdropper, regardless of decoding strategy.
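The legitimate receiver's decoding step can be sketched generically: the sparse message is recovered from its linear encoding by solving a LASSO problem, here with a plain iterative soft-thresholding (ISTA) loop rather than a packaged solver. The dimensions, regularization weight and message values are illustrative assumptions, not the thesis's parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 128, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)   # encoding matrix known to receiver
x = np.zeros(n)
x[[7, 50, 90]] = [3.0, -2.0, 2.5]              # sparse message
y = A @ x                                      # linear encoding sent over channel

# ISTA for the LASSO: gradient step on the least-squares term,
# then soft-thresholding to enforce sparsity.
lam = 0.01
t = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from spectral norm
x_hat = np.zeros(n)
for _ in range(5000):
    z = x_hat - t * A.T @ (A @ x_hat - y)
    x_hat = np.sign(z) * np.maximum(np.abs(z) - lam * t, 0)
```

The receiver, who knows A, recovers x to high accuracy; an eavesdropper observing only y faces the information-theoretic barrier analyzed in the thesis.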
76. Coding and Signal Processing Techniques for High Efficiency Data Storage and Transmission Systems. Pan, Lu. January 2013.
Generally speaking, a communication channel refers to a medium through which an information-bearing signal is corrupted by noise and distortion. A communication channel may result from data storage over time or data transmission through space. A primary task for communication engineers is to mathematically characterize the channel to facilitate the design of appropriate detection and coding systems. In this dissertation, two different channel modeling challenges for ultra-high density magnetic storage are investigated: two-dimensional magnetic recording (TDMR) and bit-patterned magnetic recording (BPMR). In the case of TDMR, we characterize the error mechanisms during the write/read process of data on a TDMR medium by a finite-state machine, and then design a state-based detector that provides soft decisions for use by an outer decoder. In the case of BPMR, we employ an insertion/deletion (I/D) model. We propose an LDPC-CRC product coding scheme that enables error detection without the Marker codes specifically designed for an I/D channel. We also propose a generalized Gilbert-Elliott (GE) channel to approximate the I/D channel in the sense of an equivalent I/D event rate. A lower bound on the channel capacity of the BPMR channel is derived, which supports our claim that commonly used error-correction codes are effective on the I/D channel under the assumption that I/D events are limited to a finite length. Another channel model we investigated is the perpendicular magnetic recording channel, where our focus is advanced signal processing for pattern-dependent noise-predictive channel detectors. Specifically, we propose an adaptive scheme for a hardware design that reduces the complexity of the detector and the truncation/saturation error caused by fixed-point representation of values in the detector. Lastly, we designed a sequence detector for compressively sampled Bluetooth signals, thus allowing data recovery via sub-Nyquist sampling.
This detector skips the conventional step of reconstructing the original signal from the compressive samples prior to detection. We also propose an adaptive design of the sampling matrix, which nearly achieves Nyquist-sampling performance at a relatively high compression ratio. Additionally, the adaptive scheme can automatically choose an appropriate compression ratio as a function of Eb/N₀ without explicit knowledge of it.
77. Scalable video transmission over wireless networks. Xiang, Siyuan. 12 March 2013.
With the increasing demand for video applications in wireless networks, how to better support video transmission over wireless networks has drawn much attention from the research community. The time-varying and error-prone nature of the wireless channel makes it challenging to provide users with a satisfactory viewing experience. Different video applications call for different video coding techniques: for Internet video streaming, we choose the standardized H.264 codec; for video transmission in sensor networks or multicast, we choose a simple, energy-conserving video coding technique based on compressive sensing. Because the challenges differ across applications, this dissertation tackles the video transmission problem in three different settings.
First, for dynamic adaptive streaming over HTTP (DASH), we investigate the streaming strategy. Specifically, we focus on the rate adaptation algorithm for streaming scalable video (H.264/SVC) in wireless networks. We model the rate adaptation problem as a Markov Decision Process (MDP), aiming to find an optimal streaming strategy in terms of user-perceived quality of experience (QoE), including playback interruption, average playback quality and playback smoothness. We then obtain the optimal MDP solution using dynamic programming. However, the optimal solution requires knowledge of the available bandwidth statistics and has a large number of states, which makes it difficult to compute in real time. Therefore, we further propose an online algorithm that integrates the learning and planning processes: it collects bandwidth statistics and makes streaming decisions in real time. A reward parameter defined in our streaming strategy can be adjusted to trade off average playback quality against playback smoothness. We also use a simple testbed to validate the proposed algorithm.
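The MDP view of rate adaptation can be illustrated with a toy value iteration sketch, where states are buffer levels and actions are quality levels. All transition probabilities, rewards and sizes here are invented for illustration; the dissertation's model is far richer:

```python
import numpy as np

buffers = range(5)                   # buffer occupancy in segments (0 = empty)
qualities = [0, 1, 2]                # available quality levels
gamma, p_good = 0.9, 0.7             # discount; chance the bandwidth keeps up

def reward(b, q):
    # quality reward, with a large penalty for an empty (rebuffering) buffer
    return q - (10 if b == 0 else 0)

V = np.zeros(5)
for _ in range(200):                 # value iteration
    V_new = np.empty_like(V)
    for b in buffers:
        vals = []
        for q in qualities:
            # higher quality downloads more slowly, so the buffer drains more often
            p_up = p_good - 0.25 * q
            up, down = min(b + 1, 4), max(b - 1, 0)
            vals.append(reward(b, q) + gamma * (p_up * V[up] + (1 - p_up) * V[down]))
        V_new[b] = max(vals)
    V = V_new
print(V[0] < V[4])                   # True: a fuller buffer is worth more
```

The optimal policy read off from such a table is what the dynamic-programming solution computes offline, and what the online algorithm approximates while learning the bandwidth statistics.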
Second, for video transmission in wireless sensor networks, we consider a wireless sensor node that monitors the environment and is equipped with a compressive-sensing-based single-pixel image camera and other sensors, such as temperature and humidity sensors. The node needs to send its data out in a timely and energy-efficient way. This transmission control problem is challenging in that we must jointly consider perceived video quality, quality variation, power consumption, transmission delay requirements and wireless channel uncertainty. We address these issues by first building a rate-distortion model for compressive sensing video. We then formulate deterministic and stochastic optimization problems and design a transmission control algorithm that jointly performs rate control, scheduling and power control.
Third, we propose a low-complexity, scalable video coding architecture based on compressive sensing (SVCCS) for wireless unicast and multicast transmissions. SVCCS achieves good scalability, error resilience and coding efficiency. The SVCCS-encoded bitstream is divided into base and enhancement layers; this layered structure provides quality and temporal scalability, while within the enhancement layer the CS measurements provide fine-granular quality scalability. We also investigate the rate allocation problem for multicasting an SVCCS-encoded bitstream to a group of receivers with heterogeneous channel conditions. Specifically, we study how to allocate rate between the base and enhancement layers to improve the overall perceived video quality across all receivers.
78. Statistical Filtering for Multimodal Mobility Modeling in Cyber Physical Systems. Tabibiazar, Arash. 30 January 2013.
A cyber-physical system integrates computation with the dynamics of physical processes. It is an engineering discipline focused on technology, with a strong foundation in mathematical abstractions. It shares many of these abstractions with engineering and computer science, but still requires adaptation to suit the dynamics of the physical world.
In such a dynamic system, mobility management is one of the key issues in developing a new service. For example, in the study of a new mobile network, it is necessary to simulate and evaluate a protocol before deploying it in the system. Mobility models characterize the movement patterns of mobile agents; they also describe the conditions under which mobile services operate.
The focus of this thesis is on mobility modeling in cyber-physical systems. A macroscopic model that captures the mobility of individuals (people and vehicles) can facilitate a wide range of applications; one fundamental and obvious example is traffic profiling. Mobility in most systems is a dynamic process, and small non-linearities can lead to substantial errors in the model.
Extensive research exists on statistical inference and filtering methods for data modeling in cyber-physical systems. In this thesis, several such methods are employed for multimodal data fusion, localization and traffic modeling, and a novel energy-aware sparse signal processing method is presented for processing massive sensory data.
At baseline, this research examines the application of statistical filters to mobility modeling and assesses the difficulties of fusing massive multimodal sensory data. A statistical framework is developed to apply the proposed methods to the measurements available in cyber-physical systems. The proposed methods employ various statistical filtering schemes (i.e., compressive sensing, particle filtering and kernel-based optimization) and apply them to multimodal data sets acquired from intelligent transportation systems, wireless local area networks, cellular networks and air quality monitoring systems. Experimental results show the capability of the proposed methods in processing multimodal sensory data: they provide a macroscopic mobility model of mobile agents in an energy-efficient way, even from inconsistent measurements.
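One of the filtering schemes mentioned above, the bootstrap particle filter, can be sketched on a toy one-dimensional tracking problem. The motion and measurement models and all noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 50, 500                         # time steps, particles

# Simulate a moving agent observed through noisy GPS-like fixes
true_x, xs, obs = 0.0, [], []
for _ in range(T):
    true_x += 1.0 + 0.1 * rng.standard_normal()
    xs.append(true_x)
    obs.append(true_x + 0.5 * rng.standard_normal())

particles, est = np.zeros(N), []
for z in obs:
    particles = particles + 1.0 + 0.1 * rng.standard_normal(N)  # motion model
    w = np.exp(-0.5 * ((z - particles) / 0.5) ** 2)             # measurement likelihood
    w /= w.sum()
    est.append(float(w @ particles))                            # posterior mean
    particles = particles[rng.choice(N, N, p=w)]                # resample

rmse = np.sqrt(np.mean((np.array(est) - np.array(xs)) ** 2))
print(rmse < 0.5)                      # filter beats the raw measurement noise
```

The same predict-weight-resample loop generalizes to multimodal inputs by multiplying the likelihoods of the different sensors in the weighting step.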
79. Data-guided statistical sparse measurements modeling for compressive sensing. Schwartz, Tal Shimon. January 2013.
Digital image acquisition can be a time-consuming process in situations where high spatial resolution is required, so optimizing the acquisition mechanism is of high importance for many measurement applications. Acquiring such data through a dynamically chosen small subset of measurement locations can address this problem. In that case, the measured information can be regarded as incomplete, which necessitates special reconstruction tools to recover the original data set. The reconstruction can be performed based on the concept of sparse signal representation; recovering signals and images from their sub-Nyquist measurements forms the core idea of compressive sensing (CS). In this work, a CS-based data-guided statistical sparse measurements method is presented, implemented and evaluated. This method significantly improves image reconstruction from sparse measurements. In the data-guided statistical sparse measurements approach, the signal sampling distribution is optimized to improve image reconstruction performance: the sampling distribution is based on the underlying data rather than the commonly used uniform random distribution. The optimal sampling probability distribution is obtained by a learning process, through two methods: direct and indirect. The direct method learns a nonparametric probability density function directly from the dataset. The indirect method is used for cases where a mapping between extracted features and the probability density function is required. The unified model is implemented for different representation domains, including the frequency domain and the spatial domain. Experiments were performed for multiple applications such as optical coherence tomography, bridge structure vibration, robotic vision, 3D laser range measurements and fluorescence microscopy.
Results show that the data-guided statistical sparse measurements method significantly outperforms conventional CS reconstruction. It achieves a much higher reconstruction signal-to-noise ratio at the same compression rate as conventional CS; alternatively, it achieves a similar reconstruction signal-to-noise ratio with significantly fewer samples.
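The core idea of data-guided sampling can be sketched in a few lines: draw measurement locations from a probability map derived from the data, so samples concentrate where the signal class has energy, instead of uniformly at random. The signal shape and sizes are invented for illustration, and for brevity the density is built from the signal itself rather than learned from a training set:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 256, 32
signal = np.exp(-((np.arange(n) - 80) ** 2) / 200.0)   # energy concentrated near index 80

# Data-guided sampling density (stand-in for the learned pdf)
prob = np.abs(signal) + 1e-3
prob /= prob.sum()

guided = rng.choice(n, size=m, replace=False, p=prob)  # data-guided locations
uniform = rng.choice(n, size=m, replace=False)         # conventional uniform CS

# Guided sampling captures far more signal energy per measurement
print(signal[guided].sum() > signal[uniform].sum())    # True
```

This energy concentration is what lets the data-guided method match conventional CS reconstruction quality with far fewer samples.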
80. Data-Driven Network Analysis and Applications. Tao, Narisu. 14 September 2015.
No description available.