71

Digitally-Assisted Mixed-Signal Wideband Compressive Sensing

Yu, Zhuizhuan 2011 May 1900 (has links)
Digitizing wideband signals requires very demanding analog-to-digital conversion (ADC) speed and resolution specifications. In this dissertation, a mixed-signal parallel compressive sensing system is proposed to realize the sensing of wideband sparse signals at a sub-Nyquist rate by exploiting signal sparsity. The mixed-signal compressive sensing is realized with a parallel segmented compressive sensing (PSCS) front-end, which not only filters out the harmonic spurs that leak from the local random generator, but also provides a tradeoff between the sampling rate and the system complexity such that a practical hardware implementation is possible. Moreover, the signal randomization in the system spreads the spurious energy due to ADC nonlinearity across the signal bandwidth rather than concentrating it at a few frequencies, as is the case for a conventional ADC. This important new property relaxes the ADC SFDR requirement when sensing frequency-domain sparse signals. The performance of the mixed-signal compressive sensing system is greatly impacted by the accuracy of analog circuit components, especially with the scaling of CMOS technology. In this dissertation, the effects of circuit imperfections in the mixed-signal compressive sensing system based on the PSCS front-end, such as finite settling time and timing uncertainty, are investigated in detail. An iterative background calibration algorithm based on LMS (Least Mean Square) is proposed, which is shown to effectively calibrate the errors due to these circuit nonidealities. A low-speed prototype built with off-the-shelf components is presented. The prototype is able to sense sparse analog signals with up to 4 percent sparsity at 32 percent of the Nyquist rate. Many practical constraints that arose while building the prototype, such as circuit nonidealities, are addressed in detail, which provides good insights for a future high-frequency integrated circuit implementation. Based on this work, a high-frequency sub-Nyquist-rate receiver exploiting parallel compressive sensing is designed and fabricated in IBM 90 nm CMOS technology, and measurement results are presented to show the capability of wideband compressive sensing at a sub-Nyquist rate. To the best of our knowledge, this prototype is the first reported integrated chip for wideband mixed-signal compressive sensing. The proposed prototype achieves 7-bit ENOB and a 3 GS/s equivalent sampling rate in simulation assuming a state-of-the-art 0.5 ps jitter variance, a figure of merit (FOM) that is 2-3 times better than that of state-of-the-art high-speed Nyquist ADCs. The proposed mixed-signal compressive sensing system can be applied in various fields. In particular, its applications to wideband spectrum sensing for cognitive radios and spectrum analysis in RF tests are discussed in this work.
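
As a rough illustration of the LMS-based background calibration idea described above, the following sketch assumes a toy front-end in which each parallel branch mixes the input with a pseudo-random sequence and suffers only an unknown static gain error; the matrix Phi, the gain model and the training procedure are illustrative assumptions, not the dissertation's actual PSCS hardware or calibration loop.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64                         # input samples per frame, parallel branches (assumed)
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(n)   # ideal PN mixing/integration matrix
g_true = 1.0 + 0.05 * rng.standard_normal(m)              # unknown per-branch gain errors

g_hat = np.ones(m)                     # calibration estimate, initialised to the ideal gain
mu = 0.05                              # LMS step size

for _ in range(2000):
    x = rng.standard_normal(n)         # known wideband training input
    u = Phi @ x                        # ideal (reference) branch outputs
    y = g_true * u                     # actual measured branch outputs with gain error
    e = y - g_hat * u                  # per-branch model error
    g_hat += mu * e * u                # single-tap LMS update for each branch

print(np.max(np.abs(g_hat - g_true)))  # residual gain error; decays toward zero here
```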
72

Mobile localization : approach and applications

Rallapalli, Swati 09 February 2015 (has links)
Localization is critical to a number of wireless network applications. In many situations GPS is not suitable. This dissertation (i) develops novel localization schemes for wireless networks by explicitly incorporating mobility information and (ii) applies localization to physical analytics, i.e., understanding shoppers' behavior within retail spaces by leveraging inertial sensors, Wi-Fi and vision enabled by smart glasses. More specifically, we first focus on multi-hop mobile networks, analyze real mobility traces and observe that they exhibit temporal stability and low-rank structure. Motivated by these observations, we develop novel localization algorithms that effectively capture, and also adapt to, different degrees of these properties. Using extensive simulations and testbed experiments, we demonstrate the accuracy and robustness of our new schemes. Second, we focus on localizing a single mobile node, which may not be connected with multiple nodes (e.g., without network connectivity or only connected with an access point). We propose trajectory-based localization using Wi-Fi or magnetic field measurements. We show that these measurements have the potential to uniquely identify a trajectory. We then develop a novel approach that leverages multi-level wavelet coefficients to first identify the trajectory and then localize to a point on the trajectory. We show that this approach is highly accurate and power efficient using indoor and outdoor experiments. Finally, localization is a critical step in enabling many applications; an important one is physical analytics. Physical analytics has the potential to provide deep insight into shoppers' interests and activities, and therefore enable better advertisements, better recommendations and a better shopping experience. To enable physical analytics, we build the ThirdEye system, which first achieves zero-effort localization by leveraging emerging devices like Google Glass: its AutoLayout component fuses video, Wi-Fi, and inertial sensor data to simultaneously localize shoppers while also constructing and updating the product layout in a virtual coordinate space. Further, ThirdEye comprises a range of schemes that use a combination of vision and inertial sensing to study mobile users' behavior while shopping, namely: walking, dwelling, gazing and reaching out. We show the effectiveness of ThirdEye through an evaluation in two large retail stores in the United States. / text
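
A minimal sketch of trajectory identification from coarse multi-level wavelet coefficients in the spirit described above; the Haar-only decomposition, the synthetic per-trajectory traces and the nearest-neighbour match are simplifying assumptions for illustration, not the dissertation's algorithm.

```python
import numpy as np

def haar_coarse(sig, levels=3):
    """Coarse multi-level Haar approximation coefficients of a 1-D trace."""
    c = np.asarray(sig, dtype=float)
    for _ in range(levels):
        if len(c) % 2:                        # pad to an even length
            c = np.append(c, c[-1])
        c = (c[0::2] + c[1::2]) / np.sqrt(2)  # keep only the approximation band
    return c

rng = np.random.default_rng(1)
# hypothetical database: one Wi-Fi/magnetic trace per known trajectory
db = {f"corridor_{i}": np.cumsum(rng.standard_normal(256)) for i in range(5)}
db_sig = {name: haar_coarse(trace) for name, trace in db.items()}

query = db["corridor_3"] + 0.3 * rng.standard_normal(256)   # noisy re-walk of one path
q_sig = haar_coarse(query)

best = min(db_sig, key=lambda name: np.linalg.norm(db_sig[name] - q_sig))
print(best)   # expected to identify corridor_3
```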
73

Coding-Based System Primitives for Airborne Cloud Computing

Lin, Chit-Kwan January 2011 (has links)
The recent proliferation of sensors in inhospitable environments such as disaster or battle zones has not been matched by in situ data processing capabilities, due to a lack of computing infrastructure in the field. We envision a solution based on small, low-altitude unmanned aerial vehicles (UAVs) that can deploy elastically scalable computing infrastructure anywhere, at any time. This airborne compute cloud (essentially, micro-data centers hosted on UAVs) would communicate with terrestrial assets over a bandwidth-constrained wireless network with variable, unpredictable link qualities. Achieving high performance over this ground-to-air mobile radio channel thus requires making full and efficient use of every single transmission opportunity. To this end, this dissertation presents two system primitives that improve throughput and reduce network overhead by using recent distributed coding methods to exploit natural properties of the airborne environment (i.e., antenna beam diversity and anomaly sparsity). We first built and deployed a UAV wireless networking testbed and used it to characterize the ground-to-UAV wireless channel. Our flight experiments revealed that antenna beam diversity from using multiple SISO radios boosts reception range and aggregate throughput. This observation led us to develop our first primitive: ground-to-UAV bulk data transport. We designed and implemented FlowCode, a reliable link layer for uplink data transport that uses network coding to harness antenna beam diversity gains. Via flight experiments, we show that FlowCode can boost reception range and TCP throughput as much as 4.5-fold. Our second primitive permits low-overhead cloud status monitoring. We designed CloudSense, a network switch that compresses cloud status streams in-network via compressive sensing. CloudSense is particularly useful for anomaly detection tasks requiring global relative comparisons (e.g., MapReduce straggler detection) and can achieve up to 16.3-fold compression as well as early detection of the worst anomalies. Our efforts have also shed light on the close relationship between network coding and compressive sensing. Thus, we offer FlowCode and CloudSense not only as first steps toward the airborne compute cloud, but also as exemplars of two classes of applications, approximation-intolerant and approximation-tolerant, to which network coding and compressive sensing should be judiciously and selectively applied. / Engineering and Applied Sciences
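
The abstract does not describe FlowCode's internals, so the sketch below only illustrates the generic network-coding building block it relies on: random linear combinations over GF(2) decoded by Gaussian elimination. Packet sizes, the loss rate and the amount of redundancy are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
k, pkt_bits = 8, 32                                            # source packets, bits per packet (assumed)
src = rng.integers(0, 2, size=(k, pkt_bits), dtype=np.uint8)   # original uplink packets

def encode(n_coded):
    """Emit n_coded random linear combinations over GF(2) with their coefficient vectors."""
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    return coeffs, coeffs @ src % 2

def decode(coeffs, coded):
    """Gaussian elimination over GF(2); returns the k source packets or None if rank-deficient."""
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    row = 0
    for col in range(k):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            return None
        A[[row, pivot]] = A[[pivot, row]]      # move pivot row into place
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]                 # eliminate this column elsewhere
        row += 1
    return A[:k, k:]

coeffs, coded = encode(16)                     # extra coded packets to survive losses
keep = rng.random(16) > 0.25                   # crude model of ~25% packet loss
recovered = decode(coeffs[keep], coded[keep])
print(recovered is not None and np.array_equal(recovered, src))  # True when enough packets survive
```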
74

Efficient, provably secure code constructions

Agrawal, Shweta Prem 31 May 2011 (has links)
The importance of constructing reliable and efficient methods for securing digital information in the modern world cannot be overstated. The urgency of this need is reflected in mainstream media: newspapers and websites are full of news about critical user information, be it credit card numbers, medical data, or social security information, being compromised and used illegitimately. According to news reports, hackers probe government computer networks millions of times a day, about 9 million Americans have their identities stolen each year, and cybercrime costs large American businesses 3.8 million dollars a year. More than 1 trillion dollars' worth of intellectual property has already been stolen from American businesses. It is this ever-growing problem of securing valuable information that our thesis attempts to address (in part). In this thesis, we study methods to secure information that are fast, convenient and reliable. Our overall contribution has four distinct threads. First, we construct efficient, "expressive" Public Key Encryption systems (specifically, Identity Based Encryption systems) based on the hardness of lattice problems. In Identity Based Encryption (IBE), any arbitrary string such as the user's email address or name can be her public key. IBE systems are powerful and address several problems faced by the deployment of Public Key Encryption. Our constructions are secure in the standard model. Next, we study secure communication over the two-user interference channel with an eavesdropper. We show that using lattice codes helps enhance the secrecy rate of this channel in the presence of an eavesdropper. Thirdly, we analyze the security requirements of network coding. Network coding is an elegant method of data transmission which not only helps achieve capacity in several networks, but also has a host of other benefits. However, network coding is vulnerable to "pollution attacks" when there are malicious users in the system. We design mechanisms to prevent pollution attacks. In this setting, we provide two constructions, a homomorphic Message Authentication Code (HMAC) and a digital signature, to secure information that is transmitted over such networks. Finally, we study the benefits of using compressive sensing for secure communication over the Wyner wiretap channel. Compressive sensing has seen an explosion of interest in the last few years with its elegant mathematics and plethora of applications. So far, however, compressive sensing had not found application in the domain of secrecy. Given its inherent asymmetry, we ask (and answer in the affirmative) the question of whether it can be deployed to enable secure communication. Our results allow linear encoding and efficient decoding (via LASSO) at the legitimate receiver, along with infeasibility of message recovery (via an information-theoretic analysis) at the eavesdropper, regardless of decoding strategy. / text
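
As a small illustration of the linear encoding and LASSO decoding mentioned for the compressive-sensing wiretap setting, the sketch below assumes a Gaussian sensing matrix, a synthetic sparse message and an arbitrary noise level; it shows recovery at the legitimate receiver only and says nothing about the secrecy analysis itself.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n, m, k = 256, 96, 8                          # message length, measurements, sparsity (assumed)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)   # sparse message

A = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix known to the legitimate receiver
y = A @ x + 0.01 * rng.standard_normal(m)     # linear encoding plus channel noise

decoder = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
decoder.fit(A, y)                             # efficient LASSO decoding
x_hat = decoder.coef_

print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))   # relative error; small in this setting
```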
75

Coding and Signal Processing Techniques for High Efficiency Data Storage and Transmission Systems

Pan, Lu January 2013 (has links)
Generally speaking, a communication channel refers to a medium through which an information-bearing signal is corrupted by noise and distortion. A communication channel may result from data storage over time or data transmission through space. A primary task for communication engineers is to mathematically characterize the channel to facilitate the design of appropriate detection and coding systems. In this dissertation, two different channel modeling challenges for ultra-high density magnetic storage are investigated: two-dimensional magnetic recording (TDMR) and bit-patterned magnetic recording (BPMR). In the case of TDMR, we characterize the error mechanisms during the write/read process of data on a TDMR medium by a finite-state machine, and then design a state-based detector that provides soft decisions for use by an outer decoder. In the case of BPMR, we employ an insertion/deletion (I/D) model. We propose an LDPC-CRC product coding scheme that enables error detection without the involvement of marker codes specifically designed for an I/D channel. We also propose a generalized Gilbert-Elliott (GE) channel to approximate the I/D channel in the sense of an equivalent I/D event rate. A lower bound on the channel capacity of the BPMR channel is derived, which supports our claim that commonly used error-correction codes are effective on the I/D channel under the assumption that I/D events are limited to a finite length. Another channel model we investigated is the perpendicular magnetic recording model. Our focus there is advanced signal processing for pattern-dependent noise-predictive channel detectors. Specifically, we propose an adaptive scheme for a hardware design that reduces the complexity of the detector and the truncation/saturation error caused by a fixed-point representation of values in the detector. Lastly, we design a sequence detector for compressively sampled Bluetooth signals, thus allowing data recovery via sub-Nyquist sampling. This detector skips the conventional step of reconstructing the original signal from compressive samples prior to detection. We also propose an adaptive design of the sampling matrix, which almost achieves Nyquist sampling performance with a relatively high compression ratio. Additionally, this adaptive scheme can automatically choose an appropriate compression ratio as a function of Eb/N0 without explicit knowledge of it.
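
A brief sketch of the two-state Gilbert-Elliott channel used above as an approximation of the insertion/deletion channel; the transition probabilities and per-state error rates below are placeholder values, not parameters fitted in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gilbert-Elliott channel: a Good and a Bad state with different bit error rates
p_gb, p_bg = 0.01, 0.20        # P(Good -> Bad), P(Bad -> Good)
e_g, e_b = 1e-4, 0.10          # bit error rate in the Good and Bad states

def ge_error_pattern(n_bits):
    """Return a 0/1 error pattern of length n_bits drawn from the GE channel."""
    errs = np.zeros(n_bits, dtype=np.uint8)
    bad = False
    for i in range(n_bits):
        bad = rng.random() < ((1 - p_bg) if bad else p_gb)   # Markov state transition
        errs[i] = rng.random() < (e_b if bad else e_g)       # state-dependent error
    return errs

pattern = ge_error_pattern(100_000)
print(pattern.mean())          # empirical error rate, close to the stationary average
```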
76

Scalable video transmission over wireless networks

Xiang, Siyuan 12 March 2013 (has links)
With the increasing demand for video applications in wireless networks, how to better support video transmission over wireless networks has drawn much attention from the research community. The time-varying and error-prone nature of the wireless channel makes video transmission a challenging task when trying to provide users with a satisfactory viewing experience. For different video applications, we choose different video coding techniques accordingly: for Internet video streaming, we choose the standardized H.264 video codec; for video transmission in sensor networks or multicast, we choose a simple, energy-conserving video coding technique based on compressive sensing. Thus, the challenges differ across applications, and this dissertation tackles the video transmission problem in three different settings. First, for dynamic adaptive streaming over HTTP (DASH), we investigate the streaming strategy. Specifically, we focus on the rate adaptation algorithm for streaming scalable video (H.264/SVC) in wireless networks. We model the rate adaptation problem as a Markov Decision Process (MDP), aiming to find an optimal streaming strategy in terms of user-perceived quality of experience (QoE), such as playback interruption, average playback quality and playback smoothness. We then obtain the optimal MDP solution using dynamic programming. However, the optimal solution requires knowledge of the available bandwidth statistics and has a large number of states, which makes it difficult to obtain in real time. Therefore, we further propose an online algorithm that integrates the learning and planning processes. The proposed online algorithm collects bandwidth statistics and makes streaming decisions in real time. A reward parameter has been defined in our proposed streaming strategy, which can be adjusted to make a good trade-off between average playback quality and playback smoothness. We also use a simple testbed to validate our proposed algorithm. Second, for video transmission in wireless sensor networks, we consider a wireless sensor node that monitors the environment and is equipped with a compressive-sensing-based single-pixel image camera and other sensors such as temperature and humidity sensors. The wireless node needs to send the data out in a timely and energy-efficient way. This transmission control problem is challenging in that we need to jointly consider perceived video quality, quality variation, power consumption, transmission delay requirements, and wireless channel uncertainty. We address these issues by first building a rate-distortion model for compressive sensing video. We then formulate the deterministic and stochastic optimization problems and design a transmission control algorithm that jointly performs rate control, scheduling and power control. Third, we propose a low-complexity, scalable video coding architecture based on compressive sensing (SVCCS) for wireless unicast and multicast transmissions. SVCCS achieves good scalability, error resilience and coding efficiency. The SVCCS-encoded bitstream is divided into base and enhancement layers. The layered structure provides quality and temporal scalability, while within the enhancement layer the CS measurements provide fine-granular quality scalability. We also investigate the rate allocation problem for multicasting the SVCCS-encoded bitstream to a group of receivers with heterogeneous channel conditions. Specifically, we study how to allocate rate between the base and enhancement layers to improve the overall perceived video quality for all receivers. / Graduate / 0984 / siyxiang@ece.uvic.ca
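
A toy value-iteration sketch of the MDP-style rate adaptation described above, assuming a small discretised state (buffer level, previously chosen layer), a three-level stationary bandwidth model and an ad hoc reward trading off quality, smoothness and stalls; all numbers and the transition model are illustrative assumptions, not the dissertation's formulation.

```python
import numpy as np

BUF_MAX, N_LAYERS = 10, 3                      # buffer capacity (segments), SVC layer choices
layer_rate = np.array([1.0, 2.0, 4.0])         # Mbps needed per layer choice (assumed)
bw_levels = np.array([1.5, 3.0, 5.0])          # discretised bandwidth states (assumed)
bw_prob = np.array([0.3, 0.4, 0.3])            # assumed stationary bandwidth statistics
seg_dur, gamma = 2.0, 0.95                     # segment duration (s), discount factor

def reward(q, q_prev, buf_next):
    stall = 10.0 if buf_next == 0 else 0.0     # penalty for playback interruption
    return q - 0.5 * abs(q - q_prev) - stall   # quality minus smoothness penalty minus stalls

V = np.zeros((BUF_MAX + 1, N_LAYERS))          # value of each (buffer, previous layer) state
for _ in range(200):                           # value iteration
    V_new = np.zeros_like(V)
    for buf in range(BUF_MAX + 1):
        for q_prev in range(N_LAYERS):
            best = -np.inf
            for q in range(N_LAYERS):          # candidate layer for the next segment
                val = 0.0
                for bw, p in zip(bw_levels, bw_prob):
                    dl_time = seg_dur * layer_rate[q] / bw           # seconds to fetch one segment
                    buf_next = int(np.clip(buf - dl_time / seg_dur + 1, 0, BUF_MAX))
                    val += p * (reward(q, q_prev, buf_next) + gamma * V[buf_next, q])
                best = max(best, val)
            V_new[buf, q_prev] = best
    V = V_new

print(V[BUF_MAX // 2])                         # values of the mid-buffer states per previous layer
```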
77

Statistical Filtering for Multimodal Mobility Modeling in Cyber Physical Systems

Tabibiazar, Arash 30 January 2013 (has links)
A cyber-physical system integrates computation with the dynamics of physical processes. It is an engineering discipline focused on technology, with a strong foundation in mathematical abstractions. It shares many of these abstractions with engineering and computer science, but still requires adaptation to suit the dynamics of the physical world. In such a dynamic system, mobility management is one of the key issues in developing a new service. For example, in the study of a new mobile network, it is necessary to simulate and evaluate a protocol before deployment in the system. Mobility models characterize mobile agents' movement patterns and, in turn, describe the conditions under which mobile services operate. The focus of this thesis is on mobility modeling in cyber-physical systems. A macroscopic model that captures the mobility of individuals (people and vehicles) can facilitate an unlimited number of applications; one fundamental and obvious example is traffic profiling. Mobility in most systems is a dynamic process, and small non-linearities can lead to substantial errors in the model. There is extensive research activity on statistical inference and filtering methods for data modeling in cyber-physical systems. In this thesis, several methods are employed for multimodal data fusion, localization and traffic modeling. A novel energy-aware sparse signal processing method is presented to process massive sensory data. At baseline, this research examines the application of statistical filters to mobility modeling and assesses the difficulties faced in fusing massive multimodal sensory data. A statistical framework is developed to apply the proposed methods to available measurements in cyber-physical systems. The proposed methods employ various statistical filtering schemes (i.e., compressive sensing, particle filtering and kernel-based optimization) and apply them to multimodal data sets acquired from intelligent transportation systems, wireless local area networks, cellular networks and air quality monitoring systems. Experimental results show the capability of the proposed methods in processing multimodal sensory data. The framework provides a macroscopic mobility model of mobile agents in an energy-efficient way using inconsistent measurements.
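
A minimal bootstrap particle filter, one of the statistical filtering schemes named above, applied to a hypothetical 1-D mobile agent tracked from noisy position measurements; the random-walk motion model and noise levels are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
T, N = 50, 500                       # time steps, number of particles
sigma_move, sigma_meas = 0.5, 1.0    # assumed motion and measurement noise levels

true_pos = np.cumsum(sigma_move * rng.standard_normal(T))   # random-walk mobility trace
meas = true_pos + sigma_meas * rng.standard_normal(T)       # noisy position measurements

particles = np.zeros(N)
estimates = []
for z in meas:
    particles += sigma_move * rng.standard_normal(N)          # propagate the motion model
    w = np.exp(-0.5 * ((z - particles) / sigma_meas) ** 2)    # measurement likelihood
    w /= w.sum()
    estimates.append(np.sum(w * particles))                   # posterior-mean position estimate
    particles = particles[rng.choice(N, size=N, p=w)]         # resample by weight

print(np.mean(np.abs(np.array(estimates) - true_pos)))        # mean tracking error
```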
78

Data-guided statistical sparse measurements modeling for compressive sensing

Schwartz, Tal Shimon January 2013 (has links)
Digital image acquisition can be a time-consuming process in situations where high spatial resolution is required. As such, optimizing the acquisition mechanism is of high importance for many measurement applications. Acquiring such data through a dynamically chosen small subset of measurement locations can address this problem. In such a case, the measured information can be regarded as incomplete, which necessitates the application of special reconstruction tools to recover the original data set. The reconstruction can be performed based on the concept of sparse signal representation. Recovering signals and images from their sub-Nyquist measurements forms the core idea of compressive sensing (CS). In this work, a CS-based data-guided statistical sparse measurements method is presented, implemented and evaluated. This method significantly improves image reconstruction from sparse measurements. In the data-guided statistical sparse measurements approach, the signal sampling distribution is optimized to improve image reconstruction performance. The sampling distribution is based on the underlying data rather than the commonly used uniform random distribution. The optimal sampling pattern probability is learned through two methods: direct and indirect. The direct method learns a nonparametric probability density function directly from the dataset. The indirect learning method is implemented for cases where a mapping between extracted features and the probability density function is required. The unified model is implemented for different representation domains, including the frequency domain and the spatial domain. Experiments were performed for multiple applications such as optical coherence tomography, bridge structure vibration, robotic vision, 3D laser range measurements and fluorescence microscopy. Results show that the data-guided statistical sparse measurements method significantly outperforms conventional CS reconstruction. It achieves a much higher reconstruction signal-to-noise ratio than conventional CS at the same compression rate or, alternatively, a similar reconstruction signal-to-noise ratio with significantly fewer samples.
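
A compact sketch contrasting a data-guided sampling pattern with the conventional uniform random pattern; the prior dataset and its energy profile are invented stand-ins for the direct (nonparametric) learning step, so the numbers only illustrate the idea of biasing samples toward informative locations.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 4096, 400                               # flattened image size, samples to acquire (assumed)

# hypothetical prior dataset of similar signals; their average magnitude acts as a crude
# nonparametric estimate of where informative measurements lie (the "direct" method)
prior = np.abs(rng.standard_normal((20, n))) * np.linspace(2.0, 0.1, n)
pdf = prior.mean(axis=0)
pdf /= pdf.sum()

guided_idx = rng.choice(n, size=m, replace=False, p=pdf)      # data-guided sampling pattern
uniform_idx = rng.choice(n, size=m, replace=False)            # conventional uniform CS pattern

# fraction of samples landing in the high-energy first quarter of locations
print((guided_idx < n // 4).mean(), (uniform_idx < n // 4).mean())
```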
79

Data-Driven Network Analysis and Applications

Tao, Narisu 14 September 2015 (has links)
No description available.
80

System Reconstruction via Compressive Sensing, Complex-Network Dynamics and Electron Transport in Graphene Systems

January 2012 (has links)
abstract: Complex dynamical systems consisting of interacting dynamical units are ubiquitous in nature and society. Predicting and reconstructing the nonlinear dynamics of the units and the complex interaction networks among them is the basis for understanding a variety of collective dynamical phenomena. I present a general method to address these two outstanding problems as a whole, based solely on time-series measurements. The method is implemented by incorporating a compressive sensing approach that enables accurate reconstruction of complex dynamical systems in terms of both the nodal equations that determine the self-dynamics of the units and the detailed coupling patterns among them. The representative advantages of the approach are (i) the sparse data requirement, which allows for successful reconstruction from limited measurements, and (ii) general applicability to identical and nonidentical nodal dynamics, and to networks with arbitrary interaction structure, strength and size. Two additional challenging problems of significant interest in nonlinear dynamics, (i) predicting catastrophes in nonlinear dynamical systems in advance of their occurrence and (ii) predicting the future state of time-varying nonlinear dynamical systems, can also be formulated and solved in the compressive sensing framework using only limited measurements. Once the network structure has been inferred, the dynamical behavior on it can be investigated, for example to optimize information spreading dynamics, suppress cascading dynamics and traffic congestion, enhance synchronization, or study game dynamics. The results can yield insights into the design of control strategies for real-world social and natural systems. Since 2004, there has been a tremendous amount of interest in graphene. The most remarkable feature of graphene is the linear energy-momentum relationship at low energies. The quasi-particles inside the system can be treated as chiral, massless Dirac fermions obeying relativistic quantum mechanics. Therefore, graphene provides an ideal test bed for investigating relativistic quantum phenomena, such as relativistic quantum chaotic scattering and anomalous electron paths induced by Klein tunneling. These phenomena have profound implications for the development of graphene-based devices that require stable electronic properties. / Dissertation/Thesis / Ph.D. Electrical Engineering 2012
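
A small sketch of the reconstruction idea described above, recovering the sparse coefficients of a node's self-dynamics and one coupling term by regressing numerical derivatives onto a power-series basis; the toy two-node system, the basis choice and the use of LASSO as the sparse solver are assumptions for illustration, not the dissertation's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)

# toy node: x_dot = a*x + b*x**3 + c*y, driven by a second unit y; only (a, b, c) are nonzero
a, b, c = -1.0, -0.3, 1.0
T, dt = 2000, 0.01
x, y = np.zeros(T), np.zeros(T)
x[0], y[0] = 0.5, -0.3
for t in range(T - 1):
    y[t + 1] = y[t] + dt * (-0.5 * y[t] + 2.0 * np.sin(t * dt))      # driving node
    x[t + 1] = x[t] + dt * (a * x[t] + b * x[t] ** 3 + c * y[t])     # node to reconstruct

x_dot = np.gradient(x, dt)                      # derivative estimated from the time series alone
# candidate power-series basis in x and y; the true dynamics are sparse in this basis
basis = np.column_stack([x, x**2, x**3, y, y**2, x * y, np.ones(T)])

fit = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(basis, x_dot)
print(np.round(fit.coef_, 2))                   # nonzero entries should lie close to a, b, c
```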
