About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Remote-Sensed LIDAR Using Random Impulsive Scans

Castorena, Juan 10 1900 (has links)
Third-generation full-waveform (FW) LIDAR systems image an entire scene by emitting laser pulses in particular directions and measuring the echoes. Each echo provides range measurements for the objects intercepted by the laser pulse along a specified direction. By scanning a specified region with a series of emitted pulses and observing their echoes, connected 1D profiles of 3D scenes can be readily obtained. This extra information has proven helpful in providing additional insight into scene structure, which can be used to construct effective characterizations and classifications. Unfortunately, massive amounts of data are typically collected, which imposes storage, processing, and transmission limitations. To address these problems, a number of compression approaches have been developed in the literature. These, however, generally require the initial acquisition of large amounts of data only to later discard most of it by exploiting redundancies, thus sampling inefficiently. Our main goal is therefore to apply efficient and effective LIDAR sampling schemes that achieve acceptable reconstruction quality of the 3D scenes. To this end, we propose using compressive sampling: emitting pulses only into random locations within the scene and collecting only the corresponding returned FW signals. Under this framework, the number of emissions would typically be much smaller than what traditional LIDAR systems require. This approach requires, however, that scenes have few degrees of freedom; fortunately, this requirement is satisfied in most natural and man-made scenes. Here, we propose to use a measure of rank as the measure of degrees of freedom. To recover the connected 1D profiles of the 3D scene, matrix completion is applied to the tensor slices. We test our approach by showing that recovery of compressively sampled 1D profiles of actual 3D scenes is possible using only a subset of measurements.
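The abstract does not spell out the completion algorithm, but the core idea (random pulse emissions plus low-rank completion of a profile slice) can be sketched as follows. The rank-2 "scene" matrix, the 50% sampling rate, and the simple iterative-SVD completion loop are illustrative assumptions, not the thesis's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rank-2 matrix standing in for one tensor slice of a 3D scene
r = 2
scene = rng.standard_normal((40, r)) @ rng.standard_normal((r, 40))

# Emit pulses only at random locations: observe roughly 50% of the entries
mask = rng.random(scene.shape) < 0.5
observed = scene * mask

# Naive matrix completion: alternate a rank-r projection (via SVD)
# with re-imposing the entries that were actually measured
X = observed.copy()
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]   # project onto rank-r matrices
    X[mask] = scene[mask]             # keep the sampled echoes fixed

err = np.linalg.norm(X - scene) / np.linalg.norm(scene)
```

With roughly five observed entries per degree of freedom of a rank-2 matrix, this alternating projection typically converges to the full slice, which is the sense in which far fewer emissions than scene locations can suffice.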
22

Compressive Measurement of Spread Spectrum Signals

Liu, Feng January 2015 (has links)
Spread spectrum (SS) techniques are methods used in communication systems in which the spectrum of a signal is spread over a much wider bandwidth. The large bandwidth of the resulting signals makes SS signals difficult to intercept using conventional methods based on Nyquist sampling. Recently, a novel concept called compressive sensing (CS) has emerged. CS theory suggests that a signal can be reconstructed from far fewer measurements than the Shannon-Nyquist theorem would suggest, provided that the signal has a sparse representation in some dictionary. In this work, motivated by this concept, we study compressive approaches to detecting and decoding SS signals. We propose compressive detection and decoding systems based both on random measurements (which have been the main focus of the CS literature) and on designed measurement kernels that exploit prior knowledge of the SS signal. Compressive sensing methods for both frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS) systems are proposed.
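As a toy illustration of compressive detection with a known spreading code, one can correlate the compressed measurements against the compressed code template. The dimensions, the Gaussian measurement matrix, and the noise levels below are made-up assumptions, not the systems proposed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 64                                  # chip length, measurement count

code = rng.choice([-1.0, 1.0], N)               # known DSSS spreading code
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random measurement kernel
template = Phi @ code                           # compressed matched template

def detect(x):
    """Normalized correlation of compressed measurements with the template."""
    y = Phi @ x
    return abs(y @ template) / np.linalg.norm(template)

present = code + 0.1 * rng.standard_normal(N)   # SS signal plus noise
absent = 0.1 * rng.standard_normal(N)           # noise only
```

Because random projections approximately preserve inner products, the detection statistic computed from M = 64 measurements separates signal-present from signal-absent without ever sampling the N = 256 chips at the Nyquist rate.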
23

Intercarrier interference reduction and channel estimation in OFDM systems

Zhang, Yihai 16 August 2011 (has links)
With the increasing demand for wireless multimedia applications, wireless systems with higher data rates are desired. Furthermore, the frequency spectrum has become a limited and valuable resource, making it necessary to utilize the available spectrum efficiently and to coexist with other wireless systems. Orthogonal frequency division multiplexing (OFDM) modulation is widely used in communication systems to meet the demand for ever-increasing data rates. The major advantage of OFDM over single-carrier transmission is its ability to deal with severe channel conditions without complex equalization. However, OFDM systems suffer from a high peak-to-average power ratio, and they are sensitive to carrier frequency offset and Doppler spread. This dissertation first focuses on the development of intercarrier interference (ICI) reduction and signal detection algorithms for OFDM systems over time-varying channels. Several ICI reduction algorithms are proposed for OFDM systems over doubly-selective channels. The OFDM ICI reduction problem over time-varying channels is formulated as a combinatorial optimization problem based on the maximum likelihood (ML) criterion. First, two relaxation methods are used to convert the ICI reduction problem into convex quadratic programming (QP) problems. Next, a low-complexity ICI reduction algorithm applicable to $M$-QAM signal constellations is proposed, which formulates the ICI reduction problem as a QP problem with non-convex constraints. A successive method is then used to derive a sequence of reduced-size QP problems. For the proposed algorithms, the QP problems are solved by limiting the search to the 2-dimensional subspace spanned by the steepest-descent and Newton directions, reducing the computational complexity. Furthermore, a low-bit descent search (LBDS) is employed to improve the system performance.
Performance results are given which demonstrate that the proposed ICI reduction algorithms provide excellent performance with reasonable computational complexity. A low-complexity joint semiblind detection algorithm based on the channel correlation and noise variance is proposed which does not require channel state information. The detection problem is relaxed to a continuous non-convex quadratic programming problem, and an iterative method is used to derive a sequence of reduced-size QP problems. An LBDS method is also employed to improve the solutions of the derived QP problems. Results are given which demonstrate that the proposed algorithm provides performance similar to that of a sphere decoder with lower computational complexity. A major challenge in OFDM systems is obtaining accurate channel state information for coherent detection of the transmitted signals. Thus, several channel estimation algorithms are proposed for OFDM systems over time-invariant channels. A channel estimation method is developed that utilizes the noncircularity of the input signals to obtain an estimate of the channel coefficients. It takes advantage of the nonzero cyclostationary statistics of the transmitted signals, which in turn allows blind polynomial channel estimation using second-order statistics of the OFDM symbol. A set of polynomial equations is formulated based on the correlation of the received signal, from which an estimate of the time-domain channel coefficients can be obtained. Performance results are presented which show that the proposed algorithm outperforms the linear minimum mean-square error (LMMSE) algorithm at high signal-to-noise ratios (SNRs), with low computational complexity. Near-optimal performance can be achieved for large OFDM systems. Finally, a compressive sensing (CS) based time-domain channel estimation method is presented for OFDM systems over sparse channels.
The channel estimation problem under consideration is formulated as a small-scale $l_1$-minimization problem which is convex and admits fast and reliable solvers for the globally optimal solution. It is demonstrated that the magnitudes as well as delays of the significant taps of a sparse channel model can be estimated with satisfactory accuracy by using fewer pilot tones than the channel length. Moreover, it is shown that a fast Fourier transform (FFT) matrix of extended size can be used as a set of appropriate basis vectors to enhance the channel sparsity. This technique allows the proposed method to be applicable to less-sparse OFDM channels. In addition, a total-variation (TV) minimization based method is introduced to provide an alternative way to solve the original sparse channel estimation problem. The performance of the proposed method is compared to several established channel estimation algorithms. / Graduate
24

Compressive sensing using lp optimization

Pant, Jeevan Kumar 26 April 2012 (has links)
Three problems in compressive sensing, namely, recovery of sparse signals from noise-free measurements, recovery of sparse signals from noisy measurements, and recovery of so-called block-sparse signals from noisy measurements, are investigated. In Chapter 2, the reconstruction of sparse signals from noise-free measurements is investigated and three algorithms are developed. The first and second algorithms minimize the approximate L0 and Lp pseudonorms, respectively, in the null space of the measurement matrix using a sequential quasi-Newton algorithm. An efficient line search based on Banach's fixed-point theorem is developed and applied in the second algorithm. The third algorithm minimizes the approximate Lp pseudonorm in the null space by using a sequential conjugate-gradient (CG) algorithm. Simulation results are presented which demonstrate that the proposed algorithms yield improved signal reconstruction performance relative to that of the iterative reweighted (IR), smoothed L0 (SL0), and L1-minimization based algorithms. They also require a reduced amount of computation relative to the IR and L1-minimization based algorithms. The Lp-minimization based algorithms require less computation than the SL0 algorithm. In Chapter 3, the reconstruction of sparse signals and images from noisy measurements is investigated. First, two algorithms for the reconstruction of signals are developed by minimizing an Lp-pseudonorm regularized squared error as the objective function using the sequential optimization procedure developed in Chapter 2. The first algorithm minimizes the objective function by taking steps along descent directions that are computed in the null space of the measurement matrix and its complement space. The second algorithm minimizes the objective function in the time domain by using a CG algorithm.
Second, the well-known total variation (TV) norm is extended to a nonconvex version called the TVp pseudonorm, and an algorithm for the reconstruction of images is developed that minimizes a TVp-pseudonorm regularized squared error using a sequential Fletcher-Reeves CG algorithm. Simulation results are presented which demonstrate that the first two algorithms yield improved signal reconstruction performance relative to the IR, SL0, and L1-minimization based algorithms and require a reduced amount of computation relative to the IR and L1-minimization based algorithms. The TVp-minimization based algorithm yields improved image reconstruction performance and a reduced amount of computation relative to Romberg's algorithm. In Chapter 4, the reconstruction of so-called block-sparse signals is investigated. The L2/1 norm is extended to a nonconvex version, called the L2/p pseudonorm, and an algorithm based on the minimization of an L2/p-pseudonorm regularized squared error is developed. The minimization is carried out using a sequential Fletcher-Reeves CG algorithm and the line search described in Chapter 2. A reweighting technique for the reduction of the amount of computation, and a method that uses prior information about the locations of nonzero blocks to improve signal reconstruction performance, are also proposed. Simulation results are presented which demonstrate that the proposed algorithm yields improved reconstruction performance and requires a reduced amount of computation relative to the L2/1-minimization based, block orthogonal matching pursuit, IR, and L1-minimization based algorithms. / Graduate
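An Lp pseudonorm with p < 1 can be minimized, for example, by iteratively reweighted least squares, where the weights come from an eps-smoothed pseudonorm. This FOCUSS-style sketch is only a stand-in for the quasi-Newton and conjugate-gradient algorithms developed in the thesis, with all dimensions invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k, p = 64, 20, 3, 0.5

A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = [1.5, -1.0, 0.6]   # sparse ground truth
y = A @ x0                                               # noise-free measurements

# IRLS: each step solves a weighted least-norm problem whose weights
# approximate the gradient of the eps-smoothed Lp pseudonorm
x = A.T @ np.linalg.solve(A @ A.T, y)    # start from the min-L2 solution
eps = 1.0
for _ in range(60):
    W = np.diag((x**2 + eps) ** (1 - p / 2))   # inverse Lp weights
    x = W @ A.T @ np.linalg.solve(A @ W @ A.T, y)
    eps = max(eps / 2, 1e-8)                   # anneal the smoothing

err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

As eps shrinks, the weighted L2 problem increasingly penalizes small entries, driving the iterate toward the sparse solution that L1 minimization would also find, but typically from fewer measurements.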
25

Design of Low-Power Front End Compressive Sensing Circuitry and Energy Harvesting Transducer Modeling for Self-Powered Motion Sensor

Kakaraparty, Karthikeya Anil Kumar 08 1900 (has links)
Compressed sensing (CS) is an innovative signal processing approach that facilitates sub-Nyquist processing of bio-signals such as neural signals, electrocardiograms (ECG), and electroencephalograms (EEG). This strategy can be used to lower the data rate to achieve ultra-low-power performance. As the number of recording channels increases, the data volume grows, resulting in prohibitive transmission power. This thesis presents the implementation of a CMOS-based front-end design with CS in a standard 180 nm CMOS process. A novel pseudo-random sequence generator is proposed, which consists of two different types of D flip-flops used to obtain a completely random sequence. This thesis also includes a reverse electrowetting-on-dielectric (REWOD) based energy harvesting model for a self-powered bio-sensor, which utilizes the electrical energy generated by converting mechanical energy into electrical energy. This REWOD-based energy harvesting model can be a good alternative to batteries, particularly for bio-wearable applications. The voltage, current, and capacitance results of the rough-surface model are compared with those of the planar-surface REWOD.
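The thesis's two-flip-flop generator is not detailed in this abstract. As a stand-in, the classic way to build a pseudo-random sequence from D flip-flops is a linear-feedback shift register (LFSR), sketched here in Python with the standard maximal-length tap set (8, 6, 5, 4) for an 8-bit register:

```python
def lfsr_step(state, taps=(7, 5, 4, 3), nbits=8):
    """One clock of a Fibonacci LFSR. Each bit of `state` models a D flip-flop;
    the feedback bit is the XOR of the tapped flip-flop outputs."""
    bit = 0
    for t in taps:
        bit ^= (state >> t) & 1
    return ((state << 1) | bit) & ((1 << nbits) - 1)
```

With primitive feedback taps the register cycles through all 2^8 - 1 nonzero states before repeating, which is what makes the resulting bit stream look random enough for CS measurement circuitry.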
26

Sparse signal processing techniques and applications to telecommunication problems

Μπερμπερίδης, Δημήτρης 08 January 2013 (has links)
This thesis is divided into two parts. The first part studies compressed sensing: the text focuses on the key points of the theory of reconstructing sparse signals from few measurements and reviews the reconstruction techniques. The second part presents the results of research on specific reconstruction problems.
27

Reconstruction-free Inference from Compressive Measurements

January 2015 (has links)
abstract: As a promising solution to the problem of acquiring and storing large amounts of image and video data, spatial-multiplexing camera architectures have received a lot of attention in the recent past. Such architectures have the attractive feature of combining the two-step process of acquisition and compression of pixel measurements in a conventional camera into a single step. A popular variant is the single-pixel camera, which obtains measurements of the scene using a pseudo-random measurement matrix. Advances in compressive sensing (CS) theory in the past decade have supplied the tools that, in theory, allow near-perfect reconstruction of an image from these measurements even at sub-Nyquist sampling rates. However, current state-of-the-art reconstruction algorithms suffer from two drawbacks: they are (1) computationally very expensive and (2) incapable of yielding high-fidelity reconstructions at high compression ratios. In computer vision, the final goal is usually to perform an inference task using the acquired images, not signal recovery. With this motivation, this thesis considers the possibility of inference directly from compressed measurements, thereby obviating the need for expensive reconstruction algorithms. Non-linear features are often used for inference tasks in computer vision, but it is currently unclear how to extract such features from compressed measurements. Instead, using the theoretical basis provided by the Johnson-Lindenstrauss lemma, discriminative features using smashed correlation filters are derived, and it is shown that it is indeed possible to perform reconstruction-free inference at high compression ratios with only a marginal loss in accuracy. As a specific inference problem in computer vision, face recognition is considered, mainly beyond the visible spectrum, such as in the short-wave infrared (SWIR) region, where sensors are expensive.
/ Dissertation/Thesis / Masters Thesis Electrical Engineering 2015
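The Johnson-Lindenstrauss argument behind smashed correlation filters is that random projections approximately preserve inner products, so correlation scores can be computed directly in the measurement domain. A numerical sketch with made-up dimensions (not the thesis's actual filters or data):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4096, 256                                  # image size, measurement count

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
image = rng.standard_normal(n)                    # stand-in "image"
filt = 0.8 * image + 0.6 * rng.standard_normal(n) # correlated "filter"

full = image @ filt                               # correlation in pixel space
smashed = (Phi @ image) @ (Phi @ filt)            # correlation from measurements

rel = abs(smashed - full) / abs(full)
```

The relative error of the smashed correlation scales roughly like 1/sqrt(m), so even a 16x compression leaves correlation-based scores usable for recognition, which is the premise of reconstruction-free inference.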
28

Compressive Visual Question Answering

January 2017 (has links)
abstract: Compressive sensing theory makes it possible to sense and reconstruct signals/images at sampling rates below the Nyquist rate. Applications in resource-constrained environments stand to benefit from this theory, which at the same time opens up many possibilities for new applications. The traditional inference pipeline for computer vision involves first reconstructing the image from compressive measurements. However, the reconstruction process is a computationally expensive step that also yields poor results at high compression rates. There have been several successful attempts to perform inference tasks, such as activity recognition, directly on compressive measurements. In this thesis, I tackle a more challenging vision problem, visual question answering (VQA), without reconstructing the compressive images. I investigate the feasibility of this problem with a series of experiments, evaluate the proposed methods on a VQA dataset, and discuss promising results and directions for future work. / Dissertation/Thesis / Masters Thesis Computer Engineering 2017
29

ELASTIC NET FOR CHANNEL ESTIMATION IN MASSIVE MIMO

Peken, Ture, Tandon, Ravi, Bose, Tamal 10 1900 (has links)
Next-generation wireless systems will support higher data rates, improved spectral efficiency, and lower latency. Massive multiple-input multiple-output (MIMO) has been proposed to satisfy these demands. In massive MIMO, many benefits come from employing hundreds of antennas at the base station (BS) and serving dozens of user terminals (UTs) per cell. As the number of antennas at the BS increases, the channel becomes sparse. By exploiting this channel sparsity, compressive sensing (CS) methods can be used to estimate the channel, and the pilot sequences can be shorter than in conventional pilot-based methods. In this paper, a novel channel estimation algorithm based on a CS method called the elastic net is proposed. The channel estimation accuracy of pilot-based, lasso-based, and elastic-net based methods in massive MIMO is compared. It is shown that the elastic-net based method yields the lowest estimation error when fewer pilot symbols are used, across a range of SNR values.
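The elastic net combines l1 and l2 penalties on the channel estimate; a minimal proximal-gradient sketch on a simulated sparse channel shows the kind of estimator the paper builds on. The pilot matrix, penalty weights, and noise level are invented for illustration and are not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, k = 40, 100, 5             # pilot measurements, channel dim, sparse paths

A = rng.standard_normal((M, N)) / np.sqrt(M)     # pilot measurement matrix
h = np.zeros(N)
h[rng.choice(N, k, replace=False)] = [1.2, -0.8, 0.6, 1.0, -0.5]
y = A @ h + 0.01 * rng.standard_normal(M)        # noisy received pilots

# Proximal gradient for the elastic net:
#   min_x 0.5*||y - A x||^2 + l1*||x||_1 + 0.5*l2*||x||^2
l1, l2 = 0.02, 0.01
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + l2)
x = np.zeros(N)
for _ in range(1000):
    g = x - step * (A.T @ (A @ x - y) + l2 * x)            # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * l1, 0)  # soft threshold

err = np.linalg.norm(x - h) / np.linalg.norm(h)
```

Here the l2 term is folded into the smooth objective, so only the l1 part needs a proximal step (the soft threshold); setting l2 = 0 recovers plain lasso/ISTA, while l2 > 0 stabilizes the estimate when pilot columns are correlated.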
30

Linearized inversion frameworks toward high-resolution seismic imaging

Aldawood, Ali 09 1900 (has links)
Seismic exploration utilizes controlled sources, which emit seismic waves that propagate through the earth's subsurface and are reflected off subsurface interfaces and scatterers. The reflected and scattered waves are recorded by stations installed along the earth's surface or down boreholes. Seismic imaging is a powerful tool to map this reflected and scattered energy back to its subsurface scattering or reflection points. Seismic imaging is conventionally based on the single-scattering assumption, where only energy that bounces once off a subsurface scatterer and is recorded by a receiver is projected back to its subsurface position. The internally multiply-scattered seismic energy is considered unwanted noise and is usually suppressed or removed from the recorded data. Conventional seismic imaging techniques yield subsurface images that suffer from low spatial resolution, migration artifacts, and acquisition fingerprint due to the limited acquisition aperture, the number of sources and receivers, and the bandwidth of the source wavelet. Hydrocarbon traps are becoming more challenging, and considerable reserves are trapped in stratigraphic and pinch-out traps, which require highly resolved seismic images to delineate them. This thesis focuses on developing and implementing new, advanced, cost-effective seismic imaging techniques aimed at enhancing the resolution of migrated images by exploiting the sparseness of the subsurface reflectivity distribution and utilizing the multiples that are usually neglected when imaging seismic data. I first formulate the seismic imaging problem as a basis-pursuit denoising problem, which I solve using an L1-minimization algorithm to obtain the sparsest migrated image corresponding to the recorded data. Imaging multiples may illuminate subsurface zones that are not easily illuminated by conventional seismic imaging using primary reflections only. I then develop an L2-norm (i.e., least-squares) inversion technique to image internally multiply-scattered seismic waves and obtain highly resolved images delineating vertical faults that are otherwise not easily imaged with primaries. Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples into surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data. I further advance the recently developed cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes compared with the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.
