About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A framework for fast and efficient algorithms for sparse recovery problems / CUHK electronic theses & dissertations collection

January 2015
The sparse recovery problem aims to reconstruct a high-dimensional sparse signal from low-dimensional measurements obtained through a carefully designed measurement process. This thesis presents a framework for graphical-model-based sparse recovery algorithms, where different measurement processes give rise to different problems. The sparse recovery problems studied in this thesis are compressive sensing, network tomography, group testing, and compressive phase retrieval. For compressive sensing and network tomography the measurement processes are linear (freely chosen and topology-constrained measurements, respectively); for group testing and compressive phase retrieval they are non-linear (disjunctive and intensity measurements, respectively). For all of these problems, we present algorithms whose measurement structures are based on bipartite graphs. By studying the properties of bipartite graphs and designing novel measurement processes with corresponding decoding algorithms, we obtain algorithms whose number of measurements and decoding complexity are information-theoretically order-optimal or nearly order-optimal. / Cai, Sheng. / Thesis (Ph.D.)--Chinese University of Hong Kong, 2015. / Includes bibliographical references (leaves 229-247). / Abstracts also in Chinese. / Title from PDF title page (viewed on 5 October 2016).
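The abstract's linear measurement setting can be made concrete with a generic sparse recovery sketch. The thesis's own algorithms are graph-based and order-optimal; the snippet below instead uses plain Orthogonal Matching Pursuit with a dense Gaussian measurement matrix, purely to illustrate the problem of recovering a sparse x from y = A @ x (all dimensions and names here are illustrative, not taken from the thesis).

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily estimate a k-sparse x from y = A @ x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                              # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)     # dense Gaussian measurement matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x                                        # low-dimensional measurements
x_hat = omp(A, y, k)                             # recovered sparse signal
```

With 32 Gaussian measurements of a 3-sparse signal in 64 dimensions, this greedy recovery succeeds with overwhelming probability, which is the phenomenon the graph-based designs in the thesis push to order-optimal measurement and decoding cost.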
2

Time-domain Compressive Beamforming for Medical Ultrasound Imaging

David, Guillaume January 2016 (has links)
Over the past 10 years, Compressive Sensing has gained considerable visibility in the medical imaging research community. Its most compelling feature is the ability to perfectly reconstruct under-sampled signals using ℓ1-minimization. Of course, that counter-intuitive feature has a cost: the missing information is compensated for by a priori knowledge of the signal, under certain mathematical conditions. This technology is currently used in some commercial MRI scanners to increase the acquisition rate, decreasing discomfort for the patient while increasing patient turnover. For echography, applications could range from fast 3D echocardiography to simplified, cheaper echography systems. Real-time ultrasound imaging scanners have been available for nearly 50 years. During those 50 years, much has changed in their architecture, electronics, and technologies; however, one component remains: the beamformer. From analog to software beamformers, the technology has evolved and brought much diversity to the world of beam formation. Currently, most commercial scanners probe tissue with several focalized ultrasonic pulses. The time between two consecutive focalized pulses is not compressible, limiting the frame rate: one must wait for a pulse to propagate back and forth between the probe and the deepest point imaged before firing a new pulse. In this work, we outline the development of a novel software beamforming technique that uses Compressive Sensing. Time-domain Compressive Beamforming (t-CBF) uses computational models and regularization to reconstruct de-cluttered ultrasound images. One of the main features of t-CBF is its use of only one transmit wave to insonify the tissue. Single-wave imaging brings high frame rates to the modality, for example allowing a physician to see precisely the movements of the heart walls or valves during a heart cycle.
t-CBF takes into account the geometry of the probe as well as its physical parameters to improve resolution and attenuate artifacts commonly seen in single-wave imaging such as side lobes. In this thesis, we define a mathematical framework for the beamforming of ultrasonic data compatible with Compressive Sensing. Then, we investigate its capabilities on simple simulations in terms of resolution and super-resolution. Finally, we adapt t-CBF to real-life ultrasonic data. In particular, we reconstruct 2D cardiac images at a frame rate 100-fold higher than typical values.
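The ℓ1-minimization reconstruction that underpins Compressive Sensing can be sketched with a generic solver. The snippet below is not t-CBF; it is a minimal Iterative Shrinkage-Thresholding (ISTA) loop for the lasso objective 0.5‖Ax − y‖² + λ‖x‖₁, with made-up dimensions, showing how an under-sampled linear measurement can still yield the sparse signal.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """ISTA for the lasso: minimize 0.5*||A@x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the least-squares term
        z = x - grad / L                   # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
m, n, k = 40, 128, 4                       # under-sampled: 40 measurements, 128 unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = np.array([1.0, -1.0, 2.0, 1.5])
x_rec = ista(A, A @ x_true)                # sparse reconstruction from y = A @ x_true
```

In t-CBF, the role of A is played by a computational model of single-wave propagation through the probe geometry; the sketch above only illustrates the regularized inversion step.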
3

Interpretable Machine Learning and Sparse Coding for Computer Vision

Landecker, Will 01 August 2014
Machine learning offers many powerful tools for prediction. One of these tools, the binary classifier, is often considered a black box: although its predictions may be accurate, we might never know why the classifier made a particular prediction. In the first half of this dissertation, I review the state of the art in interpretable methods (methods for explaining why); after noting where the existing methods fall short, I propose a new method for a particular type of black box called additive networks. I offer a proof of trustworthiness for this new method (that is, a proof that it does not "make up" the logic of the black box when generating an explanation), and empirically verify that its explanations are sound. Sparse coding belongs to a family of methods that many researchers believe are not black boxes. In the second half of this dissertation, I review sparse coding and its application to the binary classifier. Although the goal of sparse coding is to reconstruct data (an entirely different goal from classification), many researchers note that it improves classification accuracy. I investigate this phenomenon, challenging a common assumption in the literature, and show empirically that sparse reconstruction is not necessarily the right intermediate goal when the ultimate goal is classification. Along the way, I introduce a new sparse coding algorithm that outperforms competing, state-of-the-art algorithms on a variety of important tasks.
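The reason additive models admit trustworthy explanations can be shown with a toy example: when a predictor decomposes as f(x) = Σⱼ gⱼ(xⱼ), each feature's contribution to a prediction is exact rather than approximated, so an explanation cannot "make up" logic. The component functions below are hypothetical, and this is a generic additive-model sketch, not the dissertation's method for additive networks.

```python
# A toy additive model f(x) = g0(x0) + g1(x1) + g2(x2).  For such models the
# prediction decomposes exactly into per-feature contributions, so an
# explanation attributes the output faithfully by construction.
def g0(v): return 2.0 * v       # hypothetical per-feature component functions
def g1(v): return v ** 2
def g2(v): return -0.5 * v

components = [g0, g1, g2]

def predict(x):
    return sum(g(v) for g, v in zip(components, x))

def explain(x):
    """Per-feature contributions; by construction they sum to the prediction."""
    return [g(v) for g, v in zip(components, x)]

x = [1.0, 3.0, 2.0]
contrib = explain(x)            # [2.0, 9.0, -1.0]
assert sum(contrib) == predict(x)
```

Here the second feature dominates the prediction (contribution 9.0), and that attribution is exact, which is the property a proof of trustworthiness formalizes.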
4

A Flexible RFIC Architecture for High-Sensitivity Reception and Compressed-Sampling Wideband Detection

Haque, Tanbir January 2019 (has links)
Compressed sensing (CS) is a signal processing approach that has disrupted design methodologies based on the Shannon-Nyquist limit and opened promising avenues for building energy-efficient radio frequency integrated circuits (RFICs) that detect and estimate particular classes of signals, namely sparse ones. Whether in application domains where naturally occurring signals are sparse, or where signal representations subject to the fidelity limits or configuration settings of the radio equipment are often found to be sparse, the emergence of CS has forced us to re-imagine the radio receiver. While realizing some of the potential benefits promised by theory, the CS-RFIC architectures proposed in earlier research were not particularly suitable for mass-market applications. This thesis demonstrates how to take a new signal processing technique all the way to the hardware level. So far, the main focus in the literature has been on how CS benefits signal processing; this work shows how CS techniques drive novel architectures down to the integrated circuit level, which requires close collaboration between communication system developers, integrated circuit designers, and signal processing experts. The trans-disciplinary approach presented here has led to the unification of CS-inspired architectures for wideband signal detection with robust, legacy architectures for high-sensitivity signal reception. The result is a functionally flexible and rapidly reconfigurable CMOS RFIC, compactly implemented on silicon, with the potential to achieve the cost, size, and power targets of mass-market applications. While the focus of this thesis is RF signal finding and reception in frequency, the CS-based RFIC design approach presented here is applicable to a wide range of other applications such as direction-of-arrival estimation and range finding. We begin by developing a signal-model-driven approach for optimizing the performance of CS RF frontends (RFFEs).
We consider sparse multiband signals with supports contained within a frequency span extending from fMIN to fMAX. The resulting quadrature analog-to-information converter (QAIC) is a flexible-bandwidth, blind sub-Nyquist sampling architecture optimized for energy consumption and sensitivity performance. The QAIC addresses key drawbacks of earlier CS RFFE architectures like the modulated wideband converter (MWC) that implement frequency spans extending from 0 to fMAX. While these earlier architectures, a direct implementation of CS signal processing theory, have several beneficial properties, the true cost of their proposed analog frontend significantly diminishes the sensitivity performance and energy savings that CS methods have the potential to deliver. They use periodic pseudo-random bit sequence (PRBS) generators where the clock frequency fPRBS scales up with the maximum signal frequency fMAX. In contrast, fPRBS in the QAIC RFFE scales up with the instantaneous bandwidth IBW, where IBW = ( fMAX − fMIN ). This results in significant performance advantages in terms of energy consumption and sensitivity performance. The QAIC uncouples fPRBS from fMAX by performing wideband quadrature downconversion ahead of analog mixing with PRBSs at an intermediate frequency (IF). However, the dual heterodyne architecture of the QAIC suffers from spurious responses at IF caused by gain and phase imbalance in its wideband downconverter. We then show how the direct RF-to-information converter (DRF2IC) compactly adds CS wideband detection to a direct conversion frequency-translational noise-cancelling (FTNC) receiver by introducing pseudo-random modulation of the local oscillator (LO) signals and by consolidating multiple CS measurements into one hardware branch. The DRF2IC inherits benefits of the FTNC receiver in signal reception mode. 
In CS wideband detection mode, the DRF2IC inherits key advantages from both the earlier lowpass CS architectures and the QAIC while avoiding the drawbacks of both. It uncouples fPRBS from fMAX, in contrast with the MWC. In contrast with the QAIC, the DRF2IC employs a direct-conversion RF chain with narrow-bandwidth analog components at baseband, thereby avoiding frequency-dependent gain and phase imbalance. The DRF2IC chip occupies 0.56 mm² in 65 nm CMOS. In reception mode, it consumes 46.5 mW from 1.15 V and delivers 40 MHz RF bandwidth, 41.5 dB conversion gain, 3.6 dB noise figure (NF) and a -2 dBm blocker 1 dB compression point (B1dB). In CS wideband detection mode, 66 dB operational dynamic range, 40 dB instantaneous dynamic range and 1.43 GHz instantaneous bandwidth are demonstrated, and 6 interferers, each 10 MHz wide and scattered over a 1.27 GHz span, are detected in 1.2 µs while consuming 58.5 mW.
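The fPRBS scaling argument can be checked with a quick back-of-the-envelope computation. The frequency span below is hypothetical, not from the thesis; the point is only that an MWC-style frontend clocks its PRBS at fMAX while the QAIC clocks it at IBW = fMAX − fMIN.

```python
# Hypothetical spectrum-sensing scenario: the signals of interest occupy a
# span from 2.4 GHz to 2.9 GHz.
f_min, f_max = 2.4e9, 2.9e9
ibw = f_max - f_min                  # instantaneous bandwidth: 500 MHz

f_prbs_mwc = f_max                   # MWC-style frontend: PRBS clock tracks f_MAX
f_prbs_qaic = ibw                    # QAIC: PRBS clock tracks IBW only

ratio = f_prbs_mwc / f_prbs_qaic     # relative PRBS clock speed-up avoided by the QAIC
```

In this made-up scenario the QAIC's PRBS runs 5.8x slower than an MWC-style design's, which is the source of its energy and sensitivity advantage when the band of interest sits high in frequency but is narrow.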
5

Numerical algorithms for the mathematics of information

Mendoza-Smith, Rodrigo January 2017 (has links)
This thesis presents a series of algorithmic innovations in Combinatorial Compressed Sensing and Persistent Homology. The unifying strategy across these contributions is to translate structural patterns in the underlying data into specific algorithmic designs, in order to achieve better guarantees on computational complexity, the ability to operate on more complex data, highly efficient parallelisations, or any combination of these.
