21

LDPC Codes over Large Alphabets and Their Applications to Compressed Sensing and Flash Memory

Zhang, Fan, August 2010
This dissertation focuses on the analysis, design and optimization of low-density parity-check (LDPC) codes over channels with large alphabet sets and their applications to compressed sensing (CS) and flash memory. Compared to belief-propagation (BP) decoding, verification-based (VB) decoding has significantly lower complexity and near-optimal performance when the channel alphabet set is large. We analyze the verification-based decoding of LDPC codes over the q-ary symmetric channel (q-SC) and propose list-message-passing (LMP) decoding, which offers a good tradeoff between complexity and decoding threshold. We prove that LDPC codes with LMP decoding achieve the capacity of the q-SC as q and the block length go to infinity. CS is a newly emerging area closely related to coding theory and information theory. CS deals with the sparse signal recovery problem using a small number of linear measurements. One big challenge in the CS literature is to reduce the number of measurements required to reconstruct the sparse signal. In this dissertation, we show that LDPC codes with verification-based decoding can be applied to CS systems with surprisingly good performance and low complexity. We also discuss the design of modulation codes and error-correcting codes (ECCs) for flash memories. We design asymptotically optimal modulation codes and discuss their improvement using ideas from load-balancing theory. We also design LDPC codes over integer rings and fields with large alphabet sets for flash memories.
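As an illustration of why verification works so well over large alphabets, here is a minimal sketch (not the dissertation's actual decoder) of the q-ary symmetric channel: two independently corrupted copies of a symbol agree on a wrong value only with probability about p²/(q−1), so agreement can safely be treated as verification when q is large. The alphabet size, error probability, and block length below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# q-ary symmetric channel: with probability p a symbol is replaced by a
# uniformly random *different* symbol from {0, ..., q-1}
q, p, n = 2**24, 0.1, 50_000
x = rng.integers(0, q, n)                    # transmitted symbols

def qsc(sent):
    flip = rng.random(n) < p
    offset = rng.integers(1, q, n)           # nonzero offset => a different symbol
    return np.where(flip, (sent + offset) % q, sent)

y1, y2 = qsc(x), qsc(x)                      # two independent channel uses

# Verification rule: declare a value correct when the independent copies agree.
# A false verification requires both copies corrupted to the SAME wrong value,
# which happens with probability ~ p^2 / (q - 1): negligible for large q.
verified = y1 == y2
false_verifications = int(np.sum(verified & (y1 != x)))
```

With p = 0.1, roughly (1−p)² ≈ 81% of positions are verified per pair of uses, and essentially none of them falsely.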
22

Source and Channel Coding for Compressed Sensing and Control

Shirazinia, Amirpasha, January 2014
Rapid advances in sensor technologies have fueled massive torrents of data streaming across networks. Such large volumes of information restrict the operational performance of data processing, causing inefficiency in sensing, computation, communication and control. Hence, classical data processing techniques need to be re-analyzed and re-designed before being applied to modern networked data systems. This thesis aims to understand and characterize fundamental principles and interactions in and among sensing, compression, communication, computation and control in networked data systems. In this regard, the thesis investigates four problems. The common theme is the design and analysis of optimized low-delay transmission strategies with affordable complexity for reliable communication of acquired data over networks, with the objective of providing high quality of service for users. The first three problems use an emerging framework for data acquisition, namely compressed sensing, which performs acquisition and compression simultaneously. The first problem considers the design of iterative encoding schemes, based on scalar quantization, for transmission of compressed sensing measurements over rate-limited links. Our approach is based on an analysis-by-synthesis principle: the non-linearity in reconstruction introduced by compressed sensing is reflected, via synthesis, in choosing the best quantized value for encoding, via analysis. Our design shows significantly better reconstruction performance than schemes that only consider direct quantization of compressed sensing measurements. In the second problem, we investigate the design and analysis of encoding-decoding schemes, based on vector quantization, for transmission of compressed sensing measurements over rate-limited noisy links. To do so, we take an approach adapted from the joint source-channel coding framework.
We show that the performance of the studied system can approach the fundamental theoretical bound by optimizing the encoder-decoder pair. The price, however, is increased complexity at the encoder. To address the encoding complexity of the vector quantizer, we propose a low-complexity multi-stage vector quantizer whose optimized design shows promising performance. In the third problem, we take one step further and study joint source-channel coding schemes, based on vector quantization, for distributed transmission of compressed sensing measurements over noisy rate-limited links. We design optimized distributed coding schemes and analyze theoretical bounds for this topology. Under certain conditions, our results reveal that the performance of the optimized schemes approaches the analytical bounds. In the last problem, in the context of control under communication constraints, we bring the notion of system dynamics into the picture. In particular, we study relations among stability in networked control systems, the performance of real-time coding schemes, and coding complexity. For this purpose, we take approaches adapted from separate source-channel coding and derive theoretical bounds on the performance of two types of coding schemes: dynamic repetition codes and dynamic Fountain codes. We show analytically and numerically that dynamic Fountain codes over binary-input symmetric channels, with belief propagation decoding, can provide stability in a networked control system. The results in the thesis demonstrate that substantial performance gains are feasible by employing tools from communication and information theory in control and sensing. The insights offered through the design and analysis also reveal fundamental pieces of the real-world networked-data puzzle.
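The analysis-by-synthesis principle can be sketched in a few lines: instead of quantizing each measurement to its nearest level, the encoder tries every candidate level, runs the decoder's reconstruction, and keeps the level whose synthesized reconstruction is closest to the source. In this toy sketch a min-norm pseudo-inverse stands in for the nonlinear CS reconstruction, and all dimensions and the codebook are illustrative assumptions, not the thesis's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 6                                   # toy: 6 measurements of an 8-sample source
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y = A @ x
levels = np.linspace(-3, 3, 16)               # scalar quantizer codebook (assumed)

# Direct approach: quantize each measurement to its nearest codebook level
y_direct = levels[np.argmin(np.abs(y[:, None] - levels[None, :]), axis=1)]

# Stand-in decoder: min-norm reconstruction instead of the nonlinear CS recovery
decode = lambda yq: np.linalg.pinv(A) @ yq

# Analysis-by-synthesis: revisit each measurement, try every level, and keep
# the one whose *synthesized reconstruction* is closest to the source x
y_abs = y_direct.copy()
for i in range(m):
    errs = [np.sum((x - decode(np.where(np.arange(m) == i, lv, y_abs))) ** 2)
            for lv in levels]
    y_abs[i] = levels[int(np.argmin(errs))]

err_direct = np.sum((x - decode(y_direct)) ** 2)
err_abs = np.sum((x - decode(y_abs)) ** 2)
```

By construction the analysis-by-synthesis pass can never do worse than direct quantization, since the current level is always among the candidates it evaluates.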
23

Improved compressed sensing algorithm for sparse-view CT

October 2013
In computed tomography (CT) there are many situations where reconstruction must be performed with sparse-view data. In sparse-view CT imaging, strong streak artifacts may appear in conventionally reconstructed images due to the limited sampling rate, compromising image quality. Compressed sensing (CS) algorithms have shown potential to accurately recover images from highly undersampled data. In the past few years, total variation (TV)-based compressed sensing algorithms have been proposed to suppress streak artifacts in CT image reconstruction. In this work, we formulate the problem of CT imaging under transform-sparsity and sparse-view constraints, and propose a novel compressed sensing-based algorithm for CT image reconstruction from few-view data, in which we simultaneously minimize the ℓ1 norm, total variation and a least-squares measure. The main feature of our algorithm is the use of two sparsity transforms: the discrete wavelet transform and the discrete gradient transform, both proven to be powerful sparsity transforms. Experiments with simulated and real projections were performed to evaluate and validate the proposed algorithm. The reconstructions using the proposed approach have fewer streak artifacts and lower reconstruction errors than other conventional methods.
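The composite objective described above — a least-squares data fit plus an ℓ1 penalty on wavelet coefficients plus total variation — can be sketched on a 1-D toy "phantom" with plain subgradient descent. The dimensions, penalty weights, random sensing matrix, and single-level Haar transform below are illustrative assumptions, not the thesis's actual CT system matrix or solver.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 32, 16                      # 32-sample signal, 16 linear measurements
x_true = np.zeros(n); x_true[8:20] = 1.0        # piecewise-constant "phantom"
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

def haar(x):                       # single-level orthonormal Haar DWT
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def ihaar(c):                      # inverse Haar (= transpose, since orthonormal)
    a, d = c[:n // 2], c[n // 2:]
    x = np.empty(n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

lam_w, lam_tv = 0.01, 0.05         # illustrative penalty weights
def objective(x):
    return (np.sum((A @ x - b) ** 2)
            + lam_w * np.sum(np.abs(haar(x)))      # wavelet-domain l1
            + lam_tv * np.sum(np.abs(np.diff(x)))) # total variation

x, x_best = np.zeros(n), np.zeros(n)
for _ in range(3000):              # plain subgradient descent, fixed small step
    s = np.sign(np.diff(x))
    g_tv = np.concatenate([[-s[0]], s[:-1] - s[1:], [s[-1]]])
    g = 2 * A.T @ (A @ x - b) + lam_w * ihaar(np.sign(haar(x))) + lam_tv * g_tv
    x = x - 0.02 * g
    if objective(x) < objective(x_best):
        x_best = x                 # keep the best iterate seen so far
```

A production solver would use a proximal or split-Bregman method rather than raw subgradient steps; this only illustrates the shape of the joint objective.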
24

Metamaterials for Computational Imaging

Hunt, John, January 2013
Metamaterials extend the design space, flexibility, and control of optical material systems and so yield fundamentally new computational imaging systems. A computational imaging system relies heavily on the design of measurement modes, and metamaterials provide a great deal of control over the generation of the measurement modes of an aperture. On the other side of the coin, computational imaging uses the data that can be measured by an imaging system, which may be limited, in an optimal way, thereby producing the best possible image within the physical constraints of the system. The synergy of these two technologies - metamaterials and computational imaging - allows for entirely novel imaging systems. These contributions are realized in the concept of a frequency-diverse metamaterial imaging system presented in this thesis. This 'metaimager' uses the same electromagnetic flexibility that metamaterials have shown in many other contexts to construct an imaging aperture suitable for single-pixel operation that can measure arbitrary measurement modes, constrained only by the size of the aperture and its resonant elements. It has no lenses, no moving parts, a small form factor, and is low-cost.

In this thesis we present an overview of work done by the author in the area of metamaterial imaging systems. We first discuss novel transformation-optical lenses enabled by metamaterials, which demonstrate the electromagnetic flexibility of metamaterials. We then introduce the theory of computational and compressed imaging in the language of Fourier optics, and derive the forward model needed to apply computational imaging to the metaimager system. We describe the details of the metamaterials used to construct the metaimager and their application to metamaterial antennas. The experimental tools needed to characterize the metaimager, including far-field and near-field antenna characterization, are described. We then describe the design, operation, and characterization of a one-dimensional metaimager capable of collecting two-dimensional images, and then a two-dimensional metaimager capable of collecting two-dimensional images. The imaging results for the one-dimensional metaimager are presented, including two-dimensional (azimuth and range) images of point scatterers and video-rate imaging. The imaging results for the two-dimensional metaimager are presented, including analysis of the system's resolution, signal-to-noise sensitivity, and acquisition rate, imaging of human targets, and integration of optical and structured-light sensors. Finally, we discuss explorations into methods of tuning metamaterial radiators, which could significantly increase the capabilities of such a metaimaging system, and describe several systems designed for the integration of tuning into metamaterial imaging systems.
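A minimal sketch of the computational-imaging forward model underlying such a frequency-diverse aperture: each frequency produces one measurement mode (one row of H) and one complex detector reading, and the scene is estimated from g = Hf + n. The random complex modes, sparse scene, and Tikhonov-regularized inversion below are illustrative assumptions, not the actual metaimager modes or reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

n_modes, n_pix = 40, 100       # 40 frequency points -> 40 measurement modes
# Each row of H is one (random, assumed) complex measurement mode of the aperture
H = rng.standard_normal((n_modes, n_pix)) + 1j * rng.standard_normal((n_modes, n_pix))
f = np.zeros(n_pix)
f[[12, 47, 80]] = 1.0          # sparse scene reflectivity (illustrative)
noise = 0.01 * (rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes))
g = H @ f + noise              # forward model: one reading per frequency point

# Tikhonov-regularized inversion of the underdetermined system g = Hf
alpha = 0.1
f_hat = np.real(H.conj().T @ np.linalg.solve(
    H @ H.conj().T + alpha * np.eye(n_modes), g))
```

With far fewer modes than pixels the inversion is underdetermined, which is exactly why compressive priors (sparsity, TV) are brought in for the real system.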
25

Improving SLI Performance in Optically Challenging Environments

Dedrick, Eric, 01 January 2011
The construction of 3D models of real-world scenes using non-contact methods is an important problem in computer vision. Some of the more successful methods belong to a class of techniques called structured light illumination (SLI). While SLI methods are generally very successful, there are cases where their performance is poor. Examples include scenes with a high dynamic range in albedo or scenes with strong interreflections. These scenes are referred to as optically challenging environments. The work in this dissertation is aimed at improving SLI performance in optically challenging environments. A new method of high dynamic range imaging (HDRI) based on pixel-by-pixel Kalman filtering is developed. Using objective metrics, it is shown to achieve as much as a 9.4 dB improvement in signal-to-noise ratio and as much as a 29% improvement in radiometric accuracy over a classic method. Quality checks are developed to detect and quantify multipath interference and other quality defects using phase measuring profilometry (PMP). Techniques are established to improve SLI performance in the presence of strong interreflections. Approaches in compressed sensing are applied to SLI, and interreflections in a scene are modeled using SLI. Several different applications of this research are also discussed.
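The pixel-by-pixel Kalman-filter idea for HDR fusion can be sketched as a scalar filter per pixel: each exposure yields a measurement z = t·radiance + noise, saturated pixels are skipped, and the Kalman gain weights each exposure by how informative it is. All numbers below (exposure times, noise level, saturation point, scene size) are illustrative assumptions, not the dissertation's calibrated camera model.

```python
import numpy as np

rng = np.random.default_rng(2)

true_radiance = rng.uniform(0.1, 10.0, size=(4, 4))   # tiny toy "scene"
exposures = [0.1, 1.0, 10.0]                          # assumed exposure times
noise_std, sat_level = 0.05, 2.0

x_hat = np.zeros_like(true_radiance)   # per-pixel state estimate (radiance)
P = np.full_like(true_radiance, 1e6)   # large initial variance (uninformative)
R = noise_std ** 2

for t in exposures:
    # measurement model z = t * radiance + noise, clipped at sensor saturation
    z = np.clip(true_radiance * t + rng.normal(0.0, noise_std, true_radiance.shape),
                0.0, sat_level)
    ok = z < sat_level                  # skip saturated pixels entirely
    S = t * P * t + R                   # innovation variance
    K = P * t / S                       # per-pixel scalar Kalman gain
    x_hat = np.where(ok, x_hat + K * (z - t * x_hat), x_hat)
    P = np.where(ok, (1.0 - K * t) * P, P)
```

Because the gain depends on the exposure time t, long exposures dominate the estimate for dim pixels while short exposures cover bright ones: the essence of HDR fusion.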
26

Compressed Sensing for 3D Laser Radar / Compressed Sensing för 3D Laserradar

Fall, Erik, January 2014
High-resolution 3D images are of great interest in military operations, where data can be used to classify and identify targets. The Swedish Defence Research Agency (FOI) is interested in the latest research and technologies in this area. A drawback of conventional 3D laser systems is the lack of high resolution for long-range measurements. One technique for high-resolution long-range laser radar is based on time-correlated single photon counting (TCSPC). By repetitively sending out short laser pulses and measuring the time of flight (TOF) of single reflected photons, extremely accurate range measurements can be made. A drawback of this method is that it is hard to create single-photon detectors with many pixels and high temporal resolution, so a single detector is used. Scanning an entire scene with one detector is very time consuming; instead, as this thesis investigates, the entire scene can be measured with fewer measurements than the number of pixels. To do this, a technique called compressed sensing (CS) is introduced. CS exploits the fact that signals are normally compressible and can be represented sparsely in some basis. CS imposes different requirements on the sampling than the classical Shannon-Nyquist sampling theorem. With a digital micromirror device (DMD), linear combinations of the scene can be reflected onto the single-photon detector, creating scalar intensity values as measurements. This means that fewer DMD patterns than the number of pixels can reconstruct the entire 3D scene. In this thesis, a computer model of the laser system helps to evaluate different CS reconstruction methods under different scenarios for the laser system and the scene. The results show how many measurements are required to reconstruct scenes properly and how the DMD patterns affect the results. CS proves to enable a great reduction, 85-95%, of the required measurements compared to a pixel-by-pixel scanning system.
Total variation minimization proves to be the best choice of reconstruction method.
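The single-pixel measurement scheme can be sketched as follows: each DMD pattern contributes one row of a sensing matrix A and one scalar detector reading, and far fewer patterns than pixels suffice to recover a sparse scene. Here ±1 patterns (realizable on a DMD as the difference of two binary masks) and a basic orthogonal matching pursuit recovery are illustrative assumptions; the thesis itself compares several reconstruction methods.

```python
import numpy as np

rng = np.random.default_rng(5)

n, m, k = 64, 32, 3                         # 64-pixel scene, 32 patterns, 3-sparse
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

# +/-1 patterns, realizable on a DMD as the difference of two binary masks
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x                                   # one scalar detector reading per pattern

# Orthogonal matching pursuit: greedily grow the support of the estimate
support, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))     # column most correlated with residual
    if j not in support:
        support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef            # residual after least-squares fit

x_hat = np.zeros(n)
x_hat[support] = coef
```

Here 32 detector readings stand in for a 64-pixel scan, a 50% reduction; the 85-95% figure above corresponds to much larger, more compressible scenes.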
27

Channelized facies recovery based on weighted sparse regularization

Calderón Amor, Hernán Alberto, January 2016
Understanding the phenomena of our planet is essential in various estimation and prediction problems, such as mining, hydrology and oil extraction. The main obstacle to solving inverse problems in geostatistics is the scarcity of data, which makes it impossible to build reliable statistical models. Consequently, additional information must be incorporated to estimate the variables of interest at unmeasured locations, for example by using training images. This thesis addresses the problem of spatial interpolation of channel structures based on sparse signal representation theory and multiple-point statistics. The work is inspired by the theory of compressed sensing (CS), which offers a new paradigm for signal acquisition and reconstruction, and by multiple-point simulation (MPS), a technique that provides realistic realizations of diverse geological structures. This thesis is motivated by these two approaches, exploring the fusion of both sources of information: geological-structural information and the decomposition of that structure in a transform domain. The main contribution of this work is the use of MPS algorithms to incorporate a priori information into the reconstruction algorithm, converting geological information into signal information. The MPS algorithm is used to estimate the support of the underlying structure, identifying the positions of the significant transform coefficients and generating a ranking of the elements of the DCT (discrete cosine transform) basis. This ranking is used to build a weight matrix, which imposes a particular structure directly in the reconstruction algorithm. The methodology is validated on three channel models. Regarding the results, first, several definitions of the weight matrix are studied to determine the best configuration. Second, a multiscale sparse-regularization approach is studied with the aim of improving on classical weighted l1-norm minimization; this validates the use of several reconstructions at different scale levels to reduce the artifacts induced in full-image reconstruction. Finally, the method is compared with various interpolation techniques. From this analysis, the proposed method outperforms conventional l1-norm regularization techniques, both weighted and unweighted, as well as the multiple-point algorithm used, validating the hypothesis on the complementarity of statistical-pattern information and signal structure. For each model, the method is able to infer the dominant channel structure even in scenarios with less than 1% of the data acquired. Finally, some possible areas of future research are identified: binary CS, given the binary nature of the models studied; greedy algorithms for extension to 3D analysis; and adaptive CS methods to solve the borehole-placement and signal-reconstruction problems simultaneously.
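The weighted-ℓ1 idea — penalizing DCT coefficients less where the MPS-derived ranking marks them as likely active — can be sketched with a weighted iterative soft-thresholding (ISTA) loop. The "ranking" below is simulated, and all sizes, weights and step sizes are illustrative assumptions rather than the thesis's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

n, m = 64, 24
i = np.arange(n)
# Orthonormal DCT-II matrix: rows are the basis elements (C @ C.T == I)
C = np.sqrt(2.0 / n) * np.cos(np.pi * (i[None, :] + 0.5) * i[:, None] / n)
C[0] /= np.sqrt(2.0)

z_true = np.zeros(n)
z_true[[2, 5, 11]] = [3.0, -2.0, 1.5]        # sparse DCT coefficients (the "facies")
A = rng.standard_normal((m, n)) / np.sqrt(m) # 24 measurements of a 64-sample field
B = A @ C.T                                  # sensing matrix in the coefficient domain
y = B @ z_true

# Weights: small where the (simulated) MPS-derived ranking marks a coefficient
# as likely active, large elsewhere
w = np.ones(n)
w[[2, 5, 11, 20]] = 0.1

lam, step = 0.05, 0.05
z = np.zeros(n)
for _ in range(1000):                        # weighted ISTA
    z = z - step * (B.T @ (B @ z - y))       # gradient step on 0.5*||Bz - y||^2
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)  # weighted shrink
```

Down-weighting the ranked positions makes the shrinkage gentler exactly where the geological prior expects energy, which is the mechanism by which pattern statistics enter the signal-recovery problem.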
28

Compressed Sensing Accelerated Magnetic Resonance Spectroscopic Imaging

January 2016
Magnetic resonance spectroscopic imaging (MRSI) is a valuable technique for assessing the in vivo spatial profiles of metabolites like N-acetylaspartate (NAA), creatine, choline, and lactate. Changes in metabolite concentrations can help identify tissue heterogeneity, providing prognostic and diagnostic information to the clinician. The increased uptake of glucose by solid tumors as compared to normal tissues, and its conversion to lactate, can be exploited for tumor diagnostics, anti-cancer therapy, and the detection of metastasis. Lactate levels in cancer cells are suggestive of altered metabolism, tumor recurrence, and poor outcome. A dedicated technique like MRSI could contribute to an improved assessment of metabolic abnormalities in the clinical setting, and introduce the possibility of employing non-invasive lactate imaging as a powerful prognostic marker. However, the long acquisition time in MRSI is a deterrent to its inclusion in clinical protocols due to associated costs, patient discomfort (especially in pediatric patients under anesthesia), and higher susceptibility to motion artifacts. Acceleration strategies like compressed sensing (CS) permit faithful reconstructions even when the k-space is undersampled well below the Nyquist limit. CS is apt for MRSI as spectroscopic data are inherently sparse in multiple dimensions of space and frequency in an appropriate transform domain, e.g., the wavelet domain. The objective of this research was threefold: first, on the preclinical front, to prospectively speed up spectrally-edited MRSI using CS for rapid mapping of lactate and capture associated changes in response to therapy; second, to retrospectively evaluate CS-MRSI in pediatric patients scanned for various brain-related concerns; and third, to implement prospective CS-MRSI acquisitions on a clinical magnetic resonance imaging (MRI) scanner for fast spectroscopic imaging studies.
Both phantom and in vivo results demonstrated a reduction in the scan time by up to 80%, with the accelerated CS-MRSI reconstructions maintaining high spectral fidelity and statistically insignificant errors as compared to the fully sampled reference dataset. Optimization of CS parameters involved identifying an optimal sampling mask for CS-MRSI at each acceleration factor. It is envisioned that time-efficient MRSI realized with optimized CS acceleration would facilitate the clinical acceptance of routine MRSI exams for a quantitative mapping of important biomarkers. / Dissertation/Thesis / Doctoral Dissertation Bioengineering 2016
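Prospective CS acquisition hinges on the undersampling mask. A common choice — assumed here; the dissertation optimizes its own masks per acceleration factor — is variable-density random sampling that keeps low spatial frequencies densely covered. At five-fold acceleration this corresponds to the roughly 80% scan-time reduction reported above.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 64                       # phase-encode lines in the spectroscopic dimension
accel = 5                    # keep ~20% of k-space -> ~80% shorter scan

# Variable-density sampling probability: favor the k-space center
center_dist = np.abs(np.arange(N) - N // 2)
pdf = (1.0 - center_dist / N) ** 4
pdf /= pdf.sum()

keep = rng.choice(N, size=N // accel, replace=False, p=pdf)
mask = np.zeros(N, dtype=bool)
mask[keep] = True            # True = acquire this k-space line
```

The exponent in the density profile trades off center coverage against incoherence of the aliasing; in practice it is tuned empirically per anatomy and acceleration factor.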
29

Image Reconstruction, Classification, and Tracking for Compressed Sensing Imaging and Video

January 2016
Compressed sensing (CS) is a novel approach to collecting and analyzing data of all types. By exploiting prior knowledge of the compressibility of many naturally-occurring signals, specially designed sensors can dramatically undersample the data of interest and still achieve high performance. However, the generated data are pseudorandomly mixed and must be processed before use. In this work, a model of a single-pixel compressive video camera is used to explore the problems of performing inference based on these undersampled measurements. Three broad types of inference from CS measurements are considered: recovery of video frames, target tracking, and object classification/detection. Potential applications include automated surveillance, autonomous navigation, and medical imaging and diagnosis. Recovery of CS video frames is far more complex than still images, which are known to be (approximately) sparse in a linear basis such as the discrete cosine transform. By combining sparsity of individual frames with an optical flow-based model of inter-frame dependence, the perceptual quality and peak signal-to-noise ratio (PSNR) of reconstructed frames is improved. The efficacy of this approach is demonstrated for the cases of a priori known image motion and unknown but constant image-wide motion. Although video sequences can be reconstructed from CS measurements, the process is computationally costly. In autonomous systems, this reconstruction step is unnecessary if higher-level conclusions can be drawn directly from the CS data. A tracking algorithm is described and evaluated which can maintain track on target vehicles at very high levels of compression, where reconstruction of video frames fails. The algorithm performs tracking by detection using a particle filter with likelihood given by a maximum average correlation height (MACH) target template model.
Motivated by possible improvements over the MACH filter-based likelihood estimation of the tracking algorithm, the application of deep learning models to detection and classification of compressively sensed images is explored. In tests, a Deep Boltzmann Machine trained on CS measurements outperforms a naive reconstruct-first approach. Taken together, progress in these three areas of CS inference has the potential to lower system cost and improve performance, opening up new applications of CS video cameras. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2016
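The tracking-by-detection loop can be sketched as a particle filter on a 1-D toy target, with a Gaussian detection score standing in for the MACH correlation response. The motion model, noise levels, particle count, and resampling threshold below are illustrative assumptions, not the dissertation's actual tracker.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D target with known constant velocity; particles carry position hypotheses
T, n_p = 20, 200
true_pos, vel = 0.0, 1.0
particles = rng.uniform(-10, 10, n_p)
weights = np.ones(n_p) / n_p

def likelihood(obs, pos):
    # Gaussian detection score: a stand-in for the MACH correlation response
    return np.exp(-0.5 * (obs - pos) ** 2)

for _ in range(T):
    true_pos += vel
    obs = true_pos + rng.normal(0.0, 1.0)           # noisy detection
    particles += vel + rng.normal(0.0, 0.5, n_p)    # propagate motion model
    weights *= likelihood(obs, particles)           # reweight by detection score
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n_p / 2:        # resample when ESS drops
        idx = rng.choice(n_p, n_p, p=weights)
        particles, weights = particles[idx], np.ones(n_p) / n_p

estimate = float(np.sum(weights * particles))       # posterior-mean track estimate
```

The key point carried over from the abstract: the likelihood can be computed directly from compressive detection scores, so the filter never needs reconstructed frames.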
30

Atomic-scale and three-dimensional transmission electron microscopy of nanoparticle morphology

Leary, Rowan Kendall, January 2015
The burgeoning field of nanotechnology motivates comprehensive elucidation of nanoscale materials. This thesis addresses transmission electron microscope characterisation of nanoparticle morphology, concerning specifically the crystallographic status of novel intermetallic GaPd2 nanocatalysts and the advancement of electron tomographic methods for high-fidelity three-dimensional analysis. Going beyond preceding analyses, high-resolution annular dark-field imaging is used to verify successful nano-sizing of the intermetallic compound GaPd2. It also reveals catalytically significant and crystallographically intriguing deviations from the bulk crystal structure. So-called 'non-crystallographic' five-fold twinned nanoparticles are observed, adding a new perspective to the long-standing debate over how such morphologies may be achieved. The morphological complexity of the GaPd2 nanocatalysts, and many cognate nanoparticle systems, demands fully three-dimensional analysis. It is illustrated how image processing techniques applied to electron tomography reconstructions can facilitate more facile and objective quantitative analysis ('nano-metrology'). However, the fidelity of the analysis is ultimately limited by artefacts in the tomographic reconstruction. Compressed sensing, a new sampling theory, asserts that many signals can be recovered from far fewer measurements than traditional theories dictate are necessary. Compressed sensing is applied here to electron tomographic reconstruction, and is shown to yield far higher fidelity reconstructions than conventional algorithms. Reconstruction from extremely limited data, more robust quantitative analysis and novel three-dimensional imaging are demonstrated, including the first three-dimensional imaging of localised surface plasmon resonances. Many aspects of transmission electron microscopy characterisation may be enhanced using a compressed sensing approach.
