111
Computational spectral microscopy and compressive millimeter-wave holography
Fernandez, Christy Ann, January 2010
This dissertation describes three computational sensors. The first sensor is a scanning multi-spectral aperture-coded microscope containing a coded aperture spectrometer that is vertically scanned through a microscope intermediate image plane. The spectrometer aperture code spatially encodes the object spectral data, and nonnegative least squares inversion combined with a series of reconfigured two-dimensional (2D spatial-spectral) scanned measurements enables three-dimensional (3D) (x, y, λ) object estimation. The second sensor is a coded aperture snapshot spectral imager that employs a compressive optical architecture to record a spectrally filtered projection of a 3D object data cube onto a 2D detector array. Two nonlinear and adapted TV-minimization schemes are presented for 3D (x, y, λ) object estimation from a 2D compressed snapshot. Both sensors are interfaced to laboratory-grade microscopes and applied to fluorescence microscopy. The third sensor is a millimeter-wave holographic imaging system that is used to study the impact of 2D compressive measurement on 3D (x, y, z) data estimation. Holography is a natural compressive encoder, since a 3D parabolic slice of the object band volume is recorded onto a 2D planar surface. An adapted nonlinear TV-minimization algorithm is used for 3D tomographic estimation from a 2D hologram and a sparse 2D hologram composite. This strategy aims to reduce the scan-time costs associated with millimeter-wave image acquisition using a single-pixel receiver.
112
Compressed Sensing Based Image Restoration Algorithm with Prior Information: Software and Hardware Implementations for Image Guided Therapy
Jian, Yuchuan, January 2012
Based on compressed sensing theory, we present an integrated software and hardware platform for a total-variation-based image restoration algorithm that applies prior image information and free-form deformation fields for image-guided therapy. The core algorithm solves the image restoration problem of handling missing structures in one image set using prior information, and it enhances the image quality and the anatomical information of on-board computed tomography (CT) volumes acquired with limited-angle projections. With this algorithm, prior anatomical CT scans provide additional information that improves the quality of the image volumes produced by on-board cone-beam CT, thereby reducing the total radiation dose patients receive and removing distortion artifacts in 3D digital tomosynthesis (DTS) and 4D-DTS. The proposed restoration algorithm enables enhanced temporal image resolution and provides more anatomical information than conventionally reconstructed images.

The performance of the algorithm was evaluated through two built-in parameters, the B-spline resolution and the regularization factor. These parameters can be adjusted to meet the requirements of different imaging applications, and their settings determine the flexibility and accuracy of the restoration. Preliminary results evaluate image similarity and deformation effects for phantoms and a real patient case using a shifting deformation window. We incorporated a graphics processing unit (GPU) and a visualization interface into the platform as acceleration tools for medical image processing and analysis. By combining the imaging algorithm with a GPU implementation, the restoration can be computed within a reasonable time to enable real-time on-board visualization, and the platform can potentially be applied to other complicated clinical-imaging algorithms.
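To make the flavor of such a restoration concrete, here is a minimal sketch of total-variation-regularized restoration with a prior-image penalty, using plain gradient descent on a smoothed TV term. The toy masking forward model, step size, and weights are assumptions for illustration and stand in for the actual limited-angle CT physics and B-spline deformation machinery.

```python
import numpy as np

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed (isotropic) total-variation penalty on a 2D image."""
    gx = np.diff(x, axis=0, append=x[-1:, :])   # forward differences, replicated edge
    gy = np.diff(x, axis=1, append=x[:, -1:])
    mag = np.sqrt(gx**2 + gy**2 + eps**2)
    div_x = np.diff(gx / mag, axis=0, prepend=np.zeros((1, x.shape[1])))
    div_y = np.diff(gy / mag, axis=1, prepend=np.zeros((x.shape[0], 1)))
    return -(div_x + div_y)                     # -div(grad x / |grad x|)

def restore(y, A, At, prior, lam=0.1, mu=0.05, step=1e-2, iters=200):
    """Gradient descent on ||A x - y||^2 + lam * TV(x) + mu * ||x - prior||^2."""
    x = prior.copy()
    for _ in range(iters):
        grad = 2 * At(A(x) - y) + lam * tv_grad(x) + 2 * mu * (x - prior)
        x -= step * grad
    return x

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0   # synthetic phantom
mask = rng.random(truth.shape) < 0.4                    # keep 40% of the data (toy analog
A = lambda x: mask * x                                  # of limited-angle sampling)
y = A(truth) + 0.02 * rng.standard_normal(truth.shape)
prior = truth + 0.1 * rng.standard_normal(truth.shape)  # noisy/deformed prior image
x_hat = restore(y, A, A, prior)                         # masking operator is self-adjoint
```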
113
Dynamics and correlations in sparse signal acquisition
Charles, Adam Shabti, 08 June 2015
One of the most important capabilities of engineered and biological systems is the ability to acquire and interpret information from the surrounding world accurately and on time-scales relevant to the tasks critical to system performance. This classical concept of efficient signal acquisition has been a cornerstone of signal processing research, spawning traditional sampling theorems (e.g., Shannon-Nyquist sampling), efficient filter designs (e.g., the Parks-McClellan algorithm), novel VLSI chipsets for embedded systems, and optimal tracking algorithms (e.g., Kalman filtering). Traditional techniques have made minimal assumptions on the actual signals being measured and interpreted, essentially assuming only a limited bandwidth. While these assumptions underlie the foundational works in signal processing, the recent ability to collect and analyze large datasets has allowed researchers to see that many important signal classes have much more regularity than finite bandwidth alone.

One of the major advances of modern signal processing is to greatly improve on classical results by leveraging more specific signal statistics. By assuming even very broad classes of signals, signal acquisition and recovery can be greatly improved in regimes where classical techniques are extremely pessimistic. One of the most successful signal assumptions to gain popularity in recent years is the notion of sparsity. Under the sparsity assumption, the signal is composed of a small number of atomic signals from a potentially large dictionary. This limit on the underlying degrees of freedom (the number of atoms used), as opposed to the ambient dimension of the signal, has allowed for improved signal acquisition, in particular when the number of measurements is severely limited.

While techniques for leveraging sparsity have been explored extensively in many contexts, work in this regime typically concentrates on static measurement systems that produce static measurements of static signals. Many systems, however, have non-trivial dynamic components, either in the measurement system's operation or in the nature of the signal being observed. Given the promise of prior work leveraging sparsity for signal acquisition and the large number of dynamical systems and signals in important applications, it is critical to understand whether sparsity assumptions are compatible with dynamical systems. This work therefore seeks to understand how dynamics and sparsity can be used jointly in various aspects of signal measurement and inference.

Specifically, this work looks at three different ways that dynamical systems and sparsity assumptions can interact. In terms of measurement systems, we analyze a dynamical neural network that accumulates signal information over time, and we prove a series of bounds on the length of the input signal driving the network that can be recovered from the values at the network nodes [1-9]. We also analyze sparse signals that are generated via a dynamical system (i.e., a series of correlated, temporally ordered, sparse signals); for this class of signals, we present inference algorithms that leverage both dynamics and sparsity information, improving the potential for signal recovery in a host of applications [10-19]. As an extension of dynamical filtering, we show how these dynamic filtering ideas extend to the broader class of spatially correlated signals, exploring how sparsity and spatial correlations can improve inference of material distributions and spectral super-resolution in hyperspectral imagery [20-25]. Finally, we analyze dynamical systems that perform optimization routines for sparsity-based inference: we analyze a networked system driven by a continuous-time differential equation and show that such a system is capable of recovering a large variety of sparse signal classes [26-30].
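A minimal sketch of the kind of dynamics-aware sparse inference described above: each time step solves a BPDN-style problem that balances data fidelity, an l1 sparsity penalty, and a quadratic penalty tying the estimate to a dynamics prediction. The proximal-gradient solver, the identity dynamics model, and all sizes are illustrative assumptions, not the dissertation's specific algorithms.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dynamic_ista(y, A, x_prev, F, lam=0.05, gamma=0.5, iters=300):
    """One time step of sparsity-plus-dynamics inference (a BPDN-DF-style sketch):
    min_x 0.5||y - A x||^2 + (gamma/2)||x - F x_prev||^2 + lam ||x||_1,
    solved by proximal gradient (ISTA). F is an assumed linear dynamics model."""
    x_pred = F @ x_prev
    L = np.linalg.norm(A, 2) ** 2 + gamma      # Lipschitz bound of the smooth part
    x = x_pred.copy()
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + gamma * (x - x_pred)
        x = soft(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m = 100, 30
A = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive measurement matrix
F = np.eye(n)                                  # assumed dynamics: slowly varying signal
x_prev = np.zeros(n); x_prev[[3, 50]] = 1.0    # previous estimate
x_t = np.zeros(n); x_t[[3, 50]] = [1.1, 0.9]   # true signal drifts slightly
y = A @ x_t + 0.01 * rng.standard_normal(m)
x_hat = dynamic_ista(y, A, x_prev, F)
```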
114
Near real-time estimation of the seismic source parameters in a compressed domain
Vera Rodriguez, Ismael A. (date unknown)
No description available.
115
GPR data processing for reinforced concrete bridge decks
Wei, Xiangmin, 12 January 2015
In this thesis, several aspects of GPR data processing for RC bridge decks are studied. First, autofocusing techniques are proposed to replace expensive and unreliable human visual inspection in the iterative migration process for estimating the velocity/dielectric-permittivity distribution from GPR data. Second, F-K filtering with dip relaxation is proposed for interference removal, which is important both for imaging and for the performance of post-processing techniques, including the autofocusing techniques and CS-based migration studied in this thesis. The targeted interferers are direct waves and cross-rebar reflections; the introduced dip relaxation accommodates surface roughness and medium inhomogeneity. Third, the newly developed CS-based migration is modified and evaluated on GPR data from RC bridge decks. A more accurate model that accounts for impulse waveform distortion, and thus reduces modeling errors, is proposed. The impact of the regularization parameter selection on comparative amplitude preservation and imaging performance is also investigated, and an approach is proposed to preserve comparative amplitude information while still maintaining a clear image. Moreover, the potential of initially sampling the time-spatial data at uniform rates lower than those required by traditional migration methods is evaluated.
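The following is a minimal sketch of an F-K (frequency-wavenumber) dip filter with the dip relaxation idea approximated as a smooth taper around the velocity cutoff; the taper rule, cutoff parameterization, and data sizes are assumptions for illustration and are not the thesis's exact filter design.

```python
import numpy as np

def fk_filter(section, dt, dx, vmin, relax=0.1):
    """F-K dip filter sketch: suppress events with apparent velocity below vmin
    (e.g., direct waves), with a relaxation band (a smooth taper rather than a
    hard cut) to tolerate surface roughness and medium inhomogeneity.
    `section` is a (n_t, n_x) array of traces."""
    nt, nx = section.shape
    F = np.fft.fftshift(np.fft.fft2(section))
    f = np.fft.fftshift(np.fft.fftfreq(nt, dt))[:, None]   # temporal frequency axis
    k = np.fft.fftshift(np.fft.fftfreq(nx, dx))[None, :]   # spatial wavenumber axis
    v_app = np.abs(f) / np.maximum(np.abs(k), 1e-12)       # apparent velocity |f/k|
    # Pass events faster than vmin; taper within the relaxation band below it.
    mask = np.clip((v_app - (1 - relax) * vmin) / (relax * vmin + 1e-12), 0.0, 1.0)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Example usage on a synthetic section (sizes and velocities are illustrative):
rng = np.random.default_rng(0)
section = rng.standard_normal((512, 64))
filtered = fk_filter(section, dt=1e-10, dx=0.02, vmin=1.0e8)
```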
116
Feature detection algorithms in computed images
Gurbuz, Ali Cafer, 07 July 2008
The problem of sensing a medium with several sensors and retrieving interesting features is a very general one. The basic framework of the problem is much the same across applications, from MRI, tomography, and radar SAR imaging to subsurface imaging, even though the data acquisition processes, sensing geometries, and sensed properties differ. In this thesis we introduce a new perspective on the problem of remote sensing and information retrieval by studying the problem of subsurface imaging using GPR and seismic sensors.

We have shown that if the sensed medium is sparse in some domain, it can be imaged using many fewer measurements than standard methods require, leading to much lower data acquisition times and better images of the medium. We have used ideas from compressive sensing, which show that a small number of random measurements of a signal is sufficient to completely characterize it if the signal is sparse or compressible in some domain. Although we have applied these ideas to the subsurface imaging problem, our results are general and can be extended to other remote sensing applications.
A second objective in remote sensing is information retrieval, which involves searching for important features in the computed image of the medium. In this thesis we focus on detecting buried structures such as pipes and tunnels in computed GPR or seismic images. We analyze the problem of finding these structures under high clutter and noise conditions, and of finding them faster than standard shape-detection methods such as the Hough transform.
One of the most important contributions of this thesis is a framework in which the sensing and information retrieval stages are unified through compressive sensing. Instead of taking many standard measurements to compute an image of the medium and then searching that image for the desired information, a much smaller number of measurements is taken in the form of random projections. The data acquisition and information retrieval stages are unified by a data-model dictionary that connects the information to the sensor data.
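As an illustration of this unification, the sketch below recovers target locations directly from a few random projections using a data-model dictionary and orthogonal matching pursuit; the random dictionary, matrix sizes, and the use of scikit-learn's OMP solver are all assumptions for illustration rather than the thesis's actual models.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 40, 2             # scene size, compressive measurements, number of targets
D = rng.standard_normal((n, n))  # stand-in dictionary: column j = sensor response of a
                                 # target at location j (would come from a physical model)
alpha = np.zeros(n); alpha[[30, 181]] = 1.0       # two buried targets
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random projection measurement matrix

y = Phi @ (D @ alpha)            # compressive measurements, never forming a full image
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi @ D, y)
print("detected target locations:", np.flatnonzero(omp.coef_))
```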
117
Channel estimation techniques applied to massive MIMO systems using sparsity and statistics approaches
Araújo, Daniel Costa, 29 September 2016
Massive MIMO has the potential of greatly increasing the system spectral efficiency by employing many individually steerable antenna elements at the base station (BS). This potential can only be achieved if the BS has sufficient channel state information (CSI). How CSI is acquired depends on the duplexing mode employed by the communication system. Currently, frequency division duplexing (FDD) is the most widely used in wireless communication systems. However, the amount of overhead necessary to estimate the channel scales with the number of antennas, which poses a big challenge to implementing massive MIMO systems with an FDD protocol. To enable the two to operate together, this thesis tackles the channel estimation problem by proposing methods that exploit a compressed version of the massive MIMO channel. Two main approaches are used to achieve such compression: sparsity and second-order statistics. To derive the sparsity-based techniques, this thesis uses a compressive sensing (CS) framework to extract a sparse representation of the channel, investigated first for a flat channel and then for a frequency-selective one. In the former case, we show that the Cramér-Rao lower bound (CRLB) for the problem is a function of pilot sequences that lead to a Grassmannian matrix. In the frequency-selective case, a novel estimator that combines CS and tensor analysis is derived. This new method uses the measurements obtained from the pilot subcarriers to estimate a sparse tensor representation of the channel. Assuming a Tucker3 model, the proposed solution maps the estimated sparse tensor to a full one that describes the spatial-frequency channel response. Furthermore, this thesis investigates the problem of updating the sparse basis that arises when the user is moving; an algorithm is proposed to track the arrival and departure directions using very few pilots. Besides the sparsity-based techniques, this thesis investigates channel estimation performance using a statistical approach. In this case, a new hybrid beamforming (HB) architecture is proposed to spatially multiplex the pilot sequences and reduce the overhead. More specifically, the new solution creates a set of beams that is jointly calculated with the channel estimator and the pilot power allocation using the minimum mean square error (MMSE) criterion. We show that this provides enhanced performance for the estimation process in low signal-to-noise ratio (SNR) scenarios.
118
Apprentissage de modèles de mélange à large échelle par Sketching / Sketching for large-scale learning of mixture models
Keriven, Nicolas, 12 October 2017
Learning parameters from voluminous data can be prohibitive in terms of memory and computational requirements. Furthermore, new challenges arise from modern database architectures, such as the requirement for learning methods to be amenable to streaming, parallel, and distributed computing. In this context, an increasingly popular approach is to first compress the database into a representation called a linear sketch, which satisfies all the mentioned requirements, and then learn the desired information using only this sketch, which can be significantly faster than using the full data if the sketch is small. In this thesis, we introduce a generic methodology to fit a mixture of probability distributions on the data using only a sketch of the database. The sketch is defined by combining two notions from the reproducing kernel literature, namely kernel mean embedding and random features expansions. It is seen to correspond to linear measurements of the underlying probability distribution of the data, and the estimation problem is thus analyzed under the lens of compressive sensing (CS), in which a (traditionally finite-dimensional) signal is randomly measured and recovered. We extend CS results to our infinite-dimensional framework, give generic conditions for successful estimation, and apply the analysis to many problems, with a focus on mixture model estimation. We base our method on the construction of random sketching operators such that a Restricted Isometry Property (RIP) condition holds in the Banach space of finite signed measures with high probability. In a second part, we introduce a flexible heuristic greedy algorithm to estimate mixture models from a sketch, and apply it to synthetic and real data on three problems: the estimation of centroids from a sketch, for which it is significantly faster than k-means; Gaussian mixture model estimation, for which it is more efficient than Expectation-Maximization; and the estimation of mixtures of multivariate stable distributions, for which, to our knowledge, it is the only algorithm capable of performing such a task.
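A minimal sketch of the linear sketching idea: a dataset is compressed into the empirical mean of random Fourier features, a finite-dimensional proxy for a kernel mean embedding, and sketches of data partitions merge by simple averaging, which is what makes the approach amenable to streaming and distributed computing. The frequency distribution, sketch size, and Gaussian data are illustrative assumptions.

```python
import numpy as np

def sketch(X, W):
    """Empirical sketch of a dataset: the mean of random Fourier features
    z(x) = exp(i W x), a finite-dimensional proxy for the kernel mean embedding."""
    return np.mean(np.exp(1j * X @ W.T), axis=0)   # shape (m,), complex

rng = np.random.default_rng(3)
d, m = 2, 50                        # data dimension, sketch size
W = 2.0 * rng.standard_normal((m, d))   # random frequencies; the scale plays the
                                        # role of an inverse kernel bandwidth
X = rng.normal(0.0, 1.0, size=(10_000, d))

# Linearity in the data distribution: sketches of equal-size partitions average
# to the sketch of the whole dataset, enabling streaming/distributed computation.
s_full = sketch(X, W)
s_merged = 0.5 * (sketch(X[:5_000], W) + sketch(X[5_000:], W))
print(np.allclose(s_full, s_merged))   # True
```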
119
TIME-FREQUENCY ANALYSIS TECHNIQUES FOR NON-STATIONARY SIGNALS USING SPARSITY
Amin, Vaishali, January 2022
Non-stationary signals, particularly frequency-modulated (FM) signals characterized by their time-varying instantaneous frequencies (IFs), are fundamental to radar, sonar, radio astronomy, biomedical applications, image processing, speech processing, and wireless communications. Time-frequency (TF) analyses of such signals provide a two-dimensional mapping of time-domain signals and are thus regarded as the preferred technique for the detection, parameter estimation, analysis, and utilization of such signals.

In practice, these signals are often received as compressed measurements, the result of missing samples, irregular sampling, or intentional under-sampling. These compressed measurements induce undesired noise-like artifacts in the TF representations (TFRs) of such signals. Compared to randomly missing data, burst missing samples present a more realistic, yet more challenging, scenario for signal detection and parameter estimation through robust TFRs. In this dissertation, we investigate in detail the effects of burst missing samples on different joint-variable domain representations.

Conventional TFRs are not designed to deal with such compressed observations. On the other hand, the sparsity of such non-stationary signals in the TF domain facilitates the use of sparse reconstruction-based methods. The limitations of conventional TF approaches and the sparsity of non-stationary signals in the TF domain motivated us to develop effective TF analysis techniques that enable improved, high-resolution IF estimation of such signals, mitigate the undesired effects of cross-terms and artifacts, and achieve highly concentrated robust TFRs; this is the goal of this dissertation.

We developed several TF analysis techniques that achieve these objectives. The developed methods fall into three broad categories: iterative missing-data recovery, an adaptive local filtering-based TF approach, and signal stationarization-based approaches. In the first category, we recover the missing data in the instantaneous autocorrelation function (IAF) domain in conjunction with signal-adaptive TF kernels that mitigate undesired cross-terms and preserve desired auto-terms; these approaches take advantage of the fact that such non-stationary signals become stationary in the IAF domain at each time instant. In the second category, we developed a novel adaptive local filtering-based TF approach that involves local peak detection and filtering of TFRs within a window of specified length at each time instant; the threshold for each local TF segment is adapted to the local maximum values of the signal within that segment. This approach offers low complexity and is particularly useful for multi-component signals with distinct amplitude levels. Finally, we developed knowledge-based TFRs based on signal stationarization and demonstrated the effectiveness of the proposed TF techniques in high-resolution Doppler analysis of multipath over-the-horizon radar (OTHR) signals, enabling improved target parameter estimation in OTHR operations. Due to the high proximity of these Doppler signatures in the TF domain, their separation poses a challenging problem; by utilizing signal self-stationarization and ensuring IF continuity, the developed approaches handle multiple signal components with varying amplitude levels with excellent performance.
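A minimal sketch of the adaptive local filtering idea described above: a spectrogram is processed in short time segments, and within each segment only TF points above a threshold tied to that segment's local maximum are retained. The STFT parameters, threshold rule, and two-component test signal with a burst of missing samples are illustrative assumptions, not the dissertation's exact method.

```python
import numpy as np
from scipy.signal import stft

def local_peak_filter(tfr, win=8, rho=0.25):
    """Adaptive local filtering sketch: within each block of `win` time frames,
    keep TF points above rho times that block's local maximum."""
    out = np.zeros_like(tfr)
    for start in range(0, tfr.shape[1], win):
        seg = tfr[:, start:start + win]
        out[:, start:start + win] = np.where(seg >= rho * seg.max(), seg, 0.0)
    return out

fs = 1024
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * (100 * t + 200 * t**2))   # linear-FM (chirp) component
x += 0.3 * np.cos(2 * np.pi * 350 * t)           # weaker tone with distinct amplitude
x[400:480] = 0.0                                 # a burst of missing samples

f, tt, Z = stft(x, fs=fs, nperseg=128)
tfr = local_peak_filter(np.abs(Z))               # artifact-suppressed TFR
```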
120
Robustness, Resilience, and Scalability of State Estimation Algorithms
Shiraz Khan, 30 November 2023
State estimation is a type of inverse problem in which some amount of observed data must be processed by computer algorithms (designed using analytical techniques) to infer or reconstruct the underlying model that produced the data. Due to the ubiquity of data and interconnected control systems in the present day, many engineering domains have become replete with inverse problems that can be formulated as state estimation problems. The interconnectedness of these control systems imparts the associated state estimation problems with distinctive structural properties that must be taken into consideration. For instance, the observed data could be high-dimensional and have a dependency structure that is best described by a graph. Furthermore, the control systems of today interface with each other and with the internet, bringing in new possibilities for large-scale collaborative sensor fusion, while also (potentially) introducing new sources of disturbances, faults, and cyberattacks.

The main thesis of this document is to investigate the unique challenges related to the issues of robustness, resilience (to faults and cyberattacks), and scalability of state estimation algorithms. These correspond to research questions such as, "Does the state estimation algorithm retain its performance when the measurements are perturbed by unknown disturbances or adversarial inputs?" and "Does the algorithm have any bottlenecks that restrict the size or dimension of the problems it can be applied to?" Most of these research questions are motivated by a single domain of application: autonomous navigation of unmanned aerial vehicles (UAVs). Nevertheless, the mathematical methods and research philosophy employed herein are quite general, making the results of this document applicable to a variety of engineering tasks, including anomaly detection in time-series data, autonomous remote sensing, traffic monitoring, coordinated motion of dynamical systems, and fault diagnosis of wireless sensor networks (WSNs), among others.
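One standard building block behind such robustness and resilience questions is measurement gating in a recursive estimator. Below is a minimal sketch of a Kalman filter step with chi-square innovation gating, which simply rejects measurements whose normalized innovation is implausibly large (a crude model of a fault or spoofed sensor); the models, noise levels, and gate value are illustrative assumptions, not the dissertation's algorithms.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R, gate=9.0):
    """One predict/update step of a Kalman filter with chi-square innovation
    gating: a measurement whose normalized innovation exceeds `gate` is
    treated as faulty or attacked and skipped (a simple resilience heuristic)."""
    # Predict
    x, P = A @ x, A @ P @ A.T + Q
    # Innovation and its covariance
    nu = y - C @ x
    S = C @ P @ C.T + R
    if nu.T @ np.linalg.solve(S, nu) > gate:   # outlier: reject the measurement
        return x, P
    K = P @ C.T @ np.linalg.inv(S)             # Kalman gain
    return x + K @ nu, (np.eye(len(x)) - K @ C) @ P

# Example: constant-velocity target with one spoofed position measurement.
A = np.array([[1.0, 1.0], [0.0, 1.0]]); C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for y in [np.array([1.0]), np.array([2.1]), np.array([50.0]), np.array([4.0])]:
    x, P = kalman_step(x, P, y, A, C, Q, R)    # the 50.0 reading is gated out
```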