About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Arrays de microfones para medida de campos acústicos. / Microphone arrays for acoustic field measurements.

Flávio Protásio Ribeiro 23 January 2012 (has links)
Acoustic imaging is a computationally intensive and ill-conditioned inverse problem, which involves estimating high resolution source distributions with large microphone arrays. The classical method for acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Convolutions can be avoided with covariance fitting methods, which have been known to produce robust high-resolution estimates. However, these have been avoided due to prohibitive computational costs. In this thesis, we assume a 2D separable array geometry, and develop fast transforms to accelerate acoustic imaging by several orders of magnitude with respect to previous methods. These transforms are very generic, and can be applied to accelerate beamforming, deconvolution algorithms and regularized least-squares solvers. Thus, one can obtain high-resolution images with state-of-the-art algorithms, while maintaining low computational cost. We show that separable arrays deliver accuracy competitive with multi-arm spiral geometries, while producing huge computational benefits. Finally, we show how to extend this approach with array calibration, a near-field propagation model and arbitrary focal surfaces, opening new and exciting possibilities for acoustic imaging.
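
As an illustration of the separability argument in this abstract: for a planar array whose microphones lie on a Cartesian grid and a separable far-field focal grid, the steering matrix factors as a Kronecker product, so delay-and-sum beamforming reduces to two small matrix products. The NumPy sketch below uses illustrative array dimensions, frequency and focal grid (assumptions, not values from the thesis) and omits the covariance-fitting, calibration and near-field extensions the abstract describes.

```python
import numpy as np

# Minimal far-field delay-and-sum sketch for a separable (grid) microphone array.
# All sizes, frequencies and grids below are illustrative assumptions.
c = 343.0                      # speed of sound [m/s]
f = 4000.0                     # analysis frequency [Hz]
k = 2 * np.pi * f / c          # wavenumber

# Separable array: microphone positions are the Cartesian product of x and y.
x = np.linspace(-0.5, 0.5, 16)         # 16 x-coordinates [m]
y = np.linspace(-0.5, 0.5, 16)         # 16 y-coordinates [m]

# Separable focal grid of direction cosines (u, v).
u = np.linspace(-1.0, 1.0, 101)
v = np.linspace(-1.0, 1.0, 101)

# 1-D steering factors; the full steering matrix is their Kronecker product,
# but it never needs to be formed explicitly.
Ax = np.exp(1j * k * np.outer(x, u))   # (16, 101)
Ay = np.exp(1j * k * np.outer(y, v))   # (16, 101)

# Simulated single-frequency snapshot from one far-field source at (u0, v0).
u0, v0 = 0.3, -0.2
P = np.exp(1j * k * (x[:, None] * u0 + y[None, :] * v0))   # (16, 16)

# Separable beamforming: two small matrix products instead of one product
# with the full (16*16) x (101*101) Kronecker steering matrix.
B = Ax.conj().T @ P @ Ay.conj()        # (101, 101) beamformed map
image = np.abs(B) ** 2 / (x.size * y.size) ** 2

i, j = np.unravel_index(image.argmax(), image.shape)
print("peak at (u, v) =", (u[i], v[j]))   # expect roughly (0.3, -0.2)
```

For an M x N array and a P x Q focal grid this costs on the order of MNP + MPQ operations per frequency instead of MNPQ; the thesis's fast transforms for beamforming, deconvolution and regularized least squares exploit this kind of separable structure, though in a far more complete form than this sketch.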
12

Bayesian methods and machine learning in astrophysics

Higson, Edward John January 2019 (has links)
This thesis is concerned with methods for Bayesian inference and their applications in astrophysics. We principally discuss two related themes: advances in nested sampling (Chapters 3 to 5), and Bayesian sparse reconstruction of signals from noisy data (Chapters 6 and 7). Nested sampling is a popular method for Bayesian computation which is widely used in astrophysics. Following the introduction and background material in Chapters 1 and 2, Chapter 3 analyses the sampling errors in nested sampling parameter estimation and presents a method for estimating them numerically for a single nested sampling calculation. Chapter 4 introduces diagnostic tests for detecting when software has not performed the nested sampling algorithm accurately, for example due to missing a mode in a multimodal posterior. The uncertainty estimates and diagnostics in Chapters 3 and 4 are implemented in the nestcheck software package, and both chapters describe an astronomical application of the techniques introduced. Chapter 5 describes dynamic nested sampling: a generalisation of the nested sampling algorithm which can produce large improvements in computational efficiency compared to standard nested sampling. We have implemented dynamic nested sampling in the dyPolyChord and perfectns software packages. Chapter 6 presents a principled Bayesian framework for signal reconstruction, in which the signal is modelled by basis functions whose number (and form, if required) is determined by the data themselves. This approach is based on a Bayesian interpretation of conventional sparse reconstruction and regularisation techniques, in which sparsity is imposed through priors via Bayesian model selection. We demonstrate our method for noisy 1- and 2-dimensional signals, including examples of processing astronomical images. The numerical implementation uses dynamic nested sampling, and uncertainties are calculated using the methods introduced in Chapters 3 and 4. Chapter 7 applies our Bayesian sparse reconstruction framework to artificial neural networks, where it allows the optimum network architecture to be determined by treating the number of nodes and hidden layers as parameters. We conclude by suggesting possible areas of future research in Chapter 8.
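
For readers unfamiliar with the algorithm that Chapters 3 to 5 build on, the following is a minimal sketch of a static nested sampling run on a toy problem. It is not taken from nestcheck, dyPolyChord or perfectns; the toy likelihood, prior, live-point count and termination threshold are assumptions chosen for illustration, and new points are drawn by naive rejection sampling, which production implementations replace with far more efficient constrained samplers.

```python
import numpy as np
from scipy.special import logsumexp

# Toy problem: 2-D Gaussian likelihood, uniform prior on [-2, 2]^2.
# Analytic value for this toy problem: log Z = log(0.9545^2 / 16) ~ -2.87.
rng = np.random.default_rng(0)
n_live = 100

def log_likelihood(theta):
    return -0.5 * theta @ theta - np.log(2 * np.pi)

def sample_prior(n):
    return rng.uniform(-2.0, 2.0, size=(n, 2))

live = sample_prior(n_live)
live_logl = np.array([log_likelihood(t) for t in live])

log_z, log_x = -np.inf, 0.0                 # running evidence, log prior volume
while live_logl.max() + log_x > log_z + np.log(1e-3):   # termination criterion
    worst = np.argmin(live_logl)
    logl_star = live_logl[worst]
    # Each iteration shrinks the enclosed prior volume by a factor ~exp(-1/n_live).
    log_w = logl_star + log_x + np.log1p(-np.exp(-1.0 / n_live))
    log_z = np.logaddexp(log_z, log_w)
    log_x -= 1.0 / n_live
    # Replace the worst live point with a prior draw satisfying L > L*
    # (naive rejection sampling; real implementations use smarter schemes).
    while True:
        cand = sample_prior(1)[0]
        cand_logl = log_likelihood(cand)
        if cand_logl > logl_star:
            break
    live[worst], live_logl[worst] = cand, cand_logl

# Contribution of the remaining live points.
log_z = np.logaddexp(log_z, log_x - np.log(n_live) + logsumexp(live_logl))
print("estimated log Z:", log_z)            # should be close to -2.87
```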
13

Ultrasonic guided wave imaging via sparse reconstruction

Levine, Ross M. 22 May 2014 (has links)
Structural health monitoring (SHM) is concerned with the continuous, long-term assessment of structural integrity. One commonly investigated SHM technique uses guided ultrasonic waves, which travel through the structure and interact with damage. Measured signals are then analyzed in software for detection, estimation, and characterization of damage. One common configuration for such a system uses a spatially-distributed array of fixed piezoelectric transducers, which is inexpensive and can cover large areas. Typically, one or more sets of prerecorded baseline signals are measured when the structure is in a known state, with imaging methods operating on differences between follow-up measurements and these baselines. Presented here is a new class of SHM spatially-distributed array algorithms that rely on sparse reconstruction. For this problem, damage over a region of interest (ROI) is considered to be sparse. Two different techniques are demonstrated here. The first, which relies on sparse reconstruction, uses an a priori assumption of scattering behavior to generate a redundant dictionary where each column corresponds to a pixel in the ROI. The second method extends this concept by using multidimensional models for each pixel, with each pixel corresponding to a "block" in the dictionary matrix; this method does not require advance knowledge of scattering behavior. Analysis and experimental results presented demonstrate the validity of the sparsity assumption. Experiments show that images generated with sparse methods are superior to those created with delay-and-sum methods; the techniques here are shown to be tolerant of propagation model mismatch. The block-sparse method described here also allows the extraction of scattering patterns, which can be used for damage characterization.
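
A hedged sketch of the first (fixed-scattering-model) approach described above: each ROI pixel contributes one dictionary column holding the baseline-subtracted signals that a point scatterer at that pixel would produce on every transmit/receive pair, and the image is recovered with a generic greedy sparse solver. The geometry, group velocity, waveform and solver below are illustrative assumptions, not the thesis configuration.

```python
import numpy as np

# Hypothetical geometry and waveform parameters, for illustration only.
c_g = 3000.0                        # assumed guided-wave group velocity [m/s]
fc, fs = 100e3, 1e6                 # excitation centre frequency, sampling rate [Hz]
t = np.arange(0, 400e-6, 1 / fs)    # 400 us record

# Six transducers on a circle; all unordered pairs act as transmit/receive paths.
angles = np.linspace(0, 2 * np.pi, 6, endpoint=False)
xdcr = 0.3 * np.c_[np.cos(angles), np.sin(angles)]      # (6, 2) positions [m]
pairs = [(i, j) for i in range(6) for j in range(i + 1, 6)]

# Region of interest discretised into pixels.
gx, gy = np.meshgrid(np.linspace(-0.2, 0.2, 21), np.linspace(-0.2, 0.2, 21))
pixels = np.c_[gx.ravel(), gy.ravel()]                   # (441, 2)

def pulse(delay):
    """Gaussian-windowed tone burst arriving at `delay` (vectorised over pixels)."""
    dt = t[:, None] - delay[None, :]
    return np.exp(-(dt * 2e4) ** 2) * np.cos(2 * np.pi * fc * dt)

# Dictionary column for pixel p: predicted baseline-subtracted signals on every
# pair, delayed by the transmitter -> pixel -> receiver path length.
blocks = []
for i, j in pairs:
    d = (np.linalg.norm(pixels - xdcr[i], axis=1) +
         np.linalg.norm(pixels - xdcr[j], axis=1)) / c_g
    blocks.append(pulse(d))
D = np.vstack(blocks)                                    # (n_pairs*len(t), n_pixels)

# Synthetic measurement: one scatterer at the ROI centre plus noise.
truth = 220
y = 0.8 * D[:, truth] + 0.05 * np.random.default_rng(1).standard_normal(D.shape[0])

def omp(D, y, k):
    """Plain orthogonal matching pursuit: greedily pick k dictionary atoms."""
    Dn = D / np.linalg.norm(D, axis=0)
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Dn.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    image = np.zeros(D.shape[1])
    image[support] = coef
    return image

image = omp(D, y, k=3)
print("strongest pixel:", int(np.argmax(np.abs(image))), "(true:", truth, ")")
```

The block-sparse variant described in the abstract replaces each single column with a multi-column block per pixel, so that the per-pixel scattering pattern is estimated rather than assumed; that extension is not shown here.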
14

Fusion of Sparse Reconstruction Algorithms in Compressed Sensing

Ambat, Sooraj K January 2015 (has links) (PDF)
Compressed Sensing (CS) is a new paradigm in signal processing which exploits the sparse or compressible nature of the signal to significantly reduce the number of measurements, without compromising on the signal reconstruction quality. Recently, many algorithms have been reported in the literature for efficient sparse signal reconstruction. Nevertheless, it is well known that the performance of any sparse reconstruction algorithm depends on many parameters like number of measurements, dimension of the sparse signal, the level of sparsity, the measurement noise power, and the underlying statistical distribution of the non-zero elements of the signal. It has been observed that a satisfactory performance of the sparse reconstruction algorithm mandates certain requirements on these parameters, which are different for different algorithms. Many applications are unlikely to fulfil these requirements. For example, imaging speed is crucial in many Magnetic Resonance Imaging (MRI) applications. This restricts the number of measurements, which in turn affects the medical diagnosis using MRI. Hence, any strategy to improve the signal reconstruction in such adverse scenarios is of substantial interest in CS. Interestingly, it can be observed that the performance degradation of the sparse recovery algorithms in the aforementioned cases does not always imply a complete failure. That is, even in such adverse situations, a sparse reconstruction algorithm may provide partially correct information about the signal. In this thesis, we study this scenario and propose a novel fusion framework and an iterative framework which exploit the partial information available in the sparse signal estimate(s) to improve sparse signal reconstruction. The proposed fusion framework employs multiple sparse reconstruction algorithms, independently, for signal reconstruction. We first propose a fusion algorithm, viz. FACS, which fuses the estimates of multiple participating algorithms in order to improve the sparse signal reconstruction. To alleviate the inherent drawbacks of FACS and further improve the sparse signal reconstruction, we propose another fusion algorithm called CoMACS and variants of CoMACS. For low latency applications, we propose a latency-friendly fusion algorithm called pFACS. We also extend the fusion framework to the MMV problem and propose the extension of FACS called MMV-FACS. We theoretically analyse the proposed fusion algorithms and derive guarantees for performance improvement. We also show that the proposed fusion algorithms are robust against both signal and measurement perturbations. Further, we demonstrate the efficacy of the proposed algorithms via numerical experiments: (i) using sparse signals with different statistical distributions in noise-free and noisy scenarios, and (ii) using real-world ECG signals. The extensive numerical experiments show that, for a judicious choice of the participating algorithms, the proposed fusion algorithms result in a sparse signal estimate which is often better than the sparse signal estimate of the best participating algorithm. The proposed fusion framework requires employing multiple sparse reconstruction algorithms for sparse signal reconstruction. We also propose an iterative framework and algorithm called IFSRA to improve the performance of a given arbitrary sparse reconstruction algorithm. We theoretically analyse IFSRA and derive convergence guarantees under signal and measurement perturbations.
Numerical experiments on synthetic and real-world data confirm the efficacy of IFSRA. The proposed fusion algorithms and IFSRA are general in nature and do not require any modification in the participating algorithm(s).
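
To make the fusion idea concrete, here is a minimal sketch of one plausible reading of the abstract: run two participating sparse solvers independently, take the union of their estimated supports, solve a least-squares problem restricted to that union, and prune back to the k largest coefficients. The participating solvers (plain OMP and a basic iterative soft-thresholding ℓ1 solver), the problem sizes and the noise level are illustrative assumptions; this is not the FACS/CoMACS pseudocode from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 80, 200, 6                        # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
y = A @ x_true + 0.05 * rng.standard_normal(m)

def omp_support(A, y, k):
    """Support estimate from orthogonal matching pursuit."""
    r, S = y.copy(), []
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef
    return set(S)

def ista_support(A, y, k, lam=0.05, iters=500):
    """Support estimate from a basic iterative soft-thresholding (l1) solver."""
    step = 0.9 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return set(np.argsort(np.abs(x))[-k:].tolist())

# Fusion step: union of supports -> restricted least squares -> keep k largest.
union = np.array(sorted(omp_support(A, y, k) | ista_support(A, y, k)))
coef, *_ = np.linalg.lstsq(A[:, union], y, rcond=None)
keep = np.argsort(np.abs(coef))[-k:]
x_fused = np.zeros(n)
x_fused[union[keep]] = coef[keep]

print("relative error:", np.linalg.norm(x_fused - x_true) / np.linalg.norm(x_true))
```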
15

Development of Sparse Recovery Based Optimized Diffuse Optical and Photoacoustic Image Reconstruction Methods

Shaw, Calvin B January 2014 (has links) (PDF)
Diffuse optical tomography uses near infrared (NIR) light as the probing medium to recover the distributions of tissue optical properties, with an ability to provide functional information of the tissue under investigation. As NIR light propagation in the tissue is dominated by scattering, the image reconstruction problem (inverse problem) is non-linear and ill-posed, requiring the usage of advanced computational methods to compensate for this. The diffuse optical image reconstruction problem is always rank-deficient, where finding the independent measurements among the available measurements becomes a challenging problem. Knowing these independent measurements will help in designing better data acquisition set-ups and lowering the costs associated with them. An optimal measurement selection strategy based on incoherence among rows (corresponding to measurements) of the sensitivity (or weight) matrix for near infrared diffuse optical tomography is proposed. As incoherence among the measurements can be seen as providing maximum independent information for the estimation of optical properties, this provides the high level of optimization required for knowing the independence of a particular measurement from its counterparts. The utility of the proposed scheme is demonstrated using simulated and experimental gelatin phantom data sets, comparing it with the state-of-the-art methods. The traditional image reconstruction methods employ the ℓ2-norm in the regularization functional, resulting in smooth solutions, where the sharp image features are absent. Sparse recovery methods, which utilize the ℓp-norm with p between 0 and 1 (0 < p < 1) along with an approximation to the ℓ0-norm, have been deployed for the reconstruction of diffuse optical images. These methods are shown to have better utility in terms of being more quantitative in reconstructing realistic diffuse optical images compared to traditional methods. Utilization of ℓp-norm based regularization makes the objective (cost) function non-convex, and the algorithms that implement ℓp-norm minimization utilize approximations to the original ℓp-norm function. Three methods for implementing the ℓp-norm were considered, namely Iteratively Reweighted ℓ1-minimization (IRL1), Iteratively Reweighted Least Squares (IRLS), and the Iterative Thresholding Method (ITM). The results indicated that the IRL1 implementation of ℓp-minimization provides optimal performance in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images. Photoacoustic tomography (PAT) is an emerging hybrid imaging modality combining optics with ultrasound imaging. PAT provides structural and functional imaging in diverse application areas, such as breast cancer and brain imaging. Model-based iterative reconstruction schemes are the most popular for recovering the initial pressure in the limited-data case, wherein a large linear system of equations needs to be solved. Often, these iterative methods require regularization parameter estimation, which tends to be a computationally expensive procedure, forcing the image reconstruction to be performed off-line. To overcome this limitation, a computationally efficient approach that computes the optimal regularization parameter is developed for PAT. This approach is based on the least squares-QR (LSQR) decomposition, a well-known dimensionality reduction technique for a large system of equations.
It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstructions of initial pressure distribution.
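
As a concrete illustration of the reweighting idea behind the IRL1 scheme mentioned above, the sketch below solves a generic ℓp-regularized sparse recovery problem by alternating a weighted ℓ1 (soft-thresholding) inner solve with derivative-based reweighting. It is not the thesis's diffuse optical or photoacoustic implementation, which works with the sensitivity (Jacobian) matrix of the forward model; the matrix sizes, p, λ and ε below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 60, 150, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in for a sensitivity matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)
y = A @ x_true + 0.02 * rng.standard_normal(m)

def weighted_ista(A, y, lam, w, iters=400):
    """Inner solver: min 0.5*||y - Ax||^2 + lam * sum_i w_i |x_i| (proximal gradient)."""
    step = 0.9 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam * w, 0.0)
    return x

# IRL1: each outer pass reweights the l1 penalty so that it locally mimics the
# non-convex lp penalty, p in (0, 1), around the current iterate.
p, lam, eps = 0.5, 0.01, 1e-2
w = np.ones(n)
x = np.zeros(n)
for _ in range(10):
    x = weighted_ista(A, y, lam, w)
    w = p / (np.abs(x) + eps) ** (1.0 - p)       # derivative-based reweighting

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```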
16

Algorithmes gloutons orthogonaux sous contrainte de positivité / Orthogonal greedy algorithms for non-negative sparse reconstruction

Nguyen, Thi Thanh 18 November 2019 (has links)
Non-negative sparse approximation arises in many application fields such as biomedical engineering, fluid mechanics, astrophysics, and remote sensing. Some classical sparse algorithms can be straightforwardly adapted to deal with non-negativity constraints. On the contrary, the non-negative extension of orthogonal greedy algorithms such as OMP and OLS is a challenging issue, since the unconstrained least squares subproblems are replaced by non-negative least squares subproblems which do not have closed-form solutions. In the literature, non-negative orthogonal greedy (NNOG) algorithms are often considered to be slow. Moreover, some recent works exploit approximate schemes to derive efficient recursive implementations. In this thesis, NNOG algorithms are introduced as heuristic solvers dedicated to L0 minimization under non-negativity constraints. It is first shown that this L0 minimization problem is NP-hard. The second contribution is to propose a unified framework on NNOG algorithms together with an exact and fast implementation, where the non-negative least squares subproblems are solved using the active-set algorithm with warm start initialisation. The proposed implementation significantly reduces the cost of NNOG algorithms and appears to be more advantageous than existing approximate schemes. The third contribution consists of a unified K-step exact support recovery analysis of NNOG algorithms when the mutual coherence of the dictionary is lower than 1/(2K-1). This is the first analysis of this kind.
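
For concreteness, below is a minimal sketch of one non-negative orthogonal greedy iteration in the NNOMP style: atom selection by largest positive correlation, followed by re-estimation of the coefficients on the current support via a non-negative least squares (NNLS) subproblem, the step with no closed-form solution. The subproblem is solved here with scipy.optimize.nnls rather than the warm-started active-set scheme proposed in the thesis, and the dictionary, sparsity level and noise are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
m, n, k = 80, 200, 5
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                       # unit-norm dictionary atoms
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)   # non-negative
y = A @ x_true + 0.01 * rng.standard_normal(m)

def nnomp(A, y, k):
    """Non-negative orthogonal greedy sketch: NNOMP with an NNLS subproblem."""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        # Select the atom with the largest *positive* correlation with the
        # residual, since the coefficients are constrained to be non-negative.
        corr = A.T @ residual
        corr[support] = -np.inf
        support.append(int(np.argmax(corr)))
        # Non-negative least squares on the current support (solved here with
        # scipy; the thesis accelerates this with warm-started active sets).
        coef, _ = nnls(A[:, support], y)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = nnomp(A, y, k)
print("true support:     ", sorted(np.flatnonzero(x_true).tolist()))
print("estimated support:", sorted(np.flatnonzero(x_hat).tolist()))
print("relative error:   ", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```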
