About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

TOF-PET Imaging within the Framework of Sparse Reconstruction

Lao, Dapeng 2012 May
Recently, the limited-angle TOF-PET system has become an active research topic, mainly due to the considerable reduction in hardware cost and its potential applicability for performing needle biopsies on patients while they are in the scanner. However, this kind of measurement configuration often suffers from deteriorated reconstructed images because insufficient data are observed. The established theory of Compressed Sensing (CS) provides a potential framework for attacking this problem: CS claims that the imaged object can be faithfully recovered from highly underdetermined observations, provided that it is sparse in some transform domain. Here, a first attempt was made at applying the CS framework to TOF-PET imaging for two undersampling configurations. First, to deal with undersampled TOF-PET imaging, an efficient sparsity-promoting algorithm was developed for the combined regularizations of p-TV and the l1-norm; it was found that (a) this algorithm is capable of providing better reconstruction than the traditional EM algorithm, and (b) the 0.5-TV regularization was significantly superior to the 0-TV and 1-TV regularizations that are widely investigated in the open literature. Second, a general framework was proposed for sparsity-promoting ART, in which multi-step and ordered-subset acceleration techniques were used simultaneously. From the results, it was observed that the accelerated sparsity-promoting ART method provides better reconstruction than traditional ART. Finally, a relationship was established between the number of detectors (or the angular range) and the TOF time resolution, which provides empirical guidance for designing novel low-cost TOF-PET systems while ensuring good reconstruction quality.
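The abstract does not spell out the iteration, but the combination it describes — ART row updates accelerated with ordered subsets, interleaved with a sparsity-enforcing step — can be sketched as below. This is a minimal illustrative Python sketch, not the author's implementation; the function names, threshold level, and subset scheme are all assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding, the proximal operator of the l1-norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparsity_promoting_art(A, b, n_iter=50, n_subsets=8, relax=1.0,
                           thresh=1e-3, seed=0):
    """Illustrative sparsity-promoting ART with ordered-subset sweeps.

    A : (m, n) system matrix (rows model line-of-response projections)
    b : (m,) measured data
    Each sweep performs Kaczmarz row updates subset by subset, then a
    soft-threshold promotes sparsity and non-negativity is enforced.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum('ij,ij->i', A, A)      # squared row norms
    subsets = np.array_split(rng.permutation(m), n_subsets)
    for _ in range(n_iter):
        for subset in subsets:                   # ordered-subset sweep
            for i in subset:                     # Kaczmarz (ART) row update
                if row_norms[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        x = soft_threshold(x, thresh)            # sparsity-promoting step
        np.maximum(x, 0.0, out=x)                # activity is non-negative
    return x
```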
2

Analysis of Sparse Channel Estimation

Carroll, Brian Michael 03 September 2009
No description available.
3

Application of L1 reconstruction of sparse signals to ambiguity resolution in radar

Shaban, Fahad 13 May 2013
The objective of the proposed research is to develop a new algorithm for range and Doppler ambiguity resolution in radar detection data using L1 minimization methods for sparse signals and to investigate the properties of such techniques. This novel approach to ambiguity resolution makes use of the sparse measurement structure of the post-detection data in multiple pulse repetition frequency radars and the resulting equivalence of the computationally intractable L0 minimization and the surrogate L1 minimization methods. The ambiguity resolution problem is cast as a linear system of equations which is then solved for the unique sparse solution in the absence of errors. It is shown that the new technique successfully resolves range and Doppler ambiguities, and that the recovery is exact in the ideal case of no errors in the system. The behavior of the technique is then investigated in the presence of real-world data errors encountered in the radar measurement and detection process. Examples of such errors include blind zone effects, collisions, false alarms and missed detections. It is shown that the mathematical model consisting of a linear system of equations developed for the ideal case can be adjusted to account for data errors. Empirical results show that the L1 minimization approach also works well in the presence of errors with minor extensions to the algorithm. Several examples are presented to demonstrate the successful implementation of the new technique for range and Doppler ambiguity resolution in pulse Doppler radars.
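The abstract casts ambiguity resolution as a linear system with a sparse solution recovered by L1 minimization. As a rough, self-contained illustration of that recovery step — not the thesis's algorithm, with all sizes and names chosen arbitrarily — basis pursuit can be posed as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to A x = b as a linear program.

    Splits x into positive/negative parts x = p - q with p, q >= 0,
    so the objective sum(p) + sum(q) equals ||x||_1.
    """
    m, n = A.shape
    c = np.ones(2 * n)                    # objective: sum(p) + sum(q)
    A_eq = np.hstack([A, -A])             # A p - A q = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    p, q = res.x[:n], res.x[n:]
    return p - q

# Toy demo: recover a 2-sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[5, 40]] = [1.0, -2.0]
x_hat = basis_pursuit(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-6))  # expect exact recovery (noiseless)
```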
4

Sparse Reconstruction Schemes for Nonlinear Electromagnetic Imaging

Desmal, Abdulla 03 1900
Electromagnetic imaging is the problem of determining material properties from scattered fields measured away from the domain under investigation. Solving this inverse problem is a challenging task because (i) it is ill-posed due to the presence of (smoothing) integral operators used in the representation of scattered fields in terms of material properties, and scattered fields are obtained at a finite set of points through noisy measurements; and (ii) it is nonlinear simply due to the fact that scattered fields are nonlinear functions of the material properties. The work described in this thesis tackles the ill-posedness of the electromagnetic imaging problem using sparsity-based regularization techniques, which assume that the scatterer(s) occupy only a small fraction of the investigation domain. More specifically, four novel imaging methods are formulated and implemented. (i) The sparsity-regularized Born iterative method iteratively linearizes the nonlinear inverse scattering problem, and each linear problem is regularized using an improved iterative shrinkage algorithm enforcing the sparsity constraint. (ii) The sparsity-regularized nonlinear inexact Newton method calls for the solution of a linear system involving the Fréchet derivative matrix of the forward scattering operator at every iteration step. For faster convergence, the solution of this matrix system is regularized under the sparsity constraint and preconditioned by leveling the matrix singular values. (iii) The sparsity-regularized nonlinear Tikhonov method directly solves the nonlinear minimization problem using Landweber iterations, where a thresholding function is applied at every iteration step to enforce the sparsity constraint. (iv) This last scheme is accelerated using a projected steepest descent method when it is applied to three-dimensional investigation domains; projection replaces the thresholding operation and enforces the sparsity constraint. Numerical experiments, which are carried out using synthetically generated or actually measured scattered fields, show that the images recovered by these sparsity-regularized methods are sharper and more accurate than those produced by existing methods. The methods developed in this work have potential application areas ranging from oil/gas reservoir engineering to biological imaging, where sparse domains naturally exist.
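As a hedged illustration of the thresholded Landweber iteration underlying scheme (iii) — shown here for a generic linear(ized) operator rather than the nonlinear scattering operator of the thesis, with illustrative parameter values — each step alternates a gradient update with soft-thresholding:

```python
import numpy as np

def thresholded_landweber(A, y, step=None, thresh=0.05, n_iter=200):
    """Landweber iterations with a sparsity-enforcing threshold (sketch).

    Minimizes ||A x - y||^2 while promoting sparsity by applying a
    soft-threshold after each gradient step, as in iterative shrinkage.
    A is a linear stand-in for the (linearized) scattering operator.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # step ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)          # Landweber gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * thresh, 0.0)
    return x
```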
5

Paleoclimate reconstruction from climate proxies by neural methods

Déchelle-Marquet, Marie January 2019
In the present work, we investigate the capacity of machine learning to reconstruct simulated large-scale surface temperature anomalies given a sparse observation field. Several methods are combined: self-organizing maps and recurrent neural networks applied to the temporal trajectory. To evaluate our global-scale reconstruction, we base our validation on global climate index time series and EOF analysis. In our experiments, the obtained reconstructions of the global surface temperature anomalies show a good correlation (over 90%) with the target values when considering scarce available observations sampling about 0.5% of the globe. We reconstruct the surface temperature anomaly fields from 0.05% of the total number of data points, obtaining an RMSE of 0.39°C. We further validate the quality of the results by computing correlations of 0.92, 0.97 and 0.98 between the reconstructed and target AMO, ENSO and IPO indices.

The climate system consists of several components, including the atmosphere, the ocean and the land. As an open system, it constantly exchanges energy with the rest of the universe. It is also a dynamical system whose evolution can be predicted by known physical laws. The interaction between its components leads to so-called natural variability, reflected in oscillation modes including AMO, ENSO and IPO. To study these variations, we have climate models that represent the various forcings and their effect on long-term climate change. In this context, variations in the past climate are particularly interesting, allowing a better understanding of climate change and better prediction of its future evolution. However, for studying the past climate, or paleoclimate, the available information is complete only for the last 150 years. Before that, the only available indicators are natural ones, called climate proxies, such as tree rings or ice cores, from which time series of climate data such as temperature can be derived. This information, however, is sparse both in time and across the globe, and reconstructing the global climate from such data is still poorly handled. The link between local information and the global climate is studied here using statistical methods, including neural networks. The long-term goal of this study is to build a method for reconstructing the paleoclimate from climate proxy data; we initially focus on the reconstruction of a so-called perfect climate, that is, a model that accounts only for natural variability, from spatially sparse time series. The data studied are global surface temperature outputs from the ocean-atmosphere coupled IPSL model. The data are preprocessed to remove the mean seasonal cycle and converted into temperature anomalies. In addition, the grid points representing proxy information are selected pseudo-randomly, respecting the real-world distribution of proxies, predominantly in the north and on the continents. The data are split into training (150 years), validation (30 years) and test (120 years) sets.
The methods used combine (1) self-organizing maps and ascending hierarchical classification, useful for producing a reduced-size representation of the input data, here based on the temporal correlation between temperature evolutions over 150 years; (2) ItCompSOM, which uses the correlation between the classes obtained by the self-organizing maps to reconstruct unobserved data; and (3) recurrent neural networks to account for the temporal component of the data and improve the previous reconstruction. Finally, new metrics have to be defined to validate the proposed models. The evaluation of the outputs is thus done through the temporal reconstruction of the AMO, ENSO and IPO climate modes, as well as through projection onto the principal components of an EOF analysis of the input data. A reduced model of the global temperature data is first constructed from 150 years of complete data, reducing the spatial information from 9216 grid points to 191 regions, each associated with one mean value. To connect this model to sparse temperature time series around the world, it is assumed that every class containing at least one observed proxy data point is known. The reconstruction of global surface temperature evolutions with ItCompSOM yields a correlation with the indices of more than 90% from only 0.5% of the initial observations. This result is greatly improved by the recurrent neural networks, leading to correlations of 0.92, 0.97 and 0.98 for AMO, ENSO and IPO respectively, with only 0.05% of the observations. These scores are explained by the method used: the regionalization helps concentrate the information. While 0.5% of the grid points equals 43 points, these points, if properly distributed, represent 22% of the information about the regions (43 out of 191). These very encouraging results remain to be applied to real climate problems, that is, taking into account external and anthropogenic forcing on the one hand, and the uncertainties related to real proxy data on the other.
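A minimal sketch of the class-based filling idea described above ("every class containing at least one observed proxy is known"), assuming MiniSom as a stand-in SOM library; the SOM size, training budget and filling rule are illustrative assumptions, and the hierarchical classification and recurrent-network stages are omitted:

```python
import numpy as np
from minisom import MiniSom  # assumption: MiniSom as a stand-in SOM library

def som_reconstruct(train, test_obs, observed_mask, som_shape=(10, 10)):
    """Class-based reconstruction sketch.

    train         : (n_points, n_train) complete anomalies used to learn classes
    test_obs      : (n_points, n_test) float anomalies, valid where observed_mask
    observed_mask : (n_points,) True where a pseudo-proxy is available
    Unobserved grid points in a class take the mean of that class's
    observed members; classes with no proxy remain NaN (unknown).
    """
    som = MiniSom(*som_shape, train.shape[1], sigma=1.0,
                  learning_rate=0.5, random_seed=0)
    som.train_random(train, 5000)
    nodes = np.array([som.winner(s) for s in train])  # best matching unit
    recon = np.full_like(test_obs, np.nan)
    recon[observed_mask] = test_obs[observed_mask]
    for node in {tuple(n) for n in nodes}:
        members = np.all(nodes == np.array(node), axis=1)
        obs = members & observed_mask
        if obs.any():  # class is "known" if it holds >= 1 observed proxy
            recon[members & ~observed_mask] = test_obs[obs].mean(axis=0)
    return recon
```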
6

Topics in Sparse Inverse Problems and Electron Paramagnetic Resonance Imaging

Som, Subhojit 27 October 2010
No description available.
7

Sparse Methods for Model Estimation with Applications to Radar Imaging

Austin, Christian David 19 June 2012
No description available.
8

Compressed sensing along physically plausible sampling trajectories in MRI

Chauffert, Nicolas 28 September 2015
Magnetic Resonance Imaging (MRI) is a non-invasive and non-ionizing imaging technique that provides images of body tissues, using the contrast sensitivity coming from the magnetic parameters (T1, T2 and proton density). Data are acquired in k-space, which corresponds to the spatial Fourier frequencies of the image. Because of physical constraints, displacement in k-space is subject to kinematic constraints: the magnetic field gradients and their temporal derivatives are upper bounded. Hence, the scanning time increases with the targeted image resolution. Decreasing scanning time is crucial to improve patient comfort, decrease exam costs, limit image distortions (e.g., created by patient movement), and increase temporal resolution in functional MRI. Reducing scanning time can be addressed by Compressed Sensing (CS) theory, which guarantees the perfect recovery of an image from undersampled k-space data by assuming that the image is sparse in a wavelet basis. Unfortunately, CS theory cannot be directly cast to the MRI setting, for two reasons: (i) the acquisition (Fourier) and representation (wavelet) bases are coherent, and (ii) the sampling schemes obtained using CS theorems are composed of isolated measurements, which cannot be realistically implemented by magnetic field gradients; sampling is usually performed along continuous or fairly regular curves. Nevertheless, heuristic applications of CS in MRI have provided promising results. In this thesis, we aim to develop theoretical tools to apply CS to MRI and other modalities. On the one hand, we propose a variable density sampling theory to answer the first impediment: the more information a sample contains, the more likely it is to be drawn. On the other hand, we propose sampling schemes and design sampling trajectories that fulfill the acquisition constraints while traversing k-space with the sampling density advocated by the theory. This second point is complex and is thus addressed step by step. First, we propose continuous sampling schemes based on random walks and on the travelling salesman problem (TSP). Then, we propose a projection algorithm onto the space of constraints that returns the feasible curve closest to an input curve (e.g., a TSP solution). Finally, we provide an algorithm to project a measure onto a set of measures carried by parameterizations; in particular, if this set is the one carried by admissible curves, the algorithm returns a curve whose sampling density is close to the measure to be projected, yielding an admissible variable density sampler. The reconstruction results obtained in simulations using this strategy outperform those of classical acquisition trajectories (spiral, radial) by about 3 dB, and permit envisioning a future implementation on a real 7 Tesla scanner, notably in the context of high-resolution anatomical imaging.
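As a toy illustration of variable density sampling — the isolated-measurement starting point, before any continuous trajectory is fit through the drawn points — the sketch below draws k-space locations with probability decaying away from the center. The density law and sample budget are arbitrary assumptions, not the thesis's optimized densities:

```python
import numpy as np

def variable_density_mask(shape=(256, 256), n_samples=6000, decay=2.0, seed=0):
    """Draw isolated k-space locations under a variable density (sketch).

    Sampling probability decays polynomially with distance from the
    k-space center, so low frequencies (the most informative samples)
    are favored. The decay exponent and budget are illustrative.
    """
    rng = np.random.default_rng(seed)
    ky, kx = np.mgrid[:shape[0], :shape[1]]
    r = np.hypot(ky - shape[0] / 2, kx - shape[1] / 2)
    density = 1.0 / (1.0 + r) ** decay
    p = (density / density.sum()).ravel()
    idx = rng.choice(p.size, size=n_samples, replace=False, p=p)
    mask = np.zeros(shape, dtype=bool)
    mask.ravel()[idx] = True
    return mask  # a physically plausible curve would then be fit to these points

mask = variable_density_mask()
print(f"undersampling factor: {mask.size / mask.sum():.1f}x")
```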
9

TIME-FREQUENCY ANALYSIS TECHNIQUES FOR NON-STATIONARY SIGNALS USING SPARSITY

AMIN, VAISHALI, 0000-0003-0873-3981 January 2022
Non-stationary signals, particularly frequency modulated (FM) signals which are characterized by their time-varying instantaneous frequencies (IFs), are fundamental to radar, sonar, radio astronomy, biomedical applications, image processing, speech processing, and wireless communications. Time-frequency (TF) analyses of such signals provide a two-dimensional mapping of time-domain signals, and thus are regarded as the most preferred technique for the detection, parameter estimation, analysis and utilization of such signals. In practice, these signals are often received with compressed measurements as a result of missing samples, irregular sampling, or intentional under-sampling of the signals. These compressed measurements induce undesired noise-like artifacts in the TF representations (TFRs) of such signals. Compared to random missing data, burst missing samples present a more realistic, yet more challenging, scenario for signal detection and parameter estimation through robust TFRs. In this dissertation, we investigated the effects of burst missing samples on different joint-variable domain representations in detail. Conventional TFRs are not designed to deal with such compressed observations. On the other hand, the sparsity of such non-stationary signals in the TF domain facilitates the use of sparse reconstruction-based methods. The limitations of conventional TF approaches and the sparsity of non-stationary signals in the TF domain motivated us to develop effective TF analysis techniques that enable improved IF estimation of such signals with high resolution, mitigate the undesired effects of cross-terms and artifacts, and achieve highly concentrated robust TFRs, which is the goal of this dissertation. We developed several TF analysis techniques that achieve these objectives. The developed methods fall into three broad categories: iterative missing-data recovery, an adaptive local filtering-based TF approach, and signal stationarization-based approaches. In the first category, we recovered the missing data in the instantaneous auto-correlation function (IAF) domain in conjunction with signal-adaptive TF kernels that are adopted to mitigate undesired cross-terms and preserve desired auto-terms. In these approaches, we took advantage of the fact that such non-stationary signals become stationary in the IAF domain at each time instant. In the second category, we developed a novel adaptive local filtering-based TF approach that involves local peak detection and filtering of TFRs within a window of a specified length at each time instant. The threshold for each local TF segment is adapted based on the local maximum values of the signal within that segment. This approach offers low complexity and is particularly useful for multi-component signals with distinct amplitude levels. Finally, we developed knowledge-based TFRs based on signal stationarization and demonstrated the effectiveness of the proposed TF techniques in high-resolution Doppler analysis of multipath over-the-horizon radar (OTHR) signals. This is an effective technique that enables improved target parameter estimation in OTHR operations. However, due to the high proximity of these Doppler signatures in the TF domain, their separation poses a challenging problem. By utilizing signal self-stationarization and ensuring IF continuity, the developed approaches show excellent performance in handling multiple signal components with variations in their amplitude levels.
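The adaptive local filtering idea — thresholding each time instant of a TF representation against the local maximum within a surrounding window — might be sketched as follows. The window length and threshold fraction are illustrative assumptions, not the dissertation's tuned values:

```python
import numpy as np

def adaptive_local_filter(tfr, win=16, keep=0.3):
    """Adaptive local filtering of a TF representation (sketch).

    tfr : (n_freq, n_time) TF magnitude/complex map
    At each time instant, entries whose magnitude falls below a
    threshold adapted to the local maximum within a window of `win`
    time samples are zeroed, suppressing artifacts and weak cross-terms.
    """
    n_freq, n_time = tfr.shape
    out = np.zeros_like(tfr)
    for t in range(n_time):
        lo, hi = max(0, t - win // 2), min(n_time, t + win // 2 + 1)
        local_max = np.abs(tfr[:, lo:hi]).max()   # segment-adaptive reference
        col = tfr[:, t]
        out[:, t] = np.where(np.abs(col) >= keep * local_max, col, 0.0)
    return out
```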
10

Microphone arrays for acoustic field measurements.

Ribeiro, Flávio Protásio 23 January 2012
Acoustic imaging is a computationally intensive and ill-conditioned inverse problem, which involves estimating high resolution source distributions with large microphone arrays. The classical method for acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Convolutions can be avoided with covariance fitting methods, which have been known to produce robust high-resolution estimates. However, these have been avoided due to prohibitive computational costs. In this thesis, we assume a 2D separable array geometry, and develop fast transforms to accelerate acoustic imaging by several orders of magnitude with respect to previous methods. These transforms are very generic, and can be applied to accelerate beamforming, deconvolution algorithms and regularized least-squares solvers. Thus, one can obtain high-resolution images with state-of-the-art algorithms, while maintaining low computational cost. We show that separable arrays deliver accuracy competitive with multi-arm spiral geometries, while producing huge computational benefits. Finally, we show how to extend this approach with array calibration, a near-field propagation model and arbitrary focal surfaces, opening new and exciting possibilities for acoustic imaging.
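The computational advantage of a separable geometry can be illustrated with delay-and-sum beamforming: for a grid array the steering matrix is a Kronecker product, so a scan can be applied axis-by-axis with two small matrix products instead of one large one. The sketch below, with random matrices standing in for true steering vectors and arbitrary sizes, verifies the equivalence; it is a generic illustration, not the thesis's transforms:

```python
import numpy as np

nx, ny = 16, 16          # microphones per axis (illustrative)
gx, gy = 64, 64          # scan-grid points per axis (illustrative)
rng = np.random.default_rng(1)
Wx = rng.standard_normal((nx, gx)) + 1j * rng.standard_normal((nx, gx))
Wy = rng.standard_normal((ny, gy)) + 1j * rng.standard_normal((ny, gy))
s = rng.standard_normal(nx * ny) + 1j * rng.standard_normal(nx * ny)

# Direct beamforming: one (gx*gy, nx*ny) matrix applied to the snapshot.
W = np.kron(Wy, Wx)                        # full steering matrix
b_direct = W.conj().T @ s

# Separable (fast) version: two small products on the reshaped snapshot.
S = s.reshape(ny, nx).T                    # (nx, ny) array snapshot
B = Wx.conj().T @ S @ Wy.conj()            # (gx, gy) beamformed map
print(np.allclose(b_direct, B.T.ravel()))  # same result, far fewer flops
```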
