191

Digitally-Assisted Mixed-Signal Wideband Compressive Sensing

Yu, Zhuizhuan 2011 May (has links)
Digitizing wideband signals places very demanding speed and resolution requirements on analog-to-digital converters (ADCs). In this dissertation, a mixed-signal parallel compressive sensing system is proposed to sense wideband sparse signals at a sub-Nyquist rate by exploiting signal sparsity. The mixed-signal compressive sensing is realized with a parallel segmented compressive sensing (PSCS) front-end, which not only filters out the harmonic spurs that leak from the local random generator, but also provides a tradeoff between sampling rate and system complexity that makes a practical hardware implementation possible. Moreover, the signal randomization in the system spreads the spurious energy due to ADC nonlinearity across the signal bandwidth rather than concentrating it at a few frequencies, as is the case for a conventional ADC. This important new property relaxes the ADC SFDR requirement when sensing frequency-domain sparse signals. The performance of the mixed-signal compressive sensing system is strongly affected by the accuracy of its analog circuit components, especially with the scaling of CMOS technology. In this dissertation, the effects of circuit imperfections in the PSCS-based system, such as finite settling time and timing uncertainty, are investigated in detail. An iterative background calibration algorithm based on least mean squares (LMS) is proposed and shown to effectively correct the errors caused by these nonideal circuit factors. A low-speed prototype built with off-the-shelf components is presented; it senses sparse analog signals with up to 4 percent sparsity at 32 percent of the Nyquist rate. Many practical constraints that arose while building the prototype, such as circuit nonidealities, are addressed in detail, providing useful insights for a future high-frequency integrated circuit implementation. Based on these results, a high-frequency sub-Nyquist-rate receiver exploiting parallel compressive sensing was designed and fabricated in IBM 90 nm CMOS technology, and measurement results demonstrate wideband compressive sensing at a sub-Nyquist rate. To the best of our knowledge, this prototype is the first reported integrated chip for wideband mixed-signal compressive sensing. In simulation, assuming a state-of-the-art 0.5 ps jitter variance, the prototype achieves 7-bit ENOB at a 3 GS/s equivalent sampling rate, a figure of merit (FOM) 2-3 times better than that of state-of-the-art high-speed Nyquist ADCs. The proposed mixed-signal compressive sensing system can be applied in various fields; in particular, its applications to wideband spectrum sensing for cognitive radios and to spectrum analysis in RF testing are discussed in this work.
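The following sketch illustrates the kind of sub-Nyquist acquisition the abstract describes: a frequency-sparse signal is chipped by a pseudorandom ±1 sequence, integrated into a handful of measurements, and recovered with orthogonal matching pursuit. It is a minimal single-channel illustration in Python; the actual PSCS front-end, its parallel segmentation, and the LMS background calibration in the dissertation are considerably more involved, and all parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256                    # Nyquist-rate samples in one observation window
K = 4                      # number of active tones (sparsity)
M = 64                     # sub-Nyquist measurements (25% of the Nyquist rate)

# Frequency-sparse test signal: K random tones on the DFT grid
F = np.fft.ifft(np.eye(N)) * np.sqrt(N)        # unit-norm inverse-DFT dictionary
support = rng.choice(N, K, replace=False)
x_freq = np.zeros(N, dtype=complex)
x_freq[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x_time = F @ x_freq

# Random-demodulator-style front end: chip with a +/-1 sequence, then integrate & dump
chips = rng.choice([-1.0, 1.0], N)
L = N // M
Phi = np.zeros((M, N))
for m in range(M):
    Phi[m, m * L:(m + 1) * L] = chips[m * L:(m + 1) * L]

y = Phi @ x_time                               # sub-Nyquist measurements
A = Phi @ F                                    # effective sensing matrix in the sparse domain

# Orthogonal matching pursuit recovery of the K active tones
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(A.conj().T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ coef

x_hat = np.zeros(N, dtype=complex)
x_hat[idx] = coef
print("recovered support:", sorted(idx), "true support:", sorted(support))
```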
192

Robust Extraction Of Sparse 3d Points From Image Sequences

Vural, Elif 01 September 2008 (has links) (PDF)
In this thesis, the extraction of sparse 3D points from calibrated image sequences is studied. The presented method for sparse 3D reconstruction is examined in two steps, where the first part addresses the problem of two-view reconstruction, and the second part extends the two-view reconstruction algorithm to multiple views. The examined two-view reconstruction method consists of basic building blocks such as feature detection and matching, epipolar geometry estimation, and the reconstruction of cameras and scene structure. Feature detection and matching are achieved with the Scale Invariant Feature Transform (SIFT). For the estimation of epipolar geometry, the 7-point and 8-point algorithms are examined for Fundamental matrix (F-matrix) computation, while RANSAC and PROSAC are utilized for robust and accurate model estimation. In the final stage of two-view reconstruction, the camera projection matrices are computed from the F-matrix, and the locations of 3D scene points are estimated by triangulation, hence determining the scene structure and cameras up to a projective transformation. The extension of the two-view reconstruction to multiple views is achieved by estimating the camera projection matrix of each additional view from the already reconstructed matches, and then adding new points to the scene structure by triangulating the unreconstructed matches. Finally, the reconstruction is upgraded from projective to metric by a rectifying homography computed from the camera calibration information. In order to obtain a refined reconstruction, two different methods are suggested for the removal of erroneous points from the scene structure. In addition to the examination of the solution to the reconstruction problem, experiments have been conducted that compare the performances of competing algorithms used in various stages of reconstruction. In connection with sparse reconstruction, a rate-distortion-efficient piecewise planar scene representation algorithm that generates mesh models of scenes from reconstructed point clouds is examined, and its performance is evaluated through experiments.
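A minimal sketch of the two-view pipeline the abstract describes, written with OpenCV: SIFT matching, RANSAC-based F-matrix estimation, canonical projective cameras from F, and triangulation. The image file names and thresholds are placeholder assumptions; the thesis additionally examines the 7-point algorithm, PROSAC, the multi-view extension, and the metric upgrade, which are not shown here.

```python
import cv2
import numpy as np

# Hypothetical input images; any calibrated image pair would do.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# 1. SIFT feature detection and matching with Lowe's ratio test
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Fundamental matrix with RANSAC (the thesis also examines PROSAC)
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
pts1, pts2 = pts1[inlier_mask.ravel() == 1], pts2[inlier_mask.ravel() == 1]

# 3. Canonical projective cameras from F: P1 = [I|0], P2 = [[e']x F | e']
_, _, Vt = np.linalg.svd(F.T)
e2 = Vt[-1]                                   # epipole in the second view (null vector of F^T)
e2_x = np.array([[0, -e2[2], e2[1]],
                 [e2[2], 0, -e2[0]],
                 [-e2[1], e2[0], 0]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([e2_x @ F, e2.reshape(3, 1)])

# 4. Triangulate the inlier matches (projective reconstruction)
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
X = (X_h[:3] / X_h[3]).T                      # homogeneous -> 3D points
print(X.shape[0], "sparse 3D points reconstructed up to a projective transformation")
```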
193

Solution Of Sparse Systems On Gpu Architecture

Lulec, Andac 01 June 2011 (has links) (PDF)
The solution of linear systems of equations is one of the core aspects of Finite Element Analysis (FEA) software. Since a large number of arithmetic operations is required to solve the system obtained from FEA, the linear solver has a very significant influence on the performance of the software. In recent years, the increasing demand for performance in the game industry has driven significant improvements in the performance of Graphics Processing Units (GPUs). With their massive floating-point capability, GPUs have become attractive sources of performance for general-purpose programmers. For this reason, GPUs are chosen as the target hardware for developing an efficient parallel direct solver for the linear equations obtained from FEA.
194

Implementation of the locally competitive algorithm on a field programmable analog array

Balavoine, Aurèle 17 November 2009 (has links)
Sparse approximation is an important class of optimization problems in signal and image processing applications. This thesis presents an analog solution to this problem based on the Locally Competitive Algorithm (LCA). A Hopfield-network-like analog system, operating on sub-threshold currents, is proposed as a solution. The results of implementing the circuit components on the RASP2.8a chip, a Field Programmable Analog Array, are presented.
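A discrete-time sketch of the LCA dynamics underlying such an analog implementation may help: each node integrates a feed-forward drive, is inhibited by the other active nodes, and applies a soft-threshold activation. This is a simple Euler simulation in Python with assumed parameter values, not the sub-threshold FPAA circuit described in the thesis.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding activation used by each LCA node."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(y, Phi, lam=0.1, tau=0.01, dt=1e-3, steps=2000):
    """Euler-integrated Locally Competitive Algorithm dynamics.

    Each node integrates its driving input and is inhibited by the
    currently active nodes, mimicking Hopfield-network-like dynamics."""
    drive = Phi.T @ y                          # feed-forward input to each node
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])                 # internal (membrane-like) states
    for _ in range(steps):
        a = soft_threshold(u, lam)             # sparse output coefficients
        u += (dt / tau) * (drive - u - G @ a)
    return soft_threshold(u, lam)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)             # unit-norm dictionary elements
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = 1.0
y = Phi @ a_true
a_hat = lca(y, Phi)
print("nonzero coefficients recovered at indices:", np.flatnonzero(a_hat))
```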
195

Multi-level solver for degenerated problems with applications to p-versions of the FEM

Beuchler, Sven 18 July 2003 (has links) (PDF)
Dissertation on the efficient preconditioning of linear systems of equations arising from the discretization of a second-order elliptic boundary value problem by the finite element method. Multi-level preconditioners (BPX, multigrid, wavelets) are used as preconditioners.
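As a point of reference, a preconditioned conjugate gradient iteration on a model elliptic system looks as follows; a simple Jacobi (diagonal) preconditioner stands in for the multi-level (BPX, multigrid, wavelet) preconditioners studied in the dissertation, and the 1D Poisson matrix is only a stand-in for the actual finite element discretization.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

# 1D Poisson stiffness matrix as a stand-in for an elliptic FE system
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: apply the inverse of the diagonal of A
M = LinearOperator((n, n), matvec=lambda r: r / A.diagonal())

x, info = cg(A, b, M=M)
print("cg exit flag:", info, "residual norm:", np.linalg.norm(A @ x - b))
```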
196

Supervised feature learning via sparse coding for music information retrieval

O'Brien, Cian John 08 June 2015 (has links)
This thesis explores the ideas of feature learning and sparse coding for Music Information Retrieval (MIR). Sparse coding is an algorithm which aims to learn new feature representations from data automatically. In contrast to previous work using sparse coding in an MIR context, the concept of supervised sparse coding, which makes explicit use of ground-truth labels during the learning process, is also investigated. Here, sparse coding and supervised coding are applied to two MIR problems: classification of musical genre and recognition of the emotional content of music. A variation of Label Consistent K-SVD is used to add supervision during the dictionary learning process. In the case of Music Genre Recognition (MGR), an additional discriminative term is added to encourage tracks from the same genre to have similar sparse codes. For Music Emotion Recognition (MER), a linear regression term is added to learn an optimal classifier and dictionary pair. The results indicate that while sparse coding performs well for MGR, the additional supervision fails to improve the performance. In the case of MER, supervised coding significantly outperforms both standard sparse coding and commonly used designed features, namely MFCC and pitch chroma.
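The unsupervised baseline can be sketched roughly as follows: learn a dictionary from frame-level audio features, sparse-code each frame, and feed the codes to a classifier. The features and labels below are synthetic stand-ins, and the supervised Label Consistent K-SVD variants with discriminative and regression terms described in the thesis are not shown.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import LinearSVC

# Stand-in for frame-level audio features (e.g. MFCC vectors); real MIR
# experiments would extract these from labelled music tracks.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))            # 500 frames, 20-dim features
y = rng.integers(0, 2, 500)                   # hypothetical genre labels

# Unsupervised dictionary learning + sparse coding of each frame
dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit_transform(X)                 # sparse codes used as features

clf = LinearSVC().fit(codes, y)               # downstream genre classifier
print("training accuracy:", clf.score(codes, y))
```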
197

Magneto-hydrodynamics simulation study of high density thermal plasmas in plasma acceleration devices

Sitaraman, Hariswaran 17 October 2013 (has links)
The development of a magnetohydrodynamics (MHD) numerical tool to study high-density thermal plasmas in plasma acceleration devices is presented. The MHD governing equations comprise eight conservation equations for the evolution of density, momentum, energy and induced magnetic fields in a plasma. A matrix-free implicit method is developed to solve these conservation equations within the framework of an unstructured-grid finite volume formulation. The analytic form of the convective flux Jacobian is derived for general unstructured grids. A Lower-Upper Symmetric Gauss-Seidel (LU-SGS) technique is developed as part of the implicit scheme. A coloring-based algorithm for parallelizing this technique is also presented, and its computational efficiency is compared with a global matrix-solve technique that uses the GMRES (Generalized Minimum Residual) algorithm available in the PETSc (Portable, Extensible Toolkit for Scientific Computation) libraries. The verification cases used for this study are the MHD shock tube problem in one, two and three dimensions, the oblique shock, and the Hartmann flow problem. The matrix-free method is comparatively faster and shows excellent scaling on multiple cores compared to the global matrix-solve technique. The numerical model was thus verified against the above-mentioned standard test cases, and two application problems were studied: the simulation of the plasma deflagration phenomenon in a coaxial plasma accelerator, and a novel high-speed flow control device called the Rail Plasma Actuator (RailPAc). Experimental studies on coaxial plasma accelerators have revealed two different modes of operation based on the delay between gas loading and discharge ignition. Longer delays lead to the detonation or snowplow mode, while shorter delays lead to the relatively efficient stationary or deflagration mode. One of the theories that explains the two different modes is based on plasma resistivity. A numerical modeling study is presented here in the context of a coaxial plasma accelerator, and the effect of plasma resistivity is treated in detail. The simulated axial distribution of radial currents is compared with experimental measurements, and the two show good agreement. The simulations show that magnetic field diffusion is dominant at lower conductivities, which tends to form a stationary region of high current density close to the inlet end of the device. Higher conductivities lead to the formation of propagating current-sheet-like features due to greater convection of the magnetic field. This study also validates the theory behind the two modes of operation based on plasma resistivity. The RailPAc is a novel flow control device that uses magnetic Lorentz forces for fluid flow actuation at atmospheric pressure. Experimental studies reveal that actuation velocities of roughly 10-100 m/s can be achieved with this device, much larger than those of conventional electrohydrodynamic (EHD) force-based plasma actuators. A magnetohydrodynamics simulation study of this device is presented. The model is further developed to incorporate the applied electric and magnetic fields seen in this device. The snowplow model, typically used for studying pulsed plasma thrusters, is used to predict the arc velocities, which agree well with experimental measurements. Two-dimensional simulations were performed to study the effect of Lorentz forcing and heating on fluid flow actuation. Actuation on the order of 100 m/s is attained at the head of the current sheet due to the effect of Lorentz forcing alone. The inclusion of heating effects leads to isotropic blast-wave-like actuation, which is detrimental to the performance of RailPAc. This study also revealed the deficiencies of a single-fluid model, and a more accurate multi-fluid approach is proposed for future work.
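For readers unfamiliar with the relaxation at the core of LU-SGS, the following toy Python routine shows a symmetric (forward/backward) Gauss-Seidel sweep on a small dense system; the thesis applies the same forward/backward sweep idea matrix-free to the implicit MHD system on unstructured grids, which this sketch does not attempt to reproduce.

```python
import numpy as np

def sgs_solve(A, b, sweeps=50):
    """Symmetric Gauss-Seidel relaxation for A x = b (dense toy version)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):                     # forward (lower-triangular) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        for i in reversed(range(n)):           # backward (upper-triangular) sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Diagonally dominant test system so the sweeps converge
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)
b = rng.standard_normal(50)
x = sgs_solve(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```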
198

Reconstruction of high-resolution thermal images from low-resolution images using compressed sensing techniques / Thermal image super resolution via compressed sensing

Ροντογιάννης, Επαμεινώνδας 10 June 2015 (has links)
This thesis deals with the problem of resolution enhancement (super resolution) of thermal images using compressed sensing methods. We solve the super resolution problem in stages. First, we seek a sparse representation of a low-resolution image with respect to two statistically learned overcomplete dictionaries (for high- and low-resolution images, respectively), and then we use the coefficients of this representation to calculate the high-resolution image. We then calculate the high-resolution image using methods that require multiple low-resolution images aligned with subpixel accuracy (the conventional approach), and we compare the results of each method using widely accepted metrics of reconstruction quality.
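A minimal sketch of the coupled-dictionary step: a low-resolution patch is sparse-coded against the low-resolution dictionary, and the same coefficients are applied to the high-resolution dictionary. The dictionaries below are random placeholders standing in for the statistically learned pair described in the abstract, and the patch sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Hypothetical pre-trained coupled dictionaries: column k of D_low and D_high
# are a paired low-resolution / high-resolution patch atom (learned offline).
rng = np.random.default_rng(0)
n_atoms = 256
D_high = rng.standard_normal((64, n_atoms))            # 8x8 high-res patch atoms
D_high /= np.linalg.norm(D_high, axis=0)
D_low = rng.standard_normal((16, n_atoms))             # 4x4 low-res patch atoms
D_low /= np.linalg.norm(D_low, axis=0)

def upscale_patch(patch_low, n_nonzero=5):
    """Sparse-code a low-res patch in D_low, reuse the coefficients in D_high."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D_low, patch_low)
    return D_high @ omp.coef_

patch_low = rng.standard_normal(16)                    # stand-in thermal patch
patch_high = upscale_patch(patch_low)
print(patch_high.shape)                                # (64,) reconstructed high-res patch
```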
199

Sparse coding for machine learning, image processing and computer vision

Mairal, Julien 30 November 2010 (has links) (PDF)
We study in this thesis a particular machine learning approach to representing signals that consists of modelling data as linear combinations of a few elements from a learned dictionary. It can be viewed as an extension of the classical wavelet framework, whose goal is to design such dictionaries (often orthonormal bases) that are adapted to natural signals. An important success of dictionary learning methods has been their ability to model natural image patches and the image denoising performance this has yielded. We address several open questions related to this framework: How can the dictionary be optimized efficiently? How can the model be enriched by adding structure to the dictionary? Can current image processing tools based on this method be further improved? How should one learn the dictionary when it is used for a task other than signal reconstruction? How can it be used for solving computer vision problems? We answer these questions with a multidisciplinary approach, using tools from statistical machine learning, convex and stochastic optimization, image and signal processing, computer vision, and optimization on graphs.
200

Classification models for high-dimensional data with sparsity patterns

Tillander, Annika January 2013 (has links)
Today's high-throughput data collection devices, e.g. spectrometers and gene chips, create information in abundance. However, this poses serious statistical challenges, as the number of features is usually much larger than the number of observed units. Further, in this high-dimensional setting, only a small fraction of the features are likely to be informative for any specific project. In this thesis, three different approaches to two-class supervised classification in this high-dimensional, low-sample setting are considered. Some classifiers are known to mitigate the issues of high dimensionality, e.g. distance-based classifiers such as Naive Bayes. However, these classifiers are often computationally intensive and run faster on discrete data; hence, continuous features are often transformed into discrete features. In the first paper, a discretization algorithm suitable for high-dimensional data is suggested and compared with other discretization approaches, and the effect of discretization on misclassification probability in the high-dimensional setting is evaluated. Linear classifiers are more stable, which motivates adjusting the linear discriminant procedure to the high-dimensional setting. In the second paper, a two-stage estimation procedure for the inverse covariance matrix, applying Lasso-based regularization and Cuthill-McKee ordering, is suggested. The estimation gives a block-diagonal approximation of the covariance matrix, which in turn leads to an additive classifier. In the third paper, an asymptotic framework that represents sparse and weak block models is derived and a technique for block-wise feature selection is proposed. Probabilistic classifiers have the advantage of providing the probability of membership in each class for new observations rather than simply assigning them to a class. In the fourth paper, a method is developed for constructing a Bayesian predictive classifier. Given the block-diagonal covariance matrix, the resulting Bayesian predictive and marginal classifier provides an efficient solution to the high-dimensional problem by splitting it into smaller tractable problems. The relevance and benefits of the proposed methods are illustrated using both simulated and real data. / With today's technology, for example spectrometers and gene chips, data are generated in large quantities. This abundance of data is not only an advantage but also causes certain problems; usually the number of variables (p) is considerably larger than the number of observations (n). This yields so-called high-dimensional data, which requires new statistical methods, since the traditional methods were developed for the opposite situation (p < n). Moreover, usually very few of all these variables are relevant for any given project, and the strength of the information in the relevant variables is often weak. This type of data is therefore often referred to as sparse and weak, and identifying the relevant variables is usually likened to finding a needle in a haystack. This thesis addresses three different ways to classify this type of high-dimensional data, where classifying means that, given access to a dataset with both explanatory variables and an outcome variable, a function or algorithm is taught to predict the outcome variable based only on the explanatory variables. The type of real data used in the thesis is microarrays: cell samples that show the activity of the genes in the cell. The goal of the classification is to use the variation in activity of the thousands of genes (the explanatory variables) to determine whether the cell sample comes from cancer tissue or normal tissue (the outcome variable). There are classification methods that can handle high-dimensional data, but these are often computationally intensive and therefore often work better for discrete data. By transforming continuous variables into discrete ones (discretization), the computation time can be reduced and the classification made more efficient. The thesis studies how discretization affects the prediction accuracy of the classification, and a very efficient discretization method for high-dimensional data is proposed. Linear classification methods have the advantage of being stable. Their drawback is that they require an invertible covariance matrix, which the covariance matrix is not for high-dimensional data. The thesis proposes a way to estimate the inverse of sparse covariance matrices with a block-diagonal matrix. This matrix also has the advantage of leading to additive classification, which makes it possible to select whole blocks of relevant variables, and the thesis presents a method for identifying and selecting these blocks. There are also probabilistic classification methods, which have the advantage of giving the probability of belonging to each of the possible outcomes for an observation, unlike most other classification methods, which only predict the outcome. The thesis proposes such a Bayesian method, given the block-diagonal matrix and normally distributed outcome classes. The relevance and benefits of the methods proposed in the thesis are demonstrated by applying them to simulated and real high-dimensional data.
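The idea in the second paper can be sketched roughly as follows: estimate a Lasso-regularized (sparse) inverse covariance matrix and reorder it with (reverse) Cuthill-McKee so that a block-diagonal structure becomes apparent. The synthetic data and regularization strength below are illustrative assumptions, not the actual procedure or tuning used in the thesis.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee
from sklearn.covariance import GraphicalLasso

# Synthetic high-dimensional-style data (in practice p >> n, e.g. microarrays)
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 60))             # n = 40 observations, p = 60 features

# Lasso-regularised (sparse) estimate of the inverse covariance matrix
model = GraphicalLasso(alpha=0.5).fit(X)
precision = model.precision_

# Reverse Cuthill-McKee ordering concentrates the nonzeros near the diagonal,
# suggesting a block-diagonal approximation as in the thesis.
pattern = csr_matrix((np.abs(precision) > 1e-8).astype(int))
order = reverse_cuthill_mckee(pattern, symmetric_mode=True)
reordered = precision[np.ix_(order, order)]

i, j = np.nonzero(np.abs(reordered) > 1e-8)
print("nonzero entries:", pattern.nnz, "bandwidth after reordering:", np.max(np.abs(i - j)))
```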
