About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Panic Detection in Human Crowds using Sparse Coding

Kumar, Abhishek 21 August 2012 (has links)
Recently, the surveillance of human activities has drawn considerable attention from the research community, and camera-based surveillance is increasingly being assisted by computers. Cameras are used extensively to monitor human activities; however, placing cameras and transmitting visual data is not the end of a surveillance system. Surveillance must also detect abnormal or unwanted activities, which are very infrequent compared to regular activities. At present, surveillance is done manually: operators watch a set of surveillance video screens to discover abnormal events, which is expensive and prone to error. This limitation can be effectively removed by an automated anomaly detection system. With powerful computers, computer vision is seen as a panacea for surveillance. A computer-vision-aided anomaly detection system selects only those video frames that contain an anomaly, and only those frames are passed on for manual verification. A panic is a type of anomaly in a human crowd that appears when a group of people starts to move faster than usual. Such situations can arise from a frightening event near the crowd, such as a fight, robbery, or riot. A variety of computer vision algorithms have been developed to detect panic in human crowds; however, most of the proposed algorithms are computationally expensive and hence too slow to run in real time. Dictionary learning is a robust tool for modelling a behaviour as a linear combination of dictionary elements, and a few panic detection algorithms have achieved high accuracy with it; however, dictionary learning is computationally expensive. Orthogonal matching pursuit (OMP) is an inexpensive way to model a behaviour using dictionary elements, and in this research OMP is used to design a panic detection algorithm. The proposed algorithm has been tested on two datasets, and its results are comparable to state-of-the-art algorithms.
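As a rough illustration of how OMP-based sparse coding can be used to flag anomalous behaviour (not the thesis's actual pipeline: the dictionary, feature extraction, and threshold below are invented placeholders), the following Python sketch codes a feature vector over a dictionary of "normal" atoms and uses the reconstruction residual as a hypothetical panic score.

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: pick k atoms of dictionary D (unit-norm columns)
    that best explain y, returning the sparse code and the residual."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # correlation step: atom most aligned with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # least-squares fit on the selected atoms, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x, residual

# Toy setup: a dictionary of "normal motion" atoms (random here; in the
# thesis it would be built from normal crowd behaviour).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)

normal = D[:, rng.choice(256, 5, replace=False)] @ rng.standard_normal(5)
panic = rng.standard_normal(64)   # a feature the dictionary cannot explain well

for name, feat in [("normal", normal), ("panic-like", panic)]:
    _, res = omp(D, feat, k=5)
    score = np.linalg.norm(res) / np.linalg.norm(feat)   # hypothetical panic score
    print(f"{name:10s} relative residual = {score:.2f}")
```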
2

Signal reconstruction from incomplete and misplaced measurements

Sastry, Challa, Hennenfent, Gilles, Herrmann, Felix J. January 2007 (has links)
Constrained by practical and economical considerations, one often has to work with seismic data that have missing traces. Using such data results in image artifacts and poor spatial resolution. Sometimes, due to practical limitations, measurements are available on a perturbed grid instead of the designated grid. When algorithmic requirements force such measurements to be treated as if they lie on the designated grid, the recovery procedures may introduce additional artifacts. This paper interpolates incomplete data onto a regular grid via the Fourier domain, using a recently developed greedy algorithm. The basic objective is to study experimentally how large a perturbation of the measurement coordinates can be while still allowing the measurements on the perturbed grid to be treated as measurements on the designated grid for faithful recovery. Our experimental work shows that, for compressible signals, a uniformly distributed perturbation can be offset by a slightly larger number of measurements.
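As a rough sketch of this kind of experiment (not the paper's code or data: a real cosine dictionary stands in for the Fourier transform, and the perturbation size and problem dimensions are illustrative assumptions), the snippet below samples a spectrally sparse signal at slightly perturbed locations and recovers it with a greedy solver as if the samples sat on the designated grid.

```python
import numpy as np

def omp(A, y, k):
    """Minimal greedy solver: select k columns of A to explain y."""
    r, S, x = y.copy(), [], np.zeros(A.shape[1])
    for _ in range(k):
        S.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        r = y - A[:, S] @ coef
    x[S] = coef
    return x

rng = np.random.default_rng(1)
N, M, K = 256, 80, 5                      # grid size, measurements, sparsity
freqs = np.arange(N // 2)

def cosine_dict(t):
    # evaluate cosine atoms at (possibly non-integer) sample locations t
    return np.cos(2 * np.pi * np.outer(t, freqs) / N) / np.sqrt(N / 2)

c = np.zeros(N // 2)
c[rng.choice(N // 4, K, replace=False)] = rng.standard_normal(K)  # sparse, low-frequency spectrum

grid = np.sort(rng.choice(N, M, replace=False)).astype(float)     # designated grid
perturbed = grid + rng.uniform(-0.5, 0.5, M)                       # actual acquisition points

y = cosine_dict(perturbed) @ c            # measurements taken off-grid...
c_hat = omp(cosine_dict(grid), y, K)      # ...recovered as if they were on-grid

print("relative recovery error:",
      np.linalg.norm(c_hat - c) / np.linalg.norm(c))
```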
4

Compressed Sensing : Algorithms and Applications

Sundman, Dennis January 2012 (has links)
The theoretical problem of finding the solution to an underdetermined set of linear equations has for several years attracted considerable attention in the literature. This problem has many practical applications. One example of such an application is compressed sensing (CS), which has the potential to revolutionize how we acquire and process signals. In a general CS setup, few measurement coefficients are available and the task is to reconstruct a larger, sparse signal. In this thesis we focus on algorithm design and selected applications for CS. The contributions of the thesis appear in the following order: (1) We study an application where CS can be used to relax the necessity of fast sampling for power spectral density estimation problems. In this application we show by experimental evaluation that we can gain an order of magnitude in reduced sampling frequency. (2) In order to improve CS recovery performance, we extend simple well-known recovery algorithms by introducing a look-ahead concept. From simulations it is observed that the additional complexity results in significant improvements in recovery performance. (3) For sensor networks, we extend the current framework of CS by introducing a new general network model which is suitable for modeling several CS sensor nodes with correlated measurements. Using this signal model we then develop several centralized and distributed CS recovery algorithms. We find that both the centralized and distributed algorithms achieve a significant gain in recovery performance compared to the standard, disconnected, algorithms. For the distributed case, we also see that as the network connectivity increases, the performance rapidly converges to the performance of the centralized solution.
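A minimal, generic illustration of the CS setup the thesis builds on (a random measurement matrix and standard greedy recovery, not the look-ahead or distributed algorithms contributed by the thesis) can be put together with scikit-learn's stock OMP solver:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 400, 100, 10            # signal length, number of measurements, sparsity

# Sparse signal and a random Gaussian measurement matrix (y = A x, with m << n).
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# Standard greedy recovery; the thesis's look-ahead variant extends this step.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_

print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x)))
print("relative error   :", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```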
5

Compressed Sensing for Electronic Radio Frequency Receiver: Detection, Sensitivity, and Implementation

Lin, Ethan 02 May 2016 (has links)
No description available.
6

Sensing dictionary construction for orthogonal matching pursuit algorithm in compressive sensing

Li, Bo 10 1900 (has links)
In compressive sensing, the fundamental problem is to reconstruct a sparse signal from insufficient, non-adaptive linear measurements. Besides the sparse signal reconstruction algorithm, the measurement matrix, or measurement dictionary, plays an important part in sparse signal recovery. The Orthogonal Matching Pursuit (OMP) algorithm, which is widely used in compressive sensing, is especially affected by the measurement dictionary: a measurement dictionary with a small restricted isometry constant or small coherence improves the performance of OMP. Based on the measurement dictionary, a sensing dictionary can be constructed and incorporated into the OMP algorithm. In this thesis, two methods are proposed to design the sensing dictionary. In the first method, the sensing dictionary design problem is formulated as a linear programming problem. The solution is unique and can be obtained by standard linear programming methods such as the primal-dual interior point method. The major drawback of the linear programming based method is its high computational complexity. The second method is termed the sensing dictionary designing algorithm. In this algorithm, each atom of the sensing dictionary is designed independently to reduce the maximal magnitude of its inner products with the measurement dictionary. Compared with the linear programming based method, the proposed sensing dictionary designing algorithm has low computational complexity and similar performance. Simulation results indicate that both the linear programming based method and the proposed sensing dictionary designing algorithm can design sensing dictionaries with small mutual coherence and cumulative coherence. When the designed sensing dictionary is applied to the OMP algorithm, the performance of OMP improves. / Master of Science in Electrical and Computer Engineering (MSECE)
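Neither of the two proposed design methods is reproduced here; as a rough sketch of the quantities involved and of where a sensing dictionary enters the algorithm, the snippet below computes the mutual and cumulative coherence of a measurement dictionary and runs an OMP variant whose atom-selection step correlates the residual with a sensing dictionary S (passing S = A gives standard OMP; a designed S with smaller cross-coherence would be passed in its place).

```python
import numpy as np

def coherences(A, k):
    """Mutual coherence and cumulative coherence mu_1(k) of a unit-norm dictionary A."""
    G = np.abs(A.T @ A) - np.eye(A.shape[1])             # off-diagonal Gram magnitudes
    return G.max(), np.sort(G, axis=1)[:, -k:].sum(axis=1).max()

def omp_with_sensing(A, S, y, k):
    """OMP in which atom selection correlates the residual with the sensing
    dictionary S, while the least-squares fit still uses the measurement dictionary A."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(S.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 120, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                           # unit-norm measurement dictionary

mu, mu1 = coherences(A, k)
print(f"mutual coherence = {mu:.3f}, cumulative coherence mu_1({k}) = {mu1:.3f}")

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp_with_sensing(A, A, A @ x, k)                 # S = A reproduces standard OMP
print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x)))
```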
7

Identification And Localization On A Wireless Magnetic Sensor Network

Baghaee, Sajjad 01 June 2012 (has links) (PDF)
This study focused on using magnetic sensors for the localization and identification of targets with a wireless sensor network (WSN). A wireless sensor network with MICAz motes was set up using a centralized, tree-based system. The MTS310, which is equipped with a 2-axis magnetic sensor, was used as the sensor board on the MICAz motes. The use of magnetic sensors in wireless sensor networks is a topic that has gained limited attention in comparison to other sensors. Research has generally focused on the detection of large ferromagnetic targets (e.g., cars and airplanes). Moreover, the changes in the magnetic field intensity measured by the sensor have been used to obtain only simple information, such as the target direction or whether or not the target has passed a certain point. This work aims at understanding the sensing limitations of magnetic sensors by considering small-scale targets moving within a 30 cm radius. Four heavy iron bars were used as test targets. Target detection, identification and sequential localization were accomplished using the Minimum Euclidean Distance (MED) method, and the results show the accuracy of this method for the task. Different forms of discretization of the sensor's sensing region were considered. Target identification was performed on the boundaries of the sensing regions; different gateways were selected as entrance points for identification and their results were compared with each other. An online identification and localization system (ILS) was implemented and continuous movements of the ferromagnetic objects were monitored. The undesirable factors that affect the measurements are discussed, and techniques to reduce or eliminate faulty measurements are presented. A magnetic sensor orientation detector and a set/reset strap were designed and fabricated. The Orthogonal Matching Pursuit (OMP) algorithm is proposed for the multiple-sensor, multiple-target case in ILS systems as future work. This study can then be used to design energy-efficient, intelligent magnetic sensor networks.
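As a small illustration of the Minimum Euclidean Distance step only (the sensing hardware, calibration, and actual signatures are not reproduced; the reference values below are invented placeholders), a MED classifier assigns a new 2-axis magnetic reading to the target whose stored reference signature is closest in Euclidean distance:

```python
import numpy as np

# Hypothetical reference signatures: mean (x, y) magnetic readings recorded for
# each of the four iron-bar targets at a given position during calibration.
references = {
    "bar_1": np.array([512.0, 488.0]),
    "bar_2": np.array([530.0, 470.0]),
    "bar_3": np.array([495.0, 505.0]),
    "bar_4": np.array([520.0, 520.0]),
}

def med_classify(reading, references):
    """Minimum Euclidean Distance: return the target whose reference
    signature is closest to the observed 2-axis reading."""
    return min(references, key=lambda t: np.linalg.norm(reading - references[t]))

print(med_classify(np.array([528.0, 472.0]), references))   # -> bar_2
```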
9

Intégration de connaissances a priori dans la reconstruction des signaux parcimonieux : Cas particulier de la spectroscopie RMN multidimensionnelle / Embedding prior knowledge in the reconstruction of sparse signals: Special case of multidimensional NMR spectroscopy

Merhej, Dany 10 February 2012 (has links)
The work of this thesis concerns the design of algorithmic tools for integrating prior knowledge into the reconstruction of sparse signals. The main purpose is to improve the reconstruction of these signals from a set of measurements well below what the famous Shannon-Nyquist theorem requires. In the first part we propose, in the context of the new theory of compressed sensing (CS), the NNOMP algorithm (Neural Network Orthogonal Matching Pursuit), a modified version of the OMP algorithm in which the correlation step is replaced by a properly trained neural network. The goal is to better reconstruct sparse signals with additional structure, i.e. signals belonging to a particular sparse signal model. For the experimental validation of NNOMP, three simulated models of sparse signals with additional structure were considered, as well as a practical application in an arrangement similar to single pixel imaging. In the second part, we propose a new under-sampling method for multidimensional NMR spectroscopy (including NMR spectroscopic imaging) when the spectra of the corresponding lower-dimensional acquisitions, e.g. one-dimensional, are intrinsically sparse. In this method, the whole process of data acquisition and reconstruction of the multidimensional spectra is modelled by a system of linear equations. A priori knowledge about the non-zero locations in the multidimensional spectra is then used to remove the underdetermination induced by the data under-sampling. This a priori knowledge is obtained from the spectra of the lower-dimensional acquisitions, e.g. one-dimensional. The sparser these one-dimensional spectra are, the larger the possible under-sampling. The proposed method is evaluated on synthetic data and on experimental in vitro and in vivo data.
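A minimal numerical sketch of the second idea (illustrative sizes and a random stand-in for the acquisition operator, not the thesis's NMR model): once the non-zero locations of the multidimensional spectrum are known from the lower-dimensional acquisitions, the under-sampled linear system is solved by least squares restricted to those columns.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 512, 96, 12                         # spectrum size, measurements, non-zeros

support = rng.choice(n, k, replace=False)     # assumed known from the 1-D spectra
spectrum = np.zeros(n)
spectrum[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in for the acquisition operator
y = A @ spectrum                              # under-sampled measurements (m << n)

# Restricting the system to the known support makes it overdetermined again,
# so an ordinary least-squares solve recovers the spectrum.
coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = coef

print("relative error:", np.linalg.norm(x_hat - spectrum) / np.linalg.norm(spectrum))
```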
10

Contributions à l'étude de détection des bandes libres dans le contexte de la radio intelligente / Contributions to the study of free-band detection in the context of cognitive radio

Khalaf, Ziad 08 February 2013 (has links) (PDF)
Wireless communication systems keep multiplying and have become indispensable today. This growth increases the demand for spectral resources, which have become increasingly scarce. To address this frequency-shortage problem, Joseph Mitola III introduced, in 2000, the idea of dynamic spectrum allocation and thereby defined the term "Cognitive Radio", which is widely expected to be the next big bang in future wireless communications [1]. This work addresses the spectrum sensing problem, i.e. detecting the presence of primary users in a licensed band, in the context of cognitive radio. The objective is to propose efficient detection methods with low complexity and/or short observation time, using as little prior information about the signal to be detected as possible. The first part deals with the detection of a random signal in noise. Two main detection methods are used: energy detection (the radiometer) and cyclostationary detection. In our context these methods are complementary rather than competing. We propose a hybrid free-band detection architecture that combines the simplicity of the radiometer with the robustness of cyclostationary detectors, and two detection methods based on this same architecture. Thanks to the adaptive nature of the architecture, the detection evolves over time toward the complexity of the energy detector while achieving performance close to that of the cyclostationary detector or the radiometer, depending on the method used and the operating environment. In a second part, we exploit the sparsity of the Cyclic Autocorrelation Function (CAF) to propose a new blind estimator, based on compressed sensing, of the Cyclic Autocorrelation Vector (CAV), which is the vector of the cyclic autocorrelation function at a fixed lag. Simulations show that this new estimator outperforms the classical, non-blind estimator under the same conditions and with the same number of samples. Using the proposed estimator, we propose two blind detectors that require fewer samples than the second-order temporal detector of [2], which relies on the classical CAF estimator. The first detector exploits only the sparsity of the CAV, while the second also exploits the symmetry of the CAV, which allows it to achieve better performance. Besides being blind, both detectors outperform the non-blind detector of [2] when the number of samples is small.
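As a small illustration of the sparsity that the proposed estimator exploits (a crude BPSK-like signal and the classical FFT-based estimate, not the thesis's compressed-sensing blind estimator or hybrid detector), the cyclic autocorrelation vector at a fixed lag is the DFT of the lag-product sequence, and for such a signal it concentrates in a few cyclic frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)
N, f0, snr = 4096, 200 / 4096, 1.0            # samples, carrier frequency, linear SNR

symbols = np.repeat(rng.choice([-1.0, 1.0], N // 8), 8)     # BPSK-like baseband
x = symbols * np.cos(2 * np.pi * f0 * np.arange(N))         # modulated carrier
x += rng.standard_normal(N) * np.sqrt(x.var() / snr)        # additive noise

# Classical estimate of the cyclic autocorrelation vector at lag tau = 0:
# the DFT of the lag product z[n] = x[n] * x[n].
z = x * x
cav = np.fft.fft(z) / N

top = np.argsort(np.abs(cav))[-3:]            # dominant cyclic frequencies
print("dominant cyclic frequencies:", sorted(float(f) for f in np.fft.fftfreq(N)[top]))
print("expected                   :", [-2 * f0, 0.0, 2 * f0])
```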
