11. Object detection for signal separation with different time-frequency representations. Strydom, Llewellyn, January 2021.
The task of detecting and separating multiple radio-frequency signals in a wideband scenario has attracted much interest recently, especially from the cognitive radio community. Many successful approaches in this field have been based on machine learning and computer vision methods using the wideband signal spectrogram as an input feature. YOLO and R-CNN are deep learning-based object detection algorithms that have been used to obtain state-of-the-art results
on several computer vision benchmark tests. In this work, YOLOv2 and Faster R-CNN are implemented, trained and tested to solve the signal separation task. Previous signal separation research has not considered representations other than the spectrogram; here, specific focus is placed on investigating different time-frequency representations based on the short-time Fourier transform. Results are presented in terms of traditional object detection metrics, with Faster R-CNN and YOLOv2 achieving mean average precision scores of up to 89.3% and 88.8% respectively. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2017. / Saab Grintek Defence / University of Pretoria / Electrical, Electronic and Computer Engineering / MEng (Computer Engineering) / Unrestricted
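As a rough illustration of the input feature this abstract describes (not taken from the dissertation itself), the sketch below computes an STFT-based spectrogram of a synthetic wideband capture containing two narrowband bursts; the sample rate, burst frequencies and dB scaling are assumptions, and the detectors (YOLOv2, Faster R-CNN) are not reproduced.

```python
import numpy as np
from scipy.signal import stft

fs = 1_000_000                       # sample rate in Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)       # 50 ms capture
rng = np.random.default_rng(0)

# Two narrowband bursts at different times and frequencies plus noise,
# standing in for the multiple RF signals to be detected and separated.
x = rng.normal(scale=0.1, size=t.size)
x[5_000:20_000] += np.sin(2 * np.pi * 120_000 * t[5_000:20_000])
x[25_000:45_000] += 0.7 * np.sin(2 * np.pi * 310_000 * t[25_000:45_000])

# Short-time Fourier transform; its magnitude in dB is the image-like
# feature an object detector such as YOLOv2 or Faster R-CNN would consume.
f, tau, Z = stft(x, fs=fs, nperseg=256, noverlap=128)
spectrogram_db = 20 * np.log10(np.abs(Z) + 1e-12)
print(spectrogram_db.shape)          # (frequency bins, time frames)
```

Each burst appears as a rectangular region in the time-frequency image, which is what makes bounding-box detection a natural fit for the separation task.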
12. Why only two ears? Some indicators from the study of source separation using two sensors. Joseph, Joby, 08 1900.
In this thesis we develop algorithms for estimating broadband source signals from a mixture using only two sensors. This is motivated by what is known in the literature as the cocktail party effect: the ability of human beings to attend to a desired source within a mixture of sources using at most two ears. Such a study lets us achieve a better understanding of the auditory pathway in the brain and confirm results from physiology and psychoacoustics; search for an equivalent structure in the brain corresponding to any modification that improves the algorithm; build a benchmark system to automate the evaluation of systems such as 'surround sound'; and perform speech recognition in noisy environments. Moreover, it is possible that what we learn about replicating these functional units of the brain may help us replace them with signal processing units for patients suffering from defects in those units.
There are two parts to the thesis. In the first part we assume the source signals to be broadband with strong spectral overlap, and the channel to have a few strong multipaths. We propose an algorithm to estimate all the strong multipath components from each source to the sensors for more than two sources, using measurements from only two sensors. Because the channel matrix is not invertible when the number of sources exceeds the number of sensors, we use the estimated multipath delays for each source to improve the SIR of the sources. In the second part we look at a specific scenario of colored signals and a channel with a prominent direct path. Speech sources in a weakly reverberant room, observed by a pair of microphones, satisfy these conditions. We consider the cases with and without a head-like structure between the microphones; the head-like structure we used was a cubical block of wood. We propose an algorithm for separating sources under such a scenario, and we identify the features of speech and of the channel that make it possible for the human auditory system to solve the cocktail party problem; these are the same properties satisfied by our model. The algorithm works well in a partly acoustically treated room (with three persons speaking, two microphones, and data acquired using a standard PC setup) and not so well in a heavily reverberant scenario.
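The thesis's own multipath estimator is not reproduced in this record; the minimal sketch below only illustrates the underlying idea of estimating a relative delay between two sensors, using GCC-PHAT on a synthetic broadband source. The sample rate, delay and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16_000                                    # sample rate in Hz (assumed)
s = rng.normal(size=fs)                        # 1 s broadband source surrogate

true_delay = 23                                # samples (assumed)
x1 = s + 0.05 * rng.normal(size=s.size)                        # sensor 1
x2 = np.roll(s, true_delay) + 0.05 * rng.normal(size=s.size)   # sensor 2

# Generalised cross-correlation with phase transform (GCC-PHAT): whitening
# the cross-spectrum sharpens the peak at the relative delay, which is the
# kind of quantity a two-sensor multipath estimator has to recover.
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = X2 * np.conj(X1)
gcc = np.fft.irfft(cross / (np.abs(cross) + 1e-12))

est = int(np.argmax(gcc))
if est > s.size // 2:                          # map wrapped lags to negative values
    est -= s.size
print(true_delay, est)                         # the two should agree
```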
We see that there are similarities between the processing steps of the algorithm and what we know of the way the auditory system works, especially in the regions of the auditory pathway before the auditory cortex. Based on the above experiments we give reasons supporting the hypothesis of why all known organisms need only two ears and not more, yet may benefit from more than two eyes. Our results also indicate that part of the pitch estimation for individual sources might occur in the brain after the individual source components have been separated, which might resolve the dilemma of having to perform multi-pitch estimation. Recent work suggests that there are parallel pathways in the brain up to the primary auditory cortex that deal with temporal-cue-based and spatial-cue-based processing; our model seems to mimic the pathway that makes use of the spatial cues.
13. Desconvolução não-supervisionada baseada em esparsidade / Unsupervised deconvolution based on sparsity. Fernandes, Tales Gouveia, January 2016.
Advisor: Prof. Dr. Ricardo Suyama / Dissertation (Master's) - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia da Informação, 2016. / This work analyses the problem of unsupervised signal deconvolution by exploiting the sparse nature of the signals involved. Unsupervised deconvolution resembles, in many respects, the problem of blind source separation, which consists essentially of estimating signals from observed versions that are mixtures of the original signals, referred to simply as sources. To perform unsupervised deconvolution it is necessary to exploit characteristics of the signals and/or the system to help solve the problem. One such characteristic, used in this work, is sparsity. Sparsity refers to signals and/or systems in which all the information is concentrated in a small number of values, which carry the actual information one wishes to analyse about the signal or the system. In this context, there are criteria that establish sufficient conditions on the signals and/or systems involved to guarantee their deconvolution, and the recovery algorithms exploit these criteria, based on the sparse nature of the signals and systems. The work then compares the convergence of the algorithms in a few specific scenarios, each defined by the signal and system used. Finally, the simulation results give a good picture of the behaviour of the different algorithms analysed and of their viability for the deconvolution of sparse signals.
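The algorithms compared in the dissertation are not reproduced in this record; the sketch below only illustrates one classical sparsity-driven deconvolution criterion, maximising the normalised kurtosis of the equalised output, under an assumed Bernoulli-Gaussian source and a short FIR channel, with a crude direct search standing in for a proper adaptive algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Sparse Bernoulli-Gaussian source and a short FIR channel (both assumed).
n = 20_000
s = (rng.random(n) < 0.05) * rng.normal(size=n)
h = np.array([1.0, 0.5, -0.2])
x = np.convolve(s, h)[:n]                      # observed convolved signal

def neg_norm_kurtosis(w):
    # Negative normalised kurtosis of the equaliser output; for heavy-tailed
    # (sparse) sources this is minimised when the combined channel-equaliser
    # response approaches a scaled, possibly delayed, unit impulse.
    y = np.convolve(x, w)[:n]
    return -np.mean(y ** 4) / np.mean(y ** 2) ** 2

w0 = np.zeros(8)
w0[0] = 1.0                                    # start from a pass-through filter
res = minimize(neg_norm_kurtosis, w0, method="Nelder-Mead",
               options={"maxiter": 4000, "xatol": 1e-6, "fatol": 1e-9})

combined = np.convolve(h, res.x)               # energy should concentrate in one tap
print(np.round(combined, 3))
```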
14. L'analyse probabiliste en composantes latentes et ses adaptations aux signaux musicaux : application à la transcription automatique de musique et à la séparation de sources / Probabilistic latent component analysis and its adaptation to musical signals: application to automatic music transcription and source separation. Fuentes, Benoît, 14 March 2013.
Automatic transcription of polyphonic music consists in automatically estimating the notes present in a recording through three of their attributes: onset time, duration and pitch. To address this problem, there is a class of methods based on modelling a signal as a sum of basic elements that carry symbolic information. Among these analysis techniques is probabilistic latent component analysis (PLCA). The purpose of this thesis is to propose variants and improvements of PLCA so that it can better adapt to musical signals and thus better address the transcription problem. A first line of approach is to put forward new signal models, in place of the model inherent to PLCA, expressive enough to adapt to musical notes whose fundamental frequency and spectral envelope both vary over time. A second aspect of the work is to provide tools that help the parameter-estimation algorithm converge towards meaningful solutions, through the incorporation of prior knowledge about the signals to be analysed as well as a new dynamic model. All the devised algorithms are applied to the task of automatic transcription. They can also be used directly for source separation, which consists in separating several sources from a mixture, and two applications are put forward in this direction.
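The extended models proposed in the thesis are not reproduced in this record; the following sketch shows only the basic PLCA decomposition it builds on, with EM updates written in numpy and a synthetic nonnegative matrix standing in for a magnitude spectrogram. The dimensions and number of latent components are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
F, T, Z = 64, 100, 4                 # frequency bins, time frames, latent components

# A synthetic nonnegative matrix stands in for a normalised magnitude
# spectrogram P(f, t); it is built from random factors so the EM fit
# has something low-rank to recover.
V = rng.random((F, Z)) @ rng.random((Z, T))
V /= V.sum()

# PLCA model: P(f, t) = sum_z P(z) P(f | z) P(t | z)
Pz = np.full(Z, 1.0 / Z)
Pf_z = rng.random((F, Z)); Pf_z /= Pf_z.sum(axis=0)
Pt_z = rng.random((T, Z)); Pt_z /= Pt_z.sum(axis=0)

for _ in range(200):                 # EM iterations
    # E-step: posterior P(z | f, t), shape (F, T, Z)
    joint = Pz[None, None, :] * Pf_z[:, None, :] * Pt_z[None, :, :]
    post = joint / (joint.sum(axis=2, keepdims=True) + 1e-12)
    # M-step: reweight the posterior by the observed distribution V
    w = V[:, :, None] * post
    Pf_z = w.sum(axis=1); Pf_z /= Pf_z.sum(axis=0, keepdims=True)
    Pt_z = w.sum(axis=0); Pt_z /= Pt_z.sum(axis=0, keepdims=True)
    Pz = w.sum(axis=(0, 1)); Pz /= Pz.sum()

approx = (Pz * Pf_z) @ Pt_z.T            # reconstructed P(f, t)
print(float(np.abs(approx - V).sum()))   # should be small after convergence
```

The thesis's contributions amount to replacing the fixed P(f | z) templates above with note models whose pitch and spectral envelope can evolve over time, and constraining the EM updates with priors and dynamics.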
15. Geometric Methods for Robust Data Analysis in High Dimension. Anderson, Joseph T., 26 May 2017.
No description available.
16. Κατασκευή μικροϋπολογιστικού συστήματος διαχωρισμού σημάτων με τον αλγόριθμο ICA / Construction of a microcomputer signal-separation system using the ICA algorithm. Χονδρός, Παναγιώτης, 13 October 2013.
This thesis concerns the construction of a microcomputer system for signal separation. The separation is based on the theory of Independent Component Analysis. After the theory of the technique is presented, the ADuC 7026 microcontroller chosen for the implementation is introduced. The simulation software for the microcontroller is then presented, together with basic examples of its programming. Finally, three algorithms, two versions of FastICA and one version of InfoMax, are developed without the use of peripherals and simulated with the use of peripherals. The algorithms are evaluated for their performance and conclusions are drawn.
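The microcontroller code itself is not part of this record; as an indication of the kind of computation being ported to the ADuC 7026, here is a minimal floating-point numpy sketch of symmetric FastICA with the tanh nonlinearity on an assumed two-source toy mixture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy sources and a random 2x2 mixing matrix (both assumed).
n = 5000
t = np.arange(n) / 1000.0
S = np.vstack([np.sign(np.sin(2 * np.pi * 3 * t)),   # square-like wave
               np.sin(2 * np.pi * 7 * t)])
A = rng.normal(size=(2, 2))
X = A @ S                                            # observed mixtures

# Preprocessing: centring and whitening.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
X_w = (E / np.sqrt(d)) @ E.T @ X

# Symmetric FastICA with the tanh nonlinearity.
W = rng.normal(size=(2, 2))
for _ in range(200):
    Y = W @ X_w
    G, Gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
    W_new = (G @ X_w.T) / n - np.diag(Gp.mean(axis=1)) @ W
    d2, E2 = np.linalg.eigh(W_new @ W_new.T)         # symmetric decorrelation:
    W = (E2 / np.sqrt(d2)) @ E2.T @ W_new            # W <- (W W^T)^(-1/2) W

S_hat = W @ X_w   # recovered sources, up to permutation, sign and scale
print(np.round(np.corrcoef(S_hat, S)[0:2, 2:4], 2))  # one entry near +-1 per row
```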
17. Αυτόματος διαχωρισμός ακουστικών σημάτων που διαδίδονται στο ανθρώπινο σώμα και λαμβάνονται από πιεζοκρυστάλλους κατά την διάρκεια ύπνου / Automatic separation of acoustic signals transmitted through the human body and acquired by piezocrystals during sleep. Βογιατζή, Ελένη, 13 October 2013.
This thesis analyses and applies the separation of acoustic signals recorded from the human body during sleep. The signals are acquired with a piezocrystal device and their separation is achieved with the method of Independent Component Analysis (ICA). The main purpose is to use this methodology in the diagnosis of obstructive sleep apnea (OSA). The first chapter presents the ICA method and the mathematical model that describes it, along with its preprocessing steps, and then analyses in detail the FastICA algorithm and its properties, which is used in the experimental part of the thesis. The second chapter studies obstructive sleep apnea, its risk factors and pathology, and its main diagnostic symptom, snoring; it then discusses the diagnosis and the best-known treatments of the disease and, finally, the method of snoring detection. The third chapter gives an introduction to piezoelectricity, a study of the piezoelectric effect and its mathematical model, and a description of the types of piezoelectric sensors with which the signals examined in this work are acquired. The following chapter connects the theory of the previous chapters and introduces the experimental method. Chapter five presents example applications of the FastICA algorithm to random signals, intended to test its performance. In chapter six the experimental procedure is carried out: the signals separated by FastICA now come from the human body, and the implementation is done in Matlab. The desired snoring signal is extracted and conclusions are drawn about the performance of the algorithm. An appendix lists all the MATLAB code used to complete the experimental part in chapters five and six.
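The thesis's MATLAB code is listed in its appendix and is not reproduced here; as a rough analogue of the chapter-five tests on random signals, the sketch below mixes three synthetic sources and separates them with scikit-learn's FastICA. The sources, the mixing matrix and the use of scikit-learn are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 8000
t = np.linspace(0, 8, n)

# Synthetic stand-ins for body sounds: a periodic burst train (snoring-like),
# a low-frequency oscillation and noise. These are illustrative only.
s1 = (np.sin(2 * np.pi * 1.2 * t) > 0.95) * np.sin(2 * np.pi * 60 * t)
s2 = np.sin(2 * np.pi * 5 * t)
s3 = 0.3 * rng.normal(size=n)
S = np.c_[s1, s2, s3]

A = rng.uniform(0.5, 1.5, size=(3, 3))        # unknown mixing matrix (assumed)
X = S @ A.T                                   # three "sensor" channels

ica = FastICA(n_components=3, random_state=0, max_iter=1000)
S_hat = ica.fit_transform(X)                  # recovered independent components

# Each estimate should correlate strongly with exactly one original source.
corr = np.abs(np.corrcoef(S_hat.T, S.T))[:3, 3:]
print(np.round(corr, 2))
```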
18. Swap Book Hedging using Stochastic Optimisation with Realistic Risk Factors. Nordin, Rickard; Mårtensson, Emil, January 2021.
Market makers such as large banks are exposed to market risk in fixed income by acting as a counterparty to customers who enter swap contracts. This master's thesis addresses the problem of creating a cost-effective hedge for a realistic swap book of a market maker in a multiple yield curve setting. The proposed hedge model is the two-stage stochastic optimisation problem proposed by Blomvall and Hagenbjörk (2020). Systematic term structure innovations (components) are estimated using six different component models, including principal component analysis (PCA), independent component analysis (ICA) and rotations of principal components. The component models are evaluated with a statistical test that uses daily swap rate observations from the European swap market. The statistical test shows that for both FRA and IRS contracts, a rotation of regular principal components describes swap rate innovations more accurately than regular PCA. The hedging model is applied to an FRA and an IRS swap book separately, with daily rebalancing, over the period 2013-06-21 to 2021-05-11. The model produces a highly effective hedge for the tested component methods; however, replacing the PCA components with the improved components does not improve the hedge. The study is conducted in collaboration with two other master's theses, each carried out at a separate bank. This thesis was done in collaboration with Swedbank, and the simulated swap book is based on the exposure of a typical swap book at Swedbank, which is why the European swap market is studied.
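The thesis's measurement and hedging models are not reproduced in this record; the sketch below only illustrates the plain-PCA baseline for extracting systematic components from daily yield-curve innovations, on simulated data with assumed tenors and factor shapes.

```python
import numpy as np

rng = np.random.default_rng(0)
tenors = np.array([1, 2, 3, 5, 7, 10, 15, 20, 30], dtype=float)   # years (assumed)

# Simulated daily curve innovations built from level, slope and curvature
# moves plus noise; in the thesis these would come from fitted swap curves.
n_days = 1500
level = 0.8 * rng.normal(size=(n_days, 1)) * np.ones((1, tenors.size))
slope = 0.3 * rng.normal(size=(n_days, 1)) * ((tenors - tenors.mean())
                                              / (tenors.max() - tenors.min()))
curvature = 0.1 * rng.normal(size=(n_days, 1)) * (((tenors - 10.0) / 10.0) ** 2 - 0.5)
dy = level + slope + curvature + 0.02 * rng.normal(size=(n_days, tenors.size))

# Principal components of the innovations: the systematic term structure factors.
dy_c = dy - dy.mean(axis=0)
U, sv, Vt = np.linalg.svd(dy_c, full_matrices=False)
explained = sv ** 2 / np.sum(sv ** 2)

print(np.round(explained[:3], 3))    # the first three factors dominate
loadings = Vt[:3]                    # level-, slope- and curvature-shaped vectors
```

Rotations or ICA-based refinements, as tested in the thesis, would operate on these loadings rather than on the raw PCA output shown here.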
19. Independent component analysis based interference reduction in cellular systems with co-channel interference. Kostanic, Ivica Nikola, 01 April 2003.
No description available.
20. A Real-Time Classification approach of a Human Brain-Computer Interface based on Movement Related Electroencephalogram. Mileros, Martin D., January 2004.
A real-time brain-computer interface is a technical system that classifies, in real time, increases or decreases in brain activity associated with different body movements or actions performed by a person. The focus in this thesis is on testing algorithms and settings, finding the initial time interval, and determining how increased brain activity can be distinguished and satisfactorily classified. The objective is to let the system give an output within 250 ms of the thought of an action, which is faster than a person's reaction time.

The preprocessing algorithms were Blind Signal Separation and the Fast Fourier Transform. With different frequency and time-interval settings, the algorithms were tested on an offline electroencephalographic data file based on the "Ten Twenty" Electrode Application System and classified using an Artificial Neural Network.

A satisfying time interval could be found between 125 and 250 ms, but more research is needed to investigate that specific interval. A reduction in sampling frequency resulted in a lack of samples in the sample window, preventing the algorithms from working properly; a high sampling frequency is therefore proposed to help keep the sample window small in the time domain. Blind Signal Separation together with the Fast Fourier Transform had problems finding appropriate correlations using the Ten-Twenty Electrode Application System. Electrodes should be placed more selectively at the parietal lobe when motor responses are required.
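The thesis's BSS + FFT + ANN pipeline is not reproduced in this record; the sketch below only illustrates the trade-off the abstract mentions between window length and sampling frequency, computing FFT band-power features from a 250 ms window of toy EEG at an assumed 256 Hz sampling rate. The band edges are conventional mu and beta ranges, not values taken from the thesis.

```python
import numpy as np

fs = 256                      # sampling rate in Hz (assumed, not from the thesis)
window_s = 0.250              # decision window targeted by the thesis
n = int(fs * window_s)        # 64 samples at 256 Hz

rng = np.random.default_rng(0)
eeg = rng.normal(size=n)      # one channel of toy "EEG" for a single window

# FFT of the short window; the frequency resolution is fs / n = 4 Hz here,
# which is why lowering the sampling rate (fewer samples per 250 ms window)
# starves the spectral estimate.
spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
freqs = np.fft.rfftfreq(n, d=1 / fs)

def band_power(lo, hi):
    # Total power in [lo, hi) Hz, a typical input feature for a neural network.
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

features = [band_power(8, 13), band_power(13, 30)]   # mu and beta bands (assumed)
print(n, fs / n, np.round(features, 3))
```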