1 |
Sparse Signal Processing Based Image Compression and Inpainting. Almshaal, Rashwan M. 01 January 2016.
In this thesis, we investigate the application of compressive sensing and sparse signal processing techniques to image compression and inpainting problems. Considering that many signals are sparse in certain transform domains, a natural question to ask is: can an image be represented by as few coefficients as possible? We propose a new model for image compression/decompression based on sparse representation. We construct an overcomplete dictionary by combining two transform matrices, the discrete cosine transform (DCT) matrix and the Hadamard-Walsh transform (HWT) matrix, instead of using a single transform matrix as in common compression standards such as JPEG and JPEG2000. We analyze the Structural Similarity Index (SSIM) versus the number of coefficients, measured by the Normalized Sparse Coefficient Rate (NSCR), for our approach. We observe that at the same NSCR, the SSIM of images compressed with the proposed approach is 4%-17% higher than with JPEG. Several algorithms have been used for sparse coding; based on experimental results, Orthogonal Matching Pursuit (OMP) proves to be the most efficient in terms of computational time and the quality of the decompressed image.
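The dictionary construction and sparse coding step described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's implementation: it builds the overcomplete DCT + Hadamard-Walsh dictionary for length-64 signals and runs a textbook OMP; the signal length, sparsity level, and coefficient indices are invented for the demo.

```python
import numpy as np
from scipy.fft import dct
from scipy.linalg import hadamard

def omp(D, y, k):
    """Textbook Orthogonal Matching Pursuit: greedily pick k atoms of D to fit y."""
    residual, support, coeffs = y.copy(), [], np.array([])
    for _ in range(k):
        # Select the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected atoms jointly by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

n = 64
D_dct = dct(np.eye(n), axis=0, norm="ortho")   # orthonormal DCT basis as columns
D_hwt = hadamard(n) / np.sqrt(n)               # orthonormal Hadamard-Walsh basis
D = np.hstack([D_dct, D_hwt])                  # overcomplete dictionary, 64 x 128

x_true = np.zeros(2 * n)                       # a code that is 3-sparse in D
x_true[[5, 70, 100]] = [1.0, -0.6, 0.8]
y = D @ x_true                                 # stand-in for an image block

x_hat = omp(D, y, k=3)
print(np.count_nonzero(x_hat), np.linalg.norm(y - D @ x_hat) / np.linalg.norm(y))
```

With well-separated coefficients, OMP typically recovers the exact atoms; in general it only guarantees a support of size at most k and a shrinking residual.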
In addition, based on compressive sensing techniques, we propose an image inpainting approach, which could be used to fill missing pixels and reconstruct damaged images. In this approach, we use the Gradient Projection for Sparse Reconstruction (GPSR) algorithm and wavelet transformation with Daubechies filters to reconstruct the damaged images based on the information available in the original image. Experimental results show that our approach outperforms existing image inpainting techniques in terms of computational time with reasonably good image reconstruction performance.
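As a rough sketch of the inpainting idea (filling missing samples by promoting sparsity in a transform domain): the thesis uses GPSR with Daubechies wavelets, while the toy below substitutes a plain iterative soft-thresholding (ISTA) loop and a DCT sparsity basis on a 1-D signal so it stays NumPy/SciPy-only. The signal size, missing-pixel rate, threshold, and iteration count are all invented.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 128
# Ground truth, sparse in the DCT domain (stand-in for a wavelet-sparse image row)
c = np.zeros(n)
c[[3, 17, 40]] = [2.0, -1.0, 0.5]
x = idct(c, norm="ortho")

mask = rng.random(n) > 0.3          # ~70% of samples observed, the rest "damaged"
y = x * mask                        # observed signal with missing entries zeroed

# ISTA: minimize 0.5*||mask*z - y||^2 + lam*||DCT(z)||_1
lam, z = 0.01, y.copy()
for _ in range(200):
    grad = mask * (z - y)                       # gradient of the data-fit term
    c_hat = dct(z - grad, norm="ortho")         # gradient step, then transform
    c_hat = np.sign(c_hat) * np.maximum(np.abs(c_hat) - lam, 0.0)  # soft threshold
    z = idct(c_hat, norm="ortho")

print(np.linalg.norm(z - x) / np.linalg.norm(x))  # small relative error
```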
|
2 |
Non-convex methods for spectrally sparse signal reconstruction via low-rank Hankel matrix completion. Wang, Tianming. 01 May 2018.
Spectrally sparse signals arise in many applications of signal processing. A spectrally sparse signal is a mixture of a few undamped or damped complex sinusoids. An important practical problem is to reconstruct such a signal from partial time-domain samples. Previous convex methods have the drawback that their computation and storage costs do not scale well with the signal length, which restricts their applicability to large and high-dimensional signals.
The reconstruction of a spectrally sparse signal from partial samples can be formulated as a low-rank Hankel matrix completion problem. We develop two fast and provable non-convex solvers, FIHT and PGD. FIHT is based on Riemannian optimization, while PGD is based on Burer-Monteiro factorization with projected gradient descent. Suppose the underlying spectrally sparse signal is of model order r and length n. We prove that O(r^2 log^2(n)) and O(r^2 log(n)) random samples are sufficient for FIHT and PGD, respectively, to achieve exact recovery with overwhelming probability. In every iteration, the computation and storage costs of both methods are linear in the signal length n, so they are suitable for spectrally sparse signals of large size, which may be prohibitive for previous convex methods. Extensive numerical experiments verify their recovery ability and computational efficiency, and also show that the algorithms are robust to noise and to mis-specification of the model order. Comparing the two solvers, FIHT is faster for easier problems, while PGD has a better recovery ability.
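The structure underlying the formulation above can be seen in a few lines: a Hankel matrix built from a mixture of r damped complex sinusoids has rank exactly r, which is what makes completion from partial samples meaningful. The signal length, frequencies, and damping factors below are arbitrary choices for the demo.

```python
import numpy as np
from scipy.linalg import hankel

n, r = 64, 3
t = np.arange(n)
freqs, damps = [0.10, 0.23, 0.37], [0.00, 0.01, 0.02]
# Mixture of r damped complex sinusoids
x = sum(np.exp((-d + 2j * np.pi * f) * t) for f, d in zip(freqs, damps))

# Square-ish Hankel embedding of the length-n signal: H[i, j] = x[i + j]
H = hankel(x[: n // 2 + 1], x[n // 2 :])
print(np.linalg.matrix_rank(H, tol=1e-8))  # 3: rank equals the model order r
```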
|
3 |
Classificação de Fibrilação Atrial utilizando Curtose / Classification of Atrial Fibrillation using Kurtosis. Oliveira Júnior, Alfredo Costa. 16 February 2017.
Atrial fibrillation (AF) is one of the most common cardiac arrhythmias worldwide. Thus,
there are ample efforts to implement AF diagnosis systems. The main noninvasive way
to assess cardiac health is through electrocardiogram (ECG) signal analysis, which
represents the electrical activity of the cardiac muscle, and has characteristic temporal
markings: P, Q, R, S and T waves. Some authors use filtering techniques, statistical
analysis and even neural networks for detecting AF based on the RR interval, which is given by the time difference between consecutive R-wave peaks. However, analysis of the RR interval evaluates only changes occurring in the R wave of the ECG signal; it cannot assess, for example, variations in the P wave provoked by AF. In light of that, we propose to characterize the ECG signal amplitude, aiming at
classifying both healthy and AF patients. The ECG signal was analyzed in the proposed
methodology through the following statistics: variance, asymmetry, and kurtosis. Herein,
we use the MIT-BIH Atrial Fibrillation and MIT-BIH Normal Sinus Rhythm database
signals to evaluate AF and normal heartbeat intervals. Our study showed that kurtosis outperformed variance and skewness with respect to sensitivity (Se = 100%), specificity (Sp = 88.33%) and accuracy (Ac = 91.33%). These results were expected, since kurtosis is a measure of non-Gaussianity and the ECG signal has a sparse amplitude distribution. The proposed methodology also requires fewer pre-processing stages, and its simplicity allows for implementations in embedded systems supporting clinical diagnosis.
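The statistical intuition behind this result (kurtosis as a measure of non-Gaussianity that responds strongly to sparse, spiky amplitude distributions) can be checked numerically. The "spiky" surrogate below is synthetic, not real ECG, and the activity rate is invented.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(42)
n = 20000
gaussian = rng.normal(size=n)                   # excess kurtosis ~ 0 for Gaussian data
# Sparse/spiky stand-in for an ECG segment: mostly flat, rare large deflections
spiky = rng.normal(size=n) * (rng.random(n) < 0.02)

print(kurtosis(gaussian))  # close to 0
print(kurtosis(spiky))     # large and positive: heavy-tailed, sparse amplitudes
```

`scipy.stats.kurtosis` returns excess kurtosis (Fisher convention), so a Gaussian scores near zero while a sparse signal scores far above it.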
|
4 |
Conversor configurável analógico para informação / Configurable analog-to-information converter. Reis, Vanderson de Lima. 23 May 2018.
In analog-to-digital converters (ADCs) whose conversion rate follows the Nyquist Theorem, the basic parameter guiding acquisition is the bandwidth of the signal; information processing and redundancy removal are performed only after the digital representation of the signal is obtained. Compressed Sensing was proposed as a digitization technique that exploits the sparsity of the signal in a given domain to capture only its information content, at a rate that may be lower than that dictated by the Nyquist Theorem.
The hardware architectures that implement Compressed Sensing are called Analog-to-Information Converters (AICs). The AICs proposed in the literature exploit the sparsity of the signal in a given domain, and therefore each architecture is specific to a class of signals. This thesis proposes a configurable AIC, based on known architectures, capable of acquiring signals from several classes by changing its configuration parameters. A computational model was developed to analyze the dynamic behavior of the AIC and of the proposed hardware parameters, and the proposed architecture was implemented physically. The adaptability of the architecture was verified by the obtained results, since it was possible to acquire more than one class of signals.
|
5 |
Interface cérebro-computador explorando métodos para representação esparsa dos sinais / Brain-computer interface exploring methods for sparse signal representation. Ormenesse, Vinícius. 2018.
Advisor: Prof. Dr. Ricardo Suyama. Master's dissertation, Universidade Federal do ABC, Graduate Program in Information Engineering, Santo André, 2018. / A brain-computer interface (BCI) is designed to translate commands thought by human individuals into commands that a computer can understand.
Electrical impulses generated at the scalp are recorded by a device called an
electroencephalograph and are preprocessed for elimination of external noise. From there,
several techniques for signal processing are used to later classify the signals obtained by
the electroencephalograph. In this work, techniques for sparse representation of signals
are used for feature extraction, in order to increase robustness and system performance.
For the extraction of sparse representations, dictionary learning algorithms were used to produce a basis capable of representing the entire signal subspace. Five such algorithms were compared: Method of Optimal
Directions (MOD), K-SVD, Recursive Least Square Dictionary Learning (RLS-DLA),
Least Square Dictionary Learning (LS-DLA) and Online Dictionary Learning (ODL). For
the classification task, the k-NN method was used. The simulation results obtained with
this approach were compared with the best BCI Competition IV dataset 2a results. For
the first place in the competition, a kappa coefficient of 0.57 was reported, whereas the sparse methods in this work achieved a kappa coefficient of 0.90, an improvement of 0.33, which indicates that
the use of sparse signals may be beneficial to the difficult problem of designing a brain
computer interface.
|
6 |
Efficient Design of Embedded Data Acquisition Systems Based on Smart Sampling. Satyanarayana, J V. January 2014.
Data acquisition from multiple analog channels is an important function in many embedded devices used in avionics, medical electronics, robotics and space applications. It is desirable to engineer these systems to reduce their size, power consumption, heat dissipation and cost. The goal of this research is to explore designs that exploit a priori knowledge of the input signals in order to achieve these objectives. Sparsity is a commonly observed property of signals that facilitates sub-Nyquist sampling and reconstruction through compressed sensing, thereby reducing the number of analogue-to-digital conversions.
New architectures are proposed for the real-time, compressed acquisition of streaming signals. It is demonstrated that by sampling a collection of signals in a multiplexed fashion, it is possible to efficiently utilize all the available sampling cycles of the analogue-to-digital converters (ADCs), facilitating the acquisition of multiple signals using fewer ADCs. The proposed method is modified to accommodate more general signals, for which spectral leakage, due to a non-integral number of cycles in the reconstruction window, violates the sparsity assumption. When the objective is only to detect the constituent frequencies in the signals, as against exact reconstruction, this can be achieved surprisingly well even in the presence of severe noise (SNR ~ 5 dB) and considerable undersampling. This has been applied to the detection of the carrier frequency in a noisy FM signal.
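The frequency-detection claim is easy to illustrate in software: even at SNR of about 5 dB, an FFT magnitude peak identifies a tone's frequency reliably. The sample rate, tone frequency, and record length below are arbitrary stand-ins, and the multiplexed sub-Nyquist hardware path is not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)
n, fs, f0 = 1024, 1000.0, 123.0            # hypothetical length, rate, and tone
t = np.arange(n) / fs

snr_db = 5.0
signal = np.sqrt(2) * np.sin(2 * np.pi * f0 * t)   # unit-power tone
noise = rng.normal(size=n) * 10 ** (-snr_db / 20)  # noise scaled for ~5 dB SNR
x = signal + noise

spec = np.abs(np.fft.rfft(x))
f_hat = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
print(f_hat)  # within about one FFT bin (~1 Hz) of 123 Hz
```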
Information redundancy due to inter-signal correlation gives scope for the compressed acquisition of a set of signals that may not be individually sparse. A scheme has been proposed in which the correlation structure of a set of signals is progressively learnt within a small fraction of the acquisition duration, because of which only a few ADCs are adequate for capturing the signals. Signals from the different channels of EEG possess significant correlation. Employing signals taken from the Physionet database, the correlation structure of nearby EEG electrodes was captured. Subsequent to this training phase, the learnt KLT matrix has been used to reconstruct the signals of all the electrodes with reasonably good accuracy from the recordings of a subset of electrodes. The average error between the original and reconstructed signals is below 10% with respect to the power in the delta, theta and alpha bands, and below 15% in the beta band. It was also possible to reconstruct all the channels in the 10-10 system of electrode placement with an average error less than 8% using recordings on the sparser 10-20 system.
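The inter-channel-redundancy argument can be sketched with a linear toy model: if eight "electrode" channels are driven by three latent sources, a linear map learned in a short training phase reconstructs all channels from a three-channel subset. This is a noise-free least-squares stand-in for the KLT-based scheme, not the actual EEG processing, and all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_train, n_test, n_src = 2000, 500, 3
mix = rng.normal(size=(8, n_src))      # 8 correlated "electrode" channels, 3 latents

S_train = rng.normal(size=(n_src, n_train))
S_test = rng.normal(size=(n_src, n_test))
X_train, X_test = mix @ S_train, mix @ S_test

observed = [0, 2, 5]                   # subset of electrodes actually recorded
# Training phase: learn a linear map from the observed subset to all channels
W, *_ = np.linalg.lstsq(X_train[observed].T, X_train.T, rcond=None)

X_rec = (X_test[observed].T @ W).T     # reconstruct all 8 channels from 3
err = np.linalg.norm(X_rec - X_test) / np.linalg.norm(X_test)
print(err)  # ~0: 3 observed channels span the 3-dimensional signal subspace
```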
In another design, a set of signals are collectively sampled on a finer sampling grid using ADCs driven by phase-shifted clocks. Thus, each signal is sampled at an effective rate that is a multiple of the ADC sampling rate. So, it is possible to have a less steep transition between the pass band and the stop band, thereby reducing the order of the anti-aliasing filter from 30 to 8. This scheme has been applied to the acquisition of voltages proportional to the deflection of the control surfaces in an aerospace vehicle.
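The clock-phase interleaving idea reduces, in software terms, to merging two offset sample streams into one finer grid. The toy below uses made-up rates; the actual benefit (relaxed anti-aliasing filter order) comes from the doubled effective rate.

```python
import numpy as np

n, fs = 32, 100.0                      # per-ADC rate (hypothetical numbers)
f0 = 70.0                              # tone above fs/2: aliased for a single ADC
t_fine = np.arange(2 * n) / (2 * fs)   # effective sampling grid at 2*fs
x = np.sin(2 * np.pi * f0 * t_fine)

adc0 = x[0::2]                         # ADC driven by the reference clock
adc1 = x[1::2]                         # ADC driven by the half-period-shifted clock
interleaved = np.empty(2 * n)
interleaved[0::2], interleaved[1::2] = adc0, adc1

print(np.array_equal(interleaved, x))  # True: the two streams tile the 2*fs grid
```

Either stream alone, at fs = 100 Hz, would alias the 70 Hz tone; the interleaved stream at 200 Hz represents it unambiguously.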
The idle sampling cycles of an ADC that performs compressive sub-sampling of a sparse signal can be used to acquire the residue left after a coarse low-resolution sample is taken in the preceding cycle, as in a pipelined ADC. Using a general-purpose, low-resolution ADC, a DAC and a summer, one can acquire a sparse signal with double the resolution of the ADC, without having to use a dedicated pipelined ADC. It has also been demonstrated how this idea can be applied to achieve a higher dynamic range in the acquisition of fetal electrocardiogram signals.
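The coarse-plus-residue idea can be checked with ideal components: quantize once at low resolution, subtract (the DAC-and-summer step in hardware), and quantize the residue on the next cycle at a proportionally smaller full scale. Bit widths and signal statistics below are invented, and real converters add nonidealities this sketch ignores.

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    """Ideal mid-rise uniform quantizer over [-full_scale, full_scale)."""
    step = 2 * full_scale / 2 ** bits
    return step * (np.floor(x / step) + 0.5)

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, size=10000)

coarse = quantize(x, bits=4)                          # first pass: low resolution
residue = x - coarse                                  # analog residue (DAC + summer)
fine = quantize(residue, bits=4, full_scale=2 ** -4)  # idle cycle digitizes residue

err_coarse = np.max(np.abs(x - coarse))
err_two_step = np.max(np.abs(x - (coarse + fine)))
print(err_coarse / err_two_step)  # ~16: two 4-bit passes behave like ~8 bits
```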
Finally, it is possible to combine more than one of the proposed schemes to handle the acquisition of diverse signals with different kinds of sparsity. The implementation of the proposed schemes in such an integrated design can share common hardware components so as to achieve a compact design.
|
7 |
Efficient transceiver techniques for interference and fading mitigation in wireless communication systems / Νέες αποδοτικές τεχνικές εκπομπής και λήψης για μείωση παρεμβολών σε ασύρματα δίκτυα επικοινωνίας. Βλάχος, Ευάγγελος. 12 December 2014.
Wireless communication systems require advanced techniques at the transmitter and at the receiver that improve performance in hostile radio environments. The received signal is significantly distorted due to the dynamic nature of the wireless channel, caused by multipath fading and Doppler spread. In order to mitigate the negative impact of the channel on the received signal quality, techniques such as equalization and diversity are usually employed in the system design.
During the transmission, the phenomenon of inter-symbol interference (ISI) occurs at the receiver due to the time dispersion of the involved channels. Hence, several delayed replicas of previous symbols interfere with the current symbol. Equalization is usually employed in order to combat the effect of the ISI. Several implementations for equalization filters have been proposed, including linear and non-linear processing, providing complexity-performance trade-offs. It is known that the length of the equalization filter determines the complexity of the technique. Since the wireless channels are characterized by long and sparse impulse responses, the conventional equalizers require high computational complexity due to the large size of their filters.
In this dissertation, we have further investigated the long-standing problem of equalization in light of the recently developed theory of compressive sampling (CS) for sparse and redundant representations. The developed heuristic algorithms for equalization can exploit either the sparsity of the channel impulse response (CIR) or the sparsity of the equalizer filters in order to derive efficient implementations. To this end, building on basis pursuit and matching pursuit techniques, new equalization schemes have been proposed that exhibit considerable computational savings, improved performance and short training sequence requirements. Our main contribution in this part is the Stochastic Gradient Pursuit algorithm for sparse adaptive equalization.
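The benefit of sparsity-aware adaptation can be illustrated with a generic zero-attracting LMS filter identifying a long but sparse impulse response. This is a related textbook technique, not the dissertation's Stochastic Gradient Pursuit algorithm, and the channel, step size, and l1 weight are all made up.

```python
import numpy as np

rng = np.random.default_rng(5)
n, taps = 5000, 32
h = np.zeros(taps)
h[[2, 9, 25]] = [1.0, -0.6, 0.3]          # long, sparse impulse response

x = rng.normal(size=n + taps)             # white training input
mu, rho = 0.01, 1e-4                      # LMS step size and zero-attractor weight
w = np.zeros(taps)
for k in range(n):
    u = x[k : k + taps][::-1]             # regressor, most recent sample first
    e = h @ u - w @ u                     # desired output minus filter output
    w += mu * e * u - rho * np.sign(w)    # LMS step plus zero-attracting l1 term

print(np.linalg.norm(w - h) / np.linalg.norm(h))  # small: sparse taps identified
```

The `- rho * np.sign(w)` term shrinks inactive taps toward zero, which is what speeds convergence on sparse responses relative to plain LMS.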
An alternative approach to combat ISI is based on orthogonal frequency division multiplexing (OFDM). In this system, the entire channel is divided into many narrow subchannels, so that the transmitted signals are orthogonal to each other despite their spectral overlap. However, in the case of doubly selective channels, the Doppler effect destroys the orthogonality between subcarriers. Thus, similarly to ISI, the effect of intercarrier interference (ICI) is introduced at the receiver, where symbols belonging to other subcarriers interfere with the current one. Considering this problem, we have developed iterative algorithms which recursively cancel the ICI at the receiver, providing performance-complexity trade-offs.
For low or medium Doppler spreads, the typical approach is to approximate the frequency-domain channel matrix with a banded one. On this premise, we derived reduced-rank preconditioned conjugate gradient (PCG) algorithms in order to estimate the equalization matrix with a reduced number of iterations. We also developed an improved PCG algorithm of the same complexity order, using Galerkin projections. However, in rapidly changing environments, severe ICI is introduced and the banded approximation results in significant performance degradation. In order to recover this performance loss, we developed a regularized estimation framework for ICI equalization with linear complexity in the number of subcarriers. Moreover, we proposed a new equalization technique which has the potential to completely cancel the ICI. This approach works successively through a number of stages, borrowing from the fully-connected ordered successive interference cancellation (OSIC) architecture in order to fully suppress the residual interference at each stage of the equalizer.
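A minimal sketch of why the banded approximation pays off computationally: with a banded, positive-definite stand-in for the equalization normal equations, conjugate gradient solves the system at linear cost per iteration. The tridiagonal matrix below is a made-up example, not an actual ICI channel matrix, and no preconditioning or rank reduction is shown.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 256
# Tridiagonal, symmetric positive-definite stand-in for a banded equalization matrix
A = diags(
    [np.full(n - 1, -0.4), np.full(n, 2.0), np.full(n - 1, -0.4)],
    offsets=[-1, 0, 1],
).tocsr()
b = np.ones(n)

x, info = cg(A, b)                      # each iteration costs O(n) for a banded A
print(info, np.linalg.norm(A @ x - b))  # info == 0 means converged; residual tiny
```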
On the other hand, diversity can improve the performance of the communication system by sending the information symbols through multiple signal paths, each of which fades independently. One approach to obtaining diversity is cooperative transmission, in which a group of nearby terminals (relays) forms one virtual antenna array and a spatial beamforming technique is applied so as to optimize the communication via them. Such beamforming techniques differ from their classical counterparts, where the array elements are located in a common processing unit, due to the distribution of the relays in space.
In this setting, we developed new distributed algorithms which enable relay cooperation for the computation of the beamforming weights, leveraging the computational abilities of the relays. Each relay estimates only the corresponding entry of the principal eigenvector, combining data from its network neighbours. The proposed algorithms are applied to two distributed beamforming schemes for relay networks. In the first scheme, the beamforming vector is computed by minimizing the total transmit power subject to a receiver quality-of-service (QoS) constraint. In the second scheme, the beamforming weights are obtained by maximizing the receiver SNR subject to a total transmit power constraint. Moreover, the proposed algorithms operate blindly, implying that no training data are required to be transmitted to the relays, and adaptively, exhibiting a quite short convergence period.
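The eigenvector computation at the heart of these beamforming schemes can be sketched with plain power iteration. In the distributed algorithms each relay updates only its own entry by exchanging values with neighbours; the toy below runs the same recursion centrally, on a made-up symmetric matrix constructed to have a known top eigenvalue of 5.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 6
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))           # random orthogonal basis
R = Q @ np.diag([5.0, 2.0, 1.0, 0.5, 0.3, 0.1]) @ Q.T  # known spectrum, top = 5

w = np.ones(n) / np.sqrt(n)
for _ in range(100):
    w = R @ w                  # distributed version: relay i computes only (R @ w)[i]
    w /= np.linalg.norm(w)     # normalization (a global step in this toy version)

print(w @ R @ w)  # ~5.0: the Rayleigh quotient reaches the principal eigenvalue
```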
|
8 |
Séparation de Sources Dans des Mélanges non-Linéaires / Blind Source Separation in Nonlinear Mixtures. Ehsandoust, Bahram. 30 April 2018.
Blind Source Separation (BSS) is a technique for estimating individual source components from their mixtures at multiple sensors, where the mixing model is unknown. Although it has been mathematically shown that for linear mixtures, under mild conditions, mutually independent sources can be reconstructed up to accepted ambiguities, there is no such theoretical basis for general nonlinear models. This is why there are relatively few results in the literature in this regard in recent decades, and they are focused on specific structured nonlinearities.
In the present study, the problem is tackled using a novel approach utilizing temporal information of the signals. The original idea pursued for this purpose is to study a linear time-varying source separation problem deduced from the initial nonlinear problem by differentiation.
It is shown that already-proposed counter-examples showing inefficiency of Independent Component Analysis (ICA) for nonlinear mixtures, loose their validity, considering independence in the sense of stochastic processes instead of simple random variables. Based on this approach, both nice theoretical results and algorithmic developments are provided. Even though these achievements are not claimed to be a mathematical proof for the separability of nonlinear mixtures, it is shown that given a few assumptions, which are satisfied in most practical applications, they are separable.Moreover, nonlinear BSS for two useful sets of source signals is also addressed: (1) spatially sparse sources and (2) Gaussian processes. Distinct BSS methods are proposed for these two cases, each of which has been widely studied in the literature and has been shown to be quite beneficial in modeling many practical applications.Concerning Gaussian processes, it is demonstrated that not all nonlinear mappings can preserve Gaussianity of the input. For example being restricted to polynomial functions, the only Gaussianity-preserving function is linear. This idea is utilized for proposing a linearizing algorithm which, cascaded by a conventional linear BSS method, separates polynomial mixturesof Gaussian processes.Concerning spatially sparse sources, it is shown that spatially sparsesources make manifolds in the observations space, and can be separated once the manifolds are clustered and learned. For this purpose, multiple manifold learning problem has been generally studied, whose results are not limited to the proposed BSS framework and can be employed in other topics requiring a similar issue.
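The Gaussianity-preservation argument above can be illustrated numerically. The sketch below (an illustrative check, not the thesis algorithm) applies a linear and a cubic polynomial map to Gaussian samples and uses excess kurtosis as a quick proxy for Gaussianity: it stays near 0 under the linear map but becomes clearly positive under the cubic one.

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: 0 for Gaussian data."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(1)
s = rng.standard_normal(200_000)        # Gaussian source samples

linear = 2.0 * s + 1.0                  # linear map: output stays Gaussian
cubic = s + 0.5 * s**3                  # nonlinear polynomial map

print(excess_kurtosis(linear))          # near 0
print(excess_kurtosis(cubic))           # clearly positive: non-Gaussian
```

This is the property a linearizing stage can exploit: driving a measured non-Gaussianity criterion to zero on polynomial mixtures leaves a linear mixture, which a conventional linear BSS method can then separate.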
|