1 |
Predictive detection of epileptic seizures in EEG for reactive care. Valko, Andras; Homsi, Antoine. January 2017.
It is estimated that 65 million people worldwide have epilepsy, and many of them have uncontrollable seizures even with the use of medication. A seizure occurs when the normal electrical activity of the brain is interrupted by sudden and unusually intense bursts of electrical energy, and these bursts can be observed and detected with an electroencephalograph (EEG). This work presents an algorithm that monitors subtle changes in scalp EEG characteristics to predict seizures. The algorithm calibrates itself to each specific patient based on recorded data and is computationally efficient enough for future on-line applications. The presented algorithm performs ICA-based artifact filtering and Lasso-based feature selection from a large array of statistical features. Classification is based on a neural network trained with Bayesian regularized backpropagation. The selected method was able to classify 4-second preictal segments with an average sensitivity of 99.53% and an average specificity of 99.9% when tested on 15 different patients from the CHB-MIT database.
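As a rough, hypothetical sketch of the kind of pipeline this abstract describes (ICA-based artifact handling, Lasso-based feature selection, and a regularized neural-network classifier), the following Python snippet wires together scikit-learn stand-ins on synthetic placeholder data; the toy EEG, the feature set, and the MLP's plain L2 weight decay (standing in for Bayesian regularized backpropagation) are illustrative assumptions, not the thesis's exact configuration.

```python
# Hypothetical sketch (not the thesis's exact pipeline): ICA-based artifact
# reduction, Lasso-based feature selection, and a regularized neural-network
# classifier, using scikit-learn stand-ins on synthetic placeholder data.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Step 1 (illustrative only): ICA on a toy multichannel EEG segment. In a real
# pipeline the artifact components (eye blinks, muscle) would be identified,
# zeroed, and the channels back-projected before computing features.
eeg_segment = rng.laplace(size=(1024, 23))        # samples x channels (toy)
ica = FastICA(n_components=23, random_state=0, max_iter=500)
activations = ica.fit_transform(eeg_segment)
activations[:, 0] = 0.0                           # pretend component 0 is an artifact
cleaned = ica.inverse_transform(activations)      # artifact-reduced EEG

# Steps 2-3: statistical features per 4 s window (placeholders here), then
# Lasso-based feature selection feeding a regularized MLP classifier.
n_windows, n_features = 200, 50
X = rng.normal(size=(n_windows, n_features))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n_windows) > 0).astype(int)  # 1 = preictal

clf = Pipeline([
    ("select", SelectFromModel(Lasso(alpha=0.01))),
    ("mlp", MLPClassifier(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=2000)),
])
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```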
2 |
APPLYING BLIND SOURCE SEPARATION TO MAGNETIC ANOMALY DETECTION. Unknown date.
This research presents a novel approach, the Magnetic Anomaly Differentiation and Localization Algorithm, which simultaneously localizes multiple magnetic anomalies with weak total-field signatures (tens of nT). In particular, it focuses on the case of two homogeneous targets with known magnetic moments. This was done by analyzing the magnetic signals and adapting Independent Component Analysis (ICA) and Simulated Annealing (SA) to the problem. The results lay the groundwork for using a combination of FastICA and SA, giving localization errors of 3 meters or less per target in simulation with a 58% success rate. Experimental results showed additional errors due to the effects of the magnetic background, unknown magnetic moments, and navigation error. While one target was localized within 3 meters, only the latest experimental run showed the second target approaching the localization specification. This highlighted the need for a higher signal-to-noise ratio and equipment with better navigational accuracy. The data analysis was used to provide recommendations on the equipment needed to minimize observed errors and improve algorithm success. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
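To make the simulated-annealing localization stage concrete, here is a hedged Python sketch that fits a single dipole position to noisy total-field measurements, using SciPy's dual_annealing as a stand-in for the SA variant used in the dissertation. The survey geometry, the known moment, and the ~1 nT noise level are illustrative assumptions, and the ICA separation stage for multiple targets is omitted.

```python
# Hypothetical sketch (illustration only, not the dissertation's algorithm):
# locate a single magnetic dipole with a known moment from noisy total-field
# measurements along a survey line, using simulated annealing.
import numpy as np
from scipy.optimize import dual_annealing

MU0 = 4e-7 * np.pi

def total_field_nT(sensor_xyz, src_xyz, moment):
    """Total-field magnitude (in nT) of a point dipole located at src_xyz."""
    r = sensor_xyz - src_xyz                       # (N, 3) separation vectors
    d = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / d
    b = MU0 / (4 * np.pi) * (3 * rhat * (rhat @ moment)[:, None] - moment) / d**3
    return 1e9 * np.linalg.norm(b, axis=1)

# Assumed survey: a 30 m line 2 m above ground, buried target at (12, 3, -2) m
# with a known moment of about 50 A*m^2, and roughly 1 nT of sensor noise.
sensors = np.column_stack([np.linspace(0.0, 30.0, 200),
                           np.zeros(200), np.full(200, 2.0)])
moment = np.array([0.0, 20.0, 45.0])
true_pos = np.array([12.0, 3.0, -2.0])
measured = total_field_nT(sensors, true_pos, moment)
measured += np.random.default_rng(1).normal(0.0, 1.0, measured.shape)

def misfit(pos):
    """Sum of squared residuals between modelled and measured total field."""
    return np.sum((total_field_nT(sensors, pos, moment) - measured) ** 2)

bounds = [(0.0, 30.0), (-10.0, 10.0), (-10.0, 0.0)]
result = dual_annealing(misfit, bounds, seed=2)
print("estimated position [m]:", np.round(result.x, 2),
      " error [m]:", round(float(np.linalg.norm(result.x - true_pos)), 2))
```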
3 |
Blind Acoustic Feedback Cancellation for an AUV. Frick, Hampus. January 2023.
SAAB has developed an autonomous underwater vehicle (AUV) that can mimic a conventional submarine so that military fleets can exercise anti-submarine warfare. The AUV actively emits amplified versions of received sonar pulses to create the illusion of being a larger object. To prevent acoustic feedback, the AUV must distinguish between the sound it should actively respond to and its own emitted signal. This master's thesis examines techniques aimed at preventing the AUV from responding to previously emitted signals, thereby avoiding acoustic feedback, without relying on prior knowledge of either the received signal or the signal emitted by the AUV. The two primary types of algorithms explored are blind source separation and adaptive filtering. Adaptive filters based on the leaky least mean squares algorithm and the Kalman filter have shown promising results in attenuating the active response from the received signal. The adaptive filters exploit the fact that a certain hydrophone primarily receives the active response; this hydrophone serves as an estimate of the active response, since the signal it captures is treated as unknown and is to be removed. The techniques based on blind source separation use the recordings of three hydrophones placed at various locations on the AUV to separate and estimate the received signal from the one emitted by the AUV. The results demonstrate that neither of the reviewed approaches is suitable for implementation on the AUV. The hydrophones are situated at a considerable distance from each other, resulting in distinct time delays between the reception of the two signals; this is usually referred to as a convolutive mixture. Such mixtures are commonly handled in the frequency domain, where the convolutive mixture becomes an instantaneous mixture in each frequency bin. However, the fact that the signals share the same frequency spectrum and are adjacent in time has proven highly challenging.
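As an illustration of the adaptive-filtering branch, the following hedged Python sketch runs a leaky LMS canceller on synthetic data, with one channel standing in for the hydrophone that mostly captures the AUV's own emission. The filter order, step size, leakage factor, and toy signals are assumptions for illustration only, not the thesis's configuration.

```python
# Hypothetical sketch: a leaky LMS canceller that subtracts the AUV's own
# emission, as picked up by a reference hydrophone, from the primary signal.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48_000, 20_000
t = np.arange(n) / fs

incoming = np.zeros(n)                               # external sonar ping (toy)
incoming[5_000:6_000] = np.sin(2 * np.pi * 1_500 * t[5_000:6_000])
emitted = rng.normal(size=n)                         # AUV's own emission (toy)
feedback_path = np.array([0.0, 0.5, 0.3, -0.2, 0.1]) # assumed acoustic path
primary = incoming + np.convolve(emitted, feedback_path)[:n]

def leaky_lms(primary, reference, order=16, mu=0.01, leak=1e-4):
    """Adaptively estimate and subtract the reference-driven part of `primary`."""
    w = np.zeros(order)
    out = np.zeros_like(primary)
    for i in range(order, len(primary)):
        x = reference[i - order:i][::-1]             # most recent samples first
        e = primary[i] - w @ x                       # cancelled output sample
        w = (1.0 - mu * leak) * w + mu * e * x       # leaky LMS weight update
        out[i] = e
    return out

cancelled = leaky_lms(primary, emitted)
tail = slice(2_000, None)                            # skip the adaptation transient
ratio = np.mean(cancelled[tail] ** 2) / np.mean(primary[tail] ** 2)
print("residual power after cancellation:", round(float(ratio), 3))
```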
4 |
Investigating the role of APOE-ε4, a risk gene for Alzheimer's disease, on functional brain networks using magnetoencephalography. Luckhoo, Henry Thomas. January 2013.
Alzheimer's disease (AD) is set to become one of the greatest healthcare challenges of the coming decades. The development of early and effective treatments that can prevent the pathological damage responsible for AD-related dementia is of utmost priority for healthcare authorities. The role of the APOE-ε4 genotype, which has been shown to increase an individual's risk of developing AD, is of central interest to this goal. Understanding the mechanism by which possession of this gene modulates brain function, leading to a predisposition towards AD, is an active area of research. Functional connectivity (FC) is an excellent candidate for linking APOE-related differences in brain function to sites of AD pathology. Magnetoencephalography (MEG) is a neuroimaging tool that can provide a unique insight into the electrophysiology underpinning resting-state networks (RSNs), whose dysfunction is postulated to lead to a predisposition to AD. This thesis presents a range of methods for measuring functional connectivity in MEG data. We first develop a set of novel adaptations for preprocessing MEG data and performing source reconstruction using a beamformer (chapter 3). We then develop a range of analyses for measuring FC through correlations in the slow envelope oscillations of band-limited source-space MEG data (chapter 4). We investigate the optimum time scales for detecting FC. We then develop methods for extracting single networks (using seed-based correlation) and multiple networks (using ICA). We proceed to develop a group-statistical framework for detecting spatial differences in RSNs and present a preliminary finding for APOE-genotype-dependent differences in RSNs (chapter 5). We also develop a statistical framework for quantifying task-locked temporal differences in functional networks during task-positive experiments (chapter 6). Finally, we demonstrate a data-driven parcellation and network analysis pipeline that includes a novel correction for signal leakage between parcels. We use this framework to show evidence of stationary cross-frequency FC (chapter 7).
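As a small, hedged illustration of the envelope-correlation measure of functional connectivity referred to above, the Python sketch below band-pass filters two simulated source time courses, takes their Hilbert envelopes, and correlates windowed envelope averages. The frequency band, window length, and simulated signals are assumptions, not the thesis's actual MEG pipeline.

```python
# Hypothetical sketch of envelope-correlation FC between two band-limited
# source time courses; not the thesis's beamformer/MEG pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 300, 1 / fs)                     # 5 minutes of simulated data

# Two sources sharing a slow common amplitude modulation (the "connectivity").
common_env = 1 + 0.5 * np.sin(2 * np.pi * 0.05 * t)
src1 = common_env * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
src2 = common_env * np.sin(2 * np.pi * 10 * t + 1.0) + 0.5 * rng.normal(size=t.size)

def band_envelope(x, fs, band=(8.0, 13.0)):
    """Alpha-band amplitude envelope via band-pass filtering and Hilbert transform."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

env1, env2 = band_envelope(src1, fs), band_envelope(src2, fs)

# Average the envelopes over non-overlapping 1 s windows before correlating,
# a crude stand-in for the envelope down-sampling discussed in the thesis.
win = int(fs)
n_win = env1.size // win
e1 = env1[: n_win * win].reshape(n_win, win).mean(axis=1)
e2 = env2[: n_win * win].reshape(n_win, win).mean(axis=1)
print("envelope correlation:", round(np.corrcoef(e1, e2)[0, 1], 3))
```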
5 |
BLIND SOURCE SEPARATION USING FREQUENCY DOMAIN INDEPENDENT COMPONENT ANALYSIS. Okwelume, Gozie E.; Ezeude, Anayo Kingsley. January 2007.
Our thesis work focuses on frequency-domain Blind Source Separation (BSS), in which the received mixed signals are converted into the frequency domain and Independent Component Analysis (ICA) is applied to the instantaneous mixtures at each frequency bin. Computational complexity is also reduced by this method. We also investigate the well-known problem associated with frequency-domain BSS using ICA, referred to as the permutation and scaling ambiguities, using methods proposed in the literature. This is the main target of the project: to solve the permutation and scaling ambiguities in real-time applications. / Gozie: modebelu2001@yahoo.com Anayo: ezeudea@yahoo.com
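One family of fixes for the permutation ambiguity makes the separated components consistent across frequency bins, for example by correlating their amplitude envelopes between neighbouring bins. The hedged Python sketch below skips the per-bin ICA itself and works on synthetic per-bin envelopes with random orderings, greedily aligning each bin to its neighbour; the bin count, envelopes, noise level, and correlation threshold are illustrative assumptions, not the methods evaluated in the thesis.

```python
# Hypothetical sketch of inter-frequency permutation alignment for
# frequency-domain BSS, using envelope correlation between neighbouring bins.
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_frames = 64, 400

# Two distinct "true" per-source amplitude envelopes, shared across all bins.
frames = np.arange(n_frames)
env_a = 1 + np.sin(2 * np.pi * frames / 50) ** 2
env_b = 1 + np.cos(2 * np.pi * frames / 37) ** 2

# Per-bin ICA outputs: the same two envelopes, but with a random (unknown)
# source ordering in every bin -- the permutation ambiguity.
separated = np.empty((n_bins, 2, n_frames))
true_perm = rng.integers(0, 2, size=n_bins)
for k in range(n_bins):
    pair = np.stack([env_a, env_b]) * (1 + 0.05 * rng.normal(size=(2, n_frames)))
    separated[k] = pair[::-1] if true_perm[k] else pair

def align_permutations(separated):
    """Greedy alignment: match each bin's component order to the previous bin."""
    aligned = separated.copy()
    for k in range(1, aligned.shape[0]):
        keep = np.corrcoef(aligned[k - 1, 0], aligned[k, 0])[0, 1] + \
               np.corrcoef(aligned[k - 1, 1], aligned[k, 1])[0, 1]
        swap = np.corrcoef(aligned[k - 1, 0], aligned[k, 1])[0, 1] + \
               np.corrcoef(aligned[k - 1, 1], aligned[k, 0])[0, 1]
        if swap > keep:
            aligned[k] = aligned[k, ::-1]
    return aligned

aligned = align_permutations(separated)
ref = aligned[0, 0]   # consistency up to one global flip is all we need
consistent = [np.corrcoef(aligned[k, 0], ref)[0, 1] > 0.8 for k in range(n_bins)]
print("bins consistent with bin 0:", sum(consistent), "/", n_bins)
```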
6 |
Singing voice extraction from stereophonic recordings. Sofianos, Stratis. January 2013.
Singing voice separation (SVS) can be defined as the process of extracting the vocal element from a given song recording. The impetus for research in this area is mainly that of facilitating certain important applications of music information retrieval (MIR) such as lyrics recognition, singer identification, and melody extraction. To date, research in the field of SVS has been relatively limited and mainly focused on the extraction of vocals from monophonic sources. The general approach in this scenario has been one of considering SVS as a blind source separation (BSS) problem. Given the inherent diversity of music, such an approach is motivated by the quest for a generic solution. However, it does not allow the exploitation of prior information regarding the way in which commercial music is produced. To this end, investigations are conducted into effective methods for unsupervised separation of singing voice from stereophonic studio recordings. The work involves an extensive literature review of existing methods that relate to SVS, as well as commercial approaches. Following the identification of shortcomings of the conventional methods, two novel approaches are developed for the purpose of SVS. These approaches, termed SEMANICS and SEMANTICS, draw their motivation from statistical as well as spectral properties of the target signal and focus on the separation of voice in the frequency domain. In addition, a third method, named Hybrid SEMANTICS, is introduced that addresses time-domain as well as frequency-domain separation. As there is a lack of a concrete, standardised music database that includes a large number of songs, a dataset is created using conventional stereophonic mixing methods. Using this database, and based on widely adopted objective metrics, the effectiveness of the proposed methods has been evaluated through thorough experimental investigations.
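For context, one common frequency-domain baseline that exploits how commercial stereo mixes are produced (lead vocals are typically panned to the centre) can be sketched as below. This is only a generic centre-channel extraction illustration on toy signals, not the SEMANICS, SEMANTICS, or Hybrid SEMANTICS methods proposed in the thesis; the mask exponent and STFT settings are arbitrary assumptions.

```python
# Hypothetical centre-extraction baseline: keep time-frequency bins where the
# left and right channels agree (centre-panned content, usually the vocal) and
# attenuate strongly panned bins. Toy signals stand in for a real mix.
import numpy as np
from scipy.signal import stft, istft

fs = 44_100
t = np.arange(0, 3.0, 1 / fs)

vocal = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))  # centre
guitar = 0.7 * np.sign(np.sin(2 * np.pi * 196 * t))                            # panned left
left = vocal + 0.9 * guitar
right = vocal + 0.1 * guitar

f, frames, L = stft(left, fs, nperseg=2048)
_, _, R = stft(right, fs, nperseg=2048)

# Similarity-based soft mask: close to 1 where |L| and |R| agree.
eps = 1e-12
similarity = 2 * np.abs(L) * np.abs(R) / (np.abs(L) ** 2 + np.abs(R) ** 2 + eps)
mask = similarity ** 4                      # ad-hoc exponent to sharpen the mask
_, vocal_est = istft(mask * 0.5 * (L + R), fs, nperseg=2048)

n = min(vocal_est.size, vocal.size)
in_snr = 10 * np.log10(np.mean(vocal ** 2) / np.mean((left - vocal) ** 2))
out_snr = 10 * np.log10(np.mean(vocal[:n] ** 2) /
                        np.mean((vocal_est[:n] - vocal[:n]) ** 2))
print("input SNR (dB):", round(in_snr, 1), " output SNR (dB):", round(out_snr, 1))
```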
7 |
Application of supervised and unsupervised learning to analysis of the arterial pressure pulse. Walsh, Andrew Michael, Graduate School of Biomedical Engineering, UNSW. January 2006.
This thesis presents an investigation of statistical analytical methods applied to the analysis of the shape of the arterial pressure waveform. The arterial pulse is analysed by a selection of both supervised and unsupervised methods of learning. Supervised learning methods are generally better known as regression; unsupervised learning methods seek patterns in data without the specification of a target variable. The theoretical relationship between arterial pressure and wave shape is first investigated by study of a transmission line model of the arterial tree. A meta-database of pulse waveforms obtained with the SphygmoCor device is then analysed by the unsupervised learning technique of self-organising maps (SOM). The map patterns indicate that the observed arterial pressures affect the wave shape in a similar way as predicted by the theoretical model. A database of continuous arterial pressure obtained by catheter line during sleep is used to derive supervised models that enable estimation of arterial pressures based on the measured wave shapes. Independent component analysis (ICA) is also used in a supervised learning methodology to show the theoretical plausibility of separating the pressure signals from unwanted noise components. The accuracy and repeatability of the SphygmoCor device are measured and discussed. Alternative regression models are introduced that improve on the existing models in the estimation of central cardiovascular parameters from peripheral arterial wave shapes. Results of this investigation show that, from the information in the wave shape, it is possible in theory to estimate the continuous underlying pressures within the artery to a degree of accuracy acceptable to the Association for the Advancement of Medical Instrumentation. This could facilitate a new role for non-invasive sphygmographic devices: to be used not only for feature estimation but as alternatives to invasive arterial pressure sensors in the measurement of continuous blood pressure.
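To illustrate the unsupervised branch mentioned above, here is a hedged, minimal NumPy self-organising map trained on toy "waveform shape" feature vectors; the grid size, learning schedule, and synthetic features are assumptions for illustration, not the thesis's SphygmoCor analysis.

```python
# Hypothetical minimal SOM: maps feature vectors onto a 2-D grid so that
# similar pulse shapes end up on nearby nodes. Toy data, not real waveforms.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 500 "pulse waveforms", each summarised by 8 shape features,
# drawn from two clusters that the map should separate.
X = np.concatenate([rng.normal(0.0, 0.3, size=(250, 8)),
                    rng.normal(1.0, 0.3, size=(250, 8))])

grid_h, grid_w, n_iter = 6, 6, 3000
weights = rng.normal(size=(grid_h, grid_w, X.shape[1]))
gy, gx = np.mgrid[0:grid_h, 0:grid_w]

for it in range(n_iter):
    x = X[rng.integers(len(X))]
    # Best-matching unit: node whose weight vector is closest to the sample.
    dists = np.linalg.norm(weights - x, axis=2)
    by, bx = np.unravel_index(np.argmin(dists), dists.shape)
    # Decaying learning rate and neighbourhood radius.
    lr = 0.5 * np.exp(-it / n_iter)
    sigma = 3.0 * np.exp(-it / n_iter)
    neighbourhood = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
    weights += lr * neighbourhood[..., None] * (x - weights)

# Map each waveform to its best-matching node; the two clusters should land
# in different regions of the grid.
bmus = [np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)),
                         (grid_h, grid_w)) for x in X]
print("BMU of a first-cluster sample:", bmus[0], " of a second-cluster sample:", bmus[-1])
```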
8 |
Denoising of Infrared Images Using Independent Component Analysis. Björling, Robin. January 2005.
The purpose of this thesis is to evaluate the applicability of Independent Component Analysis (ICA) to noise reduction of infrared images. The focus lies on reducing additive uncorrelated noise and the sensor-specific additive fixed pattern noise (FPN). The well-known method sparse code shrinkage, in combination with ICA, is applied to reduce the uncorrelated noise degrading infrared images, and the result is compared to an adaptive Wiener filter. A novel method, also based on ICA, is developed for reducing FPN: an independent component analysis is made on images from an infrared sensor, typical fixed pattern noise components are manually identified, and the identified components are then used to quickly and effectively reduce the FPN in images taken by that specific sensor. It is shown that both the FPN reduction algorithm and the sparse code shrinkage method work well for infrared images. The algorithms are tested on synthetic as well as real images and their performance is measured.
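A crude, hedged sketch of the general idea of ICA-domain shrinkage denoising follows: an ICA basis is learned on patches of a clean reference image, and a noisy image is denoised by soft-thresholding its patch coefficients in that basis. This is only a simplified stand-in for the sparse code shrinkage estimator discussed in the thesis; the patch size, number of components, threshold, and toy "infrared" images are assumptions.

```python
# Hypothetical simplified ICA-domain shrinkage denoiser on toy images.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

rng = np.random.default_rng(0)

# Toy "infrared" image: a smooth background plus a hot spot, with added noise.
yy, xx = np.mgrid[0:64, 0:64]
clean = np.exp(-((yy - 20) ** 2 + (xx - 40) ** 2) / 50.0) + 0.3 * np.sin(xx / 5.0)
noisy = clean + 0.1 * rng.normal(size=clean.shape)

patch = (8, 8)
train = extract_patches_2d(clean, patch, max_patches=2000, random_state=0)
train = train.reshape(len(train), -1)
train -= train.mean(axis=1, keepdims=True)          # remove per-patch DC

ica = FastICA(n_components=32, random_state=0, max_iter=500)
ica.fit(train)

# Denoise: transform noisy patches, soft-threshold the sparse coefficients,
# transform back, and reassemble the image by averaging overlapping patches.
noisy_patches = extract_patches_2d(noisy, patch).reshape(-1, patch[0] * patch[1])
means = noisy_patches.mean(axis=1, keepdims=True)
codes = ica.transform(noisy_patches - means)
thr = 0.5 * np.std(codes)
codes = np.sign(codes) * np.maximum(np.abs(codes) - thr, 0.0)   # soft threshold
denoised_patches = ica.inverse_transform(codes) + means
denoised = reconstruct_from_patches_2d(denoised_patches.reshape(-1, *patch), noisy.shape)

print("MSE noisy:", round(float(np.mean((noisy - clean) ** 2)), 4),
      " MSE denoised:", round(float(np.mean((denoised - clean) ** 2)), 4))
```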
9 |
Contrast properties of entropic criteria for blind source separation: a unifying framework based on information-theoretic inequalities. Vrins, Frédéric D. 02 March 2007.
In recent years, Independent Component Analysis (ICA) has become a fundamental tool in adaptive signal and data processing, especially in the field of Blind Source Separation (BSS). Even though there exist some methods for which an algebraic solution to the ICA problem may be found, iterative methods are very popular. Among them is the class of information-theoretic approaches relying on entropies. The associated objective functions are maximized using optimization schemes, gradient-ascent techniques in particular. Two major issues in this field are the following: 1) does the global maximum point of these entropic objectives correspond to a satisfactory solution of BSS? and 2) since gradient techniques in fact look for local maximum points, what do these local optima mean from the point of view of the BSS problem?
Even though there are some partial answers to these questions in the literature, most of them are based on simulation and conjecture; formal developments are often lacking. This thesis aims to fill this gap and to provide intuitive justifications as well. We focus the analysis on Rényi entropy-based contrast functions. Our results show that, generally speaking, Rényi's entropy is not a suitable contrast function for BSS, even though we recover the well-known result that Shannon entropy-based objectives are contrast functions. We also show that range-based contrast functions can be built under some conditions on the sources.
The BSS problem is stated in the first chapter and viewed from the information-theoretic angle. The two following chapters address the above questions specifically. Finally, the last chapter deals with range-based ICA, the only "entropy-based" contrast which, based on the enclosed results, is also a discriminant contrast function, in the sense that it is theoretically free of spurious local optima. Geometrical interpretations and surprising examples are given. The interest of this approach is confirmed by testing the algorithm on the MLSP 2006 data analysis competition benchmark; the proposed method outperforms previously obtained results on large-scale and noisy mixture samples obtained through ill-conditioned mixing matrices.
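For reference, the entropic quantities at stake can be written in generic notation (standard notation, not necessarily the thesis's own formulation): Rényi's entropy of order α, its Shannon limit, the classical sum-of-marginal-entropies contrast for whitened data and an orthogonal demixing matrix, and a range-based contrast in which each marginal entropy is replaced by the log-width of the output's support, which is only defined for bounded sources.

```latex
% Renyi entropy of order \alpha > 0, \alpha \neq 1, with Shannon entropy as the
% limit \alpha -> 1:
\[
  H_\alpha(Y) = \frac{1}{1-\alpha}\,\log \int p_Y(y)^{\alpha}\, dy ,
  \qquad
  \lim_{\alpha \to 1} H_\alpha(Y) = H(Y) = -\int p_Y(y)\,\log p_Y(y)\, dy .
\]
% For whitened observations x and an orthogonal demixing matrix W with outputs
% y = Wx, minimizing mutual information reduces to minimizing the sum of
% marginal (Shannon) entropies:
\[
  C_H(W) = \sum_{i} H(y_i), \qquad y = Wx,\; W W^{\top} = I .
\]
% A range-based contrast replaces each marginal entropy by the log-width of the
% output's support (bounded sources only):
\[
  C_R(W) = \sum_{i} \log\bigl(\max y_i - \min y_i\bigr).
\]
```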
10 |
A Novel Hybrid Dimensionality Reduction Method using Support Vector Machines and Independent Component Analysis. Moon, Sangwoo. 01 August 2010.
Due to the increasing demand for high-dimensional data analysis in applications such as electrocardiogram signal analysis and gene expression analysis for cancer detection, dimensionality reduction becomes a viable process to extract essential information from data, so that the high-dimensional data can be represented in a more condensed form with much lower dimensionality, both improving classification accuracy and reducing computational complexity. Conventional dimensionality reduction methods can be categorized into stand-alone and hybrid approaches. A stand-alone method utilizes a single criterion from either a supervised or an unsupervised perspective; a hybrid method integrates both criteria. Compared with the variety of stand-alone dimensionality reduction methods, the hybrid approach is promising because it simultaneously takes advantage of the supervised criterion for better classification accuracy and the unsupervised criterion for better data representation. However, several issues challenge the efficiency of the hybrid approach, including (1) the difficulty of finding a subspace that seamlessly integrates both criteria in a single hybrid framework, (2) the robustness of performance on noisy data, and (3) nonlinear data representation capability.
This dissertation presents a new hybrid dimensionality reduction method that seeks a projection by optimizing both structural risk (the supervised criterion) from the Support Vector Machine (SVM) and data independence (the unsupervised criterion) from Independent Component Analysis (ICA). The projection from the SVM directly contributes to classification performance improvement from a supervised perspective, whereas the ICA projection, which maximizes independence among features, indirectly improves classification accuracy through better intrinsic data representation from an unsupervised perspective. For the linear dimensionality reduction model, I introduce orthogonality to interrelate the projections from the SVM and ICA, while a redundancy-removal process eliminates part of the projection vectors from the SVM, leading to more effective dimensionality reduction. The orthogonality-based linear hybrid dimensionality reduction method is then extended to an uncorrelatedness-based algorithm with nonlinear data representation capability. In the proposed approach, the SVM and ICA are integrated into a single framework by an uncorrelated subspace based on a kernel implementation.
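A rough linear sketch of the underlying idea (keep a discriminative SVM direction, then add ICA directions made orthogonal to it, and project into the combined subspace) is shown below. It is only an illustration of the concept under assumed data and dimensions, not the dissertation's algorithm, and in particular omits the redundancy-removal step and the kernel extension.

```python
# Hypothetical linear sketch: combine a linear-SVM weight direction with ICA
# directions orthogonalised against it to build a low-dimensional projection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FastICA
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)

# Supervised criterion: the weight vector of a linear SVM (structural risk).
svm = LinearSVC(C=1.0, max_iter=20_000).fit(X, y)
w_svm = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# Unsupervised criterion: ICA unmixing directions (data independence).
ica = FastICA(n_components=5, random_state=0, max_iter=1000).fit(X)
W_ica = ica.components_

# Gram-Schmidt step: remove the component of each ICA direction along the SVM
# direction, so the two criteria contribute orthogonal axes.
W_orth = W_ica - np.outer(W_ica @ w_svm, w_svm)
W_orth /= np.linalg.norm(W_orth, axis=1, keepdims=True)

projection = np.vstack([w_svm, W_orth])     # (1 + 5) x 30 projection matrix
X_reduced = X @ projection.T
print("reduced shape:", X_reduced.shape)
```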
Experimental results show that the proposed approaches achieve higher classification performance, with better robustness at relatively low dimensionality, than conventional methods on high-dimensional datasets.