211 |
Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration. Heinrich, André, 27 March 2013
The main contribution of this thesis is the concept of Fenchel duality, with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks, assign a Fenchel dual problem to it, and prove weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The usefulness of these approaches is demonstrated by numerically solving several problems. We then formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of sequences of proximal points closely related to the dual iterates. Furthermore, we show that this approximate solution converges to an optimal solution of the primal problem as the prescribed accuracy becomes arbitrarily small. Finally, the support vector regression task is shown to arise as a particular case of the general optimization problem, and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions as well as for some regularization problems arising in image restoration tasks. Numerical experiments illustrate the applicability of our approach to these types of problems.
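The double-smoothing scheme itself is specific to the thesis, but the basic mechanism it relies on, a fast (accelerated) gradient method combined with proximal points, can be illustrated generically. The sketch below is a standard FISTA-type accelerated proximal-gradient loop in NumPy applied to an l1-regularized least-squares toy problem; it is not the thesis's algorithm, and the problem, step size and iteration count are assumptions chosen only for illustration.

```python
# Illustrative sketch only: a generic accelerated proximal-gradient (FISTA-type)
# loop. It stands in for the "fast gradient on a smoothed dual plus proximal
# points" mechanism described above, on a simple l1-regularized least-squares toy.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, one of the proximal points arising for
    piecewise-linear losses and regularizers."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of 0.5*||Ay - b||^2
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x

# x_hat = fista(np.random.randn(50, 100), np.random.randn(50), lam=0.1)
```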
|
212 |
Wavelet Based Algorithms For Spike Detection In Micro Electrode Array Recordings. Nabar, Nisseem S, 06 1900
In this work, the problem of detecting neuronal spikes or action potentials (AP) in noisy recordings from a Microelectrode Array (MEA) is investigated. In particular, the spike detection algorithms should have low computational complexity so as to be amenable to real-time applications. The advantage of the MEA is that it allows collection of extracellular signals from either a single unit or multiple (45) units within a small area. The noisy MEA recordings undergo basic filtering and digitization and are then presented to a computer for further processing. The challenge lies in using these data to detect spikes from neuronal firings and to extract spatiotemporal patterns from the spike train, which may allow control of a robotic limb or other neuroprosthetic device directly from the brain. The aim is to understand the spiking action of the neurons and to use this knowledge to devise efficient algorithms for Brain Machine Interfaces (BMIs).
An effective BMI will require a real-time, computationally efficient implementation which can be carried out on a DSP board or FPGA system. The aim is to devise algorithms which can detect spikes and underlying spatio-temporal correlations, with computational and time complexities that make a real-time implementation feasible on a specialized DSP chip or an FPGA device. The time-frequency localization, multiresolution representation and analysis properties of wavelets make them suitable for analysing sharp transients and spikes in signals and for distinguishing them from noise that merely resembles a transient or a spike. Three algorithms for the detection of spikes in low-SNR MEA neuronal recordings are proposed:
1. A wavelet denoising method based on the Discrete Wavelet Transform (DWT) to suppress the noise power in the MEA signal and thereby improve the SNR, followed by standard thresholding techniques to detect the spikes from the denoised signal.
2. Directly thresholding the coefficients of the Stationary (Undecimated) Wavelet Transform (SWT) to detect the spikes (a sketch of this variant appears after the list).
3. Thresholding the output of the Teager Energy Operator (TEO) applied to the discrete wavelet decomposition of the signal, resulting in a multiresolution TEO framework.
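As a concrete illustration of the second algorithm, the following minimal sketch thresholds SWT detail coefficients to flag candidate spike samples. It is not the thesis implementation; the wavelet, decomposition level and threshold factor are assumptions chosen only for illustration.

```python
# Minimal sketch of algorithm 2: threshold Stationary Wavelet Transform (SWT)
# detail coefficients directly to flag spike locations in a 1-D MEA trace.
import numpy as np
import pywt

def swt_spike_detect(x, wavelet="sym5", level=3, k=4.0):
    n = len(x) - (len(x) % 2**level)                  # pywt.swt needs length divisible by 2**level
    coeffs = pywt.swt(x[:n], wavelet, level=level)    # list of (cA, cD) pairs, coarsest first
    mask = np.zeros(n, dtype=bool)
    for _, cD in coeffs:
        sigma = np.median(np.abs(cD)) / 0.6745        # robust noise estimate per band
        mask |= np.abs(cD) > k * sigma                # undecimated transform: cD aligns with x
    return np.flatnonzero(mask)                       # candidate spike sample indices

# spikes = swt_spike_detect(recording)   # 'recording' is a 1-D noisy MEA trace
```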
The performance of the three proposed wavelet-based algorithms is evaluated in terms of spike detection accuracy, percentage of false positives and computational complexity, for different wavelet families, in the presence of colored AR(5) noise (an autoregressive model of order 5) and additive white Gaussian noise (AWGN). The performance is further evaluated for the chosen wavelet family under different levels of SNR in the presence of the colored AR(5) and AWGN noise.
Chapter 1 gives an introduction to the concept behind Brain Machine Interfaces (BMIs), an overview of their history, the current state of the art and the trends for the future. It also describes the working of Microelectrode Arrays (MEAs). The generation of a spike in a neuron, the proposed mechanism behind it and its modeling as an electrical circuit based on the Hodgkin-Huxley model are described. An overview is given of some of the algorithms that have been suggested for spike detection, whether in MEA recordings or in electroencephalographic (EEG) signals.
Chapter 2 describes in brief the underlying ideas that lead to the Wavelet Transform paradigm. An introduction to the Fourier Transform, the Short Time Fourier Transform (STFT) and the time-frequency uncertainty principle is provided. This is followed by a brief description of the Continuous Wavelet Transform and the Multiresolution Analysis (MRA) property of wavelets. The Discrete Wavelet Transform (DWT) and its filter bank implementation are described next. It is proposed to first denoise the MEA recordings using the wavelet denoising algorithm pioneered by Donoho, followed by a standard thresholding technique for spike detection.
Chapter 3 deals with the use of the Stationary or Undecimated Wavelet Transform (SWT) for spike detection. It brings out the differences between the DWT and the SWT. A brief discussion of the analysis of non-stationary time series using the SWT is presented. An algorithm for spike detection based on directly thresholding the SWT coefficients is presented; it removes the need to reconstruct the denoised signal and then apply a thresholding technique, as in the first method.
In Chapter 4, a spike detection method based on a multiresolution Teager Energy Operator is discussed. The Teager Energy Operator (TEO) picks up localized spikes in signal energy and is thus used directly for spike detection in many applications, including R-wave detection in ECG and the detection of various (alpha, beta) rhythms in EEG. Some basic properties of the TEO are discussed, followed by the need for a multiresolution approach to the TEO and the methods existing in the literature.
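For reference (this definition is standard and not specific to this thesis), the discrete-time Teager Energy Operator applied to a signal x is commonly written as psi[x(n)] = x(n)^2 - x(n-1) * x(n+1), so its output grows with both the instantaneous amplitude and the instantaneous frequency of the signal, which is what makes sharp spikes stand out against slower background activity.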
The wavelet decomposition, with the subsampled signal involved at each level, naturally lends itself to a multiresolution TEO framework while at the same time significantly reducing the computational complexity, owing to the subsampling at each level. A wavelet-TEO algorithm for spike detection with accuracy similar to that of the previous two algorithms is proposed. The method proposed here differs significantly from those in the literature, since wavelets are used instead of time-domain processing.
Chapter 5 describes the method of evaluation of the three algorithms proposed in the previous chapters. The spike templates are obtained from MEA recordings, resampled and normalized for use in spike trains simulated as Poisson processes. The noise is modeled as colored autoregressive noise of order 5, i.e. AR(5), as well as additive white Gaussian noise (AWGN). The noise in most human and animal MEA recordings conforms to the autoregressive model with orders of around 5, while the AWGN model is used in most spike detection methods in the literature. The performance of the three proposed wavelet-based algorithms is measured in terms of spike detection accuracy, percentage of false positives and computational complexity for different wavelet families. The optimal wavelet is then chosen from the family which gives the best results. Optimal levels of decomposition and threshold factors are also chosen while maintaining a balance between accuracy and false positives. The algorithms are then tested for performance under different levels of SNR with the noise modeled as AR(5) or AWGN. The proposed wavelet-based algorithms exhibit a detection accuracy of approximately 90% at a low SNR of 2.35 dB, with false positives below 5%. This constitutes a significant improvement over results in the existing literature, which report an accuracy of about 80% with false positives of nearly 10%. As the SNR increases, the detection accuracy approaches 100% and the false alarm rate falls to zero.
Chapter 6 summarizes the work. A comparison is made between the three proposed algorithms in terms of detection accuracy and false positives. Directions in which future work may be carried out are suggested.
|
213 |
奇異值分解在影像處理上之運用 / Singular Value Decomposition: Application to Image Processing. 顏佑君 (Yen, Yu Chun), Unknown Date
奇異值分解(singular value decomposition)是一個重要且被廣為運用的矩陣分解方法，其具備許多良好性質，包括低階近似理論(low rank approximation)。在現今大數據(big data)的年代，人們接收到的資訊數量龐大且形式多元。相較於文字型態的資料，影像資料可以提供更多的資訊，因此影像資料扮演舉足輕重的角色。影像資料的儲存比文字資料更為複雜，若能運用影像壓縮的技術，減少影像資料中較不重要的資訊，降低影像的儲存空間，便能大幅提升影像處理工作的效率。另一方面，有時影像在被存取的過程中遭到雜訊汙染，產生模糊影像，此模糊的影像被稱為退化影像(image degradation)。近年來奇異值分解常被用於解決影像處理問題，對於影像資料也有充分的解釋能力。本文考慮將奇異值分解應用在影像壓縮與去除雜訊上，以奇異值累積比重作為選取奇異值的準則，並透過模擬實驗來評估此方法的效果。 / Singular value decomposition (SVD) is a robust and reliable matrix decomposition method. It has many attractive properties, such as the low-rank approximation. In the era of big data, enormous amounts of data are generated rapidly. Offering attractive visual effects and rich information, images have become a common and useful type of data. Recently, the SVD has been utilized in several image processing and analysis problems. This research focuses on the problems of image compression and image denoising for restoration. We propose to apply the SVD to capture the main signal image subspace for efficient image compression, and to screen out the noise image subspace for image restoration; the singular values to keep are selected according to their cumulative proportion. Simulations are conducted to investigate the proposed method. We find that the SVD method gives satisfactory results for image compression. However, in image denoising, the performance of the SVD method varies depending on the original image, the noise added and the threshold used.
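A minimal sketch of the compression step described above (keeping the leading singular values whose cumulative share of the spectrum reaches a chosen proportion) might look as follows in NumPy; the 0.95 cut-off is an assumption for illustration, not a value taken from the thesis.

```python
# SVD-based image compression: keep the smallest rank whose singular values
# account for a chosen cumulative share of the spectrum.
import numpy as np

def svd_compress(img, energy=0.95):
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    k = np.searchsorted(np.cumsum(s) / s.sum(), energy) + 1   # smallest rank reaching the share
    approx = (U[:, :k] * s[:k]) @ Vt[:k, :]                   # rank-k approximation of the image
    return approx, k

# low_rank, k = svd_compress(gray_image)   # 'gray_image' is a 2-D grayscale array
```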
|
214 |
Apprentissage de représentations sur-complètes par entraînement d’auto-encodeurs / Learning overcomplete representations by training auto-encoders. Lajoie, Isabelle, 12 1900
Les avancées dans le domaine de l’intelligence artificielle permettent à des systèmes informatiques de résoudre des tâches de plus en plus complexes liées par exemple à la vision, à la compréhension de signaux sonores ou au traitement de la langue. Parmi les modèles existants, on retrouve les Réseaux de Neurones Artificiels (RNA), dont la popularité a fait un grand bond en avant avec la découverte de Hinton et al. [22], soit l’utilisation de Machines de Boltzmann Restreintes (RBM) pour un pré-entraînement non-supervisé couche après couche, facilitant grandement l’entraînement supervisé du réseau à plusieurs couches cachées (DBN), entraînement qui s’avérait jusqu’alors très difficile à réussir. Depuis cette découverte, des chercheurs ont étudié l’efficacité de nouvelles stratégies de pré-entraînement, telles que l’empilement d’auto-encodeurs traditionnels (SAE) [5, 38] et l’empilement d’auto-encodeurs débruiteurs (SDAE) [44]. C’est dans ce contexte qu’a débuté la présente étude. Après un bref passage en revue des notions de base du domaine de l’apprentissage machine et des méthodes de pré-entraînement employées jusqu’à présent avec les modules RBM, AE et DAE, nous avons approfondi notre compréhension du pré-entraînement de type SDAE, exploré ses différentes propriétés et étudié des variantes de SDAE comme stratégie d’initialisation d’architecture profonde. Nous avons ainsi pu, entre autres choses, mettre en lumière l’influence du niveau de bruit, du nombre de couches et du nombre d’unités cachées sur l’erreur de généralisation du SDAE. Nous avons constaté une amélioration de la performance sur la tâche supervisée avec l’utilisation des bruits poivre et sel (PS) et gaussien (GS), bruits s’avérant mieux justifiés que celui utilisé jusqu’à présent, soit le masque à zéro (MN). De plus, nous avons démontré que la performance profitait d’une emphase imposée sur la reconstruction des données corrompues durant l’entraînement des différents DAE. Nos travaux ont aussi permis de révéler que le DAE était en mesure d’apprendre, sur des images naturelles, des filtres semblables à ceux retrouvés dans les cellules V1 du cortex visuel, soit des filtres détecteurs de bordures. Nous aurons par ailleurs pu montrer que les représentations apprises du SDAE, composées des caractéristiques ainsi extraites, s’avéraient fort utiles à l’apprentissage d’une machine à vecteurs de support (SVM) linéaire ou à noyau gaussien, améliorant grandement sa performance de généralisation. Aussi, nous aurons observé que similairement au DBN, et contrairement au SAE, le SDAE possédait une bonne capacité en tant que modèle générateur. Nous avons également ouvert la porte à de nouvelles stratégies de pré-entraînement et découvert le potentiel de l’une d’entre elles, soit l’empilement d’auto-encodeurs rebruiteurs (SRAE). / Progress in the machine learning domain allows computational systems to address more and more complex tasks associated with vision, audio signals or natural language processing. Among the existing models, we find the Artificial Neural Network (ANN), whose popularity increased suddenly with the recent breakthrough of Hinton et al. [22], which consists in using Restricted Boltzmann Machines (RBM) for performing an unsupervised, layer-by-layer pre-training initialization of a Deep Belief Network (DBN), enabling the subsequent successful supervised training of such an architecture. Since this discovery, researchers have studied the efficiency of other similar pre-training strategies, such as the stacking of traditional auto-encoders (SAE) [5, 38] and the stacking of denoising auto-encoders (SDAE) [44]. This is the context in which the present study started. After a brief introduction to the basic machine learning principles and to the pre-training methods used until now with RBM, AE and DAE modules, we performed a series of experiments to deepen our understanding of pre-training with SDAE, explored its different properties and explored variations on the DAE algorithm as alternative strategies to initialize deep networks. We evaluated the sensitivity to the noise level, and the influence of the number of layers and the number of hidden units, on the generalization error obtained with SDAE. We experimented with other noise types and saw improved performance on the supervised task with the use of pepper-and-salt noise (PS) or Gaussian noise (GS), noise types that are better justified than the one used until now, namely masking noise (MN). Moreover, modifying the algorithm by imposing an emphasis on the reconstruction of the corrupted components during the unsupervised training of each DAE showed encouraging performance improvements. Our work also revealed that the DAE was capable of learning, on natural images, filters similar to those found in the V1 cells of the visual cortex, which are in essence edge detectors. In addition, we were able to verify that the learned representations of the SDAE, composed of the features thus extracted, are very useful for training a linear or Gaussian-kernel support vector machine (SVM), considerably enhancing its generalization performance. Also, we observed that, like the DBN, and unlike the SAE, the SDAE had the potential to be used as a good generative model. As well, we opened the door to novel pre-training strategies and discovered the potential of one of them: the stacking of renoising auto-encoders (SRAE).
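To make the compared noise types and the reconstruction emphasis concrete, here is a small NumPy sketch of the three corruption schemes (masking, pepper-and-salt, Gaussian) and a weighted reconstruction error that emphasizes corrupted components. It is only an illustration under the assumption that inputs are scaled to [0, 1]; the weights and corruption levels are not taken from the thesis.

```python
# Corruption schemes for a denoising auto-encoder and an "emphasized" loss that
# up-weights corrupted components (weights alpha/beta are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def corrupt(x, kind="MN", level=0.25):
    mask = rng.random(x.shape) < level
    if kind == "MN":                    # masking noise: zero out a fraction of inputs
        return np.where(mask, 0.0, x), mask
    if kind == "PS":                    # pepper-and-salt: force selected inputs to 0 or 1
        return np.where(mask, rng.integers(0, 2, x.shape).astype(float), x), mask
    if kind == "GS":                    # Gaussian noise added to every component
        return x + rng.normal(0.0, level, x.shape), np.ones_like(x, dtype=bool)
    raise ValueError(kind)

def emphasized_loss(x, x_rec, mask, alpha=3.0, beta=1.0):
    # squared error, weighting corrupted components (alpha) more than clean ones (beta)
    w = np.where(mask, alpha, beta)
    return np.mean(w * (x - x_rec) ** 2)
```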
|
215 |
Réseaux de neurones à relaxation entraînés par critère d'autoencodeur débruitant / Relaxation neural networks trained with a denoising autoencoder criterion. Savard, François, 08 1900
L’apprentissage machine est un vaste domaine où l’on cherche à apprendre les paramètres de modèles à partir de données concrètes. Ce sera pour effectuer des tâches demandant des aptitudes attribuées à l’intelligence humaine, comme la capacité à traiter des données de haute dimensionnalité présentant beaucoup de variations. Les réseaux de neurones artificiels sont un exemple de tels modèles. Dans certains réseaux de neurones dits profonds, des concepts "abstraits" sont appris automatiquement.
Les travaux présentés ici prennent leur inspiration de réseaux de neurones profonds, de réseaux récurrents et de neuroscience du système visuel. Nos tâches de test sont la classification et le débruitement d’images quasi binaires. On permettra une rétroaction où des représentations de haut niveau (plus "abstraites") influencent des représentations à bas niveau. Cette influence s’effectuera au cours de ce qu’on nomme relaxation, des itérations où les différents niveaux (ou couches) du modèle s’interinfluencent. Nous présentons deux familles d’architectures, l’une, l’architecture complètement connectée, pouvant en principe traiter des données générales et une autre, l’architecture convolutionnelle, plus spécifiquement adaptée aux images. Dans tous les cas, les données utilisées sont des images, principalement des images de chiffres manuscrits.
Dans un type d’expérience, nous cherchons à reconstruire des données qui ont été corrompues. On a pu y observer le phénomène d’influence décrit précédemment en comparant le résultat avec et sans la relaxation. On note aussi certains gains numériques et visuels en terme de performance de reconstruction en ajoutant l’influence des couches supérieures. Dans un autre type de tâche, la classification, peu de gains ont été observés. On a tout de même pu constater que dans certains cas la relaxation aiderait à apprendre des représentations utiles pour classifier des images corrompues. L’architecture convolutionnelle développée, plus incertaine au départ, permet malgré tout d’obtenir des reconstructions numériquement et visuellement semblables à celles obtenues avec l’autre architecture, même si sa connectivité est contrainte. / Machine learning is a vast field where we seek to learn parameters for models from concrete data. The goal will be to execute various tasks requiring abilities normally associated more with human intelligence than with a computer program, such as the ability to process high dimensional data containing a lot of variations. Artificial neural networks are a large class of such models. In some neural networks said to be deep, we can observe that high level (or "abstract") concepts are automatically learned.
The work we present here takes its inspiration from deep neural networks, from recurrent networks and also from neuroscience of the visual system. Our test tasks are classification and denoising for near binary images. We aim to take advantage of a feedback mechanism through which high-level representations, that is to say relatively abstract concepts, can influence lower-level representations. This influence will happen during what we call relaxation, which is iterations where the different levels (or layers) of the model can influence each other. We will present two families of architectures based on this mechanism. One, the fully connected architecture, can in principle accept generic data. The other, the convolutional one, is specifically made for images. Both were trained on images, though, and mostly images of written characters.
In one type of experiment, we want to reconstruct data that has been corrupted. In these tasks, we have observed the feedback influence phenomenon previously described by comparing the results we obtained with and without relaxation. We also note some numerical and visual improvement in terms of reconstruction performance when we add upper layers’ influence. In another type of task, classification, little gain has been noted. Still, in one setting where we tried to classify noisy data with a representation trained without prior class information, relaxation did seem to improve results significantly. The convolutional architecture, a bit more risky at first, was shown to produce numerical and visual results in reconstruction that are near those obtained with the fully connected version, even though the connectivity is much more constrained.
|
216 |
Numerische Methoden zur Analyse hochdimensionaler Daten / Numerical Methods for Analyzing High-Dimensional Data. Heinen, Dennis, 01 July 2014
This dissertation addresses two of the main challenges that arise when working with large data sets: dimensionality reduction and data denoising. The first part of the dissertation provides a survey of dimensionality reduction. The goal of dimensionality reduction is a meaningful low-dimensional representation of a given high-dimensional data set. In particular, we discuss and compare established manifold-learning methods. The central assumption of manifold learning is that the high-dimensional data set lies (approximately) on a low-dimensional manifold. Noise in the data set is a hindrance to all dimensionality reduction methods.
The second part of the dissertation introduces a new denoising method for high-dimensional data, a wavelet shrinkage method for smoothing noisy samples of an underlying multivariate piecewise continuous function, where the sample points may be scattered. The method is a generalization and further development of the "Easy Path Wavelet Transform" (EPWT), originally introduced for image compression. It is based on a one-dimensional wavelet transform along (adaptively) constructed paths through the sample points. Suitable adaptive path constructions are essential for the success of the method. The dissertation further contains a brief discussion of the theoretical properties of wavelets along paths as well as numerical results, and closes with possible modifications of the denoising method.
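The path-based wavelet shrinkage idea can be sketched generically: order the scattered sample points along a path (here a greedy nearest-neighbour path), apply a one-dimensional wavelet transform to the values along that path, shrink the detail coefficients, and transform back. The sketch below only illustrates this principle and is not the adaptive EPWT construction of the thesis; the wavelet, level and threshold are assumptions.

```python
# Simplified path-based wavelet shrinkage for scattered data.
import numpy as np
import pywt

def greedy_path(points):
    """Greedy nearest-neighbour ordering of scattered points (a crude path choice)."""
    remaining = list(range(1, len(points)))
    path = [0]
    while remaining:
        dists = np.linalg.norm(points[remaining] - points[path[-1]], axis=1)
        path.append(remaining.pop(int(np.argmin(dists))))
    return np.array(path)

def denoise_along_path(points, values, wavelet="db4", level=4, thresh=0.5):
    order = greedy_path(points)                         # 1-D ordering of the scattered data
    coeffs = pywt.wavedec(values[order], wavelet, level=level)
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    smoothed = pywt.waverec(coeffs, wavelet)[: len(values)]
    out = np.empty_like(values, dtype=float)
    out[order] = smoothed                               # map smoothed values back to the points
    return out
```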
|
217 |
Medical Image Processing on the GPU: Past, Present and Future. Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen. January 2013
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges.
|
218 |
Multiresolution analysis of ultrasound images of the prostate. Zhao, Fangwei, January 2004
[Truncated abstract] Transrectal ultrasound (TRUS) has become the urologist’s primary tool for diagnosing and staging prostate cancer due to its real-time and non-invasive nature, low cost, and minimal discomfort. However, the interpretation of a prostate ultrasound image depends critically on the experience and expertise of a urologist and is still difficult and subjective. To overcome the subjective interpretation and facilitate objective diagnosis, computer-aided analysis of ultrasound images of the prostate would be very helpful. Computer-aided analysis of images may improve diagnostic accuracy by providing a more reproducible interpretation of the images. This thesis is an attempt to address several key elements of computer-aided analysis of ultrasound images of the prostate. Specifically, it addresses the following tasks: 1. modelling B-mode ultrasound image formation and statistical properties; 2. reducing ultrasound speckle; and 3. extracting the prostate contour. Speckle refers to the granular appearance that compromises the image quality and resolution in optics, synthetic aperture radar (SAR), and ultrasound. Due to the existence of speckle, the appearance of a B-mode ultrasound image does not necessarily relate to the internal structure of the object being scanned. A computer simulation of B-mode ultrasound imaging is presented, which not only provides an insight into the nature of speckle, but also provides a viable test-bed for ultrasound speckle reduction methods. Motivated by analysis of the statistical properties of the simulated images, the generalised Fisher-Tippett distribution is empirically proposed to analyse statistical properties of ultrasound images of the prostate. A speckle reduction scheme is then presented, which is based on Mallat and Zhong’s dyadic wavelet transform (MZDWT) and on modelling the statistical properties of the wavelet coefficients and exploiting their inter-scale correlation. Specifically, the squared modulus of the component wavelet coefficients is modelled as a two-state Gamma mixture. Inter-scale correlation is exploited by taking the harmonic mean of the posterior probability functions, which are derived from the Gamma mixture. This noise reduction scheme is applied to both simulated and real ultrasound images, and its performance is quite satisfactory in that the important features of the original noise-corrupted image are preserved while most of the speckle noise is removed successfully. It is also evaluated both qualitatively and quantitatively by comparing it with median, Wiener, and Lee filters, and the results reveal that it surpasses all these filters. A novel contour extraction scheme (CES), which fuses MZDWT and snakes, is proposed on the basis of multiresolution analysis (MRA). Extraction of the prostate contour is placed in a multi-scale framework provided by MZDWT. Specifically, the external potential functions of the snake are designated as the modulus of the wavelet coefficients at different scales, and thus are “switchable”. Such a multi-scale snake, which deforms and migrates from coarse to fine scales, eventually extracts the contour of the prostate.
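The thesis' scheme models squared wavelet coefficient moduli with a two-state Gamma mixture and combines posteriors across scales; reproducing that is beyond a short snippet, but the general wavelet-domain despeckling pattern it builds on can be sketched as below. This simplified stand-in uses a log transform and soft thresholding of 2-D wavelet detail coefficients; the wavelet, level and threshold rule are assumptions and do not reproduce the method of the thesis.

```python
# Simplified wavelet-domain despeckling: log transform (multiplicative speckle
# becomes roughly additive), soft-threshold detail subbands, reconstruct, undo log.
import numpy as np
import pywt

def despeckle(img, wavelet="db2", level=3):
    logimg = np.log1p(img.astype(float))
    coeffs = pywt.wavedec2(logimg, wavelet, level=level)
    out = [coeffs[0]]
    for details in coeffs[1:]:
        # universal-style threshold from a robust noise estimate per subband
        thr = [np.median(np.abs(d)) / 0.6745 * np.sqrt(2 * np.log(d.size)) for d in details]
        out.append(tuple(pywt.threshold(d, t, mode="soft") for d, t in zip(details, thr)))
    rec = pywt.waverec2(out, wavelet)[: img.shape[0], : img.shape[1]]
    return np.expm1(rec)
```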
|
219 |
Analyse des intervalles ECG inter- et intra-battement sur des modèles d'espace d'état et de Markov cachés / Inter-beat and intra-beat ECG interval analysis based on state space and hidden Markov models. Akhbari, Mahsa, 08 February 2016
Les maladies cardiovasculaires sont l'une des principales causes de mortalité chez l'homme. Une façon de diagnostiquer des maladies cardiaques et des anomalies est le traitement de signaux cardiaques tels que l'ECG. Dans beaucoup de ces traitements, des caractéristiques inter-battements et intra-battements de signaux ECG doivent être extraites. Ces caractéristiques comprennent les points de repère des ondes de l’ECG (leur début, leur fin et leur point de pic), les intervalles significatifs et les segments qui peuvent être définis pour le signal ECG. L'extraction des points de référence de l'ECG consiste à identifier l'emplacement du pic, du début et de la fin de l'onde P, du complexe QRS et de l'onde T. Ces points véhiculent des informations cliniquement utiles, mais la segmentation précise de chaque battement de l'ECG est une tâche difficile, même pour les cardiologues expérimentés. Dans cette thèse, nous utilisons un cadre bayésien basé sur le modèle dynamique d'ECG proposé par McSharry. Puisque ce modèle s'appuie sur la morphologie des ECG, il peut être utile pour la segmentation et l'analyse d'intervalles d'ECG. Afin de tenir compte de la séquentialité des ondes P, QRS et T, nous utiliserons également l'approche de Markov et des modèles de Markov cachés (MMC). En bref, dans cette thèse, nous utilisons un modèle dynamique (filtre de Kalman), un modèle séquentiel (MMC) et leur combinaison (commutation de filtres de Kalman (SKF)). Nous proposons trois méthodes à base de filtres de Kalman, une méthode basée sur les MMC et un procédé à base de SKF. Nous utilisons les méthodes proposées pour l'extraction de points de référence et l'analyse d'intervalles des ECG. Les méthodes basées sur le filtrage de Kalman sont également utilisées pour le débruitage d'ECG, la détection de l'alternance de l'onde T, et la détection du pic R de l'ECG du foetus. Pour évaluer les performances des méthodes proposées pour l'extraction des points de référence de l'ECG, nous utilisons la base de données "Physionet QT" et une base de données "Swine", qui comprennent des annotations des signaux ECG par des médecins. Pour le débruitage d'ECG, nous utilisons les bases de données "MIT-BIH Normal Sinus Rhythm", "MIT-BIH Arrhythmia" et "MIT-BIH noise stress test". La base de données "TWA Challenge 2008 database" est utilisée pour la détection de l'alternance de l'onde T. Enfin, la base de données "Physionet Computing in Cardiology Challenge 2013 database" est utilisée pour la détection du pic R de l'ECG du foetus. Pour l'extraction de points de référence, les performances des méthodes proposées sont évaluées en termes de moyenne, d'écart-type et d'erreur quadratique moyenne (EQM). Nous calculons aussi la sensibilité des méthodes. Pour le débruitage d'ECG, nous comparons les méthodes en termes d'amélioration du rapport signal à bruit. / Cardiovascular diseases are one of the major causes of mortality in humans. One way to diagnose heart diseases and abnormalities is the processing of cardiac signals such as the ECG. In many of these processes, inter-beat and intra-beat features of the ECG signal must be extracted. These features include the peak, onset and offset of the ECG waves, and meaningful intervals and segments that can be defined for the ECG signal. ECG fiducial point (FP) extraction refers to identifying the location of the peak as well as the onset and offset of the P-wave, QRS complex and T-wave, which convey clinically useful information.
However, the precise segmentation of each ECG beat is a difficult task, even for experienced cardiologists. In this thesis, we use a Bayesian framework based on the McSharry ECG dynamical model for ECG FP extraction. Since this framework is based on the morphology of ECG waves, it can be useful for ECG segmentation and interval analysis. In order to account for the time-sequential property of the ECG signal, we also use the Markovian approach and hidden Markov models (HMM). In brief, in this thesis we use a dynamic model (Kalman filter), a sequential model (HMM) and their combination (the switching Kalman filter, SKF). We propose three Kalman-based methods, an HMM-based method and an SKF-based method. We use the proposed methods for ECG FP extraction and ECG interval analysis. Kalman-based methods are also used for ECG denoising, T-wave alternans (TWA) detection and fetal ECG R-peak detection. To evaluate the performance of the proposed methods for ECG FP extraction, we use the "Physionet QT database" and a "Swine ECG database", which include ECG signal annotations by physicians. For ECG denoising, we use the "MIT-BIH Normal Sinus Rhythm", "MIT-BIH Arrhythmia" and "MIT-BIH noise stress test" databases. The "TWA Challenge 2008 database" is used for TWA detection and, finally, the "Physionet Computing in Cardiology Challenge 2013 database" is used for R-peak detection of the fetal ECG. In ECG FP extraction, the performance of the proposed methods is evaluated in terms of mean, standard deviation and root mean square of the error. We also calculate the sensitivity of the methods. For ECG denoising, we compare the methods in terms of the obtained SNR improvement.
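The Bayesian filtering machinery referred to above builds on the standard Kalman recursion. As a point of reference only, here is a minimal NumPy sketch of one linear Kalman predict/update step; the thesis itself works with extended and switching Kalman filters built on the McSharry ECG dynamical model, which this generic step does not reproduce.

```python
# One linear Kalman filter predict/update step: state x, covariance P,
# transition F, observation H, process/measurement noise covariances Q, R.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with the new observation z
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```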
|
220 |
Mathematical imaging tools in cancer research: from mitosis analysis to sparse regularisation. Grah, Joana Sarah, January 2018
This dissertation deals with customised image analysis tools in cancer research. In the field of biomedical sciences, mathematical imaging has become crucial in order to account for advancements in technical equipment and data storage by sound mathematical methods that can process and analyse imaging data in an automated way. This thesis contributes to the development of such mathematically sound imaging models in four ways: (i) automated cell segmentation and tracking. In cancer drug development, time-lapse light microscopy experiments are conducted for performance validation. The aim is to monitor the behaviour of cells in cultures that have previously been treated with chemotherapy drugs, since atypical duration and outcome of mitosis, the process of cell division, can be an indicator of successfully working drugs. As an imaging modality we focus on phase contrast microscopy, hence avoiding phototoxicity and influence on cell behaviour. As a drawback, the common halo and shade-off effects impede image analysis. We present a novel workflow uniting automated mitotic cell detection with the Hough transform and subsequent cell tracking by a tailor-made level-set method in order to obtain statistics on the length of mitosis and on cell fates. The proposed image analysis pipeline is deployed in a MATLAB software package called MitosisAnalyser. For the detection of mitotic cells we use the circular Hough transform. This concept is investigated further in the framework of image regularisation, in the general context of imaging inverse problems in which circular objects should be enhanced: (ii) we exploit sparsity of first-order derivatives in combination with the linear circular Hough transform operation. Furthermore, (iii) we present a new unified higher-order derivative-type regularisation functional enforcing sparsity of a vector field related to the image to be reconstructed, using curl, divergence and shear operators. The model is able to interpolate between well-known regularisers such as total generalised variation and infimal convolution total variation. Finally, (iv) we demonstrate how we can learn sparsity-promoting parametrised regularisers via quotient minimisation, which can be motivated by generalised eigenproblems. Learning approaches have recently become very popular in the field of inverse problems. However, the majority aims at fitting models to favourable training data, whereas we incorporate knowledge about both fit and misfit data. We present results resembling the behaviour of well-established derivative-based sparse regularisers, introduce novel families of non-derivative-based regularisers and extend this framework to classification problems.
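As a hedged illustration of the detection step only (not the MitosisAnalyser pipeline, which is implemented in MATLAB), roughly circular mitotic cells can be located with the circular Hough transform as provided by scikit-image; the radius range and peak count below are assumptions for illustration.

```python
# Circular Hough transform for detecting roughly circular cells in a grayscale frame.
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_round_cells(frame, radii=np.arange(8, 20), n_cells=10):
    edges = canny(frame, sigma=2.0)                      # edge map of the phase-contrast frame
    hspaces = hough_circle(edges, radii)                 # one accumulator per candidate radius
    _, cx, cy, r = hough_circle_peaks(hspaces, radii, total_num_peaks=n_cells)
    return list(zip(cy, cx, r))                          # (row, col, radius) per detected cell

# cells = detect_round_cells(frame)   # 'frame' is a 2-D grayscale array scaled to [0, 1]
```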
|