21 |
Segmentation of transmission images for attenuation correction in positron emission tomography
Nguiffo Podie, Yves January 2009 (has links)
Photon attenuation is a phenomenon that directly and profoundly affects the quality and the quantitative information obtained from a Positron Emission Tomography (PET) image. Severe artifacts complicating visual interpretation, as well as large accuracy errors, arise in the quantitative evaluation of PET images, biasing the verification of the correlation between true and measured concentrations. Attenuation is due to the photoelectric and Compton effects for the transmission image (30 keV - 140 keV), and predominantly to the Compton effect for the emission image (511 keV). The nuclear medicine community broadly agrees that attenuation correction is a crucial step for obtaining artifact-free, quantitatively accurate images. To correct PET emission images for attenuation, the proposed approach consists in segmenting a transmission image using segmentation algorithms: K-means (KM), Fuzzy C-means (FCM), Expectation-Maximization (EM), and EM after a wavelet transform (OEM). KM is an unsupervised algorithm that partitions the image pixels into clusters such that each cluster of the partition is defined by its objects and its centroid. FCM is an unsupervised classification algorithm that introduces the notion of fuzzy sets into the definition of clusters: each pixel of the image belongs to each cluster with a certain degree, and every cluster is characterized by its center of gravity. The EM algorithm is an estimation method for determining the maximum-likelihood parameters of a mixture of distributions, the model parameters to estimate being the mean, the covariance, and the mixture weight corresponding to each cluster. Wavelets are a tool for decomposing a signal into a sequence of so-called approximation signals of decreasing resolution, together with a sequence of corrections called details. The image to which the wavelets were applied is segmented by EM. Attenuation correction requires converting the intensities of the segmented transmission images into attenuation coefficients at 511 keV. Attenuation correction factors (ACF) are then obtained for each line of response, representing the ratio between emitted and transmitted photons. The sinogram of ACFs, formed by the set of lines of response, is then multiplied by the sinogram of the emission image to obtain the attenuation-corrected sinogram, which is subsequently reconstructed to generate the corrected PET emission image. We demonstrated the usefulness of the proposed methods for medical image segmentation by applying them to images of the human brain, thorax, and abdomen. Of the four segmentation processes, Haar wavelet decomposition followed by Expectation-Maximization (OEM) appears to give the best result in terms of contrast and resolution. The segmentations yielded a clear reduction in the propagation of transmission-image noise into the emission images, improving lesion detection and diagnosis in nuclear medicine.
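For reference, a minimal NumPy sketch of the standard fuzzy c-means updates (one of the segmentation algorithms described above). This is an illustrative implementation, not the author's; the function name and defaults are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard FCM: alternate center and membership updates until the
    membership matrix stops changing. X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # memberships, rows sum to 1
    for _ in range(n_iter):
        Um = U ** m                              # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # guard against divide-by-zero
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        done = np.abs(U_new - U).max() < tol
        U = U_new
        if done:
            break
    return centers, U

# To segment a grayscale transmission image into c tissue classes, run on
# img.reshape(-1, 1) and take labels = U.argmax(axis=1).
```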
|
22 |
Deployment on a visualization platform of a cooperative algorithm for MRI image segmentation built around a multi-agent system
Laguel, Hadjer 12 October 2010 (has links) (PDF)
The objective of this work is the deployment of a magnetic resonance image (MRI) segmentation system on the visualization platform "BrainVISA". The segmentation aims to classify the brain into three regions: white matter, grey matter, and cerebrospinal fluid. Several image segmentation algorithms exist, each with its advantages and limits of use. In this work, we use two types of algorithms: FCM (Fuzzy C-Means), which considers the image as a whole and makes it possible to model uncertainty and imprecision, and region growing, which takes into account neighbourhood relations between pixels; the goal is to exploit the advantages of each. The two segmentation methods are used cooperatively. The region-growing predicates are adapted to our images in order to improve the quality of the segmented image. An implementation is then built around a multi-agent system (MAS), which makes it possible to use the region-growing algorithm more efficiently. The segmentation is performed on real images.
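For illustration, a minimal sketch of intensity-based region growing with a simple mean-difference predicate; the thesis adapts its predicates to MRI data, and the threshold `tau` here is a hypothetical parameter.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tau=10.0):
    """Grow a region from `seed` (row, col), accepting 4-neighbours whose
    intensity stays within `tau` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tau:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    frontier.append((ny, nx))
    return mask
```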
|
23 |
A Genetic Algorithm that Exchanges Neighboring Centers for Fuzzy c-Means Clustering
Chahine, Firas Safwan 01 January 2012 (has links)
Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but it has a major shortcoming: it is extremely sensitive to the choice of initial centers used to seed the algorithm. Unless k-means is carefully initialized, it converges to an inferior local optimum and results in poor-quality partitions. Developing improved methods for selecting initial centers for k-means is an active area of research. Genetic algorithms (GAs) have been successfully used to evolve a good set of initial centers. Among the most promising GA-based methods are those that exchange neighboring centers between candidate partitions in their crossover operations.
K-means is best suited to datasets with well-separated, non-overlapping clusters. Fuzzy c-means (FCM) is a popular variant of k-means designed for applications where clusters are less well-defined. Rather than assigning each point to a unique cluster, FCM determines the degree to which each point belongs to each cluster. Like k-means, FCM is extremely sensitive to the choice of initial centers. Building on GA-based methods for initial center selection in k-means, this dissertation developed an evolutionary program for center selection in FCM called FCMGA. The proposed algorithm utilized region-based crossover and other mechanisms to improve the GA.
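As a hedged sketch of what exchanging neighboring centers between two candidate partitions might look like (illustrative only; the dissertation's region-based crossover is its own, more elaborate operator):

```python
import numpy as np

def exchange_neighboring_centers(parent_a, parent_b, rng):
    """Crossover sketch: pick a random center in parent A and swap it with
    its nearest center in parent B, so the exchanged centers come from the
    same region of the data space."""
    child_a, child_b = parent_a.copy(), parent_b.copy()
    i = rng.integers(len(parent_a))
    j = np.argmin(np.linalg.norm(parent_b - parent_a[i], axis=1))
    child_a[i], child_b[j] = parent_b[j].copy(), parent_a[i].copy()
    return child_a, child_b

rng = np.random.default_rng(0)
A, B = rng.random((3, 2)), rng.random((3, 2))   # two candidate center sets
child_a, child_b = exchange_neighboring_centers(A, B, rng)
```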
To evaluate the effectiveness of FCMGA, three independent experiments were conducted using real and simulated datasets. The results from the experiments demonstrate the effectiveness and consistency of the proposed algorithm in identifying better quality solutions than extant methods. Moreover, the results confirmed the effectiveness of region-based crossover in enhancing the search process for the GA and the convergence speed of FCM. Taken together, findings in these experiments illustrate that FCMGA was successful in solving the problem of initial center selection in partitional clustering algorithms.
|
24 |
Design and development of a system for the automatic detection of diabetic retinopathy in digital images
Arenas Cavalli, José Tomas Alejandro January 2012 (has links)
Thesis submitted for the degree of Electrical Civil Engineer / Thesis submitted for the degree of Industrial Civil Engineer / Automatic detection of the ophthalmological pathology known as diabetic retinopathy has the potential to prevent cases of vision loss and blindness by enabling mass screening of patients with diabetes. This work aims to design and develop a prototype-level classifier that discriminates between patients with and without the disease through automatic processing of digital fundus images. The procedures are based on adapting and integrating published algorithms.
The stages developed in the digital processing of retinal images for this purpose were: blood vessel localization, optic disc (OD) localization, bright lesion detection, and red lesion detection. The techniques used for each stage were, respectively: Gabor wavelets and Bayesian classifiers; vessel features and position prediction using kNN regressors; segmentation using fuzzy c-means and classification using a multilayer neural network; and optimally tuned morphological operators.
The image database for training and testing the developed methods comprises 285 images from a local medical center, including 214 normal and 71 with the disease. The specific results were: 100% accuracy in OD localization on the 142 test images; identification of 91.4% of images with bright lesions, i.e., the sensitivity, while 53.3% of images without bright lesions were recognized, i.e., the specificity (84.1% sensitivity and 84.7% specificity at the pixel level), on the same 142 samples; and 97% sensitivity and 92% specificity in red lesion detection on 155 images. The performance of the vascular network localization is measured through the results of the remaining steps. The overall system performance is 88.7% sensitivity and 49.1% specificity.
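For reference, the sensitivity and specificity figures above follow the standard confusion-matrix definitions; a small worked example with made-up counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts, purely illustrative (not the study's data):
sens, spec = sensitivity_specificity(tp=64, fn=6, tn=38, fp=34)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```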
Some essential steps are needed before future deployment. First, enlarging the image database for training and testing. In addition, each of the intermediate stages of the four main phases can be refined. Finally, a round of user-level deployment of a general prototype will allow evaluation and improvement of the designed and developed methods.
|
25 |
An investigation into fuzzy clustering quality and speed: fuzzy C-means with effective seeding
Stetco, Adrian January 2017
Cluster analysis, the automatic procedure by which large data sets can be split into similar groups of objects (clusters), has innumerable applications in a wide range of problem domains. Improvements in clustering quality (as captured by internal validation indexes) and speed (number of iterations until cost function convergence), the main focus of this work, have many desirable consequences: they can result, for example, in faster and more precise detection of illness onset based on symptoms, or provide investors with rapid detection and visualization of patterns in financial time series.

Partitional clustering, one of the most popular ways of doing cluster analysis, can be classified into two main categories: hard (where the clusters discovered are disjoint) and soft (also known as fuzzy; clusters are non-disjoint, or overlapping). In this work we consider how improvements in the speed and solution quality of the soft partitional clustering algorithm Fuzzy C-means (FCM) can be achieved through more careful and informed initialization based on data content. By selecting initial cluster centers in a way which disperses them through the data space, the resulting FCM++ approach samples starting centers that are well spread in the input space, yielding both faster convergence times and higher-quality solutions. Moreover, we allow the user to specify a parameter indicating how far apart the cluster centers should be picked in the data space right at the beginning of the clustering procedure. We show FCM++'s superior behaviour in both convergence times and quality compared with existing methods, on a wide range of artificially generated and real data sets.

We consider a case study where we propose a methodology based on FCM++ for pattern discovery on synthetic and real-world time series data. We discuss a method that utilizes both Pearson correlation and Multi-Dimensional Scaling to reduce data dimensionality, remove noise, and make the data set easier to interpret and analyse. We show that by using FCM++ we can make a positive impact on quality (with the Xie-Beni index being lower in nine out of ten cases for FCM++) and speed (on average 6.3 iterations compared with 22.6) when clustering these lower-dimensional, noise-reduced representations of the time series. This methodology provides a clearer picture of the cluster analysis results and helps in detecting similarly behaving time series, which could come from any domain.

Further, we investigate the use of Spherical Fuzzy C-Means (SFCM) with the seeding mechanism used for FCM++ on news text data retrieved from a popular British newspaper. The methodology allows us to visualize and group hundreds of news articles based on the topics discussed within. The positive impact made by SFCM++ translates into a faster process (on average 12.2 iterations compared with the 16.8 needed by the standard SFCM) and a higher-quality solution (with the Xie-Beni index being lower for SFCM++ in seven out of every ten runs).
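A minimal sketch of dispersed seeding in the spirit described above, using k-means++-style sampling; the exponent `p` is an assumed stand-in for the user-specified spread parameter, not FCM++'s exact mechanism:

```python
import numpy as np

def spread_seed(X, c, p=2.0, seed=0):
    """Pick c initial centers dispersed through the data space: each new
    center is sampled with probability proportional to (distance to the
    nearest already-chosen center) ** p, so larger p spreads centers wider."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(c - 1):
        d = np.min([np.linalg.norm(X - m, axis=1) for m in centers], axis=0)
        probs = d ** p / (d ** p).sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```

The resulting centers can then seed FCM directly in place of random initialization.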
|
26 |
Improving membership probabilities in fuzzy classification, with application to precise orbit computation
Soto Espinosa, Jesús Antonio 30 June 2005
No description available.
|
27 |
Fuzzy Entropy Based Fuzzy c-Means Clustering with Deterministic and Simulated Annealing Methods
FURUHASHI, Takeshi, YASUDA, Makoto 01 June 2009
No description available.
|
28 |
Estimation of Parameters in Support Vector Regression
Chan, Yi-Chao 21 July 2006
The selection and modification of kernel functions is an important problem in the field of support vector learning, since the kernel function of a support vector machine strongly influences its performance. The kernel function projects the dataset from the original data space into a feature space, so problems that couldn't be solved in low dimensions may become solvable in a higher dimension through the kernel transformation. In this thesis, we adopt the FCM clustering algorithm to group data patterns into clusters, and then use a statistical approach to calculate the standard deviation of each pattern with respect to the other patterns in the same cluster. We can thereby estimate the distribution of the data patterns and assign a proper standard deviation to each pattern; this standard deviation plays the role of the variance (width) of a radial basis function. The original data patterns and the variance of each pattern are then used for support vector learning. Experimental results have shown that our approach can derive better kernel functions than other methods and achieves better learning and generalization abilities.
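A sketch of the width-assignment idea, assuming hard cluster labels are available (e.g. the argmax of FCM memberships); the per-cluster spread used here is one plausible reading of the per-pattern statistic, not the thesis's exact formula:

```python
import numpy as np

def per_pattern_rbf_widths(X, labels):
    """Assign each pattern the spread of its cluster (RMS distance of the
    cluster's members to their centroid) as its RBF kernel width."""
    sigma = np.empty(len(X))
    for k in np.unique(labels):
        members = X[labels == k]
        centroid = members.mean(axis=0)
        spread = np.sqrt(np.mean(np.sum((members - centroid) ** 2, axis=1)))
        sigma[labels == k] = max(spread, 1e-8)   # avoid zero-width kernels
    return sigma
```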
|
29 |
Scalable frameworks and algorithms for cluster ensembles and clustering data streams
Hore, Prodip 01 June 2007
Clustering algorithms are an important tool for data mining and data analysis. They fall under the category of unsupervised learning algorithms, which can group patterns without an external teacher or labels using some kind of similarity metric. Clustering algorithms are generally iterative in nature and computationally intensive, and for data sets larger than memory they incur disk accesses in every iteration, making them unacceptably slow. To provide a scalable framework, data can be processed in chunks that fit into memory, and multiple processors may be used to process chunks in parallel. The clustering solutions from the chunks together form an ensemble and can be merged to provide a global solution, so merging multiple clustering solutions is central to a scalable framework.

Combining multiple clustering solutions, or partitions, is also important for obtaining a robust clustering solution, merging distributed clustering solutions, and providing a knowledge-reuse and privacy-preserving data mining framework. Here we address combining multiple clustering solutions in a scalable framework. We also propose algorithms for incrementally clustering large or very large data sets, including an algorithm that can cluster large data sets in a single pass; this algorithm is further extended to handle clustering of infinite data streams. Such incremental/online algorithms can be used for real-time processing as they do not revisit data and can process data streams under the constraints of limited buffer size and computational time. Thus, different frameworks and algorithms are proposed to address scalability issues in different settings.

To our knowledge, we are the first to introduce scalable algorithms, in terms of time and space complexity, for merging cluster ensembles on large real-world data sets. We are also the first to introduce single-pass and streaming variants of the fuzzy c-means algorithm. We have evaluated the performance of our proposed frameworks and algorithms on both artificial and large real-world data sets, and we discuss a comparison of our algorithms with other relevant algorithms. These comparisons show the scalability of the new algorithms and the effectiveness of the partitions they create.
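As an illustration of the single-pass pattern, a sketch that clusters each chunk together with weighted centroids summarizing all previous chunks (weighted hard k-means stands in for the weighted fuzzy c-means such a scheme would use; this is not the thesis's algorithm):

```python
import numpy as np

def weighted_kmeans(X, w, c, n_iter=50, seed=0):
    """Hard k-means with per-point weights; assumes len(X) >= c."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for k in range(c):
            m = labels == k
            if m.any():
                centers[k] = np.average(X[m], weights=w[m], axis=0)
    return centers, labels

def single_pass_cluster(chunks, c):
    """One pass over the data: after each chunk, keep only c weighted
    centroids as the summary of everything seen so far."""
    summary_X = np.empty((0, chunks[0].shape[1]))
    summary_w = np.empty(0)
    for chunk in chunks:
        X = np.vstack([summary_X, chunk])
        w = np.concatenate([summary_w, np.ones(len(chunk))])
        centers, labels = weighted_kmeans(X, w, c)
        summary_X = centers
        summary_w = np.array([w[labels == k].sum() for k in range(c)])
    return summary_X, summary_w
```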
|
30 |
Methodology development algorithms for processing and analysis of optical coherence tomography images (O.C.T.)
Mandelias, Konstantinos 15 January 2014
Optical Coherence Tomography (OCT) is a catheter-based imaging method that employs near-infrared light to produce high-resolution cross-sectional intravascular images. A new segmentation technique is implemented for automatic lumen area extraction and stent strut detection in intravascular OCT images for the purpose of quantitative analysis of neointimal hyperplasia (NIH). A graphical user interface (GUI) is also designed based on the employed algorithm.

Methods: Four clinical datasets of frequency-domain OCT scans of the human femoral artery were analysed. First, a segmentation method based on Fuzzy C-Means (FCM) clustering and the Wavelet Transform (WT) was applied for inner luminal contour extraction. Subsequently, stent strut positions were detected by incorporating metrics derived from the local maxima of the wavelet transform into the FCM membership function.

Results: The inner lumen contour and the stent strut positions were extracted with very high accuracy. Compared with manual segmentation by an expert physician, the automatic segmentation had an average overlap value of 0.917 ± 0.065 for all OCT images included in the study. The proposed method and all the automatic segmentation algorithms utilised in this thesis (k-means, FCM, MRF-ICM, and MRF-Metropolis) were also compared against the physician's manual assessments in terms of mean distance difference (mm) and processing time (s). The strut detection procedure successfully identified 9.57 ± 0.5 struts per OCT image.

Conclusions: A new, fast, and robust automatic segmentation technique combining FCM and WT for lumen border extraction and strut detection in intravascular OCT images was designed and implemented. The proposed algorithm may be employed for automated quantitative morphological analysis of in-stent neointimal hyperplasia.
|