21 |
Segmentation of transmission images for attenuation correction in positron emission tomography
Nguiffo Podie, Yves January 2009 (has links)
Photon attenuation is a phenomenon that directly and profoundly affects the quality and the quantitative information obtained from a Positron Emission Tomography (PET) image. Severe artifacts that complicate visual interpretation, as well as large accuracy errors, appear in the quantitative evaluation of PET images, biasing the verification of the correlation between actual and measured concentrations. Attenuation is due to the photoelectric and Compton effects for the transmission image (30 keV - 140 keV), and mostly to the Compton effect for the emission image (511 keV). The nuclear medicine community widely agrees that attenuation correction is a crucial step for obtaining artifact-free and quantitatively accurate images.

To correct PET emission images for attenuation, the proposed approach consists in segmenting a transmission image with several segmentation algorithms: K-means (KM), Fuzzy C-means (FCM), Expectation-Maximization (EM), and EM after a wavelet transform (OEM). KM is an unsupervised algorithm that partitions the image pixels into clusters such that each cluster of the partition is defined by its objects and its centroid. FCM is an unsupervised classification algorithm that introduces the notion of fuzzy sets into the definition of the clusters; each pixel of the image belongs to every cluster with a certain degree, and every cluster is characterized by its center of gravity. The EM algorithm is an estimation method for determining the maximum-likelihood parameters of a mixture of distributions, the model parameters to be estimated being the mean, the covariance and the mixture weight corresponding to each cluster. Wavelets are a tool for decomposing a signal into a sequence of so-called approximation signals of decreasing resolution, followed by a sequence of corrections called details; the wavelet-transformed image is then segmented by EM.

Attenuation correction requires converting the intensities of the segmented transmission images into attenuation coefficients at 511 keV. Attenuation correction factors (ACFs) are then obtained for each line of response; they represent the ratio between emitted and transmitted photons. The sinogram of the ACFs, formed by the set of lines of response, is then multiplied by the sinogram of the emission image to obtain the attenuation-corrected sinogram, which is subsequently reconstructed to generate the corrected PET emission image. We demonstrated the usefulness of the proposed methods for medical image segmentation by applying them to human brain, thorax and abdomen images. Of the four segmentation processes, decomposition with Haar wavelets followed by Expectation-Maximization (OEM) appears to give the best result in terms of contrast and resolution. The segmentations clearly reduced the propagation of transmission-image noise into the emission images, improving lesion detection and thereby diagnosis in nuclear medicine.
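A minimal sketch of this correction step, assuming NumPy and scikit-image are available and using hypothetical input names (`mu_map`, `emission_sino`, `theta`, `pixel_size_cm` are not taken from the thesis): the segmented attenuation map is forward-projected to obtain an ACF per line of response, and the emission sinogram is scaled by those factors before reconstruction.

```python
import numpy as np
from skimage.transform import radon, iradon

def attenuation_correct(mu_map, emission_sino, theta, pixel_size_cm=0.2):
    """Sketch of sinogram-domain attenuation correction.

    mu_map        : 2-D map of linear attenuation coefficients at 511 keV (1/cm),
                    e.g. a segmented transmission image with one coefficient per tissue class.
    emission_sino : emission sinogram with shape (detector bins, len(theta)).
    theta         : projection angles in degrees matching the emission acquisition.
    """
    # Line integrals of mu along each line of response (Radon transform).
    # radon() sums pixel values, so multiply by the pixel size to approximate the integral of mu dl.
    line_integrals = radon(mu_map, theta=theta, circle=True) * pixel_size_cm

    # Attenuation correction factor per LOR: ratio of emitted to transmitted photons.
    acf = np.exp(line_integrals)

    # Corrected sinogram: measured emission counts scaled up by the ACFs.
    corrected_sino = emission_sino * acf

    # Reconstruct the corrected emission image (filtered back-projection here,
    # as a stand-in for whatever reconstruction the scanner software uses).
    return iradon(corrected_sino, theta=theta, circle=True)
```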
|
22 |
Deployment on a visualization platform of a cooperative algorithm for MRI image segmentation built around a multi-agent system
Laguel, Hadjer 12 October 2010 (has links) (PDF)
The goal of this work is to deploy a magnetic resonance image (MRI) segmentation system on the "BrainVISA" visualization platform. The segmentation aims to classify the brain into three regions: white matter, grey matter and cerebrospinal fluid. Several image segmentation algorithms exist, each with its own advantages and limitations. In this work we use two types of algorithms: FCM (Fuzzy C-Means), which considers the image as a whole and makes it possible to model uncertainty and imprecision, and region growing, which takes into account the neighbourhood relations between pixels; the aim is to take advantage of the strengths of each. The two segmentation methods are used cooperatively. The region-growing predicates are adapted to our images in order to improve the quality of the segmented image. An implementation is then built around a multi-agent system (MAS), which allows the region-growing algorithm to be used more efficiently. The segmentation is performed on real images.
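A minimal NumPy sketch of the FCM update equations used in such a pipeline, with the common fuzzifier m = 2 as an assumed default (names and defaults are illustrative, not taken from the thesis):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=None):
    """Minimal Fuzzy C-Means sketch.

    X : (n_samples, n_features) array, e.g. MRI intensities reshaped to one column.
    c : number of clusters (3 here: white matter, grey matter, CSF).
    m : fuzzifier (> 1); m = 2 is a common default.
    Returns (centers, memberships), memberships having shape (n_samples, c).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)        # random fuzzy partition, rows sum to 1

    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted means
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))         # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U
```

In the cooperative scheme described above, the pixels with the highest membership degrees could then serve as seeds for the region-growing step.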
|
23 |
A Genetic Algorithm that Exchanges Neighboring Centers for Fuzzy c-Means Clustering
Chahine, Firas Safwan 01 January 2012 (has links)
Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but it has a major shortcoming: it is extremely sensitive to the choice of initial centers used to seed the algorithm. Unless k-means is carefully initialized, it converges to an inferior local optimum and results in poor-quality partitions. Developing improved methods for selecting initial centers for k-means is an active area of research. Genetic algorithms (GAs) have been successfully used to evolve a good set of initial centers. Among the most promising GA-based methods are those that exchange neighboring centers between candidate partitions in their crossover operations.
K-means is best suited to work when datasets have well-separated non-overlapping clusters. Fuzzy c-means (FCM) is a popular variant of k-means that is designed for applications when clusters are less well-defined. Rather than assigning each point to a unique cluster, FCM determines the degree to which each point belongs to a cluster. Like k-means, FCM is also extremely sensitive to the choice of initial centers. Building on GA-based methods for initial center selection for k-means, this dissertation developed an evolutionary program for center selection in FCM called FCMGA. The proposed algorithm utilized region-based crossover and other mechanisms to improve the GA.
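A rough sketch of the general idea, evolving candidate center sets with the FCM cost J_m as fitness and a generic nearest-neighbor center exchange as crossover (a simplified stand-in, assuming NumPy; mutation and the exact region-based FCMGA operators are omitted):

```python
import numpy as np

def fcm_cost(X, centers, m=2.0):
    """FCM objective J_m for fixed centers, with memberships set to their optimum."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    U = d2 ** (-1.0 / (m - 1))
    U /= U.sum(axis=1, keepdims=True)
    return float(((U ** m) * d2).sum())

def exchange_crossover(parent_a, parent_b, rng):
    """Child keeps parent A's centers, each occasionally swapped for the
    nearest ('neighboring') center of parent B."""
    child = parent_a.copy()
    for i, center in enumerate(parent_a):
        if rng.random() < 0.5:
            j = np.argmin(((parent_b - center) ** 2).sum(axis=1))
            child[i] = parent_b[j]
    return child

def evolve_centers(X, c=3, pop_size=20, generations=50, m=2.0, seed=0):
    """Return an evolved center set that can be used to seed FCM."""
    rng = np.random.default_rng(seed)
    pop = [X[rng.choice(len(X), c, replace=False)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda cand: fcm_cost(X, cand, m))
        elite = pop[: pop_size // 2]                 # keep the better half
        pop = elite + [exchange_crossover(elite[rng.integers(len(elite))],
                                          elite[rng.integers(len(elite))], rng)
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda cand: fcm_cost(X, cand, m))
```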
To evaluate the effectiveness of FCMGA, three independent experiments were conducted using real and simulated datasets. The results from the experiments demonstrate the effectiveness and consistency of the proposed algorithm in identifying better quality solutions than extant methods. Moreover, the results confirmed the effectiveness of region-based crossover in enhancing the search process for the GA and the convergence speed of FCM. Taken together, findings in these experiments illustrate that FCMGA was successful in solving the problem of initial center selection in partitional clustering algorithms.
|
24 |
Design and development of a system for the automatic detection of diabetic retinopathy in digital images
Arenas Cavalli, José Tomas Alejandro January 2012 (has links)
Thesis submitted for the degree of Electrical Civil Engineer / Thesis submitted for the degree of Industrial Civil Engineer / Automatic detection of the ophthalmological pathology known as diabetic retinopathy has the potential to prevent cases of vision loss and blindness, provided that mass screening of patients with diabetes is promoted. This work aims to design and develop a prototype-level classifier that discriminates between patients with and without the disease through automatic processing of digital fundus images. The procedures are based on the adaptation and integration of published algorithms.
The stages developed in the digital processing of the retinal images for this purpose were: blood vessel localization, optic disc (OD) localization, bright lesion detection, and red lesion detection. The techniques used for each stage were, respectively: Gabor wavelets and Bayesian classifiers; vessel features and position prediction using kNN regressors; segmentation using fuzzy c-means and classification with a multilayer neural network; and optimally tuned morphological operators.
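As a sketch of just one of these stages, red-lesion candidate detection with morphological operators might look like the following (assuming scikit-image; the structuring-element radius and threshold are illustrative values, whereas the thesis tunes its operators optimally):

```python
import numpy as np
from skimage.morphology import black_tophat, disk

def red_lesion_candidates(fundus_rgb, radius=7, threshold=0.03):
    """Sketch of morphological red-lesion candidate detection.

    fundus_rgb : float RGB fundus image scaled to [0, 1].
    Red lesions (microaneurysms, small haemorrhages) appear as small dark blobs
    in the green channel, so a black top-hat with a disk slightly larger than
    the lesions enhances them against the background.
    """
    green = fundus_rgb[..., 1]
    enhanced = black_tophat(green, disk(radius))
    return enhanced > threshold     # boolean candidate mask, to be filtered further
```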
The image database for training and testing the developed methods contains 285 images from a local medical center, including 214 normal images and 71 with the disease. The specific results were: 100% accuracy in locating the OD in the 142 test images; identification of 91.4% of the images with bright lesions (sensitivity), while 53.3% of the images without bright lesions were recognized (specificity), with 84.1% sensitivity and 84.7% specificity at the pixel level, on the same 142 samples; and 97% sensitivity and 92% specificity in the detection of red lesions in 155 images. The performance of the vascular-network localization is measured through the results of the subsequent steps. The overall performance of the system is 88.7% sensitivity and 49.1% specificity.
Some fundamental steps are needed for future deployment. First, the image database for training and testing should be enlarged. In addition, each of the intermediate stages of the four main phases can be refined. Finally, a round of user-level deployment of a general prototype will allow the designed and developed methods to be evaluated and improved.
|
25 |
An investigation into fuzzy clustering quality and speed : fuzzy C-means with effective seeding
Stetco, Adrian January 2017
Cluster analysis, the automatic procedure by which large data sets can be split into similar groups of objects (clusters), has innumerable applications in a wide range of problem domains. Improvements in clustering quality (as captured by internal validation indexes) and speed (number of iterations until cost function convergence), the main focus of this work, have many desirable consequences. They can result, for example, in faster and more precise detection of illness onset based on symptoms, or provide investors with rapid detection and visualization of patterns in financial time series. Partitional clustering, one of the most popular ways of doing cluster analysis, can be classified into two main categories: hard (where the clusters discovered are disjoint) and soft (also known as fuzzy; clusters are non-disjoint, or overlapping).

In this work we consider how improvements in the speed and solution quality of the soft partitional clustering algorithm Fuzzy C-means (FCM) can be achieved through more careful and informed initialization based on data content. The resulting FCM++ approach samples starting cluster centers during the initialization phase in a way that disperses them through the data space. Because the cluster centers are well spread in the input space, both faster convergence times and higher-quality solutions result. Moreover, we allow the user to specify a parameter indicating how far apart the cluster centers should be picked in the data space right at the beginning of the clustering procedure. We show FCM++'s superior behaviour in both convergence times and quality compared with existing methods, on a wide range of artificially generated and real data sets.

We consider a case study where we propose a methodology based on FCM++ for pattern discovery on synthetic and real-world time series data. We discuss a method that utilizes both Pearson correlation and Multi-Dimensional Scaling in order to reduce data dimensionality, remove noise and make the dataset easier to interpret and analyse. We show that by using FCM++ we can make a positive impact on quality (with the Xie-Beni index being lower in nine out of ten cases for FCM++) and speed (with on average 6.3 iterations compared with 22.6 iterations) when clustering these lower-dimensional, noise-reduced representations of the time series. This methodology provides a clearer picture of the cluster analysis results and helps in detecting similarly behaving time series, which could come from any domain.

Further, we investigate the use of Spherical Fuzzy C-Means (SFCM) with the seeding mechanism used for FCM++ on news text data retrieved from a popular British newspaper. The methodology allows us to visualize and group hundreds of news articles based on the topics discussed within. The positive impact made by SFCM++ translates into a faster process (with on average 12.2 iterations compared with the 16.8 needed by the standard SFCM) and a higher-quality solution (with the Xie-Beni index being lower for SFCM++ in seven out of every ten runs).
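A sketch of what dispersed seeding with a user-controlled spread parameter might look like, in the spirit of k-means++-style sampling (the exponent rule below is an illustrative assumption, not the exact FCM++ rule):

```python
import numpy as np

def spread_out_seeds(X, c, spread=2.0, seed=None):
    """Pick c initial cluster centers that are dispersed through the data space.

    spread : exponent applied to each point's squared distance to its nearest
             already-chosen seed; larger values push the seeds further apart.
    """
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]              # first seed: a random data point
    for _ in range(c - 1):
        d2 = ((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        probs = d2 ** spread
        probs /= probs.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```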
|
26 |
Improving membership probabilities in fuzzy classification, with an application to precise orbit computation
Soto Espinosa, Jesús Antonio 30 June 2005 (has links)
No description available.
|
27 |
An Empirical Study On Fuzzy C-means Clustering For Turkish Banking System
Altinel, Fatih 01 September 2012 (links)
The banking sector is very sensitive to macroeconomic and political instabilities and is prone to crises. Since banks are integrated with almost all economic agents and with other banks, these crises affect entire societies. Therefore, the classification or rating of banks with respect to their credibility becomes important. In this study we examine different models for the classification of banks. Using one of these models, fuzzy c-means clustering, banks are grouped into clusters using 48 different ratios, which can be classified under capital, asset quality, liquidity, profitability, income-expenditure structure, share in sector, share in group and branch ratios. To determine the inter-dependency between these variables, the covariance and correlation between variables are analyzed. Principal component analysis is used to decrease the number of factors. As a result, the representation space of the data is reduced from 48 variables to a 2-dimensional space, with 94.54% of the total variance accounted for by these two factors. Empirical results indicate that as the number of clusters is increased, the number of iterations required for minimizing the objective function fluctuates and is not monotonic. Also, as the number of clusters increases, the initial non-optimized maximum objective function values as well as the optimized final minimum objective function values decrease monotonically together. Another observation is that the difference between the initial non-optimized and final optimized values of the objective function starts to diminish as the number of clusters increases.
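A sketch of the reduction step, assuming the bank-by-ratio matrix is available as a NumPy array (variable names are hypothetical):

```python
import numpy as np

def two_factor_scores(ratios):
    """Project the bank-by-ratio matrix (n_banks x 48) onto its first two principal components.

    ratios : one row per bank, one column per financial ratio.
    Returns the 2-D scores used for clustering and the explained-variance ratios.
    """
    # Standardize each ratio so the principal components reflect the correlation structure.
    Z = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0, ddof=1)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    scores = Z @ Vt[:2].T            # n_banks x 2 representation
    return scores, explained
```

The two-column scores could then be passed to a fuzzy c-means routine with a varying number of clusters to reproduce the objective-function behaviour described above.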
|
28 |
Fuzzy Entropy Based Fuzzy c-Means Clustering with Deterministic and Simulated Annealing Methods
FURUHASHI, Takeshi, YASUDA, Makoto 01 June 2009 (links)
No description available.
|
29 |
Estimation of Parameters in Support Vector Regression
Chan, Yi-Chao 21 July 2006 (links)
The selection and modification of kernel functions is a very important problem in the field of support vector learning, since the kernel function of a support vector machine has a great influence on its performance. The kernel function maps the dataset from the original data space into the feature space, so problems that cannot be solved in the low-dimensional space can be solved in a higher dimension through this transformation. In this thesis, we adopt the FCM clustering algorithm to group data patterns into clusters, and then use a statistical approach to calculate the standard deviation of each pattern with respect to the other patterns in the same cluster. In this way we can properly estimate the distribution of the data patterns and assign an appropriate standard deviation to each pattern. This standard deviation corresponds to the width (variance) of a radial basis function. We then have the original data patterns and the variance of each data pattern for support vector learning. Experimental results show that our approach can derive better kernel functions than other methods and also achieves better learning and generalization abilities.
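A sketch of one plausible reading of the per-pattern width estimation, assuming NumPy and a hard cluster label per pattern (for example the argmax of the FCM memberships); the fallback values are assumptions:

```python
import numpy as np

def per_pattern_sigma(X, labels):
    """Estimate one RBF width per pattern from the other members of its cluster.

    X      : (n_samples, n_features) training patterns.
    labels : hard cluster assignment per pattern, e.g. the argmax of FCM memberships.
    Returns sigma, shape (n_samples,): one RBF standard deviation per pattern.
    """
    labels = np.asarray(labels)
    n = len(X)
    sigma = np.ones(n)                               # default width for lone cluster members
    for k in range(n):
        same = labels == labels[k]
        same[k] = False                              # exclude the pattern itself
        if same.any():
            d = np.linalg.norm(X[same] - X[k], axis=1)
            sigma[k] = d.std() if d.std() > 0 else max(d.mean(), 1e-6)
    return sigma
```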
|
30 |
Accelerated Fuzzy Clustering
Parker, Jonathon Karl 01 January 2013 (links)
Clustering algorithms are a primary tool in data analysis, facilitating the discovery of groups and structure in unlabeled data. They are used in a wide variety of industries and applications. Despite their ubiquity, clustering algorithms have a flaw: they take an unacceptable amount of time to run as the number of data objects increases. The need to compensate for this flaw has led to the development of a large number of techniques intended to accelerate their performance. This need grows greater every day, as collections of unlabeled data grow larger and larger. How does one increase the speed of a clustering algorithm as the number of data objects increases and at the same time preserve the quality of the results? This question was studied using the Fuzzy c-means clustering algorithm as a baseline. Its performance was compared to the performance of four of its accelerated variants. Four key design principles of accelerated clustering algorithms were identified. Further study and exploration of these principles led to four new and unique contributions to the field of accelerated fuzzy clustering. The first was the identification of a statistical technique that can estimate the minimum amount of data needed to ensure a multinomial, proportional sample. This technique was adapted to work with accelerated clustering algorithms. The second was the development of a stopping criterion for incremental algorithms that minimizes the amount of data required, while maximizing quality. The third and fourth techniques were new ways of combining representative data objects. Five new accelerated algorithms were created to demonstrate the value of these contributions. One additional discovery made during the research was that the key design principles most often improve performance when applied in tandem. This discovery was applied during the creation of the new accelerated algorithms. Experiments show that the new algorithms improve speedup with minimal quality loss, are demonstrably better than related methods and occasionally are an improvement in both speedup and quality over the base algorithm.
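A sketch of the simplest sample-then-extend pattern behind such accelerated variants, using a Cochran-style worst-case proportion bound as a stand-in for the statistical technique mentioned above (the actual technique and stopping criterion of the dissertation are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def proportional_sample_size(margin=0.05, confidence=0.95):
    """Worst-case sample size so that every observed cluster proportion lies within
    +/- margin of its true value with the given confidence (stand-in bound)."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return int(np.ceil(z ** 2 * 0.25 / margin ** 2))

def extend_memberships(X, centers, m=2.0):
    """Single non-iterative pass: fuzzy memberships of the full data set
    against centers learned on a small sample."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    U = d2 ** (-1.0 / (m - 1))
    return U / U.sum(axis=1, keepdims=True)
```

Clustering only `proportional_sample_size()` points with FCM and then extending memberships to the rest in one pass is the simplest instance of the speed/quality trade-off these algorithms explore.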
|