1

A kernel-based fuzzy clustering algorithm and its application in classification

Wang, Jiun-hau 25 July 2006 (has links)
In this paper, we propose a kernel-based fuzzy clustering algorithm that clusters data patterns in the feature space. Our method uses kernel functions to project data from the original space into a high-dimensional feature space, where the data are divided into groups according to their similarities using an incremental clustering approach. After clustering, data patterns belonging to the same cluster in the feature space are then enclosed by an arbitrarily shaped boundary in the original space, so clusters with arbitrary shapes are discovered in the original space. Clustering, which can be viewed as unsupervised classification, has also been used to solve classification problems, and we therefore extend our method to classification. By working in the high-dimensional feature space, where the data are expected to be more separable, we can discover the inner structure of the data distribution, which gives our method the advantage of handling new incoming data patterns efficiently. The effectiveness of the method is demonstrated experimentally.
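As a rough illustration of the feature-space clustering described above, here is a minimal kernel fuzzy c-means sketch, assuming a Gaussian (RBF) kernel. It shows only the standard batch membership and prototype updates, not the incremental algorithm or the boundary-construction step of the thesis; all function names and parameters are illustrative.

```python
# A common kernelised variant of fuzzy c-means; illustration only.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """K(a, b) = exp(-gamma * ||a - b||^2), computed over the last axis."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def kernel_fcm(X, c=3, m=2.0, gamma=1.0, n_iter=50, seed=0):
    """Cluster X (n_samples x n_features) into c fuzzy clusters."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    V = X[rng.choice(n, c, replace=False)]                 # prototypes in input space
    for _ in range(n_iter):
        # Feature-space distance: ||phi(x) - phi(v)||^2 = 2 * (1 - K(x, v)) for an RBF kernel.
        K = rbf_kernel(X[:, None, :], V[None, :, :], gamma)     # (n, c)
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)
        # Standard fuzzy membership update (using squared distances).
        ratio = (d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)                              # (n, c)
        # Prototype update weighted by memberships and kernel values.
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V

# Example: three Gaussian blobs.
X = np.vstack([np.random.randn(50, 2) + off for off in ([0, 0], [5, 5], [0, 5])])
U, V = kernel_fcm(X, c=3)
labels = U.argmax(axis=1)
```

Because the RBF feature-space distance reduces to 2(1 - K(x, v)), the explicit high-dimensional mapping is never computed.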
2

Finding all maximal cliques in dynamic graphs

Stix, Volker January 2002 (has links) (PDF)
Clustering applications dealing with perception-based or biased data lead to models with non-disjoint clusters. There, objects to be clustered are allowed to belong to several clusters at the same time, which results in a fuzzy clustering. It can be shown that this is equivalent to searching for all maximal cliques in dynamic graphs G_t = (V, E_t), where E_(t-1) ⊆ E_t for t = 1, ..., T and E_0 = ∅. In this article, algorithms are provided to track all maximal cliques in a fully dynamic graph. Having all maximal cliques, it is natural to also ask for the maximum clique, so this article discusses the potential and drawbacks of this approach for that problem as well. (author's abstract) / Series: Working Papers on Information Systems, Information Business and Operations
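For context, the sketch below enumerates all maximal cliques of a static graph with the classical Bron-Kerbosch recursion. It is not the dynamic-tracking algorithm of the article, but it is the kind of building block such algorithms maintain incrementally as edges arrive; the adjacency representation is illustrative.

```python
# Static maximal-clique enumeration (Bron-Kerbosch, no pivoting).
def bron_kerbosch(R, P, X, adj, out):
    """adj: dict mapping each vertex to its set of neighbours."""
    if not P and not X:
        out.append(set(R))              # R is maximal: nothing can extend it
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

# Example: a triangle {0, 1, 2} plus a pendant edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
adj = {v: set() for v in range(4)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)    # [{0, 1, 2}, {2, 3}]
```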
3

ENHANCING FUZZY CLUSTERING METHODS FOR IMAGE SEGMENTATION USING SPATIAL INFORMATION

CHEN, SHANGYE 30 April 2019 (has links)
No description available.
4

Clustering of nonstationary data streams: a survey of fuzzy partitional methods

Abdullatif, Amr R.A., Masulli, F., Rovetta, S. 20 January 2020 (has links)
Yes / Data streams have arisen as a relevant research topic during the past decade. They are real-time, incremental in nature, temporally ordered, and massive; they contain outliers, and the objects in a data stream may evolve over time (concept drift). Clustering is often one of the earliest and most important steps in the streaming data analysis workflow. A comprehensive body of literature is available on data stream clustering; however, less attention is devoted to the fuzzy clustering approach, even though the nonstationary nature of many data streams makes it especially appealing. This survey discusses relevant data stream clustering algorithms, focusing mainly on fuzzy methods, including their treatment of outliers and of concept drift and shift. / Ministero dell‘Istruzione, dell‘Universitá e della Ricerca.
5

Diagrammes d'Euler pour la visualisation de communautés et d'ensembles chevauchants

Simonetto, Paolo 02 December 2011 (has links) (PDF)
In this thesis, we propose a method for visualizing overlapping sets and communities based on Euler diagrams. Euler diagrams are probably the most intuitive way to schematically represent sets that share elements, which makes this visual metaphor a powerful information-visualization tool. However, the automatic generation of these diagrams still raises many difficult problems. First, not all overlapping clusterings can be drawn with classical Euler diagrams. Second, most existing algorithms can only draw diagrams of modest size. Third, the needs of real applications call for a more reliable and faster generation process. In this thesis, we describe an extended version of Euler diagrams that can model every instance of the class of overlapping clusterings. We then propose an automatic algorithm for generating this extension of Euler diagrams. Finally, we present a software implementation of the new algorithm and report experiments with it.
6

Similarity-Driven Cluster Merging Method for Unsupervised Fuzzy Clustering

Xiong, Xuejian, Tan, Kian Lee 01 1900 (has links)
In this paper, a similarity-driven cluster merging method is proposed for unsupervised fuzzy clustering. The cluster merging method is used to resolve the problem of cluster validation. Starting with an overspecified number of clusters in the data, pairs of similar clusters are merged based on the proposed similarity-driven cluster merging criterion. The similarity between clusters is calculated from a fuzzy cluster similarity matrix, and an adaptive threshold is used for merging. In addition, a modified generalized objective function is used for prototype-based fuzzy clustering. The function includes the p-norm distance measure as well as principal components of the clusters, and the number of principal components is determined automatically from the data being clustered. The performance of this unsupervised fuzzy clustering algorithm is evaluated in several experiments on an artificial data set and a gene expression data set. / Singapore-MIT Alliance (SMA)
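A hedged sketch of the general merge step described above: cluster similarity is estimated from the columns of the membership matrix, and the most similar pair is merged as long as the similarity exceeds a threshold. The cosine similarity and fixed threshold used here are simple stand-ins for the paper's fuzzy cluster similarity matrix and adaptive threshold.

```python
import numpy as np

def merge_similar_clusters(U, threshold=0.8):
    """U: membership matrix (n_samples x c). Returns a reduced membership matrix.
    Similarity between clusters is taken as the cosine similarity of their
    membership columns (illustrative stand-in for the paper's criterion)."""
    U = U.copy()
    while U.shape[1] > 1:
        norms = np.linalg.norm(U, axis=0, keepdims=True)
        S = (U / norms).T @ (U / norms)          # pairwise cluster similarity
        np.fill_diagonal(S, -np.inf)
        i, j = np.unravel_index(np.argmax(S), S.shape)
        if S[i, j] < threshold:
            break
        # Merge cluster j into cluster i by summing memberships, then drop column j.
        U[:, i] += U[:, j]
        U = np.delete(U, j, axis=1)
    return U

# Start overspecified (10 clusters) and let merging reduce the number.
rng = np.random.default_rng(0)
U0 = rng.dirichlet(np.ones(10), size=200)        # random memberships, rows sum to 1
U_merged = merge_similar_clusters(U0, threshold=0.8)
print(U_merged.shape)
```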
7

An integrated and intelligent metaheuristic for constrained vehicle routing

Joubert, Johannes Wilhelm 20 July 2007 (has links)
South African metropolitan areas are experiencing rapid growth and require an increase in network infrastructure. Increased congestion negatively impacts both public and freight transport costs. The concept of City Logistics is concerned with the mobility of cities and entails optimizing urban logistics activities by considering the social, environmental, economic, financial, and energy impacts of urban freight movement. In a cost-competitive environment, freight transporters often use sophisticated vehicle routing and scheduling applications to improve fleet utilization and reduce the cost of meeting customer demands. In this thesis, the candidate builds on the observation that vehicle routing and scheduling algorithms are inherently problem-specific, with no single algorithm providing a dominant solution across all problem environments. Commercial applications mostly deploy a single algorithm in a multitude of environments that would often be better served by different algorithms. This thesis algorithmically implements the ability of human decision makers to choose an appropriate solution algorithm when solving scheduling problems. The intent of the routing agent is to classify the problem as representative of a traditional problem set, based on its characteristics, and then to solve it with the most appropriate solution algorithm known for that problem set. A not-so-artificially-intelligent-vehicle-routing-agent™ is proposed and developed in this thesis. To be considered intelligent, an agent must first be able to recognize its environment. Fuzzy c-means clustering is employed to analyze the geographic dispersion of the customers (nodes) of an unknown routing problem and determine which traditional problem set it relates to best, and cluster validation is used to classify the routing problem into a traditional problem set. Once the routing environment is classified, the agent selects an appropriate metaheuristic to solve the complex variant of the Vehicle Routing Problem. Multiple soft time windows, a heterogeneous fleet, and multiple scheduling are addressed in the presence of time-dependent travel times. A new initial solution heuristic is proposed that exploits the inherent configuration of customer service times through a concept referred to as time window compatibility. A high-quality initial solution is subsequently improved by the Tabu Search metaheuristic through both an adaptive memory and a self-selection structure. As an alternative to Tabu Search, a Genetic Algorithm is developed in this thesis. Two new crossover mechanisms are proposed that outperform a number of existing crossover mechanisms: the first successfully uses the concept of time window compatibility, while the second builds on an idea borrowed from a sweeping-arc heuristic. A neural network assists the intelligent routing agent in choosing, based on its knowledge base, between the two metaheuristic algorithms available to solve the unknown problem at hand. The routing agent then not only solves the complex variant of the problem, but adapts to the problem environment by evaluating its decisions and updating, or reaffirming, its knowledge base to ensure improved decisions are made in future. / Thesis (PhD (Industrial Engineering))--University of Pretoria, 2007. / Industrial and Systems Engineering / PhD / unrestricted
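One way to picture the time window compatibility concept mentioned in the abstract: serving customer i and then travelling to j yields an interval of possible arrival times at j, and the amount by which that interval overlaps j's own time window indicates how compatible the pair is. The function below is an illustrative reading of the idea, not the exact definition used in the thesis.

```python
def time_window_compatibility(e_i, l_i, s_i, t_ij, e_j, l_j):
    """Illustrative compatibility of serving customer j directly after i.
    e*/l* are window opening/closing times, s_i is the service time at i,
    t_ij is the travel time from i to j. Larger values = more compatible."""
    earliest_arrival = e_i + s_i + t_ij      # leave i as early as allowed
    latest_arrival = l_i + s_i + t_ij        # leave i as late as allowed
    # Overlap between the possible arrival interval at j and j's own window.
    return min(latest_arrival, l_j) - max(earliest_arrival, e_j)

# Example: customer i open 8-10 with 0.5 h service, 1 h travel, j open 9-12.
print(time_window_compatibility(8, 10, 0.5, 1.0, 9, 12))   # 2.0
```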
8

Evaluation of Fuzzy Clustering Results / Hodnocení Výsledků Fuzzy Shlukování

Říhová, Elena January 2013 (has links)
Cluster analysis is a multivariate statistical classification method encompassing many different methods and procedures. Clustering methods can be divided into hard and fuzzy; the latter provides a more precise picture of the information carried by the clustered objects than hard clustering does. In practice, however, the optimal number of clusters is not known a priori and must therefore be determined. Validity indices help solve this problem, but there are many different validity indices to choose from. One goal of this work is to create a structured overview of existing validity indices and techniques for evaluating fuzzy clustering results in order to find the optimal number of clusters. The main aim was to propose a new index for evaluating fuzzy clustering results, especially in cases with a large number of clusters (defined as more than five). The newly designed coefficient is based on the degrees of membership and on the (Euclidean) distances between the objects, i.e. on principles from both fuzzy and hard clustering. The suitability of selected validity indices was assessed on real and generated data sets whose optimal number of clusters was known a priori; these data sets have different sizes, different numbers of variables, and different numbers of clusters. The aims of the work are regarded as fulfilled. A key contribution is the new coefficient (E), which is appropriate for evaluating situations with both large and small numbers of clusters: because it is based on the principles of both fuzzy and hard clustering, it is able to correctly determine the optimal number of clusters on both small and large data sets. A second contribution is the structured overview of existing validity indices and techniques for evaluating fuzzy clustering results.
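The new coefficient E is not reproduced here; as an example of the same family of validity indices that combine membership degrees with distances, the sketch below computes the well-known Xie-Beni index, where smaller values indicate a better partition.

```python
import numpy as np

def xie_beni(X, U, V, m=2.0):
    """X: data (n x p), U: memberships (n x c), V: cluster centres (c x p)."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - V[None, :, :]) ** 2, axis=2)   # (n, c)
    compactness = np.sum((U ** m) * d2)
    # Separation: smallest squared distance between distinct centres.
    cd2 = np.sum((V[:, None, :] - V[None, :, :]) ** 2, axis=2)
    np.fill_diagonal(cd2, np.inf)
    separation = cd2.min()
    return compactness / (n * separation)

# Typical use: run fuzzy c-means for c = 2..10 and keep the c minimising the index.
```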
9

Processing of Graded Signaling Systems

Wadewitz, Philip 04 December 2015 (has links)
No description available.
10

A robust and reliable data-driven prognostics approach based on Extreme Learning Machine and Fuzzy Clustering / Une approche robuste et fiable de pronostic guidé par les données robustes et basée sur l'apprentissage automatique extrême et la classification floue

Javed, Kamran 09 April 2014 (has links)
Prognostics and Health Management (PHM) aims at extending the life cycle of engineering assets while reducing exploitation and maintenance costs. For this reason, prognostics is considered a key process with prediction capabilities. Indeed, accurate estimates of the Remaining Useful Life (RUL) of an equipment enable defining further plans of action to increase safety, minimize downtime, and ensure mission completion and efficient production. Recent advances show that data-driven approaches (mainly based on machine learning methods) are increasingly applied for fault prognostics. They can be seen as black-box models that learn system behavior directly from Condition Monitoring (CM) data, use that knowledge to infer the current state of the system, and predict the future progression of failure. However, approximating the behavior of critical machinery is a challenging task that can result in poor prognostics. To frame the problem, some issues of data-driven prognostics modeling are highlighted as follows. 1) How to effectively process raw monitoring data to obtain suitable features that clearly reflect the evolution of degradation? 2) How to discriminate degradation states and define failure criteria (which can vary from case to case)? 3) How to be sure that learned models will be robust enough to show steady performance over uncertain inputs that deviate from the learned experiences, and reliable enough to handle unknown data (i.e., operating conditions, engineering variations, etc.)? 4) How to achieve ease of application under industrial constraints and requirements? Such issues constitute the problems addressed in this thesis and have led to the development of a novel approach that goes beyond conventional methods of data-driven prognostics.
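As background for the Extreme Learning Machine named in the title: an ELM is a single-hidden-layer network whose hidden weights are drawn at random and never trained, with the output weights obtained by least squares. The sketch below is the generic ELM regressor, not the specific prognostics model developed in the thesis; the class name and toy data are illustrative.

```python
import numpy as np

class ELMRegressor:
    """Generic Extreme Learning Machine: random hidden layer + least-squares output."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))   # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                              # hidden-layer activations
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)             # output weights by least squares
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy example: learn a noisy degradation-like trend and query it slightly beyond the data.
t = np.linspace(0, 10, 200).reshape(-1, 1)
y = 0.5 * t.ravel() ** 1.5 + np.random.default_rng(1).normal(0, 0.2, 200)
model = ELMRegressor(n_hidden=30).fit(t, y)
print(model.predict(np.array([[10.05]])))
```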
