About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Distributed Hierarchical Clustering

Loganathan, Satish Kumar January 2018 (has links)
No description available.
2

Técnicas de combinação para agrupamento centralizado e distribuído de dados / Ensemble techniques for centralized and distributed clustering

Naldi, Murilo Coelho 24 January 2011 (has links)
The large amount of data resulting from different areas of knowledge creates the need to develop increasingly efficient and effective data mining techniques. Clustering techniques have been successfully applied in several areas, especially when there is no prior knowledge about the organization of the data. Nevertheless, the use of different clustering algorithms, or variations of the same algorithm, can generate a wide variety of results, which raises the need for methods to assess and select good results. One way to evaluate these results is to use cluster validation indexes. However, a wide variety of validation indexes has been proposed in the literature, which can make choosing a single index challenging when the performance of the compared indexes is unknown for the application scenario. In order to obtain a consensus among different options, a set of clustering results or validation indexes can be combined into a single final solution.
Clustering ensembles have successfully obtained results that are robust to variations in the application scenario, which makes them an attractive alternative for finding solutions of reasonable quality according to different validation indexes. Moreover, using a combination of validation indexes can yield a more powerful evaluation, as the majority of the combined indexes can compensate for the poor performance of individual indexes. In some cases, it is not possible to work with a single centralized data set, for physical reasons or privacy concerns, which creates the need to distribute the mining process. Clustering ensembles can be extended to distributed data mining problems, since information about the data from distributed sources can be combined into a single global solution. The main objective of this research is to investigate combination techniques for validation indexes and clustering results, applied to clustering ensemble selection and distributed clustering. Additionally, evolutionary clustering algorithms are studied to select quality solutions among the obtained results. The developed techniques have reduced computational complexity and good scalability, allowing their use on large data sets or in scenarios where the data are distributed.
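A minimal sketch of the clustering-ensemble idea described above (combining several base partitions into one consensus partition through a co-association matrix) might look like the following Python snippet. This is illustrative only, not the thesis's algorithm; it assumes NumPy and scikit-learn 1.2 or later, and every parameter choice here is arbitrary.

# Illustrative clustering ensemble via a co-association matrix (not the thesis's method).
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Generate a diverse set of base partitions (different k and seeds).
partitions = [
    KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    for k in (2, 3, 4, 5)
    for seed in (0, 1, 2)
]

# Co-association matrix: fraction of partitions that place each pair of points together.
n = X.shape[0]
co_assoc = np.zeros((n, n))
for labels in partitions:
    co_assoc += (labels[:, None] == labels[None, :])
co_assoc /= len(partitions)

# Consensus partition: hierarchical clustering on the 1 - co-association distances
# (requires scikit-learn >= 1.2 for the "metric" parameter).
consensus = AgglomerativeClustering(
    n_clusters=3, metric="precomputed", linkage="average"
).fit_predict(1.0 - co_assoc)
print("Consensus cluster sizes:", np.bincount(consensus))

Hierarchical clustering over co-association distances is only one common consensus function; the thesis itself investigates combination and selection strategies beyond this simple scheme.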
3

Learning Decision List from Distributed Data Sources

Charllo, Bala Vignesh January 2018 (has links)
No description available.
4

An Architecture For High-performance Privacy-preserving And Distributed Data Mining

Secretan, James 01 January 2009 (has links)
This dissertation discusses the development of an architecture and associated techniques to support Privacy Preserving and Distributed Data Mining. The field of Distributed Data Mining (DDM) attempts to solve the challenges inherent in coordinating data mining tasks over geographically distributed databases, through the application of parallel algorithms and grid computing concepts. The closely related field of Privacy Preserving Data Mining (PPDM) adds the dimension of privacy to the problem, trying to find ways in which organizations can collaborate to mine their databases collectively while preserving the privacy of their records. Developing data mining algorithms for DDM and PPDM environments can be difficult, and there is little software to support it. In addition, because these tasks can be computationally demanding, taking hours or even days to complete, organizations should be able to take advantage of high-performance and parallel computing to accelerate them. Unfortunately, there is no framework that provides all of these services easily for a developer. In this dissertation, such a framework, called APHID (Architecture for Private, High-performance Integrated Data mining), is developed to support the creation and execution of DDM and PPDM applications. The architecture allows users to flexibly and seamlessly integrate cluster and grid resources into their DDM and PPDM applications. The architecture is scalable and is split into highly decoupled services to ensure flexibility and extensibility. This dissertation first develops a comprehensive example algorithm, a privacy-preserving Probabilistic Neural Network (PNN), which serves as a basis for analyzing the difficulties of DDM/PPDM development. The privacy-preserving PNN is the first such PNN in the literature, and it provides not only a practical algorithm ready for use in privacy-preserving applications, but also a template for other data-intensive algorithms and a starting point for analyzing APHID's architectural needs. After analyzing the difficulties in the PNN algorithm's development, as well as the shortcomings of the systems surveyed, this dissertation presents the first concrete programming model joining high-performance computing resources with a privacy-preserving data mining process. Unlike many of the existing PPDM development models, the platform of services is language independent, allowing layers and algorithms to be implemented in popular languages (Java, C++, Python, etc.). An implementation of a PPDM algorithm is developed in Java utilizing the new framework. Performance results are presented, showing that APHID can enable highly simplified PPDM development while speeding up resource-intensive parts of the algorithm.
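One standard building block in privacy-preserving data mining is a secure sum, in which several sites learn an aggregate without revealing their individual values. The Python sketch below shows a minimal additive secret-sharing version of this primitive; it is a generic illustration, not APHID's protocol or the privacy-preserving PNN, and the function names, the modulus, and the example values are all assumptions made for the example.

# Minimal secure-sum sketch: each party splits its private value into random
# additive shares so that only the global total can be reconstructed.
import random

MODULUS = 2**61 - 1  # large modulus for the additive shares (illustrative choice)

def make_shares(value, n_parties):
    """Split a private integer into n_parties additive shares (mod MODULUS)."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def secure_sum(private_values):
    """Each party distributes shares; summing all shares reveals only the total."""
    n = len(private_values)
    all_shares = [make_shares(v, n) for v in private_values]
    # Party i receives the i-th share from every party and publishes a partial sum.
    partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]
    return sum(partial_sums) % MODULUS

# Three sites contribute private record counts; only the total (535) is exposed.
print(secure_sum([120, 340, 75]))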
5

Distributed Document Clustering and Cluster Summarization in Peer-to-Peer Environments

Hammouda, Khaled M. January 2007 (has links)
This thesis addresses difficult challenges in distributed document clustering and cluster summarization. Mining large document collections poses many challenges, one of which is the extraction of topics or summaries from documents for the purpose of interpreting clustering results. Another important challenge, driven by new trends in distributed repositories and peer-to-peer computing, is that document data is becoming more distributed. We introduce a solution for interpreting document clusters using keyphrase extraction from multiple documents simultaneously. We also introduce two solutions for the problem of distributed document clustering in peer-to-peer environments, each satisfying a different goal: maximizing local clustering quality through collaboration, and maximizing global clustering quality through cooperation. The keyphrase extraction algorithm efficiently extracts and scores candidate keyphrases from a document cluster. The algorithm, called CorePhrase, is based on modeling document collections as a graph upon which graph mining can be leveraged to extract frequent and significant phrases, which are used to label the clusters. Results show that CorePhrase can extract keyphrases relevant to the documents in a cluster with very high accuracy. Although this algorithm can be used to summarize centralized clusters, it is specifically employed within distributed clustering both to boost distributed clustering accuracy and to provide summaries for distributed clusters. The first method for distributed document clustering is called collaborative peer-to-peer document clustering, which models nodes in a peer-to-peer network as collaborative nodes with the goal of improving the quality of individual local clustering solutions. This is achieved through the exchange of local cluster summaries between peers, followed by the recommendation of documents to be merged into remote clusters. Results on large sets of distributed document collections show that: (i) such a collaboration technique achieves significant improvement in the final clustering of individual nodes; (ii) networks with a larger number of nodes generally achieve greater improvements in clustering after collaboration relative to the initial clustering before collaboration, while tending to achieve lower absolute clustering quality than networks with fewer nodes; and (iii) as more overlap of the data is introduced across the nodes, collaboration tends to have little effect on improving clustering quality. The second method for distributed document clustering is called hierarchically distributed document clustering. Unlike the collaborative model, this model aims at producing one clustering solution across the whole network. It specifically addresses scalability of network size, and consequently the complexity of distributed clustering, by modeling the distributed clustering problem as a hierarchy of node neighborhoods. Summarization of the global distributed clusters is achieved through a distributed version of the CorePhrase algorithm.
Results on large document sets show that: (i) distributed clustering accuracy is not affected by increasing the number of nodes for single-level networks; (ii) decent speedup can be achieved by making the hierarchy taller, but at the expense of clustering quality, which degrades as we go up the hierarchy; (iii) in networks that grow arbitrarily, data becomes more fragmented across neighborhoods, causing poor centroid generation, which suggests that the number of nodes in the network should not be increased beyond a certain level without increasing the data set size; and (iv) distributed cluster summarization can produce accurate summaries similar to those produced by centralized summarization. The proposed algorithms offer a high degree of flexibility, scalability, and interpretability of large distributed document collections. Achieving the same results using current methodologies requires centralizing the data first, which is sometimes not feasible.
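The core idea of exchanging compact cluster summaries instead of raw documents can be sketched generically: each peer clusters its own data and shares only centroids with sizes, and a neighborhood then merges those summaries by size-weighted clustering. The Python sketch below (NumPy and scikit-learn assumed) is illustrative only; it implements neither the collaborative nor the hierarchically distributed algorithm from the thesis, CorePhrase summarization is not shown, and the toy data and parameters are invented.

# Illustrative "exchange cluster summaries" sketch, not the thesis's algorithms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def local_summary(docs, k=3, seed=0):
    """A peer clusters its own documents and returns (centroid, size) pairs."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(docs)
    sizes = np.bincount(km.labels_, minlength=k)
    return list(zip(km.cluster_centers_, sizes))

def merge_summaries(summaries, k=3, seed=0):
    """Neighborhood-level step: cluster the peers' centroids, weighted by cluster size."""
    centroids = np.array([c for c, _ in summaries])
    weights = np.array([s for _, s in summaries], dtype=float)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(centroids, sample_weight=weights)
    return km.cluster_centers_

# Two peers hold disjoint portions of the same underlying corpus (toy numeric data).
X, _ = make_blobs(n_samples=400, centers=3, random_state=1)
peer_a, peer_b = X[:200], X[200:]
summaries = local_summary(peer_a) + local_summary(peer_b, seed=1)
print(merge_summaries(summaries))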
6

Approximate Clustering Algorithms for High Dimensional Streaming and Distributed Data

Carraher, Lee A. 22 May 2018 (has links)
No description available.
7

Smart Meters Big Data: Behavioral Analytics via Incremental Data Mining and Visualization

Singh, Shailendra January 2016 (has links)
The big data framework applied to smart meters offers an exceptional platform for data-driven forecasting and decision making to achieve sustainable energy efficiency. Winning consumer confidence by respecting occupants' energy consumption behavior and preferences, in order to improve participation in various energy programs, is imperative but difficult to achieve. The key elements for understanding and predicting household energy consumption are the activities occupants perform, the appliances and the times at which they are used, and inter-appliance dependencies. This information can be extracted from the context-rich big data produced by smart meters, although this is challenging because: (1) it is not trivial to mine complex interdependencies between appliances from multiple concurrent data streams; (2) it is difficult to derive accurate relationships between interval-based events in which the usage of multiple appliances persists; and (3) the continuous generation of energy consumption data can trigger changes in appliance-time and appliance-appliance associations. To overcome these challenges, we propose an unsupervised, progressive, incremental data mining technique using frequent pattern mining (appliance-appliance associations) and cluster analysis (appliance-time associations), coupled with a Bayesian network based prediction model. The proposed technique addresses the need to analyze temporal energy consumption patterns at the appliance level, which directly reflect consumers' behaviors and provide a basis for generalizing household energy models. Extensive experiments were performed on the model with real-world datasets, and strong associations were discovered. In predicting the usage of multiple appliances, the proposed model outperformed a support vector machine at every stage, attaining accuracies of 81.65%, 85.90%, and 89.58% for 25%, 50%, and 75% of the training dataset size, respectively. Moreover, accuracies of 81.89%, 75.88%, 79.23%, 74.74%, and 72.81% were obtained for short-term (hours) and long-term (day, week, month, and season) energy consumption forecasts, respectively.
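A toy illustration of the appliance-appliance association step (frequent pattern mining over time windows) is sketched below in plain Python. The window contents, appliance names, and support threshold are invented for illustration; the thesis's actual pipeline additionally couples such associations with cluster analysis of appliance-time patterns and a Bayesian network predictor, none of which is shown here.

# Toy frequent-pair mining over per-window appliance activity (illustrative only).
from collections import Counter
from itertools import combinations

# Hypothetical per-window records of active appliances (e.g., hourly windows).
windows = [
    {"kettle", "toaster", "tv"},
    {"kettle", "toaster"},
    {"tv", "console"},
    {"kettle", "toaster", "washer"},
    {"tv", "console", "kettle"},
]

min_support = 0.5  # a pair must co-occur in at least half of the windows
pair_counts = Counter(
    pair for w in windows for pair in combinations(sorted(w), 2)
)
n = len(windows)
frequent_pairs = {
    pair: count / n for pair, count in pair_counts.items() if count / n >= min_support
}
print(frequent_pairs)  # {('kettle', 'toaster'): 0.6}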
8

A scalable evolutionary learning classifier system for knowledge discovery in stream data mining

Dam, Hai Huong, Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW January 2008 (has links)
Data mining (DM) is the process of finding patterns and relationships in databases. The breakthrough in computer technologies triggered a massive growth in the data collected and maintained by organisations. In many applications, these data arrive continuously in large volumes as a sequence of instances known as a data stream; mining these data is known as stream data mining. Due to the large amount of data arriving in a data stream, each record is normally expected to be processed only once. Moreover, this process can be carried out on different sites in the organisation simultaneously, making the problem distributed in nature. Distributed stream data mining poses many challenges to the data mining community, including scalability and coping with changes in the underlying concept over time. In this thesis, the author hypothesizes that learning classifier systems (LCSs), a class of classification algorithms, have the potential to work efficiently in distributed stream data mining. LCSs are incremental learners and, being evolutionary based, are inherently adaptive. However, they suffer from two main drawbacks that hinder their use as fast data mining algorithms. First, they require a large population size, which slows down the processing of arriving instances. Second, they require a large number of parameter settings, some of which are very sensitive to the nature of the learning problem. As a result, it becomes difficult to choose the right setup for totally unknown problems. The aim of this thesis is to attack these two problems in LCSs, with a specific focus on UCS, a supervised evolutionary learning classifier system. UCS is chosen because it has been tested extensively on classification tasks and is the supervised version of XCS, a state-of-the-art LCS. The thesis first introduces an architectural design for a distributed stream data mining system. The problems that UCS faces in a distributed data stream task are confirmed through a large number of experiments with UCS and the proposed architectural design. To overcome the problem of large population sizes, the idea of using a neural network to represent the action in UCS is proposed. This new system, called NLCS, was validated experimentally using a small fixed population size and showed a large reduction in the population size needed to learn the underlying concept in the data. An adaptive version of NLCS called ANCS is then introduced; it dynamically controls the population size of NLCS. A comprehensive analysis of the behaviour of ANCS revealed interesting patterns in the behaviour of the parameters, which motivated an ensemble version of the algorithm with nine nodes, each using a different parameter setting; together they cover all patterns of behaviour noticed in the system. A voting gate is used for the ensemble. The resulting ensemble does not require any parameter setting and showed better performance on all datasets tested. The thesis concludes by testing the ANCS system in the architectural design for distributed environments proposed earlier. The contributions of the thesis are: (1) reducing the UCS population size by an order of magnitude using a neural representation; (2) introducing a mechanism for adapting the population size; (3) proposing an ensemble method that does not require parameter setting; and, primarily, (4) showing that the proposed LCS can work efficiently for distributed stream data mining tasks.
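The idea of an ensemble of differently parameterised incremental learners behind a voting gate can be illustrated with a generic prequential (test-then-train) loop over a data stream. The Python sketch below uses scikit-learn's SGDClassifier purely as a stand-in; it does not implement UCS, NLCS, or ANCS, and the synthetic dataset, batch size, and hyperparameter grid are assumptions made for the example.

# Generic streaming ensemble with majority voting (stand-in learners, not LCSs).
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
classes = np.unique(y)

# Ensemble members differ only in a hyperparameter, mimicking an ensemble of
# fixed parameter settings (three members shown for brevity).
ensemble = [SGDClassifier(alpha=a, random_state=0) for a in (1e-4, 1e-3, 1e-2)]

batch_size, correct, seen = 100, 0, 0
for start in range(0, len(X), batch_size):
    Xb, yb = X[start:start + batch_size], y[start:start + batch_size]
    if start > 0:  # test-then-train: score each batch before learning from it
        votes = np.array([m.predict(Xb) for m in ensemble])
        majority = np.apply_along_axis(
            lambda col: np.bincount(col, minlength=2).argmax(), 0, votes
        )
        correct += int((majority == yb).sum())
        seen += len(yb)
    for m in ensemble:  # incremental update, one pass per arriving batch
        m.partial_fit(Xb, yb, classes=classes)

print("prequential accuracy:", round(correct / seen, 3))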
