1 |
The GDense Algorithm for Clustering Data Streams with High Quality. Lin, Shu-Yi. 25 June 2009.
In recent years, mining data streams has been widely studied. A data stream is a sequence of dynamic, continuous, unbounded, real-time data items arriving at a very high rate that can be read only once. In data mining, clustering is one of the useful techniques for discovering interesting patterns in the underlying data objects. The clustering problem can be defined formally as follows: given n data points in the d-dimensional metric space, partition the data points into k clusters such that the data points within a cluster are more similar to each other than to data points in different clusters. In the data stream environment, the difficulties of clustering include storage overhead, low clustering quality, and low updating efficiency. Current clustering algorithms can be broadly classified into four categories: partitioning, hierarchical, density-based, and grid-based approaches. The advantage of the grid-based approach is that it can handle large databases. In the density-based approach, the insertion or deletion of a data point affects the current clustering only in the neighborhood of that point. Combining the advantages of the grid-based and density-based approaches, the CDS-Tree algorithm was proposed. Although it can handle large databases, its clustering quality is limited by the grid partition and the dense-cell threshold. Therefore, in this thesis, we present a new high-quality clustering algorithm for data streams, GDense. The GDense algorithm achieves high quality through two kinds of partition, cells and quadcells, and two kinds of threshold, the dense-cell threshold and one quarter of that threshold. Moreover, in the data-insertion part of GDense, the 7 cases take 3 factors about the cell and the quadcell into consideration; in the deletion part, the 10 cases take 5 factors about the cell into consideration. Our simulation results show that, regardless of the number of data points, the number of cells, the size of the sliding window, or the dense-cell threshold, the clustering purity of GDense is always higher than that of the CDS-Tree algorithm. We also compare the purity of GDense and CDS-Tree in the presence of outliers: whether the number of outliers is large or small, the clustering purity of GDense remains higher than that of CDS-Tree, improving purity by about 20% compared with the CDS-Tree algorithm.
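The grid-and-density idea that GDense and CDS-Tree build on can be illustrated with a minimal sketch (not the thesis's algorithm): points are hashed into equal-width cells, cells whose counts reach a density threshold are marked dense, and touching dense cells are merged into clusters. The cell width, threshold value, and merging rule below are illustrative assumptions.

```python
from collections import defaultdict
from itertools import product

def grid_cluster(points, cell_width=1.0, density_threshold=5):
    """Toy grid/density clustering: count points per cell, keep dense cells,
    and merge neighbouring dense cells into clusters (illustration only)."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(x // cell_width) for x in p)   # cell id of the point
        cells[key].append(p)

    # A cell is dense if it holds at least `density_threshold` points.
    dense = {key for key, pts in cells.items() if len(pts) >= density_threshold}

    # Merge dense cells that touch (share a face or corner) via flood fill.
    clusters, seen = [], set()
    for start in dense:
        if start in seen:
            continue
        stack, cluster_cells = [start], []
        while stack:
            cell = stack.pop()
            if cell in seen:
                continue
            seen.add(cell)
            cluster_cells.append(cell)
            for offset in product((-1, 0, 1), repeat=len(cell)):
                neighbour = tuple(c + o for c, o in zip(cell, offset))
                if neighbour in dense and neighbour not in seen:
                    stack.append(neighbour)
        clusters.append([p for cell in cluster_cells for p in cells[cell]])
    return clusters
```

In this picture, GDense's quadcells correspond to refining each cell into quarters, so that cluster boundaries are less constrained by the coarse grid partition.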
|
2 |
Clustering of nonstationary data streams: a survey of fuzzy partitional methods. Abdullatif, Amr R.A., Masulli, F., Rovetta, S. 20 January 2020.
Data streams have arisen as a relevant research topic during the past decade. They are real-time, incremental in nature, temporally ordered, massive, contain outliers, and the objects in a data stream may evolve over time (concept drift). Clustering is often one of the earliest and most important steps in the streaming data analysis workflow. A comprehensive literature is available about stream data clustering; however, less attention is devoted to the fuzzy clustering approach, even though the nonstationary nature of many data streams makes it especially appealing. This survey discusses relevant data stream clustering algorithms focusing mainly on fuzzy methods, including their treatment of outliers and concept drift and shift. / Ministero dell'Istruzione, dell'Università e della Ricerca.
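As a concrete reminder of what a fuzzy partitional method computes, the sketch below implements the standard fuzzy c-means membership and center updates for one chunk of a stream; the streaming variants surveyed in the paper add summarization, forgetting, and drift handling on top of this core step. The fuzzifier value and chunk-wise processing are illustrative assumptions, not any specific surveyed algorithm.

```python
import numpy as np

def fcm_memberships(chunk, centers, m=2.0, eps=1e-9):
    """Fuzzy c-means membership matrix U (n x c) for one chunk of points,
    given the current cluster centers; m > 1 is the fuzzifier."""
    # Distances from every point to every center, shape (n, c).
    d = np.linalg.norm(chunk[:, None, :] - centers[None, :, :], axis=2) + eps
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fcm_update_centers(chunk, U, m=2.0):
    """Weighted-mean center update using the fuzzy memberships."""
    w = U ** m
    return (w.T @ chunk) / w.sum(axis=0)[:, None]
```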
|
3 |
\"Identificação de correlações usando a Teoria dos Fractais\" / Correlation identification using the fractal theorySousa, Elaine Parros Machado de 29 March 2006 (has links)
O volume de informação manipulada em sistemas apoiados por computador tem crescido tanto no número de objetos que compõem os conjuntos de dados quanto na quantidade e na complexidade dos atributos. Em conjuntos de dados do mundo real, a uniformidade na distribuição de valores e a independência entre atributos são propriedades bastante incomuns. De fato, dados reais são em geral caracterizados pela ampla presença de correlações entre seus atributos. Além disso, num mesmo conjunto podem existir correlações de naturezas diversas, como correlações lineares, não-lineares e não-polinomiais. Todo esse cenário pode degradar a performance dos algoritmos que manipulam e, principalmente, dos que realizam análises dos dados. Além da grande quantidade de objetos a serem tratados e do número elevado de atributos, as correlações nem sempre são conhecidas, o que pode comprometer a eficácia de tais algoritmos. Nesse contexto, as técnicas de redução de dimensionalidade permitem diminuir o número de atributos de um conjunto de dados, minimizando assim os problemas decorrentes da alta dimensionalidade. Algumas delas são baseadas na análise de correlações e, com o objetivo de reduzir a perda de informação relevante causada pela remoção de atributos, procuram eliminar apenas aqueles que sejam correlacionados aos restantes. No entanto, essas técnicas geralmente analisam como cada atributo está correlacionado a todos os demais, tratando o conjunto de atributos como um todo e usando ferramentas de análise estatística. Esta tese propõe uma abordagem diferente, baseada na Teoria dos Fractais, para detectar a existência de correlações e identificar subconjuntos de atributos correlacionados. Para cada correlação encontrada é possível ainda identificar quais são os atributos que melhor a descrevem. Conseqüentemente, um subconjunto de atributos relevantes para representar as características fundamentais dos dados é determinado, não apenas com base em correlações globais entre todos os atributos, mas também levando em consideração especificidades de correlações que envolvem subconjuntos reduzidos. A técnica apresentada é uma ferramenta a ser utilizada em etapas de pré-processamento de atividades de descoberta de conhecimento, principalmente em operações de seleção de atributos para redução de dimensionalidade. A proposta para a identificação de correlações e os conceitos que a fundamentam são validados por meio de estudos experimentais usando tanto dados sintéticos quanto reais. Finalmente, os conceitos básicos da Teoria dos Fractais são aplicados na análise de comportamento de data streams, também constituindo uma contribuição relevante desta tese de doutorado. / The volume of information processed by computer-based systems has grown not only in the amount of data but also in number and complexity of attributes. In real world datasets, uniform value distribution and independence between attributes are rather uncommon properties. In fact, real data is usually characterized by vast existence of correlated attributes. Moreover, a dataset can present different types of correlations, such as linear, non-linear and non-polynomial. This entire scenario may degrade performance of data management and, particularly, data analysis algorithms, as they need to deal with large amount of data and high number of attributes. Furthermore, correlations are usually unknown, which may jeopardize the efficacy of these algorithms. 
In this context, dimensionality reduction techniques can reduce the number of attributes in datasets, thus minimizing the problems caused by high dimensionality. Some of these techniques are based on correlation analysis and try to eliminate only attributes that are correlated to those remaining, aiming at diminishing the loss of relevant information imposed by attribute removal. However, techniques proposed so far usually analyze how each attribute is correlated to all the others, considering the attribute set as a whole and applying statistical analysis tools. This thesis presents a different approach, based on the Theory of Fractals, to detect the existence of correlations and to identify subsets of correlated attributes. In addition, the proposed technique makes it possible to identify which attributes can better describe each correlation. Consequently, a subset of attributes relevant to represent the fundamental characteristics of the dataset is determined, not only based on global correlations but also considering particularities of correlations concerning smaller attribute subsets. The proposed technique works as a tool to be used in preprocessing steps of knowledge discovery activities, mainly in feature selection operations for dimensionality reduction. The technique of correlation detection and its main concepts are validated through experimental studies with synthetic and real data. Finally, as an additional relevant contribution of this thesis, the basic concepts of the Theory of Fractals are also applied to analyze data streams behavior.
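A central tool in this line of work is the correlation fractal dimension D2, usually estimated by box counting: the data is overlaid with grids of shrinking cell side r, the sum of squared cell occupancies S(r) is computed for each grid, and D2 is the slope of log S(r) versus log r. The sketch below follows that generic recipe for data scaled into the unit hypercube; it is an illustration of the concept, not the thesis's implementation.

```python
import numpy as np

def correlation_fractal_dimension(data, levels=range(1, 8)):
    """Estimate the correlation fractal dimension D2 by box counting.
    `data` is an (n, d) array assumed to be scaled into [0, 1]^d."""
    log_r, log_s = [], []
    for level in levels:
        r = 1.0 / (2 ** level)                     # grid cell side at this level
        cells = np.floor(data / r).astype(int)     # cell id of each point
        _, counts = np.unique(cells, axis=0, return_counts=True)
        log_r.append(np.log(r))
        log_s.append(np.log(np.sum(counts.astype(float) ** 2)))
    slope, _ = np.polyfit(log_r, log_s, 1)         # D2 = slope of the log-log plot
    return slope
```

Attributes whose removal barely changes the estimated dimension are candidates for elimination, since their values are (possibly non-linearly) determined by the remaining attributes.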
|
4 |
\"Identificação de correlações usando a Teoria dos Fractais\" / Correlation identification using the fractal theoryElaine Parros Machado de Sousa 29 March 2006 (has links)
O volume de informação manipulada em sistemas apoiados por computador tem crescido tanto no número de objetos que compõem os conjuntos de dados quanto na quantidade e na complexidade dos atributos. Em conjuntos de dados do mundo real, a uniformidade na distribuição de valores e a independência entre atributos são propriedades bastante incomuns. De fato, dados reais são em geral caracterizados pela ampla presença de correlações entre seus atributos. Além disso, num mesmo conjunto podem existir correlações de naturezas diversas, como correlações lineares, não-lineares e não-polinomiais. Todo esse cenário pode degradar a performance dos algoritmos que manipulam e, principalmente, dos que realizam análises dos dados. Além da grande quantidade de objetos a serem tratados e do número elevado de atributos, as correlações nem sempre são conhecidas, o que pode comprometer a eficácia de tais algoritmos. Nesse contexto, as técnicas de redução de dimensionalidade permitem diminuir o número de atributos de um conjunto de dados, minimizando assim os problemas decorrentes da alta dimensionalidade. Algumas delas são baseadas na análise de correlações e, com o objetivo de reduzir a perda de informação relevante causada pela remoção de atributos, procuram eliminar apenas aqueles que sejam correlacionados aos restantes. No entanto, essas técnicas geralmente analisam como cada atributo está correlacionado a todos os demais, tratando o conjunto de atributos como um todo e usando ferramentas de análise estatística. Esta tese propõe uma abordagem diferente, baseada na Teoria dos Fractais, para detectar a existência de correlações e identificar subconjuntos de atributos correlacionados. Para cada correlação encontrada é possível ainda identificar quais são os atributos que melhor a descrevem. Conseqüentemente, um subconjunto de atributos relevantes para representar as características fundamentais dos dados é determinado, não apenas com base em correlações globais entre todos os atributos, mas também levando em consideração especificidades de correlações que envolvem subconjuntos reduzidos. A técnica apresentada é uma ferramenta a ser utilizada em etapas de pré-processamento de atividades de descoberta de conhecimento, principalmente em operações de seleção de atributos para redução de dimensionalidade. A proposta para a identificação de correlações e os conceitos que a fundamentam são validados por meio de estudos experimentais usando tanto dados sintéticos quanto reais. Finalmente, os conceitos básicos da Teoria dos Fractais são aplicados na análise de comportamento de data streams, também constituindo uma contribuição relevante desta tese de doutorado. / The volume of information processed by computer-based systems has grown not only in the amount of data but also in number and complexity of attributes. In real world datasets, uniform value distribution and independence between attributes are rather uncommon properties. In fact, real data is usually characterized by vast existence of correlated attributes. Moreover, a dataset can present different types of correlations, such as linear, non-linear and non-polynomial. This entire scenario may degrade performance of data management and, particularly, data analysis algorithms, as they need to deal with large amount of data and high number of attributes. Furthermore, correlations are usually unknown, which may jeopardize the efficacy of these algorithms. 
In this context, dimensionality reduction techniques can reduce the number of attributes in datasets, thus minimizing the problems caused by high dimensionality. Some of these techniques are based on correlation analysis and try to eliminate only attributes that are correlated to those remaining, aiming at diminishing the loss of relevant information imposed by attribute removal. However, techniques proposed so far usually analyze how each attribute is correlated to all the others, considering the attribute set as a whole and applying statistical analysis tools. This thesis presents a different approach, based on the Theory of Fractals, to detect the existence of correlations and to identify subsets of correlated attributes. In addition, the proposed technique makes it possible to identify which attributes can better describe each correlation. Consequently, a subset of attributes relevant to represent the fundamental characteristics of the dataset is determined, not only based on global correlations but also considering particularities of correlations concerning smaller attribute subsets. The proposed technique works as a tool to be used in preprocessing steps of knowledge discovery activities, mainly in feature selection operations for dimensionality reduction. The technique of correlation detection and its main concepts are validated through experimental studies with synthetic and real data. Finally, as an additional relevant contribution of this thesis, the basic concepts of the Theory of Fractals are also applied to analyze data streams behavior.
|
5 |
Detecção de novidade em fluxos contínuos de dados multiclasse / Novelty detection in multiclass data streams. Paiva, Elaine Ribeiro de Faria. 08 May 2014.
Mineração de fluxos contínuos de dados é uma área de pesquisa emergente que visa extrair conhecimento a partir de grandes quantidades de dados, gerados continuamente. Detecção de novidade é uma tarefa de classificação que consiste em reconhecer que um exemplo ou conjunto de exemplos em um fluxo de dados diferem significativamente dos exemplos vistos anteriormente. Essa é uma importante tarefa para fluxos contínuos de dados, principalmente porque novos conceitos podem aparecer, desaparecer ou evoluir ao longo do tempo. A maioria dos trabalhos da literatura apresentam a detecção de novidade como uma tarefa de classificação binária. Poucos trabalhos tratam essa tarefa como multiclasse, mas usam medidas de avaliação binária. Em vários problemas, o correto seria tratar a detecção de novidade em fluxos contínuos de dados como uma tarefa multiclasse, no qual o conceito conhecido do problema é formado por uma ou mais classes, e diferentes novas classes podem aparecer ao longo do tempo. Esta tese propõe um novo algoritmo MINAS para detecção de novidade em fluxos contínuos de dados. MINAS considera que a detecção de novidade é uma tarefa multiclasse. Na fase de treinamento, MINAS constrói um modelo de decisão com base em um conjunto de exemplos rotulados. Na fase de aplicação, novos exemplos são classificados usando o modelo de decisão atual, ou marcados como desconhecidos. Grupos de exemplos desconhecidos podem formar padrões-novidade válidos, que são então adicionados ao modelo de decisão. O modelo de decisão é atualizado ao longo do fluxo a fim de refletir mudanças nas classes conhecidas e permitir inserção de padrões-novidade. Esta tese também propõe uma nova metodologia para avaliação de algoritmos para detecção de novidade em fluxos contínuos de dados. Essa metodologia associa os padrões-novidade não rotulados às classes reais do problema, permitindo assim avaliar a matriz de confusão que é incremental e retangular. Além disso, a metodologia de avaliação propõe avaliar os exemplos desconhecidos separadamente e utilizar medidas de avaliação multiclasse. Por último, esta tese apresenta uma série de experimentos executados usando o MINAS e os principais algoritmos da literatura em bases de dados artificiais e reais. Além disso, o MINAS foi aplicado a um problema real, que consiste no reconhecimento de atividades humanas usando dados de acelerômetro. Os resultados experimentais mostram o potencial do algoritmo e da metodologia propostos / Data stream mining is an emergent research area that aims to extract knowledge from large amounts of continuously generated data. Novelty detection is a classification task that assesses if an example or a set of examples differ significantly from the previously seen examples. This is an important task for data streams, mainly because new concepts may appear, disappear or evolve over time. Most of the work found in the novelty detection literature presents novelty detection as a binary classification task. A few authors treat this task as multiclass, but even they use binary evaluation measures. In several real problems, novelty detection in data streams must be treated as a multiclass task, in which, the known concept about the problem is composed by one or more classes and different new classes may appear over time. This thesis proposes a new algorithm MINAS for novelty detection in data streams. MINAS deals with novelty detection as a multiclass task. In the training phase, MINAS builds a decision model based on a labeled data set. 
In the application phase, new examples are classified using the decision model or marked as unknown. Groups of unknown examples can later be used to create valid novelty patterns, which are added to the current decision model. The decision model is updated as new data arrives in the stream in order to reflect changes in the known classes and to allow the addition of novelty patterns. This thesis also proposes a new methodology to evaluate classifiers for novelty detection in data streams. This methodology associates the unlabeled novelty patterns with the true problem classes, allowing the evaluation of a confusion matrix that is incremental and rectangular. In addition, the proposed methodology allows the evaluation of unknown examples separately and the use of multiclass evaluation measures. Additionally, this thesis presents a set of experiments comparing the MINAS algorithm with the main novelty detection algorithms found in the literature, using artificial and real data sets. Finally, MINAS was applied to a human activity recognition problem using accelerometer data. The experimental results show the potential of the proposed algorithm and methodologies.
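The decision cycle described above can be condensed into a short sketch: each arriving example is either explained by an existing micro-cluster (and receives its class label) or buffered as unknown, and the buffer is periodically examined so that cohesive groups of unknowns are promoted to novelty patterns and added to the model. The radius test, buffer size, and cohesion check below are illustrative assumptions rather than the exact MINAS procedure.

```python
import numpy as np

class StreamNoveltyDetector:
    """Toy multiclass novelty detection loop in the spirit of MINAS."""

    def __init__(self, centers, labels, radius=1.0, min_group=20):
        self.centers = list(centers)   # micro-cluster centers from the training phase
        self.labels = list(labels)     # class label of each micro-cluster
        self.radius = radius           # maximum distance to "explain" an example
        self.min_group = min_group     # unknowns needed to try forming a novelty pattern
        self.unknown = []              # buffer of unexplained examples
        self.next_id = 1

    def classify(self, x):
        d = [np.linalg.norm(x - c) for c in self.centers]
        i = int(np.argmin(d))
        if d[i] <= self.radius:
            return self.labels[i]      # explained by the current decision model
        self.unknown.append(np.asarray(x, dtype=float))
        if len(self.unknown) >= self.min_group:
            self._try_novelty()
        return "unknown"

    def _try_novelty(self):
        # Promote the buffered unknowns to a novelty pattern if they are cohesive
        # (a real implementation would also age out stale unknowns).
        group = np.array(self.unknown)
        center = group.mean(axis=0)
        if np.mean(np.linalg.norm(group - center, axis=1)) <= self.radius:
            self.centers.append(center)
            self.labels.append(f"novelty-{self.next_id}")
            self.next_id += 1
            self.unknown = []
```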
|
6 |
Semi-Supervised Hybrid Windowing Ensembles for Learning from Evolving Streams. Floyd, Sean Louis Alan. 03 June 2019.
In this thesis, learning refers to the intelligent computational extraction of knowledge from data. Supervised learning tasks require data to be annotated with labels, whereas for unsupervised learning, data is not labelled. Semi-supervised learning deals with data sets that are partially labelled. A major issue with supervised and semi-supervised learning of data streams is late-arriving or missing class labels. Assuming that correctly labelled data will always be available and timely is often unfeasible, and, as such, supervised methods are not directly applicable in the real world. Therefore, real-world problems usually require the use of semi-supervised or unsupervised learning techniques. For instance, when considering a spam detection task, it is not reasonable to assume that all spam will be identified (correctly labelled) prior to learning. Additionally, in semi-supervised learning, "the instances having the highest [predictive] confidence are not necessarily the most useful ones" [41]. We investigate how self-training performs without its selective heuristic in a streaming setting.
This leads us to our contributions. We extend an existing concept drift detector to operate without any labelled data, by using a sliding window of our ensemble's prediction confidence, instead of a boolean indicating whether the ensemble's predictions are correct. We also extend selective self-training, a semi-supervised learning method, by using all predictions, and not only those with high predictive confidence. Finally, we introduce a novel windowing type for ensembles, as sliding windows are very time consuming and regular tumbling windows are not a suitable replacement. Our windowing technique can be considered a hybrid of the two: we train each sub-classifier in the ensemble with tumbling windows, but delay training in such a way that only one sub-classifier can update its model per iteration.
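A minimal sketch of the hybrid windowing idea, assuming a generic ensemble of batch-trainable classifiers: every window of instances is a tumbling window, but training is staggered so that only one sub-classifier retrains per window. The window size, voting rule, and classifier interface below are illustrative assumptions.

```python
class HybridWindowEnsemble:
    """Staggered tumbling windows: each filled window retrains exactly one
    ensemble member, then the turn rotates to the next member."""

    def __init__(self, members, window_size=100):
        self.members = members            # classifiers exposing fit/predict
        self.window_size = window_size
        self.window = []                  # current tumbling window of (x, y) pairs
        self.turn = 0                     # index of the member to retrain next

    def partial_fit(self, x, y):
        self.window.append((x, y))
        if len(self.window) == self.window_size:
            xs = [xi for xi, _ in self.window]
            ys = [yi for _, yi in self.window]
            # Delay training: only one sub-classifier updates on this window.
            self.members[self.turn].fit(xs, ys)
            self.turn = (self.turn + 1) % len(self.members)
            self.window = []              # tumble: discard and start a fresh window

    def predict(self, x):
        # Majority vote across the ensemble members.
        votes = [m.predict([x])[0] for m in self.members]
        return max(set(votes), key=votes.count)
```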
We found, through statistical significance tests, that our framework is roughly 160 times faster than current state-of-the-art techniques and achieves comparable predictive accuracy. That said, more research is needed to further reduce the quantity of labelled data used for training while also increasing predictive accuracy.
|
7 |
Exploration Framework For Detecting Outliers In Data Streams. Sean, Viseth. 27 April 2016.
Current real-world applications generate large volumes of data that are often continuously updated over time. Detecting outliers on such evolving datasets requires continuously updating the result. Furthermore, response time is very important for these time-critical applications. This is challenging for two reasons. First, the task is computationally expensive; even mining outliers from a static dataset once is already very costly. Second, users need to specify input parameters to approach the true outliers. Because the number of parameters is large, an online trial-and-error approach would be not only impractical and expensive but also tedious for analysts. Worse yet, since the dataset is changing, the best parameter settings need to be updated to respond to user exploration requests. Overall, the large number of parameter settings and the evolving nature of the data make the problem of efficiently mining outliers from dynamic datasets very challenging. Thus, in this thesis, we design an exploration framework for detecting outliers in data streams, called EFO, which enables analysts to continuously explore anomalies in dynamic datasets. EFO is a continuous, lightweight preprocessing framework. It embraces two optimization principles, "best life expectancy" and "minimal trial," to compress evolving datasets into a knowledge-rich abstraction of the important interrelationships among the data. An incremental sorting technique is also used to exploit the almost-ordered lists in this framework. The knowledge abstraction generated by EFO then supports not only traditional outlier detection requests but also novel outlier exploration operations on evolving datasets. Our experimental study on two real datasets demonstrates that EFO outperforms the state-of-the-art technique in terms of CPU processing cost when varying stream volume, velocity, and outlier rate.
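For context (and not as a description of EFO itself), the baseline problem EFO accelerates can be stated as distance-threshold outlier detection over a count-based sliding window, sketched below; every name and default value here is an illustrative assumption. The sketch also makes the parameter burden visible: window size, neighbour count, and radius all have to be tuned, which is exactly the exploration cost EFO aims to reduce.

```python
from collections import deque
import numpy as np

def sliding_window_outliers(stream, window=500, k=10, radius=1.0):
    """Flag a point as an outlier when fewer than k of the points currently in
    the sliding window lie within `radius` of it (baseline formulation only)."""
    buf = deque(maxlen=window)
    for t, x in enumerate(stream):
        x = np.asarray(x, dtype=float)
        neighbours = sum(int(np.linalg.norm(x - other) <= radius) for other in buf)
        yield t, neighbours < k        # (timestamp, is_outlier)
        buf.append(x)
```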
|
8 |
Agrupamento de fluxos de dados utilizando dimensão fractal / Clustering data streams using fractal dimension. Bones, Christian Cesar. 15 March 2018.
Realizar o agrupamento de fluxos de dados contínuos e multidimensionais (multidimensional data streams) é uma tarefa dispendiosa, visto que esses tipos de dados podem possuir características peculiares e que precisam ser consideradas, dentre as quais destacam-se: podem ser infinitos, tornando inviável, em muitas aplicações realizar mais de uma leitura dos dados; ponto de dados podem possuir diversas dimensões e a correlação entre as dimensões pode impactar no resultado final da análise e; são capazes de evoluir com o passar do tempo. Portanto, faz-se necessário o desenvolvimento de métodos computacionais adequados a essas características, principalmente nas aplicações em que realizar manualmente tal tarefa seja algo impraticável em razão do volume de dados, por exemplo, na análise e predição do comportamento climático. Nesse contexto, o objetivo desse trabalho de pesquisa foi propor técnicas computacionais, eficientes e eficazes, que contribuíssem para a extração de conhecimento de fluxos de dados com foco na tarefa de agrupamento de fluxos de dados similares. Assim, no escopo deste trabalho, foram desenvolvidos dois métodos para agrupamento de fluxos de dados evolutivos, multidimensionais e potencialmente infinitos, ambos baseados no conceito de dimensão fractal, até então não utilizada nesse contexto na literatura: o eFCDS, acrônimo para evolving Fractal Clustering of Data Streams, e o eFCC, acrônimo para evolving Fractal Clusters Construction. O eFCDS utiliza a dimensão fractal para mensurar a correlação, linear ou não, existente entre as dimensões dos dados de um fluxo de dados multidimensional num período de tempo. Esta medida, calculada para cada fluxo de dados, é utilizada como critério de agrupamento de fluxos de dados com comportamentos similares ao longo do tempo. O eFCC, por outro lado, realiza o agrupamento de fluxos de dados multidimensionais de acordo com dois critérios principais: comportamento ao longo do tempo, considerando a medida de correlação entre as dimensões dos dados de cada fluxo de dados, e a distribuição de dados em cada grupo criado, analisada por meio da dimensão fractal do mesmo. Ambos os métodos possibilitam ainda a identificação de outliers e constroem incrementalmente os grupos ao longo do tempo. Além disso, as soluções propostas para tratamento de correlações em fluxos de dados multidimensionais diferem dos métodos apresentados na literatura da área, que em geral utilizam técnicas de sumarização e identificação de correlações lineares aplicadas apenas à fluxos de dados unidimensionais. O eFCDS e o eFCC foram testados e confrontados com métodos da literatura que também se propõem a agrupar fluxos de dados. Nos experimentos realizados com dados sintéticos e reais, tanto o eFCDS quanto o eFCC obtiveram maior eficiência na construção dos agrupamentos, identificando os fluxos de dados com comportamento semelhante e cujas dimensões se correlacionam de maneira similar. Além disso, o eFCC conseguiu agrupar os fluxos de dados que mantiveram distribuição dos dados semelhante em um período de tempo. Os métodos possuem como uma das aplicações imediatas a extração de padrões de interesse de fluxos de dados proveniente de sensores climáticos, com o objetivo de apoiar pesquisas em Agrometeorologia. 
/ Clustering multidimensional data streams is an expensive task, since this kind of data has peculiar characteristics that must be considered: the streams are potentially infinite, making it impossible in many applications to read the data more than once; data points can have many dimensions, and the correlation among dimensions can affect the analysis; and the streams can evolve over time. Therefore, computational methods suited to these characteristics need to be developed, especially in areas where performing this task manually is impractical due to the volume of data, for example, in the analysis and prediction of climate behavior. In that context, the goal of this research was to propose efficient and effective techniques for clustering multidimensional evolving data streams. Within the scope of this work, two such methods were developed, both based on the concept of fractal dimension, which had not previously been used in this context in the literature: eFCDS, an acronym for evolving Fractal Clustering of Data Streams, and eFCC, an acronym for evolving Fractal Clusters Construction. eFCDS calculates the fractal dimension of each data stream to measure, in a possibly non-linear way, the correlation among its dimensions, and clusters the streams with the greatest similarity over a period of time, evolving the clusters as new data is read. By calculating the fractal dimension and then clustering the data streams, eFCDS applies an innovative strategy that distinguishes it from state-of-the-art methods, which perform clustering using summarization techniques and linear correlation to build their clusters over one-dimensional data streams. eFCDS also identifies data streams that show anomalous behavior in the analyzed time period, treating them as outliers. The other method developed is called eFCC. It also builds clusters of data streams, but based on two premises: the data distribution should be roughly the same, and the behavior should be similar in the same time period. To perform this kind of clustering, eFCC calculates the fractal dimension of each cluster itself as well as the fractal dimension of the data streams, following the evolution of the data, relocating data streams from one group to another when necessary, and identifying those that become outliers. Both eFCDS and eFCC were evaluated and compared against a competitor that also clusters data streams rather than individual data points. Through a detailed experimental evaluation using synthetic and real data, both methods achieved better efficiency in building the groups, better identifying data streams with similar behavior during a period of time and whose dimensions correlate in a similar way, as can be observed in the results in chapter 6. In addition, eFCC also clusters the data streams that maintained a similar data distribution over a period of time. As an immediate application, the methods developed in this thesis can be used to extract patterns of interest from climate-sensor data streams, aiming to support research in agrometeorology.
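The grouping step shared by eFCDS and eFCC can be illustrated with a small sketch, assuming the correlation fractal dimension of each stream's most recent window has already been estimated (for example, with the box-counting sketch under entry 3): streams whose dimensions are close are placed in the same group. The tolerance value and the one-pass grouping rule are illustrative assumptions, not the algorithms' actual criteria, which also track evolution over time and flag outlier streams.

```python
def group_streams_by_dimension(stream_d2, tolerance=0.2):
    """Group streams whose correlation fractal dimension D2 is close;
    `stream_d2` maps a stream id to the D2 of its most recent window."""
    groups = []
    for sid, d2 in sorted(stream_d2.items(), key=lambda kv: kv[1]):
        if groups and abs(d2 - groups[-1][-1][1]) <= tolerance:
            groups[-1].append((sid, d2))   # close to the last stream placed
        else:
            groups.append([(sid, d2)])     # start a new group
    return groups

# Streams 'a' and 'b' have similarly correlated dimensions (low D2), while 'c'
# behaves differently, so it ends up in its own group.
print(group_streams_by_dimension({"a": 1.1, "b": 1.2, "c": 2.4}))
```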
|
9 |
An Efficient Subset-Lattice Algorithm for Mining Closed Frequent Itemsets in Data Streams. Peng, Wei-hau. 25 June 2009.
Online mining of association rules over data streams is an important issue in the area of data mining, where an association rule means that the presence of some items in a transaction implies the presence of other items in the same transaction. There are many applications of association rules in data streams, such as market analysis, network security, sensor networks and web tracking.
Mining closed frequent itemsets extends the mining of association rules; it aims to find a subset of the frequent itemsets from which all frequent itemsets can be derived. Formally, a closed frequent itemset is a frequent itemset that has no superset with the same support. Since data streams are continuous, high-speed, and unbounded, archiving everything from a data stream is impossible; the data can be scanned only once and must be processed in main memory. Therefore, previous algorithms for mining closed frequent itemsets in traditional databases are not suitable for data streams. On the other hand, many applications are interested only in the most recent data, and the sliding window model, which keeps the recent data within a window of fixed size, meets this requirement. One of the well-known algorithms for mining closed frequent itemsets based on the sliding window model is the NewMoment algorithm. However, NewMoment cannot mine closed frequent itemsets efficiently, since it generates many unclosed frequent itemsets in addition to the closed ones. Moreover, when the data in the sliding window is incrementally updated, NewMoment needs to reconstruct the whole tree structure. Therefore, in this thesis, we propose a sliding-window approach, the Subset-Lattice algorithm, which embeds the subset property into a lattice structure to mine closed frequent itemsets efficiently. When data items are inserted, our algorithm considers five kinds of set relations: (1) equivalent, (2) superset, (3) subset, (4) intersection, and (5) empty relation. Using these five relations, we identify closed frequent itemsets without generating unclosed frequent itemsets. Moreover, when the data in the sliding window is incrementally updated, our Subset-Lattice algorithm does not reconstruct the whole lattice structure, and is therefore more efficient than the NewMoment algorithm. Furthermore, we represent itemsets as bit patterns and use bit operations to speed up the set checking. Our simulation results show that the Subset-Lattice algorithm needs less memory and less processing time than the NewMoment algorithm; when the window slides, up to 50% of the execution time can be saved.
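The bit-pattern representation mentioned above can be made concrete: assign each item a bit position, encode an itemset as an integer, and the five set relations used when inserting into the lattice reduce to a few bitwise operations. The relation names follow the abstract; the encoding itself is an illustrative sketch.

```python
def itemset_to_bits(itemset, item_index):
    """Encode an itemset as an integer bit pattern, one bit per item."""
    bits = 0
    for item in itemset:
        bits |= 1 << item_index[item]
    return bits

def relation(a, b):
    """Classify the relation between two bit-encoded itemsets a and b."""
    if a == b:
        return "equivalent"
    if a & b == b:
        return "superset"      # a contains every item of b
    if a & b == a:
        return "subset"        # every item of a also appears in b
    if a & b:
        return "intersection"  # some, but not all, items are shared
    return "empty"             # no items in common

# Example with items {A, B, C, D} numbered 0..3.
index = {"A": 0, "B": 1, "C": 2, "D": 3}
ab = itemset_to_bits({"A", "B"}, index)
abc = itemset_to_bits({"A", "B", "C"}, index)
assert relation(abc, ab) == "superset" and relation(ab, abc) == "subset"
```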
|
10 |
Practical Verified Computation with Streaming Interactive Proofs. Thaler, Justin R. 14 October 2013.
As the cloud computing paradigm has gained prominence, the need for verifiable computation has grown urgent. Protocols for verifiable computation enable a weak client to outsource difficult computations to a powerful, but untrusted, server. These protocols provide the client with a (probabilistic) guarantee that the server performed the requested computations correctly, without requiring the client to perform the computations herself. / Engineering and Applied Sciences
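The flavour of verifiable computation can be conveyed by a much simpler classical example than the streaming interactive proofs developed in this thesis: Freivalds' check lets a weak client verify an outsourced matrix product using only matrix-vector multiplications. The sketch below is that generic illustration, not Thaler's protocols, and it assumes exact (e.g. integer) arithmetic.

```python
import numpy as np

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify the server's claim that A @ B == C without
    recomputing the product: each round costs three matrix-vector products,
    and a wrong C is accepted with probability at most 2**-rounds."""
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))   # random 0/1 challenge vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                           # caught an incorrect result
    return True
```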
|