31 |
Critical Sets in Latin Squares and Associated Structures
Bean, Richard Winston, Unknown Date
A critical set in a Latin square of order n is a set of entries in an n×n array which can be embedded in precisely one Latin square of order n, with the property that if any entry of the critical set is deleted, the remaining set can be embedded in more than one Latin square of order n. The number of critical sets grows super-exponentially as the order of the Latin square increases. It is difficult to find patterns in Latin squares of small order (order 5 or less) which can be generalised in the process of creating new theorems. Thus, I have written many algorithms to find critical sets with various properties in Latin squares of order greater than 5, and to deal with other related structures. Some algorithms used in the body of the thesis are presented in Chapter 3; results arising from the computational studies, together with observations of the patterns and the results that follow from them, are presented in Chapters 4, 5, 6, 7 and 8. The cardinality of the largest critical set in any Latin square of order n is denoted by lcs(n). In 1978 Curran and van Rees proved that lcs(n) ≤ n² - n. In Chapter 4, it is shown that lcs(n) ≤ n² - 3n + 3. Chapter 5 provides new bounds on the maximum number of intercalates in Latin squares of orders m×2^α (m odd, α ≥ 2) and m×2^α + 1 (m odd, α ≥ 2 and α ≠ 3), and a new lower bound on lcs(4m). It also discusses critical sets in intercalate-rich Latin squares of orders 11 and 14. In Chapter 6 a construction is given which verifies the existence of a critical set of size n²/4 + 1 when n is even and n ≥ 6. The construction is based on the discovery of a critical set of size 17 for a Latin square of order 8. In Chapter 7 the representation of Steiner trades of volume less than or equal to nine is examined. Computational results are used to identify those trades for which the associated partial Latin square can be decomposed into six disjoint Latin interchanges. Chapter 8 focuses on critical sets in Latin squares of order at most six, and extensive computational routines are used to identify all the critical sets of different sizes in these Latin squares.
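To make the definition above concrete, here is a brute-force sketch (feasible only for very small orders, with illustrative names) that checks both defining conditions: the partial square completes to exactly one Latin square, and deleting any entry leaves more than one completion.

```python
def completions(n, cells):
    """Count Latin squares of order n extending the given partial square.

    cells: dict (row, col) -> symbol in 0..n-1. Brute force; small n only.
    """
    grid = [[cells.get((r, c)) for c in range(n)] for r in range(n)]
    empty = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]

    def fill(i):
        if i == len(empty):
            return 1
        r, c = empty[i]
        count = 0
        for s in range(n):
            # A symbol is legal if it appears in neither the row nor the column.
            if all(grid[r][j] != s for j in range(n)) and \
               all(grid[j][c] != s for j in range(n)):
                grid[r][c] = s
                count += fill(i + 1)
                grid[r][c] = None
        return count

    return fill(0)

def is_critical_set(n, cells):
    # Condition 1: unique completion; condition 2: every entry is necessary.
    if completions(n, cells) != 1:
        return False
    return all(completions(n, {k: v for k, v in cells.items() if k != e}) > 1
               for e in cells)

# {(0,0): 0} is a critical set of size 1 for order 2: it forces [[0,1],[1,0]],
# while the empty partial square has two completions.
print(is_critical_set(2, {(0, 0): 0}))  # True
```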
|
34 |
Algoritmos de agrupamento particionais baseados na meta-heurística de otimização por busca em grupo / Partitional Clustering Algorithms Based on the Group Search Optimization Meta-heuristic
PACÍFICO, Luciano Demétrio Santos, 26 August 2016
Cluster analysis, also known as unsupervised learning, is an important technique for exploratory data analysis, and it has been widely employed in many applications such as
data mining, image segmentation, bioinformatics, and so on. Clustering aims to partition a data set into groups in such a way that individuals in the same group are closely related (more similar) to one another, while individuals in different groups have a high degree of dissimilarity.
From an optimization perspective, clustering is considered a particular kind of NP-hard problem, belonging to the combinatorial optimization category. Traditional clustering techniques (like the K-Means algorithm) may suffer from some limitations when dealing with the clustering task, such as sensitivity to the algorithm's initialization, or the lack of mechanisms to help these methods escape from local minima.
Meta-heuristics such as Evolutionary Algorithms (EAs) and Swarm Intelligence (SI) methods are nature-inspired global search techniques which have been increasingly applied to solve a great variety of difficult problems, given their capability to perform thorough searches through a problem space while attempting to avoid local optima. Over the past few decades, EAs and SI approaches have been successfully applied to tackle clustering problems. In this context, the Group Search Optimization (GSO) meta-heuristic has been successfully applied to solve hard optimization problems, obtaining better performance than traditional evolutionary techniques such as Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). In the clustering context, EAs and SI methods are able to obtain good global solutions to the problem at hand; however, owing to their stochastic nature, these approaches may have slow convergence rates in comparison to other clustering methods.
In this thesis, GSO is adapted to the context of partitional clustering analysis. Hybrid models of GSO and K-Means are presented, in such a way that the exploration offered by GSO's global search is combined with the fast exploitation of local regions provided by K-Means, generating new hybrid systems capable of obtaining good solutions to the clustering problems at hand.
The work also presents a study of the influence of K-Means when adopted as a local search operator for GSO population initialization, as well as a refinement operator for the best solution found by the GSO population during its generational process.
A cooperative coevolutionary variant of the GSO model is also adapted to the context of partitional clustering, resulting in a method with great potential for parallelism, as well as for use in distributed clustering applications.
Experimental results on both real and synthetic data sets show the potential of the proposed alternative population initialization models and of the GSO cooperative coevolutionary variant when dealing with classic clustering problems, such as overlapping classes, unbalanced classes, and so on, in comparison to other clustering algorithms from the literature.
|
35 |
Representações cache eficientes para índices baseados em Wavelet trees / Cache-Efficient Representations for Wavelet-Tree-Based Indexes
SILVA, Israel Batista Freitas da, 12 December 2016
Today, there is an exponential growth in the volume of information in the world. This growth creates demand for more efficient indexing and querying techniques since, to be useful, data must be manageable. Pattern matching refers to searching for a short string (the pattern) in a much larger string (the text), reporting the number of occurrences and/or their locations. To support such searches, one can build a structure called an index, which preprocesses the text so that queries can be answered efficiently. The practical efficiency of an index, beyond its theoretical efficiency, can determine how widely it is adopted, and this is directly related to how well it performs on current machine architectures.
The main objective of this work is to analyze the Wavelet Tree data structure as an index, assessing the impact of its internal data organization with respect to spatial locality, and to propose layouts that effectively reduce the number of cache misses incurred by operations on this index. Through an empirical analysis using both simulated data and textual data obtained from two public repositories, five internal data layouts for the structure were compared with respect to execution time and the number of cache misses. Additionally, a theoretical analysis of the cache-miss complexity of a pattern query is described for one of the proposed layouts. Two experiments suggest asymptotic behaviors for two of the analyzed layouts. A third experiment shows that, for four of the five layouts, there was a systematic reduction in the number of cache misses at the lowest-level cache. However, this reduction was not fully reflected in the execution time of the operations, where the difference was less significant, nor in the number of cache misses at the highest-level cache, where both positive and negative variations occurred. The results allow us to conclude that choosing a suitable layout can lead to a significant improvement in cache usage. Unlike in the theoretical model, the cost of memory access accounts for only a fraction of the computation time of operations on Wavelet Trees, so the decrease in the number of cache misses did not translate fully into execution time. Nevertheless, this factor can be critical in more extreme memory-utilization situations.
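For context, here is a minimal pointer-based wavelet tree supporting rank queries, the core operation whose memory-access pattern the layouts above reorganize; this naive layout is the baseline idea, not one of the cache-aware representations studied in the work.

```python
class WaveletTree:
    # Pointer-based wavelet tree over an integer alphabet [lo, hi].
    def __init__(self, seq, lo=None, hi=None):
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        if lo == hi or not seq:
            self.left = self.right = None   # leaf: a single symbol
            return
        mid = (lo + hi) // 2
        # Bitmap for this node: 0 if the symbol goes left (<= mid), 1 otherwise.
        self.bits = [0 if c <= mid else 1 for c in seq]
        self.left = WaveletTree([c for c in seq if c <= mid], lo, mid)
        self.right = WaveletTree([c for c in seq if c > mid], mid + 1, hi)

    def rank(self, c, i):
        # Number of occurrences of symbol c in seq[0:i].
        if self.lo == self.hi:
            return i
        mid = (self.lo + self.hi) // 2
        if c <= mid:
            i = sum(1 for b in self.bits[:i] if b == 0)
            return self.left.rank(c, i)
        i = sum(1 for b in self.bits[:i] if b == 1)
        return self.right.rank(c, i)

wt = WaveletTree([1, 0, 3, 1, 2, 1, 0])
assert wt.rank(1, 6) == 3   # three 1s among the first six symbols
```

Each rank query walks one root-to-leaf path, touching one bitmap per level; it is exactly this pointer chasing and bitmap scanning that determines the cache behavior the thesis measures.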
|
36 |
RANDOMIZED NUMERICAL LINEAR ALGEBRA APPROACHES FOR APPROXIMATING MATRIX FUNCTIONS
Evgenia-Maria Kontopoulou (9179300), 28 July 2020
This work explores how randomization can be exploited to deliver sophisticated algorithms with provable bounds for: (i) the approximation of matrix functions, such as the log-determinant and the von Neumann entropy; and (ii) the low-rank approximation of matrices. Our algorithms are inspired by recent advances in Randomized Numerical Linear Algebra (RandNLA), an interdisciplinary research area that exploits randomization as a computational resource to develop improved algorithms for large-scale linear algebra problems. The main goal of this work is to encourage the practical use of RandNLA approaches to solve Big Data bottlenecks at industrial level. Our extensive evaluation tests are complemented by a thorough theoretical analysis that proves the accuracy of the proposed algorithms and highlights their scalability as the volume of data increases. Finally, the low computational time and memory consumption, combined with simple implementation schemes that can easily be extended in parallel and distributed environments, render our algorithms suitable for use in the development of highly efficient real-world software.
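As one concrete instance of this family of methods, here is a sketch of a randomized log-determinant estimator for a symmetric positive definite matrix, combining Hutchinson's stochastic trace estimator with a truncated Taylor series for the matrix logarithm; the hyperparameters are illustrative, and this is a simplified variant rather than the exact algorithms analyzed in the work.

```python
import numpy as np

def logdet_estimate(A, num_probes=30, taylor_terms=25, seed=0):
    """Randomized estimate of log(det(A)) for symmetric positive definite A.

    Uses log(det(A)) = trace(log(A)), with A = alpha * (I - C), so that
    trace(log(I - C)) = -sum_k trace(C^k)/k, and trace(C^k) estimated by
    Hutchinson's method with Rademacher probe vectors.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    alpha = np.linalg.norm(A, 2) * 1.01     # scale so eigenvalues of C lie in [0, 1)
    C = np.eye(n) - A / alpha
    total = 0.0
    for _ in range(num_probes):
        g = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        v, acc = g.copy(), 0.0
        for k in range(1, taylor_terms + 1):  # accumulate g^T C^k g / k
            v = C @ v
            acc += g @ v / k
        total -= acc
    return n * np.log(alpha) + total / num_probes

A = np.diag([1.0, 2.0, 3.0])
print(logdet_estimate(A), np.log(6.0))  # both ≈ 1.79
```

The appeal is exactly the one the abstract claims for RandNLA: the estimator needs only matrix-vector products, so it scales to matrices far too large for a dense factorization.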
|
37 |
ALGORITHMS FOR DEGREE-CONSTRAINED SUBGRAPHS AND APPLICATIONS
S M Ferdous (11804924), 19 December 2021
A degree-constrained subgraph construction (DCS) problem aims to find an optimal spanning subgraph (w.r.t. an objective function) subject to certain degree constraints on the vertices. DCS generalizes many combinatorial optimization problems such as Matchings and Edge Covers and has many practical and real-world applications. This thesis focuses on DCS problems where there are only upper or lower bounds on the degrees, known as b-matching and b-edge cover problems, respectively. We explore linear and submodular functions as the objective functions of the subgraph construction.

The contributions of this thesis involve both the design of new approximation algorithms for these DCS problems and their applications to real-world contexts. We designed, developed, and implemented several approximation algorithms for DCS problems. Although some of these problems can be solved exactly in polynomial time, the exact algorithms are often expensive, tedious to implement, and have little to no concurrency. On the contrary, many of the approximation algorithms developed here run in nearly linear time, are simple to implement, and are concurrent. Using the local dominance framework, we developed the first parallel algorithm for submodular b-matching. For weighted b-edge cover, we improved the classic Greedy algorithm using the lazy evaluation technique. We also propose and analyze several approximation algorithms using the primal-dual linear programming framework and reductions to matching. We evaluate the practical performance of these algorithms through extensive experimental results.

The second contribution of the thesis is to utilize the novel algorithms in real-world applications. We employ submodular b-matching to generate a balanced task assignment for processors to build Fock matrices in the NWChemEx quantum chemistry software. Our load-balanced assignment results in a four-fold speedup per iteration of the Fock matrix computation and scales to 14,000 cores of the Summit supercomputer at Oak Ridge National Laboratory. Using approximate b-edge cover, we propose the first shared-memory and distributed-memory parallel algorithms for the adaptive anonymity problem. Minimum weighted b-edge cover and maximum weight b-matching are shown to be applicable to constructing graphs from datasets for machine learning tasks. We provide a mathematical optimization framework connecting the graph construction problem to the DCS problem.
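To ground the terminology, here is a minimal greedy sketch of weighted b-matching (each vertex v is incident to at most b(v) chosen edges); taking edges in decreasing weight order gives the classic 1/2-approximation for the linear objective. This is the textbook baseline that more sophisticated algorithms, like those in the thesis, improve upon; the names are illustrative.

```python
def greedy_b_matching(edges, b):
    """Greedy 1/2-approximation for maximum weight b-matching.

    edges: list of (weight, u, v) tuples; b: dict vertex -> degree bound.
    """
    remaining = dict(b)                         # residual capacity per vertex
    chosen = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
        if remaining[u] > 0 and remaining[v] > 0:
            chosen.append((w, u, v))
            remaining[u] -= 1
            remaining[v] -= 1
    return chosen

edges = [(5, 'a', 'b'), (4, 'b', 'c'), (3, 'a', 'c'), (2, 'c', 'd')]
print(greedy_b_matching(edges, {'a': 1, 'b': 1, 'c': 2, 'd': 1}))
# [(5, 'a', 'b'), (2, 'c', 'd')]
```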
|
38 |
EDGE COMPUTING APPROACH TO INDOOR TEMPERATURE PREDICTION USING MACHINE LEARNING
Hyemin Kim (11565625), 22 November 2021
This paper presents a novel approach to real-time indoor temperature forecasting to meet energy consumption constraints in buildings, utilizing computing resources available at the edge of a network, close to data sources. This work was inspired by the irreversible effects of global warming accelerated by greenhouse gas emissions from burning fossil fuels. Because human activities have a heavy impact on global energy use, it is of utmost importance to reduce the amount of energy consumed in every possible scenario where humans are involved. According to the US Environmental Protection Agency (EPA), one of the biggest greenhouse gas sources is commercial and residential buildings, which accounted for 13 percent of 2019 greenhouse gas emissions in the United States. In this context, it is assumed that information about the building environment, such as indoor temperature and indoor humidity, and predictions based on that information can contribute to more accurate and efficient regulation of indoor heating and cooling systems. When it comes to indoor temperature, distributed IoT devices in buildings can enable more accurate temperature forecasting and eventually help building administrators regulate the temperature in an energy-efficient way, without damaging indoor environment quality. While IoT technology shows potential as a complement to HVAC control systems, the majority of existing IoT systems integrate a remote cloud to transfer and process all data from IoT sensors. Instead, the proposed IoT system incorporates the concept of edge computing by utilizing small computing power in close proximity to the sensors where the data are generated, to overcome the problems of the traditional cloud-centric IoT architecture. In addition, as the microcontroller at the edge supports computing, the machine learning-based prediction of indoor temperature is performed on this microcomputer and transferred to the cloud for further processing. The machine learning algorithm used for prediction, an Artificial Neural Network (ANN), is evaluated based on error metrics and compared with simple prediction models.
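As a rough illustration of the prediction task (not the thesis's actual model or data), here is a small feed-forward network that predicts the next temperature reading from recent temperature and humidity values; the window size, network shape, and synthetic data are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Toy data: predict temperature at t+1 from the last 3 readings of
# temperature and humidity (6 features). Real rows would come from sensors.
rng = np.random.default_rng(0)
temps = 22 + np.cumsum(rng.normal(0, 0.1, 500))   # synthetic indoor series
hums = 45 + np.cumsum(rng.normal(0, 0.2, 500))
X = np.array([np.r_[temps[i:i + 3], hums[i:i + 3]] for i in range(len(temps) - 3)])
y = temps[3:]

# A network this small is cheap enough to train and run on an edge device.
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("MAE:", mean_absolute_error(y[400:], model.predict(X[400:])))
```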
|
39 |
Optimalizace investičních strategií pomocí genetických algoritmů / Optimization of Investment Strategy Using Genetic Algorithms
Novák, Tomáš, January 2015
This thesis focuses on the design and optimization of an automated trading system for the FOREX market. The aim is to create a trading strategy that is relatively safe, stable, and profitable. Optimization and testing on historical data are a prerequisite for deployment into real trading.
|
40 |
Graph-based Analysis of Dynamic Systems
Schiller, Benjamin, 15 December 2016
The analysis of dynamic systems provides insights into their time-dependent characteristics. This enables us to monitor, evaluate, and improve systems from various areas. They are often represented as graphs that model the system's components and their relations. The analysis of the resulting dynamic graphs yields great insights into the system's underlying structure, its characteristics, as well as properties of single components. The interpretation of these results can help us understand how a system works and how parameters influence its performance. This knowledge supports the design of new systems and the improvement of existing ones.
The main issue in this scenario is the performance of analyzing the dynamic graph to obtain relevant properties. While various approaches have been developed to analyze dynamic graphs, it is not always clear which one performs best for the analysis of a specific graph. The runtime also depends on many other factors, including the size and topology of the graph, the frequency of changes, and the data structures used to represent the graph in memory. While the benefits and drawbacks of many data structures are well-known, their runtime is hard to predict when used for the representation of dynamic graphs. Hence, tools are required to benchmark and compare different algorithms for the computation of graph properties and data structures for the representation of dynamic graphs in memory. Based on deeper insights into their performance, new algorithms can be developed and efficient data structures can be selected.
In this thesis, we present four contributions to tackle these problems: A benchmarking framework for dynamic graph analysis, novel algorithms for the efficient analysis of dynamic graphs, an approach for the parallelization of dynamic graph analysis, and a novel paradigm to select and adapt graph data structures. In addition, we present three use cases from the areas of social, computer, and biological networks to illustrate the great insights provided by their graph-based analysis.
We present a new benchmarking framework for the analysis of dynamic graphs, the Dynamic Network Analyzer (DNA). It provides tools to benchmark and compare different algorithms for the analysis of dynamic graphs as well as the data structures used to represent them in memory. DNA supports the development of new algorithms and the automatic verification of their results. Its visualization component provides different ways to represent dynamic graphs and the results of their analysis.
We introduce three new stream-based algorithms for the analysis of dynamic graphs. We evaluate their performance on synthetic as well as real-world dynamic graphs and compare their runtimes to snapshot-based algorithms. Our results show great performance gains for all three algorithms. The new stream-based algorithm StreaM_k, which counts the frequencies of k-vertex motifs, achieves speedups of up to 19,043x for synthetic and 2,882x for real-world datasets.
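To illustrate the stream-based principle (on the simplest motif, the triangle, rather than the general k-vertex motifs handled by StreaM_k), a property can be updated incrementally from each edge insertion or removal instead of being recomputed on every snapshot, which is where the large speedups come from. A minimal sketch with illustrative names:

```python
from collections import defaultdict

class StreamTriangleCounter:
    # Maintains the global triangle count under edge insertions/removals,
    # paying O(min degree) per event instead of recounting per snapshot.
    def __init__(self):
        self.adj = defaultdict(set)
        self.triangles = 0

    def add_edge(self, u, v):
        # New triangles closed by (u, v) are exactly the common neighbors.
        self.triangles += len(self.adj[u] & self.adj[v])
        self.adj[u].add(v)
        self.adj[v].add(u)

    def remove_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        self.triangles -= len(self.adj[u] & self.adj[v])

c = StreamTriangleCounter()
for e in [(1, 2), (2, 3), (1, 3), (3, 4)]:
    c.add_edge(*e)
print(c.triangles)  # 1
```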
We present a novel approach for the distributed processing of dynamic graphs, called parallel Dynamic Graph Analysis (pDNA). To analyze a dynamic graph, the work is distributed by a partitioner that creates subgraphs and assigns them to workers, which compute the properties of their respective subgraphs using standard algorithms. A collator component then merges their results into the properties of the original graph. We evaluate the performance of pDNA for the computation of five graph properties on two real-world dynamic graphs with up to 32 workers. Our approach achieves great speedups, especially for the analysis of complex graph measures.
We introduce two novel approaches for the selection of efficient graph data structures. The compile-time approach estimates the workload of an analysis after an initial profiling phase and recommends efficient data structures based on benchmarking results. It achieves speedups of up to 5.4x over baseline data structure configurations for the analysis of real-world dynamic graphs. The run-time approach monitors the workload during analysis and exchanges the graph representation if it finds a configuration that promises to be more efficient for the current workload. Compared to baseline configurations, it achieves speedups of up to 7.3x for the analysis of a synthetic workload.
Our contributions provide novel approaches for the efficient analysis of dynamic graphs and tools to further investigate the trade-offs between different factors that influence the performance.

Contents:
1 Introduction
2 Notation and Terminology
3 Related Work
4 DNA - Dynamic Network Analyzer
5 Algorithms
6 Parallel Dynamic Network Analysis
7 Selection of Efficient Graph Data Structures
8 Use Cases
9 Conclusion
A DNA - Dynamic Network Analyzer
B Algorithms
C Selection of Efficient Graph Data Structures
D Parallel Dynamic Network Analysis
E Graph-based Intrusion Detection System
F Molecular Dynamics
|