31

K-way Partitioning Of Signed Bipartite Graphs

Omeroglu, Nurettin Burak, 01 September 2012
Clustering is the process in which data is differentiated and classified according to some criteria. As a result of the partitioning process, data is grouped into clusters for a specific purpose. In a social network, clustering of people is one of the most popular problems; therefore, we mainly concentrated on finding an efficient algorithm for this problem. In our study, data is made up of two types of entities (e.g., people and groups vs. political issues and religious beliefs) and, distinct from most previous work, signed weighted bipartite graphs are used to model the relations among them. For the partitioning criterion, we use the strength of the opinions between the entities. Our main intention is to partition the data into k clusters so that entities within a cluster have strong relationships. One such example from the political domain is the opinion of people on issues. Using the signed weights on the edges, these bipartite graphs can be partitioned into two or more clusters. In the political domain, a cluster represents a strong relationship among a group of people and a group of issues; after partitioning, each cluster in the result set contains like-minded people and the issues they advocate. Our work introduces a general mechanism for k-way partitioning of signed bipartite graphs. One of the great advantages of our approach is that it does not require any preliminary information about the structure of the input dataset. The idea has been illustrated on real and randomly generated data, and promising results have been shown.
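The abstract stays at a high level, so here is a minimal hedged sketch of the core idea: alternately relabeling the two entity types of a signed weighted bipartite graph so that each entity joins the cluster where its signed agreement is strongest. The data, names, and greedy rule are illustrative assumptions, not the thesis algorithm.

```python
# A minimal illustration (not the thesis algorithm): k-way partitioning of a
# signed bipartite graph by alternating greedy reassignment.
import random

def partition_signed_bipartite(weights, k, iters=20, seed=0):
    """weights[(u, v)] is a signed opinion strength between a 'left' entity u
    (e.g., a person) and a 'right' entity v (e.g., an issue)."""
    rng = random.Random(seed)
    left = sorted({u for u, _ in weights})
    right = sorted({v for _, v in weights})
    label = {x: rng.randrange(k) for x in left + right}

    nbrs = {x: [] for x in left + right}
    for (u, v), w in weights.items():
        nbrs[u].append((v, w))
        nbrs[v].append((u, w))

    def gain(x, c):
        # Total signed weight x would contribute if placed in cluster c:
        # positive edges inside a cluster help, negative edges inside hurt.
        return sum(w for y, w in nbrs[x] if label[y] == c)

    for _ in range(iters):
        for x in left + right:
            label[x] = max(range(k), key=lambda c: gain(x, c))
    return label

# Toy political example: +1 = supports, -1 = opposes.
w = {("alice", "tax_cut"): 1.0, ("alice", "green_energy"): -1.0,
     ("bob", "tax_cut"): -1.0, ("bob", "green_energy"): 1.0}
print(partition_signed_bipartite(w, k=2))
```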
32

Resource-aware Load Balancing System With Artificial Neural Networks

Yildiz, Ali, 01 September 2006
As distributed systems become popular, efficient load balancing systems that make better decisions must be designed. The most important reasons that necessitate load balancing in a distributed system are heterogeneous hosts having different computing powers, external loads, and tasks running on different hosts but communicating with each other. In this thesis, a load balancing approach called RALBANN, developed using graph partitioning and artificial neural networks (ANNs), is described. The aim of RALBANN is to integrate the successful load balancing decisions of graph partitioning algorithms with the efficient decision-making mechanism of ANNs. The results showed that using ANNs for load balancing can be very beneficial. If trained enough, ANNs may balance the load as well as graph partitioning algorithms, but more efficiently.
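As a rough sketch of the RALBANN idea under stated assumptions (a small network imitating a partitioner's placement decisions so that later decisions are cheap), the following trains a tiny two-layer network on hypothetical task features. The features, teacher labels, and architecture are invented for illustration and are not the thesis model.

```python
# Hedged sketch: train a small neural network to imitate the host assignments
# a (slow) graph partitioner would produce. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features per task: [cpu_demand, communication_volume].
X = rng.random((200, 2))
# "Teacher" labels: pretend a partitioner assigned heavy communicators
# to host 1 and the rest to host 0.
y = (X[:, 1] > 0.5).astype(float)

# One hidden layer, trained by plain gradient descent on squared error.
W1, b1 = rng.normal(size=(2, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()  # predicted host probability
    g = ((p - y) * p * (1 - p))[:, None]            # gradient at output
    W2 -= 0.5 * h.T @ g / len(X); b2 -= 0.5 * g.mean(0)
    gh = (g @ W2.T) * h * (1 - h)     # backprop to hidden layer
    W1 -= 0.5 * X.T @ gh / len(X); b1 -= 0.5 * gh.mean(0)

print("imitation accuracy:", ((p > 0.5) == y).mean())
```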
33

A Parallel Graph Partitioner for STAPL

Castet, Nicolas, 03 October 2013
Multi-core architectures are present throughout a large selection of computing devices, from cell phones to supercomputers. Parallel applications running on these devices solve bigger problems in a shorter time. Writing those applications is a difficult task for programmers: they need to deal with low-level parallel mechanisms such as data distribution, inter-processor communication, and task placement. The goal of the Standard Template Adaptive Parallel Library (STAPL) is to provide a generic high-level framework to develop parallel applications. One of the first steps of a parallel application is to partition and distribute the data throughout the system. An important data structure for parallel applications, used to store large amounts of data and model many types of relations, is the graph. A mesh, which is a special type of graph, is often used to model a spatial domain in scientific applications. Graph and mesh partitioning has many applications, such as VLSI circuit design, parallel task scheduling, and data distribution, and data distribution significantly impacts the performance of a parallel application. In this thesis, we introduce the STAPL Parallel Graph Partitioner Framework. This framework provides a generic infrastructure to partition arbitrary graphs and meshes and to build customized partitioners. It includes a state-of-the-art parallel k-way multilevel scheme to partition arbitrary graphs, a parallel mesh partitioner with parameterized partition shape, and a customized partitioner used for discrete ordinates particle transport computations. The framework is also part of a generic library, STAPL, allowing the partitioning of the data and the development of the whole parallel application to be done in the same environment. We show the user-friendly interface of the framework and its scalability for partitioning different mesh and graph benchmarks on a Cray XE6 system. We also highlight the performance of our customized unstructured mesh partitioner for a discrete ordinates particle transport code. The developed columnar decompositions significantly reduce the execution time of simultaneous sweeps on unstructured meshes.
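The multilevel scheme mentioned here can be shown in miniature. The sketch below is a generic illustration, not the STAPL interface: one heavy-edge coarsening step, a trivial seed partition of the coarse graph, and projection back to the original vertices. A real multilevel partitioner would repeat coarsening and add refinement at each level.

```python
# Simplified, self-contained sketch of the multilevel partitioning scheme
# (coarsen -> partition -> project back). Illustrative only.

def heavy_edge_matching(adj):
    """Pair each vertex with its heaviest-edge unmatched neighbor."""
    matched, merge = set(), {}
    for u in adj:
        if u in matched:
            continue
        best = max((v for v in adj[u] if v not in matched and v != u),
                   key=lambda v: adj[u][v], default=None)
        merge[u] = u
        if best is not None:
            merge[best] = u
            matched.update({u, best})
    return merge

def coarsen(adj, merge):
    """Build the coarse graph, summing weights of parallel edges."""
    coarse = {}
    for u in adj:
        cu = merge[u]
        coarse.setdefault(cu, {})
        for v, w in adj[u].items():
            cv = merge[v]
            if cu != cv:
                coarse[cu][cv] = coarse[cu].get(cv, 0) + w
    return coarse

def greedy_kway(vertices, k):
    """Round-robin seed partition of the coarse graph."""
    return {v: i % k for i, v in enumerate(sorted(vertices))}

def multilevel_partition(adj, k):
    merge = heavy_edge_matching(adj)
    coarse_part = greedy_kway(coarsen(adj, merge), k)
    # Project the coarse partition back onto the original vertices.
    return {u: coarse_part[merge[u]] for u in adj}

# Toy weighted graph as an adjacency dict: adj[u][v] = edge weight.
adj = {0: {1: 5, 2: 1}, 1: {0: 5, 3: 1}, 2: {0: 1, 3: 5}, 3: {1: 1, 2: 5}}
print(multilevel_partition(adj, k=2))  # heavy edges stay inside blocks
```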
34

Load balancing for parallel coupled simulations

Predari, Maria, 09 December 2016
Load balancing is an important step conditioning the performance of parallel applications. The goal is to distribute roughly equal amounts of computational load across a number of processors while minimizing interprocessor communication. A common approach models the problem with graph structures and graph partitioning algorithms. Moreover, new challenges involve the simulation of more complex physical phenomena, where different parts of the computational domain exhibit different physical behavior. Such simulations follow the paradigm of multi-physics or multi-scale modeling approaches. Combining such different models in massively parallel computations is still a challenge for reaching high performance, and traditional load balancing algorithms are often inadequate, so more sophisticated solutions should be explored. In this thesis, we propose new graph partitioning algorithms that balance the load of such simulations, referred to as co-partitioning. We formulate this problem using graph partitioning with initially fixed vertices, which we believe efficiently represents the additional constraints of coupled simulations. We have therefore developed a direct algorithm for graph partitioning that successfully manages problems with fixed vertices. The algorithm is implemented inside the Scotch partitioner, and a series of experiments were carried out on the DIMACS graph collection. Moreover, we proposed three co-partitioning algorithms that respect the constraints of the respective coupled codes. We finally validated our algorithms in an experimental study comparing our methods with current strategies on artificial cases and on real-life coupled simulations.
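To make the fixed-vertex constraint concrete, here is a hedged toy refinement pass in the spirit of the approach (not the Scotch implementation): vertices pinned by the coupled code never move, while each free vertex migrates to the block it is most strongly connected to. Balance handling is omitted for brevity.

```python
# Hedged sketch of refinement with fixed vertices. Illustrative only.

def refine_with_fixed(adj, part, fixed, k=2):
    """One greedy refinement pass; vertices in `fixed` never move."""
    for u in adj:
        if u in fixed:
            continue
        # Connectivity of u toward each block.
        score = [0] * k
        for v, w in adj[u].items():
            score[part[v]] += w
        part[u] = max(range(k), key=lambda c: score[c])
    return part

# Toy graph: adj[u][v] = edge weight.
adj = {0: {1: 3}, 1: {0: 3, 2: 1}, 2: {1: 1, 3: 3}, 3: {2: 3}}
part = {0: 0, 1: 1, 2: 0, 3: 1}   # initial (poor) partition
fixed = {0, 3}                     # pinned by the coupled simulation
print(refine_with_fixed(adj, part, fixed))  # cut drops to the weight-1 edge
```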
35

Community detection in complex networks: a multilevel approach

Leonardo Jesus Almeida, 05 October 2009
Human analysis of large amounts of data is a hard task when no intelligent computer aid is provided. In this context, Machine Learning (ML) algorithms are aimed at automatically processing and obtaining knowledge from data. In general, ML algorithms use a propositional representation of data, such as an attribute-value table. However, this model is not suitable for modeling relational information, which can be better accomplished using graphs or networks. Complex networks have recently attracted the attention of the scientific community, and many applications in different domains have been developed. In particular, one trend in complex networks research is community detection, which is the main focus of this work. Community detection is the problem of finding dense, disjoint connected groups of vertices in a network. It is a well-known NP-complete task that requires heuristic approaches, such as quality measures, to be addressed. Newman introduced a specific quality measure called modularity that proved to be useful for analyzing communities in networks. This work presents a new algorithm, called the Multilevel Modularity Optimization Algorithm, based on modularity optimization integrated into a multilevel graph partitioning strategy. The multilevel graph partitioning scheme consists of three phases: (i) reducing the size of the original graph (coarsening) by collapsing vertices and edges, (ii) partitioning the coarsened graph, and (iii) uncoarsening it to construct a partition of the original graph. The rationale behind this strategy is to apply a computationally expensive method to a coarsened graph, i.e., one with a significantly reduced number of vertices and edges. In addition, two new methods that use modularity and the clustering coefficient are proposed for partition refinement. Empirical evaluation on benchmark networks demonstrates a significant speed-up compared to the original modularity-based algorithm, while keeping good-quality cluster partitionings.
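For reference, Newman's modularity Q, the measure optimized above, is Q = (1/2m) Σ_ij [A_ij − k_i k_j/(2m)] δ(c_i, c_j), equivalently Q = Σ_c [e_c/m − (d_c/2m)²], where e_c is the number of edges inside community c, d_c the total degree of its nodes, and m the edge count. A small self-contained check (illustrative code, not the thesis implementation):

```python
# Newman's modularity Q for a given partition of an undirected graph.

def modularity(edges, community):
    """edges: list of undirected (u, v) pairs; community: dict node -> label."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # Fraction of edges that fall inside a community.
    inside = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected such fraction under the configuration (null) model.
    expected = 0.0
    for c in set(community.values()):
        dc = sum(d for n, d in deg.items() if community[n] == c)
        expected += (dc / (2 * m)) ** 2
    return inside - expected

# Two triangles joined by one bridge edge: clear two-community structure.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, comm), 3))  # ~0.357; positive beats random
```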
36

Multiple Operator Metaheuristics for Graph Partitioning Problems

Ma, Fuda, 28 June 2016
Graph partitioning problems are a class of well-known NP-hard combinatorial optimization problems with a wide range of applications, such as VLSI layout design, statistical physics, sports team scheduling, image segmentation, and protein conformation. This thesis considers three representative problems in this family: the max-k-cut problem, the max-bisection problem, and the vertex separator problem (VSP). Due to their high computational complexity, heuristic and metaheuristic approaches are commonly used to approximate these challenging problems. This thesis is devoted to developing efficient metaheuristic algorithms based on a collection of complementary search operators. Specifically, we develop a multiple operator heuristic (MOH) for max-k-cut, an iterated tabu search (ITS) algorithm for max-bisection, and a path relinking (PR-VSP) algorithm for VSP. Extensive computational experiments and comparisons demonstrate that the proposed algorithms compete favorably with state-of-the-art approaches in the literature. The combined use of multiple search operators is analyzed to shed light on its influence on the performance of the algorithms.
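As a hedged illustration of one such search operator, the skeleton below applies a basic tabu-search vertex-move operator to max-k-cut. The gain rule and tenure scheme are simplified stand-ins, not the MOH/ITS operators from the thesis.

```python
# Tabu-search skeleton for max-k-cut: move the vertex with the best gain to
# another group, then forbid the reverse move for `tenure` iterations.
import random

def max_k_cut_tabu(adj, k, iters=200, tenure=5, seed=1):
    rng = random.Random(seed)
    part = {v: rng.randrange(k) for v in adj}
    tabu = {}   # (vertex, group) -> iteration at which the move frees up

    def cut_value(p):
        return sum(w for u in adj for v, w in adj[u].items()
                   if u < v and p[u] != p[v])

    best, best_val = dict(part), cut_value(part)
    for it in range(iters):
        move, move_gain = None, float("-inf")
        for u in adj:
            same = sum(w for v, w in adj[u].items() if part[v] == part[u])
            for c in range(k):
                if c == part[u] or tabu.get((u, c), -1) > it:
                    continue
                to_c = sum(w for v, w in adj[u].items() if part[v] == c)
                gain = same - to_c  # edges newly cut minus edges newly uncut
                if gain > move_gain:
                    move, move_gain = (u, c), gain
        if move is None:
            break
        u, c = move
        tabu[(u, part[u])] = it + tenure  # forbid moving u straight back
        part[u] = c
        val = cut_value(part)
        if val > best_val:
            best, best_val = dict(part), val
    return best, best_val

adj = {0: {1: 2, 2: 1}, 1: {0: 2, 3: 1}, 2: {0: 1, 3: 2}, 3: {1: 1, 2: 2}}
print(max_k_cut_tabu(adj, k=2))  # the optimal 2-cut here has value 6
```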
37

Allocation Strategies for Data-Oriented Architectures

Kiefer, Tim, 09 October 2015
Data orientation is a common design principle in distributed data management systems. In contrast to process-oriented or transaction-oriented system designs, data-oriented architectures are based on data locality and function shipping. The tight coupling of data and the processing thereon is implemented in different systems in a variety of application scenarios, such as data analysis, database-as-a-service, and data management on multiprocessor systems. Data-oriented systems, i.e., systems that implement a data-oriented architecture, bundle data and operations together in tasks which are processed locally on the nodes of the distributed system. Allocation strategies, i.e., methods that decide the mapping from tasks to nodes, are core components of data-oriented systems. Good allocation strategies can lead to balanced systems, while bad allocation strategies cause load skew and therefore suboptimal application performance and infrastructure utilization. Optimal allocation strategies are hard to find given the complexity of the systems, the complicated interactions of tasks, and the huge solution space. To ensure the scalability of data-oriented systems and to keep them manageable with hundreds of thousands of tasks, thousands of nodes, and dynamic workloads, fast and reliable allocation strategies are mandatory. In this thesis, we develop novel allocation strategies for data-oriented systems based on graph partitioning algorithms. To this end, we show that systems from different application scenarios with different abstraction levels can be generalized to generic infrastructure and workload descriptions. We use weighted graph representations to model infrastructures with bounded and unbounded, i.e., overcommitted, resources and possibly non-linear performance characteristics. Based on our generalized infrastructure and workload model, we formalize the allocation problem, which seeks valid and balanced allocations that minimize communication. Our allocation strategies partition the workload graph using solution heuristics that work with single and multiple vertex weights. Novel extensions to these solution heuristics can be used to balance penalized and secondary graph partition weights. These extensions enable the allocation strategies to handle infrastructures with non-linear performance behavior. On top of the basic algorithms, we propose methods to incorporate heterogeneous infrastructures and to react to changing workloads and infrastructures by incrementally updating the partitioning. We evaluate all components of our allocation strategy algorithms and show their applicability and scalability with synthetic workload graphs. In end-to-end performance experiments in two actual data-oriented systems, a database-as-a-service system and a database management system for multiprocessor systems, we prove that our allocation strategies outperform alternative state-of-the-art methods.
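A minimal sketch of the allocation problem under the stated model (vertex weights = load, edge weights = communication) might look like the following greedy heuristic. It is illustrative only, assumes capacities always admit a feasible node, and stands in for the graph-partitioning strategies the thesis actually develops.

```python
# Hedged sketch: tasks form a weighted graph; assign each task to the node
# where it communicates most, unless that node would exceed its capacity.

def allocate(tasks, comm, nodes, capacity):
    """tasks: dict task -> load; comm: dict (t1, t2) -> volume;
    nodes: list of node ids; capacity: max total load per node."""
    nbrs = {t: {} for t in tasks}
    for (a, b), w in comm.items():
        nbrs[a][b] = w
        nbrs[b][a] = w
    load = {n: 0.0 for n in nodes}
    place = {}
    # Heaviest tasks first, so large items are placed while bins are empty.
    for t in sorted(tasks, key=tasks.get, reverse=True):
        def affinity(n):
            # Communication volume t shares with tasks already on node n.
            return sum(w for u, w in nbrs[t].items() if place.get(u) == n)
        feasible = [n for n in nodes if load[n] + tasks[t] <= capacity]
        # Prefer the feasible node holding t's partners; break ties by load.
        # (Assumes `feasible` is never empty; no backtracking in this sketch.)
        n = max(feasible, key=lambda n: (affinity(n), -load[n]))
        place[t] = n
        load[n] += tasks[t]
    return place

tasks = {"t1": 4, "t2": 4, "t3": 2, "t4": 2}
comm = {("t1", "t3"): 10, ("t2", "t4"): 10, ("t1", "t2"): 1}
print(allocate(tasks, comm, nodes=["n1", "n2"], capacity=6))
# Chatty pairs (t1,t3) and (t2,t4) end up co-located; only volume 1 is cut.
```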
38

Contributions to graph partitioning problems under resource constraints

Nguyen, Dang Phuong, 06 December 2016
The graph partitioning problem is a fundamental problem in combinatorial optimization. It refers to partitioning the set of nodes of an edge-weighted graph into several disjoint node subsets (or clusters), so that the sum of the weights of the edges whose end-nodes are in different clusters is minimized. In this thesis, we study the graph partitioning problem on graphs with (non-negative) node weights and additional set constraints on the clusters (GPP-SC), specifying that the total capacity (e.g., the total node weight, or the total capacity over the edges having at least one end-node in the cluster) of each cluster should not exceed a specified limit (called the capacity limit). This differs from the variants of the graph partitioning problem most commonly addressed in the literature in that the number of clusters is not imposed (it is part of the solution) and the weights of the nodes are not homogeneous. The present work is motivated by the task allocation problem in multicore structures. The goal is to find a feasible placement of all tasks on processors while respecting their computing capacity and minimizing the total volume of interprocessor communication. This problem can be formulated as a graph partitioning problem under knapsack constraints (GPKC) on sparse graphs, a special case of GPP-SC. Moreover, in such applications, the case of uncertain node weights (weights corresponding, for example, to task durations) has to be taken into account. The first contribution of this work is to exploit the sparsity of the graph G = (V,E). Most existing mathematical programming models for the graph partitioning problem use O(|V|^3) metric constraints to model the partition of nodes and thus implicitly assume that G is a complete graph. Using these metric constraints when G is not complete requires adding edges and constraints, which may greatly increase the size of the program. Our result shows that when G is a sparse graph, we can reduce the number of metric constraints to O(|V||E|). The second contribution of this work is computing lower bounds for large graphs. We propose a new programming model for the graph partitioning problem that uses only O(m) variables. The model contains cycle inequalities and all inequalities related to paths in the graph to formulate the feasible partitions. Since there is an exponential number of constraints, solving the model requires a separation method to speed up computation; we propose one based on an all-pairs shortest-path algorithm, which therefore runs in polynomial time. Computational results show that our new model and method give tight lower bounds for large graphs with thousands of nodes.
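The claimed reduction from O(|V|^3) to O(|V||E|) metric constraints is easy to see numerically. In edge-variable formulations, x_uv = 1 iff u and v are separated, and triangle inequalities of the form x_uw <= x_uv + x_vw keep the variables consistent; a complete-graph model needs three such constraints per vertex triple, while a sparse model only needs them around actual edges. A back-of-the-envelope count (the exact constraint sets in the thesis may differ):

```python
# Order-of-magnitude comparison of metric-constraint counts.
import math

def metric_constraints_dense(n):
    # Three oriented triangle inequalities per unordered vertex triple.
    return 3 * math.comb(n, 3)

def metric_constraints_sparse(n, edges):
    # One triple per (edge, third vertex): O(|V||E|) constraints.
    return (n - 2) * len(edges)

n = 1000
edges = [(i, i + 1) for i in range(n - 1)]    # a path: very sparse
print(metric_constraints_dense(n))            # 498,501,000  (~5e8)
print(metric_constraints_sparse(n, edges))    # 997,002      (~1e6)
```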
39

Distributed frequent subgraph mining in the cloud

Aridhi, Sabeur, 29 November 2013
Recently, graph mining approaches have become very popular, especially in domains such as bioinformatics, chemoinformatics, and social networks. One of the most challenging tasks in this setting is frequent subgraph discovery, a task highly motivated by the tremendously increasing size of existing graph databases. Due to this fact, there is an urgent need for efficient and scalable approaches to frequent subgraph discovery, especially with the high availability of cloud computing environments. This thesis deals with distributed frequent subgraph mining in the cloud. First, we provide the material required to understand the basic notions of our two research fields, namely graph mining and cloud computing. Then, we present the contributions of this thesis. In the first axis, we propose a novel approach for large-scale subgraph mining using the MapReduce framework. The proposed approach provides a data partitioning technique that considers data characteristics: it uses the densities of graphs to partition the input data. Such a partitioning technique allows balanced computational loads over the distributed collection of machines and replaces the default arbitrary partitioning technique of MapReduce. We experimentally show that our approach significantly decreases the execution time and scales the subgraph discovery process to large graph databases. In the second axis, we address the multi-criteria optimization problem of tuning the thresholds related to distributed frequent subgraph mining in cloud computing environments while optimizing the global monetary cost of storing and querying data in the cloud. We define cost models for managing and mining data with a large-scale subgraph mining framework over a cloud architecture, and we present an experimental validation of the proposed cost models in the case of distributed subgraph mining in the cloud.
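To make the density-based partitioning concrete, here is a hedged sketch of the general idea (not the thesis code): rank the graphs of the database by density and deal them round-robin, so that each worker receives a similar mix of dense and sparse graphs instead of MapReduce's arbitrary input splits.

```python
# Hedged sketch: density-aware partitioning of a graph database over workers.

def density(n_vertices, n_edges):
    # Density of a simple undirected graph: |E| / (|V| * (|V| - 1) / 2).
    return n_edges / (n_vertices * (n_vertices - 1) / 2)

def partition_by_density(graphs, n_workers):
    """graphs: list of (graph_id, n_vertices, n_edges) records."""
    ranked = sorted(graphs, key=lambda g: density(g[1], g[2]), reverse=True)
    buckets = [[] for _ in range(n_workers)]
    # Deal densest-first, round-robin, so densities balance across workers.
    for i, g in enumerate(ranked):
        buckets[i % n_workers].append(g[0])
    return buckets

db = [("g1", 10, 40), ("g2", 10, 9), ("g3", 10, 35), ("g4", 10, 12)]
print(partition_by_density(db, n_workers=2))  # [['g1', 'g4'], ['g3', 'g2']]
```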
40

Scalable System-Wide Traffic Flow Predictions Using Graph Partitioning and Recurrent Neural Networks

Reginbald Ivarsson, Jón, January 2018
Traffic flow predictions are an important part of an Intelligent Transportation System, as the ability to accurately forecast the traffic conditions in a transportation system allows for proactive rather than reactive traffic control. Providing accurate real-time traffic predictions is a challenging problem because of the nonlinear and stochastic features of traffic flow. The increasingly widespread deployment of traffic sensors in a growing transportation system produces a greater volume of traffic flow data, which raises problems concerning fast, reliable, and scalable traffic predictions. The thesis explores the feasibility of increasing the scalability of real-time traffic predictions by partitioning the transportation system into smaller subsections. This is done by using data collected by Trafikverket from traffic sensors in Stockholm and Gothenburg to construct a traffic sensor graph of the transportation system. In addition, three graph partitioning algorithms are designed to divide the traffic sensor graph according to vehicle travel time. Finally, the produced transportation system partitions are used to train multi-layered long short-term memory recurrent neural networks for traffic density predictions. Four different types of models are produced and evaluated based on root mean squared error, training time, and prediction time: a whole-transportation-system model, partitioned transportation models, single-sensor models, and overlapping-partition models. The results of the thesis show that partitioning a transportation system is a viable way to produce traffic prediction models, as the average prediction accuracy for each traffic sensor is comparable across the different types of prediction models. This approach tackles the scalability issues caused by the increased deployment of traffic sensors in the transportation system: it reduces the number of traffic sensors each prediction model is responsible for, resulting in less complex models with smaller amounts of input data. A more decentralized and effective solution can be achieved, since the models can be distributed to the edge of the transportation system, i.e., near the physical location of the traffic sensors, reducing the prediction and response time of the models.
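The thesis designs three travel-time partitioning algorithms; as one hedged sketch of the general region-growing flavor such an algorithm can take (an assumption for illustration, not the thesis algorithms), the code below grows partitions from seed sensors, claiming every sensor reachable within a travel-time budget.

```python
# Hedged sketch: split a traffic-sensor graph by vehicle travel time.
# Edge weights are travel times (e.g., seconds) between adjacent sensors.
import heapq

def grow_partition(adj, seed, max_travel_time, unassigned):
    """Dijkstra-style growth: claim every unassigned sensor reachable from
    `seed` within `max_travel_time`."""
    region, dist = set(), {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in region or u not in unassigned:
            continue
        region.add(u)
        for v, t in adj[u].items():
            nd = d + t
            if nd <= max_travel_time and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return region

def partition_sensors(adj, max_travel_time):
    unassigned, parts = set(adj), []
    while unassigned:
        seed = min(unassigned)  # deterministic seed choice for the sketch
        region = grow_partition(adj, seed, max_travel_time, unassigned)
        unassigned -= region
        parts.append(sorted(region))
    return parts

# Toy sensor graph; adj[u][v] = travel time between adjacent sensors.
adj = {"s1": {"s2": 30}, "s2": {"s1": 30, "s3": 200},
       "s3": {"s2": 200, "s4": 25}, "s4": {"s3": 25}}
print(partition_sensors(adj, max_travel_time=60))
# [['s1', 's2'], ['s3', 's4']]: the 200 s link separates the two regions.
```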
