21

Exploitation d'infrastructures hétérogènes de calcul distribué pour la simulation Monte-Carlo dans le domaine médical / Exploiting Heterogeneous Distributed Systems for Monte-Carlo Simulations in the Medical Field

Pop, Sorina 21 October 2013 (has links)
Particle-tracking Monte-Carlo applications are easily parallelizable, but efficient parallelization on computing grids is difficult to achieve. Advanced scheduling strategies and parallelization methods are required to cope with failures and resource heterogeneity on distributed architectures. Moreover, the merging of partial simulation results is also a critical step. In this context, the main goal of our work is to propose new strategies for a faster and more reliable execution of Monte-Carlo applications on computing grids. These strategies concern both the computing and merging phases of Monte-Carlo applications and aim at being used in production. In this thesis, we introduce a parallelization approach based on pilot jobs and on a new dynamic partitioning algorithm. Results obtained on the production European Grid Infrastructure (EGI) using the GATE application show that pilot jobs bring a strong improvement over regular metascheduling and that the proposed dynamic partitioning algorithm solves the load-balancing problem of particle-tracking Monte-Carlo applications executed in parallel on distributed heterogeneous systems. Since all tasks complete almost simultaneously, our method can be considered optimal both in terms of resource usage and makespan. We also propose advanced merging strategies with multiple parallel mergers. Checkpointing is used to enable incremental merging from partial results and to improve reliability. A model is proposed to analyze the behavior of the complete framework and help tune its parameters. Experimental results show that the model fits the real makespan with a relative error of at most 10%, that using multiple parallel mergers reduces the makespan by 40% on average, and that checkpointing enables the completion of very long simulations without penalizing the makespan. To evaluate our load-balancing and merging strategies, we implement an end-to-end SimGrid-based simulation of the previously described framework for Monte-Carlo computations on EGI. Simulated and real makespans are consistent, and conclusions drawn in production about the influence of application parameters such as the checkpointing frequency and the number of mergers also hold in simulation. These results open the door to better and faster experimentation. To illustrate the outcome of the proposed framework, we present some usage statistics and a few examples of results obtained in production. They show that our experience in production is significant in terms of users and executions, that dynamic load balancing can be used extensively in production, and that it significantly improves performance regardless of variable grid conditions.
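To make the dynamic-partitioning idea above concrete, here is a minimal Python sketch, assuming a shared event counter from which heterogeneous pilot jobs claim batches until the requested total is reached; the names, batch size and sleep-based speed model are illustrative assumptions, not the thesis's actual implementation.

```python
import threading, time

TOTAL_EVENTS = 200_000   # total number of Monte-Carlo events requested (illustrative)
BATCH = 5_000            # events claimed per request by a pilot

simulated = 0            # events claimed so far across all pilots
lock = threading.Lock()

def pilot(pilot_id, speed):
    """One pilot job: repeatedly claims a batch and simulates it until the
    global target is reached, so faster pilots naturally do more work."""
    global simulated
    done = 0
    while True:
        with lock:
            if simulated >= TOTAL_EVENTS:   # stop condition checked by every pilot
                break
            simulated += BATCH              # dynamic partitioning: claim on demand
        time.sleep(0.01 / speed)            # stand-in for running one simulation batch
        done += BATCH
    print(f"pilot {pilot_id} (relative speed {speed}) simulated {done} events")

# heterogeneous resources: relative speeds 1, 2 and 4
pilots = [threading.Thread(target=pilot, args=(i, s)) for i, s in enumerate((1, 2, 4))]
for t in pilots: t.start()
for t in pilots: t.join()
print("total events claimed:", simulated)
```

Because work is claimed on demand rather than split statically, all pilots run out of work at nearly the same time, which is the makespan-optimality argument made in the abstract.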
22

Modèles de distribution pour la simulation de trafic multi-agent / Distributed models for multi-agent traffic simulation

Mastio, Matthieu 12 July 2017 (has links)
Nowadays, analysis and prediction of transport network behavior are crucial elements for the implementation of territorial management policies. Computer simulation of road traffic is a powerful tool for testing management strategies before deploying them in an operational context. Simulation of city-wide traffic, however, requires significant computing power, exceeding the capacity of a single computer. This thesis studies methods to perform large-scale multi-agent traffic simulations. We propose solutions that distribute the execution of such simulations over a large number of computing cores. One of them distributes the agents directly on the available cores, while the second partitions the environment in which the agents evolve. Graph partitioning methods are studied for this purpose, and we propose a partitioning procedure specially adapted to multi-agent traffic simulation. A dynamic load balancing algorithm is also developed to optimize the performance of the distributed microscopic simulation. The proposed solutions have been tested on a real network representing the Paris-Saclay area. They are generic and can be applied to most existing simulators. The results show that distributing the agents greatly improves the performance of the macroscopic simulation, whereas partitioning the environment is better suited to microscopic simulation. Our load balancing algorithm also significantly improves the efficiency of the environment-based distribution.
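As a toy illustration of the environment-distribution strategy, the sketch below assigns road segments to cores while balancing the number of agents per core. It is a hedged stand-in for the graph-partitioning procedure mentioned in the abstract (it ignores edge cuts and inter-core communication), and all names and numbers are invented for the example.

```python
from collections import defaultdict

def partition_environment(road_segments, agents_per_segment, n_cores):
    """Greedy environment partitioning: assign each road segment to the
    currently least-loaded core, using the number of agents on the segment
    as the load estimate."""
    load = [0] * n_cores
    assignment = defaultdict(list)
    # heaviest segments first: classic greedy bin-balancing heuristic
    for seg in sorted(road_segments, key=lambda s: -agents_per_segment[s]):
        core = min(range(n_cores), key=lambda c: load[c])
        assignment[core].append(seg)
        load[core] += agents_per_segment[seg]
    return assignment, load

# toy example: 6 road segments with varying agent counts, 3 cores
agents = {"A": 120, "B": 35, "C": 80, "D": 60, "E": 10, "F": 95}
parts, loads = partition_environment(list(agents), agents, 3)
print(dict(parts), loads)
```

A real partitioner would also minimize the number of road links cut between cores, which is what motivates the graph-partitioning formulation in the thesis.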
23

A simulation workflow to evaluate the performance of dynamic load balancing with over decomposition for iterative parallel applications

Tesser, Rafael Keller January 2018 (has links)
In this thesis we present a novel simulation workflow to evaluate, at low cost, the performance of dynamic load balancing with over-decomposition applied to iterative parallel applications. Its goals are to perform such an evaluation with minimal application modification and at a low cost in terms of time and resource requirements. Many parallel applications suffer from dynamic (temporal) load imbalance that cannot be treated at the application level. It may be caused by intrinsic characteristics of the application or by external software and hardware factors. As demonstrated in this thesis, such dynamic imbalance can be found even in applications whose codes do not hint at any dynamism. Therefore, we need to rely on runtime dynamic load balancing mechanisms, such as dynamic load balancing based on over-decomposition. The problem is that evaluating and tuning the performance of such a technique can be costly. This usually entails modifications to the application and a large number of executions to get statistically sound performance measurements with different load balancing parameter combinations. Moreover, useful and accurate measurements often require big resource allocations on a production cluster. Our simulation workflow, dubbed Simulated Adaptive MPI (SAMPI), employs a combined sequential emulation and trace-replay simulation approach to reduce the cost of such an evaluation. Both sequential emulation and trace replay require a single compute node. Additionally, the trace-replay simulation lasts a small fraction of the real-life parallel execution time of the application. Besides the basic SAMPI simulation, we developed spatial aggregation and application-level rescaling techniques to speed up the emulation process. To demonstrate the real-life performance benefits of dynamic load balancing with over-decomposition, we evaluated the performance gains obtained by employing this technique on an iterative parallel geophysics application called Ondes3D. Dynamic load balancing support was provided by Adaptive MPI (AMPI). This resulted in up to 36.58% performance improvement on 288 cores of a cluster. This real-life evaluation also illustrates the difficulties found in this process, thus justifying the use of simulation. To implement the SAMPI workflow, we relied on SimGrid's Simulated MPI (SMPI) interface in both emulation and trace-replay modes. To validate our simulator, we compared simulated (SAMPI) and real-life (AMPI) executions of Ondes3D. The simulations presented a load balance evolution very similar to real life and were also successful in choosing the best load balancing heuristic for each scenario. Besides the validation, we demonstrate the use of SAMPI for load balancing parameter exploration and for computational capacity planning. As for the performance of the simulation itself, we roughly estimate that our full workflow can simulate the execution of Ondes3D with 24 different load balancing parameter combinations in 5 hours for our heavier earthquake scenario and in 3 hours for the lighter one.
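The following sketch illustrates, under simplifying assumptions, why over-decomposition helps: the domain is split into more chunks than cores, and a greedy remapping of chunks evens out a per-iteration load that drifts over time. The load model, the chunk counts, and the fact that the remapping sees the current iteration's loads (a real balancer would use measurements from previous iterations) are illustrative assumptions, not AMPI's actual algorithms.

```python
import random

def iteration_time(mapping, chunk_loads):
    """Time of one parallel iteration = load of the most loaded core."""
    per_core = {}
    for chunk, core in mapping.items():
        per_core[core] = per_core.get(core, 0.0) + chunk_loads[chunk]
    return max(per_core.values())

def greedy_remap(chunk_loads, n_cores):
    """Greedy rebalancing over the over-decomposed chunks (heaviest first)."""
    core_load = [0.0] * n_cores
    mapping = {}
    for chunk, load in sorted(chunk_loads.items(), key=lambda kv: -kv[1]):
        core = min(range(n_cores), key=lambda c: core_load[c])
        mapping[chunk] = core
        core_load[core] += load
    return mapping

random.seed(0)
N_CORES, N_CHUNKS = 4, 32                              # over-decomposition: 8 chunks per core
static = {c: c % N_CORES for c in range(N_CHUNKS)}     # fixed block mapping, never rebalanced
for it in range(3):
    # per-chunk cost drifts over time -> dynamic (temporal) load imbalance
    loads = {c: random.uniform(0.5, 1.5) * (1 + (c % 5)) for c in range(N_CHUNKS)}
    print(f"iter {it}: static {iteration_time(static, loads):.1f}s, "
          f"rebalanced {iteration_time(greedy_remap(loads, N_CORES), loads):.1f}s")
```

Comparing the two printed times per iteration mirrors, in miniature, the static-versus-rebalanced comparison the thesis carries out at full scale in emulation and trace replay.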
24

A Graphics Processing Unit Based Discontinuous Galerkin Wave Equation Solver with hp-Adaptivity and Load Balancing

Tousignant, Guillaume 13 January 2023 (has links)
In computational fluid dynamics, we often need to solve complex problems with high precision and efficiency. We propose a three-pronged approach to attain this goal. First, we use the discontinuous Galerkin spectral element method (DG-SEM) for its high accuracy. Second, we use graphics processing units (GPUs) to perform our computations to exploit available parallel computing power. Third, we implement a parallel adaptive mesh refinement (AMR) algorithm to efficiently use our computing power where it is most needed. We present a GPU DG-SEM solver with AMR and dynamic load balancing for the 2D wave equation. The DG-SEM is a higher-order method that splits a domain into elements and represents the solution within these elements as a truncated series of orthogonal polynomials. This approach combines the geometric flexibility of finite-element methods with the exponential convergence of spectral methods. GPUs provide a massively parallel architecture, achieving a higher throughput than traditional CPUs. They are a relatively new platform in the scientific community; therefore, most algorithms need to be adapted to that architecture. We perform most of our computations in parallel on multiple GPUs. AMR selectively refines elements in the domain where the error is estimated to be higher than a prescribed tolerance, via two mechanisms: p-refinement increases the polynomial order within elements, and h-refinement splits elements into several smaller ones. This provides higher accuracy in important flow regions and increases the capability to model complex flows, while saving computing power in other parts of the domain. We use the mortar element method to retain the exponential convergence of high-order methods at the non-conforming interfaces created by AMR. We implement a parallel dynamic load balancing algorithm to even out the load imbalance caused by solving problems in parallel over multiple GPUs with AMR. We implement a space-filling-curve-based repartitioning algorithm which ensures good locality and small interfaces. While the intense calculations of the high-order approach suit the GPU architecture, programming the highly dynamic adaptive algorithm on GPUs is the most challenging aspect of this work. The resulting solver is tested on up to 64 GPUs on HPC platforms, where it shows good strong and weak scaling characteristics. Several example problems of increasing complexity are solved, showing a reduction in computation time of up to 3× on GPUs vs CPUs, depending on the loading of the GPUs and other user-defined choices of parameters. AMR is shown to improve computation times by an order of magnitude or more.
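A minimal sketch of space-filling-curve-based repartitioning, assuming a 2D grid of elements indexed by a Morton (Z-order) key: elements are sorted along the curve and the 1D ordering is cut into contiguous pieces of roughly equal load. The cost model, grid size, and cut rule are illustrative assumptions; the thesis's solver operates on GPU element data rather than Python tuples.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of integer cell coordinates (x, y) into a Z-order
    (Morton) index; elements sorted by this key stay close together in space."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

def sfc_repartition(elements, loads, n_ranks):
    """Sort elements along the space-filling curve, then cut the 1D ordering into
    contiguous chunks of roughly equal load. Contiguity along the curve keeps
    partitions spatially compact, i.e. small interfaces between ranks."""
    order = sorted(elements, key=lambda e: morton_key(*e))
    target = sum(loads[e] for e in elements) / n_ranks
    parts, current, acc = [[] for _ in range(n_ranks)], 0, 0.0
    for e in order:
        if acc >= target * (current + 1) and current < n_ranks - 1:
            current += 1                      # cumulative load crossed the next cut point
        parts[current].append(e)
        acc += loads[e]
    return parts

# toy 4x4 grid of elements; "p-refined" elements (higher polynomial order) cost more
cells = [(x, y) for x in range(4) for y in range(4)]
cost = {c: 1 + 3 * ((c[0] + c[1]) % 2) for c in cells}   # illustrative load model
for r, p in enumerate(sfc_repartition(cells, cost, 3)):
    print(f"rank {r}: {p}")
```

Because each rank owns one contiguous interval of the curve, repartitioning after h- or p-refinement only shifts the cut points, which keeps data movement between GPUs limited.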
25

Dynamische Lastbalancierung und Modellkopplung zur hochskalierbaren Simulation von Wolkenprozessen

Lieber, Matthias 26 September 2012 (has links) (PDF)
Current forecast models insufficiently represent the complex interactions of aerosols, clouds and precipitation. Simulations with a spectral description of cloud processes allow more detailed forecasts; however, they are much more computationally expensive. Reducing the runtime of such simulations requires a highly parallel execution. This thesis presents a concept for coupling spectral cloud microphysics models with atmospheric models that allows for efficient utilization of today's available parallelism in the order of 100,000 processor cores. Due to the strong workload variations, highly scalable dynamic load balancing of the cloud microphysics model is essential in order to reach this goal. This is achieved through a hierarchical partitioning method based on space-filling curves. Furthermore, a highly scalable connection of dynamic load balancing and model coupling is facilitated by an efficient method to regularly determine the intersections between different partitionings. The results of this thesis enable the application of spectral cloud microphysics models for the simulation of realistic scenarios with high-resolution grids through efficient use of high-performance computers.
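To illustrate the intersection step mentioned above, here is a hedged sketch: if both models are partitioned into contiguous segments along the same space-filling curve, one linear sweep over the two boundary lists yields every pair of ranks that must exchange coupling data. The boundary values and rank counts are invented for the example and do not come from the thesis.

```python
def curve_intervals(boundaries):
    """Turn per-rank segment boundaries [b0, b1, ..., bP] along the space-filling
    curve into (rank, start, end) intervals."""
    return [(r, boundaries[r], boundaries[r + 1]) for r in range(len(boundaries) - 1)]

def partition_intersections(bounds_a, bounds_b):
    """Sweep both interval lists along the curve once and emit every overlap
    (rank_a, rank_b, length) -- i.e. the coupling communication pattern --
    in linear time."""
    a, b = curve_intervals(bounds_a), curve_intervals(bounds_b)
    i = j = 0
    overlaps = []
    while i < len(a) and j < len(b):
        ra, sa, ea = a[i]
        rb, sb, eb = b[j]
        lo, hi = max(sa, sb), min(ea, eb)
        if hi > lo:
            overlaps.append((ra, rb, hi - lo))
        if ea <= eb:   # advance whichever interval ends first
            i += 1
        else:
            j += 1
    return overlaps

# atmospheric model: equal-sized segments; microphysics model: load-adaptive segments
atmo  = [0, 25, 50, 75, 100]
micro = [0, 10, 30, 40, 70, 100]
print(partition_intersections(atmo, micro))
```

The key property exploited here is that both partitionings are monotone along the same curve, so the overlap computation stays cheap enough to be repeated every time the load balancer moves the cut points.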
26

Desenvolvimento e implementação de malhas adaptativas bloco-estruturadas para computação paralela em mecânica dos fluidos / Development and implementation of block-structured adaptive mesh refinement for parallel computations in fluid mechanics

Lima, Rafael Sene de 28 September 2012 (has links)
The numerical simulation of fluid flow involving complex geometries is greatly limited by the required spatial grid resolution. These flows often contain small regions with complex motions, while the remaining flow is relatively smooth. Adaptive mesh refinement (AMR) enables the spatial grid to be refined in local regions that require finer grids to resolve the flow. This work describes an approach to the parallelization of a structured adaptive mesh refinement (SAMR) algorithm. This type of methodology is based on locally refined grids superimposed on coarser grids to achieve the desired resolution in numerical simulations. Parallel implementations of SAMR methods offer the potential for accurate simulations of highly complex fluid flows. However, they present interesting challenges in dynamic resource allocation, data distribution and load balancing. The overall efficiency of parallel SAMR applications is limited by the ability to partition the underlying grid hierarchies at run time so as to expose all inherent parallelism, minimize communication and synchronization overheads, and balance the load. The methodology is based on the Message Passing Interface (MPI) and uses recursive coordinate bisection (RCB) for domain partitioning. For this work, a semi-implicit projection method (the time-advancement scheme proposed by Ceniceros et al. (2010)) has been implemented to solve the incompressible Navier-Stokes equations. All numerical implementations are an extension of a sequential Fortran 90 code, called "AMR3D", developed in the work of Nós (2007). The efficiency and robustness of the applied methodology are verified via convergence analysis using the method of manufactured solutions. Validations were performed by simulating an incompressible jet flow and a lid-driven cavity flow. / Doctorate in Mechanical Engineering
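A minimal sketch of recursive coordinate bisection as a load-balancing partitioner, assuming a flat list of cell centres with per-cell weights (refined cells weighted more heavily). A production SAMR code such as the one described above bisects grid blocks rather than individual points, so the names, weights, and grid below are purely illustrative.

```python
def rcb(points, weights, n_parts):
    """Recursive coordinate bisection sketch: split the point set along its
    longest coordinate direction so the two halves carry weight proportional
    to the number of parts they will receive, then recurse."""
    if n_parts == 1:
        return [points]
    dims = len(points[0])
    # bisect along the axis with the largest spatial extent
    axis = max(range(dims), key=lambda d: max(p[d] for p in points) - min(p[d] for p in points))
    order = sorted(range(len(points)), key=lambda i: points[i][axis])
    left_parts = n_parts // 2
    target = sum(weights) * left_parts / n_parts
    acc, cut = 0.0, 0
    for k, i in enumerate(order):
        acc += weights[i]
        if acc >= target:
            cut = k + 1
            break
    cut = max(1, min(cut, len(points) - 1))   # keep both halves non-empty
    left, right = order[:cut], order[cut:]
    return (rcb([points[i] for i in left], [weights[i] for i in left], left_parts)
            + rcb([points[i] for i in right], [weights[i] for i in right], n_parts - left_parts))

# toy example: a locally refined corner (weight 4 per cell) among coarse cells (weight 1)
cells = [(x, y) for x in range(8) for y in range(8)]
w = [4 if (x < 3 and y < 3) else 1 for (x, y) in cells]
parts = rcb(cells, w, 4)
print("cells per part:", [len(p) for p in parts])
print("weight per part:", [sum(4 if (x < 3 and y < 3) else 1 for (x, y) in p) for p in parts])
```

Note how the refined corner ends up in a smaller (in cell count) but equally heavy partition, which is exactly the load-balancing effect the RCB step is meant to provide.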
28

Técnicas de proteção e restauração em redes ópticas elásticas / Protection and restoration techniques in elastic optical networks

Lourenço, André Luiz Ferraz 26 November 2015 (has links)
Optical networks are undergoing significant changes driven by exponentially growing traffic, especially coming from multimedia and cloud storage services. This demand will require increasing transmission rate capacity to standards such as 400 Gb/s and 1 Tb/s. Within this context, the elastic optical network (EON), a network architecture with a flexible, granular frequency grid, was proposed. EON divides the frequency spectrum into slices (slots) of fixed size and allocates groups of contiguous slots strictly according to the bandwidth requirements of the connection demands, providing high spectrum-use efficiency. The significant increase in transmission rate puts emphasis on the need to maintain the survival of the network, since faults in network nodes or links can cause huge losses of data. In this work, we investigate protection schemes based on shared-path protection (SPP) and traffic restoration schemes. We evaluate schemes reported in the literature, such as dynamic load balancing shared-path protection (DLBSPP), and restoration schemes such as traffic-aware restoration (TAR) and bandwidth-squeezed restoration (BSR). We also evaluate a slot allocation heuristic called inverted dual stack (IDS). The DLBSPP scheme uses dynamic load balancing to compute primary and shared protection paths. TAR performs dynamic restoration by ordering the connections based on bandwidth granularity. BSR uses EON's bandwidth squeezing capability to restore connections by means of a best-effort or guaranteed-bandwidth strategy, depending on the customer's service level agreement. The IDS scheme concentrates the maximum possible number of shared slots in a given region of the spectrum. The performance of the algorithms is evaluated according to the following metrics: blocking probability, spectrum utilization rate, average number of hops, and failed restoration rate. Computer simulations show that the use of the IDS scheme improves the performance of the investigated algorithms.
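As a small illustration of the slot-contiguity constraint described above, the sketch below performs first-fit allocation of contiguous slots on a single EON link. It is not the DLBSPP, TAR, BSR or IDS algorithm itself (a full routing-and-spectrum-assignment scheme would also enforce spectrum continuity along every link of the path), and the link size and occupied slots are invented for the example.

```python
def first_fit_contiguous(spectrum, demand_slots):
    """Find the lowest-indexed run of `demand_slots` contiguous free slots on one
    link, mark them busy, and return the starting index; return None if the
    demand is blocked."""
    run_start, run_len = None, 0
    for i, busy in enumerate(spectrum):
        if not busy:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == demand_slots:
                for j in range(run_start, run_start + demand_slots):
                    spectrum[j] = True        # allocate the whole contiguous block
                return run_start
        else:
            run_len = 0                       # a busy slot breaks contiguity
    return None                               # blocked: no contiguous gap large enough

# 16-slot link; some slots already occupied by existing connections
link = [False] * 16
for s in (2, 3, 7, 8, 9):
    link[s] = True
print(first_fit_contiguous(link, 3))   # -> 4  (slots 4..6)
print(first_fit_contiguous(link, 5))   # -> 10 (slots 10..14)
print(first_fit_contiguous(link, 6))   # -> None (blocked)
```

Heuristics such as IDS differ from plain first-fit mainly in where they place shared protection slots in the spectrum, which is what drives the blocking-probability differences reported in the abstract.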
