151

Optimization and Spatial Queueing Models to Support Multi-Server Dispatching Policies with Multiple Servers per Station

Ansari, Sardar 03 December 2013 (has links)
In this thesis, we propose novel optimization and spatial queueing models that extend existing methods by allowing multiple servers to be located at the same station and multiple servers to be dispatched to a single call. In particular, a mixed integer linear programming (MILP) model is introduced that determines how to locate and dispatch ambulances so that the coverage level is maximized. The model allows multiple servers to be located at the same station and balances the workload among them while maintaining contiguous first-priority response districts. We also propose an extension of the approximate Hypercube queueing model that allows multi-server dispatches. Computational results suggest that both models are effective for optimizing and analyzing emergency systems. Finally, we introduce the M[G]/M/s/s queueing model as an extension of the M/M/s/s model that allows multiple servers to be assigned to a single customer.
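For reference, the baseline M/M/s/s model mentioned above is the classical Erlang loss system, whose blocking probability can be computed with a standard recursion. The sketch below does exactly that with purely illustrative numbers; it does not reproduce the M[G]/M/s/s extension proposed in the thesis.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/s/s loss system (Erlang B),
    via the numerically stable recursion B(k) = a*B(k-1) / (k + a*B(k-1))."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Illustrative values: 4 ambulances at one station, 3 calls/hour,
# mean service time 30 minutes => offered load a = 3 * 0.5 = 1.5 Erlangs.
print(f"P(all servers busy) = {erlang_b(4, 1.5):.4f}")
```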
152

Redistribution dynamique parallèle efficace de la charge pour les problèmes numériques de très grande taille / Efficient parallel dynamic load balancing for very large numerical problems

Fourestier, Sébastien 20 June 2013 (has links)
This thesis concerns efficient parallel dynamic load balancing for very large numerical problems. We first present a state of the art of the algorithms used to solve the partitioning, repartitioning, mapping and remapping problems. Our first contribution, in a sequential setting, is to identify the desirable algorithmic features for parallel repartitioning methods. We present our contribution to the design of a k-way multilevel framework for sequential repartitioning. The most challenging part of this work concerns the uncoarsening phase: one of our main contributions is the adaptation, inspired by influence methods, of a diffusion-based refinement algorithm to the repartitioning problem. Our second contribution is the parallelization of these methods. Adapting the parallel multilevel framework required changes to the algorithms and data structures used by existing parallel partitioning routines. This work is backed by a thorough experimental analysis, made possible by the implementation of the algorithms within the Scotch library.
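Diffusion-based refinement rests on a simple idea: overloaded parts repeatedly shed load to lighter neighbours until the imbalance vanishes. The sketch below shows that first-order scheme on a small quotient graph of partitions, purely as an illustration and assuming a damping factor small enough for stability; it is not the influence-based algorithm implemented in Scotch.

```python
import numpy as np

def diffuse_load(loads, adjacency, alpha=0.25, iterations=50):
    """First-order diffusion load balancing on a quotient graph of partitions:
    every partition repeatedly sends a fraction alpha of its load surplus to
    each lighter neighbour."""
    loads = np.asarray(loads, dtype=float).copy()
    for _ in range(iterations):
        flow = np.zeros_like(loads)
        for i, neighbours in adjacency.items():
            for j in neighbours:
                flow[i] -= alpha * (loads[i] - loads[j])
        loads += flow
    return loads

# Four partitions arranged in a ring, one of them heavily overloaded.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(diffuse_load([100.0, 10.0, 10.0, 10.0], ring))  # converges towards 32.5 each
```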
153

Uma metodologia para desenvolvimento de programas paralelos eficientes em ambientes homogêneos e heterogêneos. / A methodology for the development of efficient parallel programs in homogeneous and heterogeneous systems.

Laine, Jean Marcos 28 July 2008 (has links)
A methodology for developing efficient parallel programs must specify mechanisms capable of characterizing the behavior of applications and allow studies on the performance of different solution models. In distributed environments, in particular, the efficiency of a solution is also related to the strategy adopted to divide and distribute the work among the processes that cooperate in solving the problem. To address these issues, a methodology called PEMPIs-Het (Performance Estimation of MPI Programs in Heterogeneous Systems) is specified and presented in this thesis. The methodology supports performance modeling, evaluation and prediction of parallel programs in homogeneous and heterogeneous environments. Analytical modeling techniques are used to represent application behavior in the distributed environment. A graph model, called DP*Graph++, is proposed to illustrate the main structures of the application code and to facilitate analyses of the program's algorithmic complexity. Several applications are modeled and the accuracy of the predictions is verified through experimental tests. The performance models yield a point estimate of the application execution time; an alternative strategy based on prediction intervals is also discussed and evaluated. Several strategies for load balancing of distributed parallel applications are implemented and evaluated. These strategies use the indexes of a performance vector (Vector of Relative Performances - VRP), generated by the analytical models, to specify the division and distribution of work. These indexes characterize the computational capacity of the machines, and a mathematical formalization explains how they are determined. Experimental tests verify the applicability of the strategies and their effectiveness in load balancing.
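As a minimal illustration of how such performance indexes can drive work division, the sketch below splits a workload proportionally to a vector of relative performances; the index values are invented and the rounding rule is an arbitrary choice, not part of PEMPIs-Het.

```python
def split_work(total_items, performance_indexes):
    """Divide a fixed amount of work among machines in proportion to their
    relative performance indexes - the role played by the VRP vector above."""
    total = sum(performance_indexes)
    shares = [int(total_items * p / total) for p in performance_indexes]
    # give the rounding remainder to the machine with the highest index
    shares[performance_indexes.index(max(performance_indexes))] += total_items - sum(shares)
    return shares

# 1000 matrix rows over three machines whose benchmark indexes are 1.0, 2.0 and 0.5
print(split_work(1000, [1.0, 2.0, 0.5]))  # -> [285, 573, 142]
```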
154

"Balanceamento de cargas de aplicações SPMD em sistemas computacionais distribuídos" / Load balancing of SPMD applications in distributed computational systems

Furquim, Gustavo Antonio 04 April 2006 (has links)
This work presents the implementation and use of SPMD (Single Program Multiple Data) process migration, which transfers only the data being manipulated by a process in order to migrate it. Its main objective was to study the impact of load balancing on the performance of applications developed with the SPMD programming model. Tests with real SPMD applications on distributed computational systems using SPMD process migration showed that performance gains can be achieved both in the migration itself and in the execution time of SPMD parallel applications.
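As a rough illustration of data-only migration for SPMD processes, the sketch below checkpoints only the working data and resumes it elsewhere; the file name, the state fields and the use of pickle are illustrative choices, not the mechanism implemented in this work.

```python
import pickle

def checkpoint(state, path):
    """Data-only migration: serialize just the data the process is manipulating.
    Because every SPMD rank runs the same program, no code or OS-level process
    state needs to travel with it."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def resume(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# On the overloaded node: save the loop index and partial results, then stop.
checkpoint({"iteration": 42, "partial_sum": 3.14}, "rank3.ckpt")

# On the receiving node: the same binary reloads the data and carries on.
state = resume("rank3.ckpt")
print("resuming at iteration", state["iteration"])
```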
155

Estudo sobre a variação dos parâmetros do Tree Load Balancing Algorithm / Study about the variation of the Tree Load Balancing Algorithm parameters

Raphaloski, Evandro 21 February 2006 (has links)
In a locally distributed system, when many tasks are randomly submitted to the computers of a network, some machines become overloaded while others remain idle; load balancing algorithms can be used to homogenize and optimize resource allocation and, consequently, increase computational performance. As distributed systems evolved, these algorithms had to be improved to support highly scalable distributed systems and to manage heterogeneous environments. To meet these needs, a load balancing algorithm called the Tree Load Balancing Algorithm (TLBA) was recently proposed. To evaluate this algorithm, a simulator and a prototype were developed; the simulations demonstrated its benefits in highly scalable, heterogeneous distributed environments, and the prototype validated those results. This work presents the implementation of a new simulator that enables a broader study of the TLBA load balancing parameters, based on statistically generated samples and a more faithful implementation of its balancing policies. Its new features are real-time simulation, visualization of the logical tree according to the computers' relative capacities, results presented in tables and graphs generated by the simulator, and studies applied to different types of scheduling and systems.
156

Ambientes de execução para o modelo de atores em plataformas hierárquicas de memória compartilhada com processadores de múltiplos núcleos / Dealing with actor runtime environments on hierarchical shared memory multi-core platforms

Francesquini, Emilio de Camargo 16 May 2014 (has links)
The actor model is present in several mission-critical systems, such as those supporting WhatsApp and Facebook Chat. These systems serve thousands of clients simultaneously, therefore demanding substantial computing resources usually provided by multi-processor and multi-core platforms. Non-Uniform Memory Access (NUMA) architectures account for an important share of these platforms, yet research on the suitability of current actor runtime environments for such machines is very limited. Current runtime environments generally assume a flat memory space and therefore do not perform as well as they could. In this thesis we study the challenges that hierarchical shared-memory multi-core platforms present to actor runtime environments, investigating in particular aspects related to memory management, scheduling and load balancing. We also analyze and characterize actor-based applications in order to propose improvements to actor runtime environments. This analysis highlighted the existence of peculiar communication structures, and we argue that understanding these structures, together with knowledge of the underlying hardware architecture, can be used to improve application performance. As a proof of concept, we implemented our proposal in a real actor runtime environment, the Erlang Virtual Machine (VM). Concurrency in Erlang is based on the actor model, and the language has a clear and consistent syntax for actor handling. Our modifications to the Erlang VM significantly improved the performance of some applications, thanks to better communication affinity between actors and to scheduling and load-balancing decisions informed by the application behavior and the hardware platform.
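As a rough illustration of how a communication graph can drive placement, the hypothetical sketch below counts messages between actor pairs and co-locates the chattiest pairs on the same NUMA node; the greedy policy and all names are ours, not the heuristics implemented in the modified Erlang VM.

```python
from collections import Counter

def affinity_placement(message_log, numa_nodes):
    """Toy communication-affinity heuristic: count messages per actor pair and
    greedily pin the chattiest pairs to the same NUMA node."""
    traffic = Counter(frozenset(pair) for pair in message_log)
    placement, node = {}, 0
    for pair, _count in traffic.most_common():
        a, b = tuple(pair)
        if a not in placement and b not in placement:
            placement[a] = placement[b] = numa_nodes[node % len(numa_nodes)]
            node += 1
    # anything left over is spread round-robin
    for actor in {a for pair in traffic for a in pair} - placement.keys():
        placement[actor] = numa_nodes[node % len(numa_nodes)]
        node += 1
    return placement

log = [("ping", "pong"), ("ping", "pong"), ("db", "cache"), ("ping", "logger")]
print(affinity_placement(log, ["node0", "node1"]))
```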
157

Gestion dynamique du parallélisme dans les architectures multi-cœurs pour applications mobiles / Dynamic parallelism adaptation in multicore architectures for mobile applications

Texier, Matthieu 08 December 2014 (has links)
The number of smartphones sold has recently surpassed that of desktop computers, largely due to the integration of many functions into a single device and to the wide variety of supported applications, such as augmented reality, video conferencing and video games. These applications are supported by heterogeneous computing resources specialized for each type of processing, so as to meet the required performance within the system's power-consumption constraints. For example, video encoding and decoding are accelerated by dedicated hardware modules, and 3D rendering for video games is accelerated by a graphics processor (GPU). Applications, however, keep growing in complexity: augmented reality, for instance, requires image processing, 3D rendering and processing of the information to be displayed. This complexity often comes with workload variations that change the application's performance requirements over time, so a parallelization chosen for one workload becomes inefficient for another, wasting computing resources that other applications or other stages of the same application could exploit. A graphics rendering pipeline was chosen as the use case because it is a dynamic application that is increasingly common on mobile devices. The application was implemented and parallelized on a multi-core architecture simulator. Profiling confirmed its dynamic behavior: the computation time per item and the number of objects to process vary significantly over time, and the best distribution of parallelism depends on the rendered scene, so dynamic load balancing is required. This led us to define a system that adapts the parallelism of an application at runtime based on a prediction of its computing requirements, obtained by monitoring the data exchanges between the application's stages. A central controller then computes a new distribution of tasks according to the needs of each stage. The system was implemented in a timed-TLM simulator in order to estimate the performance gains enabled by dynamic adaptation. An architecture able to accelerate both general-purpose and graphics applications was defined and compared to other multi-core architectures, and its hardware cost was quantified. For a hardware overhead below 1.5% of the complete design, we demonstrate performance gains of up to 20% over some static deployments, as well as the ability to dynamically manage a varying number of computing resources.
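To make the central-controller idea concrete, the sketch below shows one plausible, deliberately naive reallocation policy: cores are redistributed among pipeline stages in proportion to the load observed during the last frame. The stage names, loads and largest-remainder rounding are illustrative assumptions, not the controller described in the thesis.

```python
def allocate_cores(stage_load, total_cores):
    """Toy central-controller policy: give every pipeline stage at least one
    core, then share the remaining cores in proportion to the load measured
    for each stage (e.g. bytes exchanged during the last frame)."""
    n = len(stage_load)
    assert total_cores >= n, "need at least one core per stage"
    spare = total_cores - n
    total = float(sum(stage_load))
    exact = [spare * load / total for load in stage_load]
    cores = [1 + int(x) for x in exact]
    # hand the rounding leftovers to the stages with the largest remainders
    by_remainder = sorted(range(n), key=lambda i: exact[i] - int(exact[i]), reverse=True)
    for i in by_remainder[: total_cores - sum(cores)]:
        cores[i] += 1
    return cores

# Geometry, rasterisation and shading loads observed over the previous frame.
print(allocate_cores([120, 300, 580], total_cores=8))  # -> [2, 2, 4]
```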
159

QoS-RRC: um mecanismo com orquestração de sobre-provisionamento de recursos e balanceamento de carga para roteamento orientado a QoS na internet do futuro / A mechanism with orchestration of resource overprovisioning and load balancing for QoS-oriented routing in the Future Internet

Freitas, Leandro Alexandre 18 February 2011 (has links)
The Future Internet concepts and designs of the 4WARD project concern a clean-slate architecture with several networking innovations, including a new connectivity paradigm called the Generic Path (GP). In the GP architecture, several facilities are designed to efficiently support complex value-added applications and services with assured Quality of Service (QoS). GPs abstract the heterogeneity of the underlying networks, so that any entity, regardless of its scope (technology, location or architectural layer), can communicate with any other in a single way via a common interface. This requires cooperation with network-layer provisioning mechanisms in order to map data paths that meet the resources demanded by a session (QoS requirements such as minimum bandwidth and maximum delay/loss) onto appropriate GPs. In contrast with today's support, robust and scalable QoS-provisioning facilities are strongly required for efficient GP allocation. This dissertation therefore introduces QoS-Routing and Resource Control (QoS-RRC), a set of GP-compliant facilities designed to meet these requirements. QoS-RRC complements the GP architecture with QoS-oriented routing, aided by load balancing, to select paths that meet session demands while preserving residual bandwidth to improve the user experience. For scalability, QoS-RRC follows an overprovisioning-centric approach, which requires little state storage and few network operations. An initial performance evaluation of QoS-RRC was carried out in Network Simulator v.2 (NS2), demonstrating substantial improvements in flow delay and bandwidth use over a relevant state-of-the-art solution. Moreover, the impact of QoS-RRC on the user experience, compared with current IP QoS and routing standards, was evaluated by analysing the main objective and subjective Quality of Experience (QoE) metrics, namely Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM), the Video Quality Metric (VQM) and the Mean Opinion Score (MOS).
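As a minimal illustration of QoS-oriented routing of the kind QoS-RRC performs, the sketch below selects, among links that still have enough residual bandwidth, the path with the smallest delay and checks it against the session's delay bound. The topology and thresholds are invented for the example; QoS-RRC's overprovisioning and orchestration logic is not reproduced here.

```python
import heapq

def qos_route(links, src, dst, min_bw, max_delay):
    """Sketch of QoS-oriented path selection: discard links whose residual
    bandwidth is below the session demand, run Dijkstra on delay, and accept
    the path only if it also meets the end-to-end delay bound."""
    graph = {}
    for u, v, bandwidth, delay in links:
        if bandwidth >= min_bw:           # only links that can still carry the demand
            graph.setdefault(u, []).append((v, delay))
            graph.setdefault(v, []).append((u, delay))
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if dst not in dist or dist[dst] > max_delay:
        return None                       # no admissible path for this session
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# (node_a, node_b, residual bandwidth in Mb/s, delay in ms)
links = [("A", "B", 50, 5), ("B", "C", 10, 5), ("A", "D", 80, 12), ("D", "C", 80, 8)]
print(qos_route(links, "A", "C", min_bw=20, max_delay=30))  # -> (['A', 'D', 'C'], 20.0)
```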
160

Load balancing in hybrid LiFi and RF networks

Wang, Yunlu January 2018 (has links)
The increasing number of mobile devices challenges current radio frequency (RF) networks. The conventional RF spectrum for wireless communications is saturating, motivating the development of other, unexplored frequency bands. Light Fidelity (LiFi), which uses more than 300 THz of the visible light spectrum for high-speed wireless communications, is considered a promising complementary technology to its RF counterpart. LiFi enables everyday lighting infrastructure, i.e. light-emitting diode (LED) lamps, to transmit data while maintaining its lighting function. Since LiFi relies mainly on line-of-sight (LoS) transmission, users in indoor environments may experience blockages that significantly affect their quality of service (QoS). Therefore, hybrid LiFi and RF networks (HLRNs), in which LiFi supports high-data-rate transmission and RF offers reliable connectivity, are a potential solution for future indoor wireless communications. In HLRNs, efficient load balancing (LB) schemes are critical to improving traffic performance and network utilisation. In this thesis, an optimisation-based scheme (OBS) and an evolutionary game theory based scheme (EGTBS) are proposed for load balancing in HLRNs. Within OBS, two algorithms are proposed: the joint optimisation algorithm (JOA) and the separate optimisation algorithm (SOA). Analysis and simulation results show that JOA achieves the best user data rates but requires high computational complexity, while SOA reduces the computational complexity at the cost of lower user data rates. EGTBS achieves a better performance/complexity trade-off than OBS and other conventional load balancing schemes. In addition, the effects of handover, blockages, the orientation of LiFi receivers and user data rate requirements on the throughput of HLRNs are investigated. The packet latency in HLRNs is also studied. The notion of the LiFi service ratio is introduced, defined as the proportion of users served by LiFi in an HLRN. The optimal LiFi service ratio that minimises system delay is derived mathematically, and a low-complexity packet flow assignment scheme based on this optimum is proposed. Simulation results show that the theoretical optimum of the LiFi service ratio is very close to the practical solution, and that the proposed packet flow assignment scheme can reduce packet delay by up to 90% compared with conventional load balancing schemes, at reduced computational complexity.
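The optimal LiFi service ratio mentioned above can be illustrated with a deliberately simplified queueing view of an HLRN: treat the aggregate LiFi and RF access points as two M/M/1 queues and sweep the traffic split. The M/M/1 assumption and all the rates below are ours, not the thesis's system model.

```python
def mean_delay(rho, arrival, mu_lifi, mu_rf):
    """Average packet delay when a fraction rho of the traffic is served by the
    LiFi AP and the rest by the RF AP, with each AP modelled as an M/M/1 queue."""
    lam_l, lam_r = rho * arrival, (1.0 - rho) * arrival
    if lam_l >= mu_lifi or lam_r >= mu_rf:
        return float("inf")               # an overloaded queue is unstable
    return rho / (mu_lifi - lam_l) + (1.0 - rho) / (mu_rf - lam_r)

# Illustrative rates in packets per second.
arrival, mu_lifi, mu_rf = 80.0, 100.0, 40.0
best = min((mean_delay(r / 100.0, arrival, mu_lifi, mu_rf), r / 100.0) for r in range(101))
print(f"optimal LiFi service ratio ~ {best[1]:.2f}, mean delay ~ {1000 * best[0]:.1f} ms")
```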
