About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Análise da eficiência energética em navios mercantes e estudo de caso do consumo de combustível em navio aliviador do tipo Suezmax. / Analysis of merchant ships' energy efficiency and case study of Suezmax shuttle tanker fuel consumption.

Schiller, Rodrigo Achilles 28 November 2016 (has links)
The need to reduce fossil fuel consumption, driven by current efforts to restrain the effects of global warming and to reduce air pollution, is dictating a series of transformations in shipping. This study first presents the changes in the regulatory framework concerning pollutant emission control and fuel-consumption efficiency for merchant ships. Next, focusing on existing vessels, it presents the main operational techniques with high fuel-saving potential, highlighting speed reduction, which, when correctly applied, has a positive impact both on operating costs and on energy efficiency. A numerical analysis of fuel consumption as a function of speed was then carried out for a Suezmax oil tanker adapted for offloading operations at FPSO platforms in Brazilian waters. From this, the potential energy-efficiency gain from small speed reductions was estimated, and possible applications of this improvement were discussed in light of the vessel's typical operating profile, so as not to cause an economic impact on the operation. The study also evaluated two different numerical methodologies: a semi-empirical one, based only on regression equations, and another using CFD simulations to estimate hull-form-sensitive parameters that are highly relevant for determining characteristic consumption, analyzing the inaccuracies and their impact on the final results.
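
To illustrate the speed-consumption relationship this abstract builds on, here is a minimal sketch assuming the textbook cubic power-speed approximation (daily fuel roughly proportional to speed cubed, fuel per distance to speed squared); the constant `k`, the speeds, and the leg distance are hypothetical and are not values from the thesis, which instead uses regression and CFD estimates.

```python
# Illustrative only: daily fuel ~ k * V^3, so fuel over a fixed distance ~ V^2.

def daily_fuel(speed_knots, k=0.0035):
    """Approximate daily fuel consumption for a displacement ship.
    k is a hypothetical ship-specific constant, not taken from the thesis."""
    return k * speed_knots ** 3

def voyage_fuel(distance_nm, speed_knots, k=0.0035):
    """Fuel for a leg of given distance sailed at constant speed."""
    days = distance_nm / (speed_knots * 24.0)
    return daily_fuel(speed_knots, k) * days

base, reduced = 14.0, 13.0  # knots: a small speed reduction
saving = 1 - voyage_fuel(1000, reduced) / voyage_fuel(1000, base)
print(f"Fuel saved on the same leg: {saving:.1%}")  # roughly (13/14)^2 -> ~13.8%
```
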
72

  • Uso de comunicação V2V para o descarregamento de dados em redes celulares: uma estratégia baseada em clusterização geográfica para apoiar o sensoriamento veicular colaborativo / On the use of V2V communication for cellular data offloading: a geographic clustering-based strategy to support vehicular crowdsensing

Nunes, Douglas Fabiano de Sousa 20 December 2018 (has links)
The incorporation of computing and communication technologies into modern vehicles is enabling a new generation of connected cars. With the ability to organize themselves into networks, the so-called vehicular ad hoc networks (VANETs), these vehicles might, in the near future, (i) make traffic safer for drivers, passengers and pedestrians and/or (ii) promote a more pleasant, more comfortable transportation experience. It is in this context that Vehicular CrowdSensing (VCS) stands out, an emerging and promising paradigm that exploits the technologies already embedded in the vehicles themselves to collect data in loco. VCS has proved to be an auspicious scheme for the development and deployment of Intelligent Transport Systems (ITSs). However, in large urban areas, depending on the phenomenon being monitored, VCS applications can generate colossal data traffic between vehicles and the monitoring center. Considering that vehicle information is generally sent to a remote server over mobile network infrastructures, the massive number of transmissions generated during sensing activities can overload these networks and considerably degrade the Quality of Service (QoS) they offer. This thesis describes and analyzes a geographic clustering approach that relies on Vehicle-to-Vehicle (V2V) communications to offload VCS data from cellular networks, in order to minimize the impacts above. The experimental results show that using V2V communications as a complementary in loco data acquisition method considerably reduced the number of transmissions carried out over the mobile networks, without requiring the deployment of new communication infrastructure and with only a small additional average end-to-end delay in obtaining the information. The proposed approach also serves as a flexible software platform onto which data aggregation techniques can be incorporated, which would further preserve the uplink resources of cellular networks. Considering that the era of the Internet of Things (IoT) and smart cities is just beginning, data offloading solutions such as the one addressed in this research are considered essential to keep the mobile Internet access network operational and able to support the ever-growing communication demand of applications.
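
A minimal sketch of the general idea of geographic clustering for offloading, assuming a simple grid-based cell partition and one cellular uplink per cluster; the cell size, the head-election rule, and all names are hypothetical and not taken from the thesis.

```python
from collections import defaultdict

CELL = 500.0  # hypothetical cluster cell size in metres

def cluster_key(x, y):
    """Map a vehicle position to a geographic grid cell (cluster id)."""
    return (int(x // CELL), int(y // CELL))

def offload(readings):
    """readings: list of (vehicle_id, x, y, payload).
    Vehicles in the same cell exchange data over V2V; only one aggregated
    uplink transmission per cluster reaches the cellular network."""
    clusters = defaultdict(list)
    for vid, x, y, payload in readings:
        clusters[cluster_key(x, y)].append((vid, payload))
    uplinks = []
    for cell, members in clusters.items():
        head, _ = members[0]                 # simplistic cluster-head election
        aggregated = [p for _, p in members]
        uplinks.append((head, cell, aggregated))
    return uplinks  # one cellular transmission per cluster instead of per vehicle

print(len(offload([(1, 10, 20, "a"), (2, 30, 40, "b"), (3, 900, 40, "c")])))  # 2
```
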
73

Improving The Communication Performance Of I/O Intensive And Communication Intensive Application In Cluster Computer Systems

Kumar, V Santhosh 10 1900 (has links)
Cluster computer systems assembled from commodity off-the-shelf components have emerged as a viable and cost-effective alternative to high-end custom parallel computer systems. In this thesis, we investigate how scalable performance can be achieved for database systems on clusters. In this context we specifically considered database query processing for the evaluation of bottlenecks and suggest optimization techniques for obtaining scalable application performance. First, we systematically demonstrated that in a large cluster with high disk bandwidth, the processing capability and the I/O bus bandwidth are the two major performance bottlenecks in database systems. To identify and assess bottlenecks, we developed a Petri net model of parallel query execution on a cluster. Once identified and assessed, we address the above two performance bottlenecks by offloading certain application-related tasks to the processor in the network interface card. Offloading application tasks to the processor in the network interface card shifts the bottleneck from the cluster processor to the I/O bus. Further, we propose a hardware scheme, network-attached disk, and a software scheme to achieve a balanced utilization of resources such as the host processor, the I/O bus, and the processor in the network interface card. The proposed schemes result in a speedup of up to 1.47 compared to the base scheme, and ensure scalable performance up to 64 processors. Encouraged by the benefits of offloading application tasks to network processors, we explore the possibility of performing Bloom filter operations in network processors. We combine offloading of Bloom filter operations with the proposed hardware schemes to achieve up to a 50% reduction in execution time. The latter part of the thesis presents introductory experiments conducted with the Community Atmospheric Model (CAM), a large-scale parallel application used for global weather and climate prediction. CAM is a communication-intensive application that involves collective communication of large messages. In our limited experiments, we used CAM to study the effect of compression techniques and of offloading techniques (as formulated for the database workload) on the performance of communication-intensive applications. Due to time constraints, we considered only compression techniques for improving application performance; offloading could be taken up as a full-fledged research problem for further investigation. In our experiments, we found that compressing messages reduces message latencies, and hence improves the execution time and scalability of the application. Without compression, performance measured on the 64-processor cluster resulted in a speedup of only 15.6. While lossless compression retains the accuracy and correctness of the program, it does not achieve a high compression ratio. We therefore propose a lossy compression technique which achieves higher compression yet retains the accuracy and numerical stability of the application while achieving scalable performance. This leads to a speedup of 31.7 on 64 processors, compared to a speedup of 15.6 without message compression. We establish that the accuracy (within a prescribed limit of variation) and the numerical stability of CAM are retained under lossy compression.
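
The Bloom filter operations that the thesis offloads to the network-interface processor are standard add/membership-test operations; the following is a minimal software sketch of those operations (filter size, hash count, and key names are hypothetical, and this is plain Python rather than network-processor code).

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set bits for k hash positions on add,
    and report possible membership if all k positions are set."""
    def __init__(self, m=1 << 16, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))

bf = BloomFilter()
bf.add("tuple-42")
print(bf.might_contain("tuple-42"), bf.might_contain("tuple-99"))  # True, (almost surely) False
```
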
74

Gestion conjointe de ressources de communication et de calcul pour les réseaux sans fils à base de cloud / Joint communication and computation resources allocation for cloud-empowered future wireless networks

Oueis, Jessica 12 February 2016 (has links)
Mobile Edge Cloud brings the cloud closer to mobile users by moving cloud computational efforts from the Internet to the mobile edge. We adopt a local mobile edge cloud computing architecture in which small cells (femto-cells) are empowered with computation and storage capacities, and mobile users' offloaded computational tasks are executed at these cloud-enabled small cells. We propose the concept of small-cell clustering for mobile edge computing, where small cells cooperate to execute offloaded computational tasks. The first contribution of this thesis is a multi-parameter computation offloading decision algorithm, SM-POD. The proposed algorithm consists of a series of low-complexity, successive and nested classifications of computational tasks on the mobile side, leading either to local computation or to offloading to the cloud. To reach the offloading decision, SM-POD jointly considers the computational task, handset, and communication channel parameters. In the second part of the thesis, we tackle the problem of setting up small-cell computation clusters, for both the single-user and multi-user cases. The clustering problem is formulated as an optimization that jointly allocates computation and communication resources and distributes the computational load over the small cells participating in the cluster. We also propose a cluster sparsification strategy, in which cluster latency is traded for higher system energy efficiency. In the multi-user case, the optimization problem is not convex; to compute a clustering solution, we propose an equivalent convex reformulation of the problem and prove that the two problems are equivalent. With the goal of finding a lower-complexity clustering solution, we propose two heuristic small-cell clustering algorithms. The first is based, in a first step, on resource allocation at the serving small cells where tasks are received; in a second step, unserved tasks are sent to a small-cell managing unit (SCM) that sets up computation clusters for their execution. The main idea of this algorithm is task scheduling at both the serving small cells and the SCM for higher resource-allocation efficiency. The second heuristic is an iterative approach in which serving small cells compute their desired clusters without considering the presence of other users in the network and send their cluster parameters to the SCM; the SCM then checks for excess resource allocation at any of the network's small cells and reports any overload to the serving small cells, which redistribute the excess onto less-loaded small cells. In the final part of the thesis, we propose the concept of computation caching for edge cloud computing. With the aim of reducing edge cloud computing latency and energy consumption, we propose caching popular computational tasks to prevent their re-execution. Our contribution here is twofold: first, we propose a caching algorithm based not only on request popularity, but also on computation size, required computational capacity, and small-cell connectivity; this algorithm identifies the requests that, if cached and downloaded instead of re-computed, yield the largest energy and latency savings. Second, we propose a method for setting up a search cluster of small cells for finding a cached copy of a request's computation. The clustering policy exploits the relationship between task popularity and the probability of a task being cached in order to identify possible locations of the cached copy. The proposed method reduces the search cluster size while guaranteeing a minimum cache hit probability.
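
As an illustration of the kind of offloading decision discussed here, a minimal sketch comparing estimated latency and energy of local execution against offloading; this is a generic mobile-edge-computing comparison in the spirit of, but not identical to, SM-POD, and every parameter value below is hypothetical.

```python
def should_offload(task_cycles, task_bits, f_local, f_edge, uplink_bps,
                   p_tx=1.0, p_cpu=0.8):
    """Toy offloading decision: offload when both the latency and the device
    energy of remote execution beat local execution. Parameters are
    illustrative, not values from the thesis."""
    t_local = task_cycles / f_local              # seconds on the handset CPU
    e_local = p_cpu * t_local                    # joules spent computing locally
    t_off = task_bits / uplink_bps + task_cycles / f_edge   # upload + edge compute
    e_off = p_tx * (task_bits / uplink_bps)      # device only pays for the upload
    # A real policy would weight latency against energy instead of requiring both.
    return t_off < t_local and e_off < e_local

print(should_offload(5e9, 2e6, f_local=1e9, f_edge=10e9, uplink_bps=20e6))  # True
```
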
76

  • Explorando interações em redes sociais online, comunicação dispositivo-a-dispositivo e estratégias de cache para uso eficiente de recursos em redes celulares / Exploiting on-line social interactions, D2D communication and caching strategies for cellular network resource efficiency

Moraes, Fausto da Silva 09 November 2016 (has links)
A significant increase in the number of connected devices and in the volume of data traffic, especially video content, is expected for future cellular networks, which poses a challenge for the next generation of networks. With the rise of new communication paradigms, device-to-device (D2D) communication emerges as a promising approach for cellular data offloading, especially when paired with content caching on users' devices. In addition, the impact of viral videos could be mitigated by proactively caching the content being shared on Online Social Networks (OSNs). This work presents a new approach to proactive content caching for D2D-enabled networks that is aware of users' social interactions on OSNs. The proposal combines user mobility and social information to probabilistically select the best-located device to cache a content item being shared online, based on contact information between users in the D2D network. Results obtained through simulation with ns-3 show that the proposed approach can improve the offloading rate, reduce the energy consumption of user devices, and provide faster content access when compared with another work in the literature.
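
A minimal sketch of contact-aware cache placement of the kind described in this abstract: pick the device whose D2D contact history makes it most likely to reach the users expected to request the content. The scoring rule and all identifiers are hypothetical; the thesis' actual probabilistic model is not reproduced here.

```python
def pick_cache_device(contacts, interested_users):
    """contacts[d][u]: empirical probability that device d meets user u over D2D.
    Choose the device that maximizes the expected number of interested users
    it can serve directly (a simplified contact/social-aware selection)."""
    def expected_hits(device):
        return sum(contacts[device].get(u, 0.0) for u in interested_users)
    return max(contacts, key=expected_hits)

contacts = {
    "dev_A": {"u1": 0.9, "u2": 0.1},
    "dev_B": {"u1": 0.4, "u2": 0.6, "u3": 0.5},
}
print(pick_cache_device(contacts, {"u1", "u2", "u3"}))  # dev_B
```
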
77

Contrôle de la mobilité dans un réseau d'opérateur convergé fixe-mobile / Mobility management in a converged fixed-mobile operator's network

Eido, Souheir 12 July 2017 (has links)
Fixed and mobile networks are currently experiencing dramatic growth in data traffic, mainly driven by video content distribution. Telecom operators are therefore considering decentralized content distribution for future Fixed-Mobile Converged (FMC) network architectures. This decentralization, together with a distributed mobile core (EPC), is expected to be a major element of future 5G networks. Mobile data offloading, in particular the SIPTO approach defined by 3GPP, already fits this model: it relies on distributed packet data gateways (PGWs) at IP edges to offload selected IP traffic onto the fixed network and relieve the currently centralized mobile core. However, in some user-mobility cases SIPTO does not support session continuity, because mobility may imply PGW relocation and thus a change of the terminal's IP address. This thesis first quantifies the gain, in terms of bandwidth demand on various portions of the network, brought by the generalized use of mobile traffic offloading. A state of the art of existing mobile data offloading solutions is then presented, showing that none of them solves the problem of session continuity for long-lived sessions. In the context of future FMC mobile network architectures, the thesis therefore proposes solutions that provide seamless mobility by combining SIPTO with Multipath TCP (MPTCP); 3GPP standards are not modified, as session continuity is ensured by the end-points. Lastly, the proposed solutions are mapped onto the different architecture options considered for future FMC networks.
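
A conceptual toy model of why MPTCP can preserve session continuity across PGW relocation: the connection is identified by a token rather than by the IP addresses of its subflows, so a new subflow via the local IP edge can be added before the old one is removed. This is a didactic sketch with made-up addresses, not an implementation of RFC 8684 or of the thesis' solutions.

```python
class MptcpSession:
    """Toy model of an MPTCP connection surviving a change of UE IP address."""
    def __init__(self, token):
        self.token = token          # connection identity, independent of subflow addresses
        self.subflows = set()

    def add_subflow(self, local_ip, remote_ip):
        self.subflows.add((local_ip, remote_ip))

    def remove_subflow(self, local_ip, remote_ip):
        self.subflows.discard((local_ip, remote_ip))
        assert self.subflows, "keep at least one subflow alive during the handover"

s = MptcpSession(token="a1b2c3")
s.add_subflow("10.0.0.5", "198.51.100.7")      # path via the old, centralized PGW
s.add_subflow("192.0.2.9", "198.51.100.7")     # path via the new local IP edge (SIPTO)
s.remove_subflow("10.0.0.5", "198.51.100.7")   # old path torn down; the session survives
print(s.token, s.subflows)
```
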
78

Research on Dynamic Offloading Strategy of Satellite Edge Computing Based on Deep Reinforcement Learning

Geng, Rui January 2021 (has links)
Nowadays, more and more data is generated at the edge of the network, and decentralizing computing tasks to the network edge is increasingly being considered. The network architecture of edge computing differs from traditional architectures: its distributed configuration can compensate for shortcomings of traditional networks such as data congestion, increased delay, and limited capacity. With the continuous development of 5G technology, satellite communication networks also face many new business challenges. Using idle computing power and storage space on satellites and integrating edge computing technology into satellite communication networks can greatly improve satellite communication service quality and enhance satellite task-processing capabilities, thereby improving the performance of the satellite edge computing system. The primary problem limiting the computing performance of satellite edge networks is how to obtain a more effective dynamic service offloading strategy. To study this problem, this thesis monitors the status information of satellite nodes in different periods, such as the service load and the distance to the ground, uses a Markov decision process to model the dynamic offloading problem of the satellite edge computing system, and finally obtains service offloading strategies whose deployment is based on deep reinforcement learning algorithms. We mainly study the performance of the Deep Q-Network (DQN) algorithm and two improved DQN algorithms, Double DQN (DDQN) and Dueling DQN (DuDQN), under different service request types and system scenarios. Compared with existing service deployment algorithms, the deep reinforcement learning algorithms take the long-term service quality of the system into account and form more reasonable offloading strategies.
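
The key technical difference between two of the algorithms compared in this thesis, DQN and Double DQN, lies in how the bootstrap target is built; the sketch below shows both targets, with random arrays standing in for the online and target Q-networks evaluated on the next state (the state/action spaces of the thesis are not modeled here).

```python
import numpy as np

rng = np.random.default_rng(0)
q_online_next = rng.normal(size=5)   # Q(s', a; theta) for 5 candidate offloading actions
q_target_next = rng.normal(size=5)   # Q(s', a; theta_minus), the target network
reward, gamma, done = 1.0, 0.99, False

# Vanilla DQN: the target network both selects and evaluates the next action.
y_dqn = reward + (0 if done else gamma * q_target_next.max())

# Double DQN: the online network selects the action, the target network evaluates it,
# which reduces the overestimation bias introduced by the max operator.
a_star = int(q_online_next.argmax())
y_ddqn = reward + (0 if done else gamma * q_target_next[a_star])

print(y_dqn, y_ddqn)
```
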
79

Network Implementation with TCP Protocol : A server on FPGA handling multiple connections / Nätverks implementering med TCP protokoll : En server på FPGA som hanterar flera anslutningar

Li, Ruobing January 2022 (has links)
The growing number of players in Massively Multiplayer Online games puts a heavy load on the network infrastructure and on the general-purpose CPUs of game servers. A game server's network-stack processing needs the same attention as its game-related processing: the network communication tasks on the CPU reach the same order of magnitude as the game-related tasks, and the CPU's computing capability can become the factor that limits the maximum number of players. CPU offloading is therefore becoming vital. FPGAs play an essential role in dedicated computation and network communication thanks to their flexibility and computation-oriented efficiency, so an FPGA can be a good hardware platform on which to implement a network stack that replaces the CPU for network processing. However, most commercial and open-source network stack IPs support only one or a few connections. This thesis project explores a network server on an FPGA, implemented in RTL, that can handle multiple connections using the TCP protocol. The design adds a cached memory hierarchy that filters on the port numbers of multiple connections from the same application, and an Application Layer Controller based on an open-source Ethernet stack, to further increase the number of TCP connections. A proof of concept was built and its performance tested. The TCP server on the FPGA was designed to handle a maximum of 40 configurable connections, but only 25 connections could be maintained during operation due to operational latency constraints. The FPGA server solution provides a latency of 1 ms in a LAN. Babbling-idiot and out-of-order packet transfer tests from clients were also performed to verify robustness. During testing, poor performance in packet loss and packet error handling was noted; this issue needs to be addressed in future work. In addition, methods for expanding the cache need further investigation to allow handling more clients.
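
The thesis design is RTL on an FPGA; as a software analogue of the same idea (one server keeping many TCP connections open, with a hard cap on the connection table), here is a minimal event-loop sketch. The port number, the 40-connection cap, and the refuse-when-full policy mirror figures mentioned in the abstract, but the code itself is only an illustrative stand-in, not the thesis implementation.

```python
import selectors
import socket

MAX_CONNECTIONS = 40   # the thesis design is sized for 40 configurable connections

sel = selectors.DefaultSelector()
listener = socket.socket()
listener.bind(("0.0.0.0", 9000))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data=None)

def serve_once():
    """One pass of the event loop: accept new clients, echo their data back."""
    for key, _ in sel.select(timeout=1.0):
        if key.data is None:                       # the listening socket
            conn, _addr = key.fileobj.accept()
            if len(sel.get_map()) - 1 >= MAX_CONNECTIONS:
                conn.close()                       # connection table full: refuse
                continue
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data=b"")
        else:                                      # an established connection
            data = key.fileobj.recv(1024)
            if data:
                key.fileobj.sendall(data)
            else:
                sel.unregister(key.fileobj)
                key.fileobj.close()

# while True: serve_once()   # run the event loop
```
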
80

Safety-Oriented Task Offloading for Human-Robot Collaboration : A Learning-Based Approach / Säkerhetsorienterad Uppgiftsavlastning för Människa-robotkollaboration : Ett Inlärningsbaserat Tillvägagångssätt

Ruggeri, Franco January 2021 (has links)
In Human-Robot Collaboration scenarios, safety must be ensured by a risk management process that requires computationally expensive perception models (e.g., based on computer vision) to run in real time. However, robots usually have constrained hardware resources that hinder timely responses, resulting in unsafe operation. Although Multi-access Edge Computing allows robots to offload complex tasks to servers on the network edge to meet real-time requirements, this is not always possible, because dynamic changes in the network can cause congestion or failures. This work proposes a safety-based task offloading strategy to address this problem. The goal is to use edge resources intelligently to reduce delays in the risk management process and consequently enhance safety. More specifically, depending on safety and network metrics, a Reinforcement Learning (RL) solution decides whether a less accurate model should run locally on the robot or a more complex one should run remotely on the network edge; a third possibility is to reuse the previous output after verifying its temporal coherence. Experiments are performed in a simulated warehouse scenario where humans and robots interact closely. Results show that the proposed RL solution outperforms the baselines in several aspects. First, the edge is used only when network performance is good, reducing the number of failures (by up to 47%). Second, latency is adapted to the safety requirements (risk × latency reduced by up to 48%), avoiding unnecessary network congestion in safe situations and letting other robots in hazardous situations use the edge. Overall, the latency of the risk management process is largely reduced (by up to 68%), which positively affects safety (time in the safe zone increased by up to 3.1%).
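
A minimal sketch of an RL agent choosing among the three options named in this abstract (run the local model, offload to the edge, or reuse the previous output), using tabular Q-learning with epsilon-greedy exploration. The abstract does not state which RL algorithm or state representation the thesis uses, so the discretization, reward shape, and hyper-parameters below are all assumptions made for illustration.

```python
import random

ACTIONS = ["local_model", "edge_model", "reuse_previous"]

def discretize(network_quality, risk_level):
    """Hypothetical state: one network metric and one safety (risk) metric."""
    return (round(network_quality, 1), round(risk_level, 1))

Q = {}  # tabular stand-in for the learned policy

def choose_action(state, eps=0.1):
    if state not in Q:
        Q[state] = {a: 0.0 for a in ACTIONS}
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One Q-learning step; the reward would combine risk and latency terms."""
    if next_state not in Q:
        Q[next_state] = {a: 0.0 for a in ACTIONS}
    target = reward + gamma * max(Q[next_state].values())
    Q[state][action] += alpha * (target - Q[state][action])

s = discretize(0.8, 0.2)
a = choose_action(s)
update(s, a, reward=1.0, next_state=discretize(0.7, 0.3))
print(a, Q[s][a])
```
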
