71 |
Offline Task Scheduling in a Three-layer Edge-Cloud Architecture. Mahjoubi, Ayeh. January 2023.
Internet of Things (IoT) devices are increasingly being used everywhere, from the factory to the hospital to the house to the car. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. Offloading tasks to the cloud, however, raises several obstacles: an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices. In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and look for efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. To accomplish this, we use exact approaches such as the simplex method to find a solution to the MILP problem. Because exact techniques such as simplex require substantial processing resources and a considerable amount of time, we propose several heuristic and meta-heuristic methods to solve the problem and use the simplex results as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results. Meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We therefore propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing. Compared to the heuristic algorithms, the genetic algorithm-based method yields a more accurate solution but requires more time and resources, while the simulated annealing-based method is a better fit for the problem since it produces more accurate solutions in less time than the genetic algorithm-based method. / Internet of Things (IoT) devices are increasingly being used everywhere. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. In reality, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices. In this thesis, the offloading problem in an edge-cloud infrastructure is modeled as a Mixed-Integer Linear Programming (MILP) problem, and efficient optimization techniques seeking to minimize the total delay of the system are employed to address it. To accomplish this, exact approaches are used to find a solution to the MILP problem. Because exact techniques require substantial processing resources and a considerable amount of time, several heuristic and meta-heuristic methods are proposed.
Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results, while meta-heuristics are slower than heuristics and may require more computation, but are more generic and capable of handling a variety of problems.
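As a concrete illustration of the meta-heuristic side of this work, the sketch below shows a minimal simulated-annealing scheduler that assigns each task to the device, edge, or cloud tier so as to reduce total delay. The delay functions, cooling schedule, and parameter values are illustrative assumptions for the sketch, not the thesis's actual MILP model or tuned algorithm.

```python
import math
import random

# Illustrative per-task delays (seconds) for each tier: device, edge, cloud.
# These formulas are made up for the sketch; the thesis derives delays from
# task sizes, link rates, and node capacities in its MILP model.
DELAYS = {
    "device": lambda size: size / 1.0,          # slow local CPU, no transfer
    "edge":   lambda size: 0.05 + size / 10.0,  # small uplink delay, faster CPU
    "cloud":  lambda size: 0.20 + size / 50.0,  # larger WAN delay, fastest CPU
}
TIERS = list(DELAYS)

def total_delay(assignment, sizes):
    """Sum of completion delays over all tasks for a given tier assignment."""
    return sum(DELAYS[tier](size) for tier, size in zip(assignment, sizes))

def anneal(sizes, steps=5000, t_start=1.0, t_end=1e-3, seed=0):
    """Simulated annealing over task-to-tier assignments."""
    rng = random.Random(seed)
    current = [rng.choice(TIERS) for _ in sizes]
    best = current[:]
    for step in range(steps):
        temp = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling
        candidate = current[:]
        candidate[rng.randrange(len(sizes))] = rng.choice(TIERS)  # perturb one task
        delta = total_delay(candidate, sizes) - total_delay(current, sizes)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = candidate
            if total_delay(current, sizes) < total_delay(best, sizes):
                best = current[:]
    return best, total_delay(best, sizes)

if __name__ == "__main__":
    task_sizes = [0.2, 1.5, 3.0, 0.8, 2.2]   # arbitrary task "sizes"
    assignment, delay = anneal(task_sizes)
    print(assignment, round(delay, 3))
```

A genetic-algorithm variant would replace the single-candidate perturbation with a population, crossover, and mutation, trading extra computation for a broader search, which mirrors the accuracy/time trade-off reported above.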
|
72 |
Belief Rule-Based Workload Orchestration in Multi-access Edge Computing. Jamil, Mohammad Newaj. January 2022.
Multi-access Edge Computing (MEC) is a standard network architecture for edge computing, proposed to handle the tremendous computation demands of emerging resource-intensive and latency-sensitive applications and services and to accommodate the Quality of Service (QoS) requirements of an ever-growing user base through computation offloading. Since end-user demand is unknown in a rapidly changing, dynamic environment, processing offloaded tasks on a non-optimal server can deteriorate QoS due to high latency and increasing task failures. In order to deal with this challenge in MEC, a two-stage Belief Rule-Based (BRB) workload orchestrator is proposed to distribute the workload of end-users to optimal computing units, support strict QoS requirements, ensure efficient utilization of computational resources, minimize task failures, and reduce the overall service time. The proposed BRB workload orchestrator decides the optimal execution location for each task offloaded from User Equipment (UE) within the overall MEC architecture based on network conditions, computational resources, and task requirements. The EdgeCloudSim simulator is used to conduct comprehensive simulation experiments that evaluate the performance of the proposed BRB orchestrator against four workload orchestration approaches from the literature, using different types of applications. Based on the simulation experiments, the proposed workload orchestrator outperforms state-of-the-art workload orchestration approaches and ensures efficient utilization of computational resources while minimizing task failures and reducing the overall service time.
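To make the rule-based flavor of such an orchestrator concrete, here is a deliberately simplified decision sketch in which each rule contributes belief degrees over candidate execution locations and the location with the highest aggregated belief wins. A real BRB system additionally uses rule and attribute weights with evidential-reasoning aggregation; the rule conditions, thresholds, and location names below are assumptions for illustration only, not the thesis's rule base.

```python
# Each rule maps a qualitative condition to belief degrees over three candidate
# execution locations in an MEC setting: the local edge server, a neighboring
# edge server, and the remote cloud. All rules and numbers are illustrative.
RULES = [
    # (condition(state) -> bool, {location: belief degree})
    (lambda s: s["edge_util"] < 0.6,                         {"edge": 0.8, "neighbor_edge": 0.1, "cloud": 0.1}),
    (lambda s: s["edge_util"] >= 0.6 and s["wan_mbps"] > 5,  {"edge": 0.1, "neighbor_edge": 0.3, "cloud": 0.6}),
    (lambda s: s["task_mi"] > 3000,                          {"edge": 0.1, "neighbor_edge": 0.2, "cloud": 0.7}),
    (lambda s: s["delay_sensitive"],                         {"edge": 0.6, "neighbor_edge": 0.3, "cloud": 0.1}),
]

def orchestrate(state):
    """Aggregate the belief degrees of all activated rules and pick the
    location with the highest combined belief."""
    combined = {"edge": 0.0, "neighbor_edge": 0.0, "cloud": 0.0}
    activated = 0
    for condition, beliefs in RULES:
        if condition(state):
            activated += 1
            for location, degree in beliefs.items():
                combined[location] += degree
    if activated == 0:
        return "edge"          # default when no rule fires
    return max(combined, key=combined.get)

print(orchestrate({"edge_util": 0.75, "wan_mbps": 12.0,
                   "task_mi": 4500, "delay_sensitive": False}))
```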
|
73 |
Edge Compute Offloading Strategies using Heuristic and Reinforcement Learning Techniques. Dikonimaki, Chrysoula. January 2023.
The emergence of 5G, alongside the distributed computing paradigm called edge computing, has prompted a tremendous change in the industry by offering the opportunity to reduce network latency and energy consumption and to provide scalability. Edge computing extends the capabilities of users’ resource-constrained devices by placing data centers at the edge of the network. Computation offloading enables edge computing by allowing the migration of users’ tasks to edge servers. Deciding whether it is beneficial for a mobile device to offload a task, and to which server to offload it, while environmental variables such as availability, load, and network quality are changing dynamically, is a challenging problem that requires careful consideration to achieve better performance. This project focuses on proposing lightweight and efficient algorithms that make offloading decisions from the mobile device's perspective, to the benefit of the user. Subsequently, heuristic techniques have been examined as a way to find quick but sub-optimal solutions. These techniques have been combined with a multi-armed bandit algorithm called Discounted Upper Confidence Bound (DUCB) to make optimal decisions quickly. The findings indicate that the heuristic approaches alone cannot handle the dynamicity of the problem, whereas DUCB provides the ability to adapt to changing circumstances without having to keep adding extra parameters. Overall, the DUCB algorithm performs better in terms of local energy consumption and can improve service time most of the time. / Utvecklingen av 5G har skett parallellt med det distribuerade beräkningsparadigm som går under namnet Edge Computing. Lokala datacenter placerade på kanten av nätverket kan reducera nätverkslatensen och energiförbrukningen för applikationer. Exempelvis kan användarenheter med begränsade resurser ges utökande möjligheter genom avlastning av beräkningsintensiva uppgifter. Avlastningen sker genom att migrera de beräkningsintensiva uppgifterna till en dator i datacentret på kanten. Det är dock inte säkert att det alltid lönar sig att avlasta en beräkningsintensiv uppgift från en enhet till kanten. Detta måste avgöras från fall till fall. Att avgöra om och när det lönar sig är ett svårt problem då förutsättningar som tillgänglighet, last, nätverkskvalitét, etcetera hela tiden varierar. Fokus i detta projekt är att identifiera enkla och effektiva algoritmer som kan avgöra om det lönar sig för en användare att avlasta en beräkningsintensiv uppgift från en mobil enhet till kanten. Heuristiska tekniker har utvärderats som en möjlig väg att snabbt hitta lösningar även om de råkar vara suboptimala. Dessa tekniker har kombinerats med en flerarmad banditalgoritm (Multi-Armed Bandit), kallad Discounted Upper Confidence Bound (DUCB), för att ta optimala beslut snabbt. Resultaten indikerar att dessa heuristiska tekniker inte kan hantera de dynamiska förändringar som hela tiden sker samtidigt som DUCB kan anpassa sig till dessa förändrade omständigheter utan att man måste addera extra parametrar. Sammantaget ger DUCB-algoritmen bättre resultat när det gäller lokal energikonsumtion och kan i de flesta fallen förbättra tiden för tjänsten.
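For reference, a minimal sketch of the Discounted UCB idea is shown below: past observations are geometrically discounted so the index tracks recent server performance, which is what lets the policy adapt when network or load conditions drift. The reward model, discount factor, and exploration constant are illustrative assumptions, not the values used in the thesis.

```python
import math
import random

class DiscountedUCB:
    """Sketch of Discounted UCB (DUCB) for picking an offloading target.
    Each arm is a candidate execution option (e.g. local, edge server A,
    edge server B); rewards are assumed to be normalized benefits in [0, 1],
    for instance derived from observed service time."""

    def __init__(self, n_arms, gamma=0.95, xi=0.6):
        self.gamma = gamma             # discount factor: old observations fade out
        self.xi = xi                   # exploration strength
        self.counts = [0.0] * n_arms   # discounted pull counts
        self.sums = [0.0] * n_arms     # discounted reward sums

    def select(self):
        total = sum(self.counts)
        if total == 0:
            return 0
        indices = []
        for c, s in zip(self.counts, self.sums):
            if c == 0:
                return len(indices)    # play every arm at least once
            bonus = math.sqrt(self.xi * math.log(max(total, 1.0)) / c)
            indices.append(s / c + bonus)
        return max(range(len(indices)), key=indices.__getitem__)

    def update(self, arm, reward):
        # Discount the entire history, then record the new observation.
        self.counts = [c * self.gamma for c in self.counts]
        self.sums = [s * self.gamma for s in self.sums]
        self.counts[arm] += 1.0
        self.sums[arm] += reward

bandit = DiscountedUCB(n_arms=3)
rng = random.Random(1)
for t in range(300):
    arm = bandit.select()
    best = 1 if t < 150 else 2           # the best server changes mid-run
    reward = rng.uniform(0.6, 1.0) if arm == best else rng.uniform(0.0, 0.5)
    bandit.update(arm, reward)
print("currently preferred arm:", bandit.select())
```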
|
74 |
Towards High-Accuracy and Resource-Efficient Edge-Assisted Augmented Reality. Qiang Xu (19166152). 21 July 2024.
Immersive applications such as augmented reality (AR) and mixed reality (MR) often need to perform latency-critical analytics tasks on every frame captured by the camera. These tasks, often powered by deep neural networks (DNNs) for their superior accuracy, necessitate offloading to edge servers with GPUs due to their computational intensity. Achieving high accuracy and efficient AR task offloading faces two fundamental challenges left unaddressed by prior work: (1) In practice, multiple DNN-supported tasks need to offload concurrently to achieve the app functionality -- how to schedule such offloaded tasks on the client, which compete for shared edge server resources, to maximize the app QoE? (2) Concurrent AR clients from a large user base offload to a cluster of GPU servers -- how to schedule the offloaded tasks on the servers to maximize the number of clients served and lower the operating cost?

To tackle the first challenge, we design a framework, AccuMO, that balances the offloading frequencies of different tasks by dynamically scheduling the offloading of multiple tasks from an AR client to an edge server, thereby optimizing the overall accuracy across tasks and hence app QoE. Our design employs two novel ideas: (1) task-specific lightweight models that predict offloading accuracy drop as a function of offloading frequency and frame content, and (2) a general two-level control feedback loop that concurrently balances offloading among tasks and adapts between offloading and using local algorithms for each task.

We tackle the challenge of supporting concurrent AR clients in two steps. We first focus on maximizing the capacity of individual edge servers, where we present ARISE, which untangles the intricate interplay between per-client offloading schedule and batched inference on the server by proactively coordinating offloading requests from different AR clients. In the second step, we focus on a cluster setup of heterogeneous GPU servers, which exposes the synergy between diversity in both DNN layers and GPU architectures, manifesting as comparable inference latency for many layers in DNN models when running on low-class and high-class GPUs. We exploit this overlooked capability of low-class GPUs using pipeline parallelism and present a novel inference serving system, IPIPE, that employs pool-based pipeline parallelism with a mixed-integer linear programming (MILP)-based control plane and a data plane that performs resource reservation-based adaptive batching.
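As a toy illustration of the frequency-balancing idea behind AccuMO (not its actual controller), the sketch below greedily hands out a shared offloading budget to whichever task's predicted accuracy benefits most from one extra offloaded frame per second. The accuracy-drop predictors and the budget are made-up stand-ins for the learned, content-dependent models described above.

```python
# Toy accuracy-drop predictors: estimated accuracy loss (0-1) for each task as a
# function of how many frames per second it gets to offload. The real system
# learns such predictors from frame content; these closed forms are invented.
PREDICTORS = {
    "depth":     lambda fps: 0.50 / (1.0 + fps),
    "odometry":  lambda fps: 0.30 / (1.0 + 2.0 * fps),
    "detection": lambda fps: 0.60 / (1.0 + 0.5 * fps),
}

def schedule(budget_fps=30):
    """Greedily hand out offloading slots (1 fps each) to whichever task gains
    the most predicted accuracy from one extra slot."""
    alloc = {task: 0 for task in PREDICTORS}
    for _ in range(budget_fps):
        def gain(task):
            f = alloc[task]
            return PREDICTORS[task](f) - PREDICTORS[task](f + 1)
        best = max(PREDICTORS, key=gain)
        alloc[best] += 1
    return alloc

print(schedule())   # allocation of the 30 offloaded frames per second across tasks
```

Because each made-up accuracy curve has diminishing returns, the greedy marginal-gain rule is optimal for this toy separable objective; the real scheduler additionally reacts to frame content and can fall back to local algorithms.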
|
75 |
Design and evaluation of contingency plans for connectivity loss in cloud-controlled mobile robots / Utformning och utvärdering av beredskapsplaner för förlust av uppkoppling i molnbaserade mobila robotar. Lopez Iniesta Diaz Del Campo, Javier. January 2024.
Recent advancements in telecommunications have brought about new tools in the field of robotics, with offloading emerging as one of the most significant developments. With offloading, computationally expensive tasks are performed on a server in the cloud instead of on the mobile robot, reducing the robots' processing costs and enhancing their efficiency. However, one of the major challenges of offloading robot control is to maintain functional safety even when the connection with the server is interrupted. To mitigate such connectivity losses, an optimization-based method has been developed to compute an environment-dependent contingency plan. This plan is sent from the cloud to the robot together with the corresponding control command. The planner takes into account the current map, based on all sensor data collected up to the time of optimization, and the nominal trajectory to provide a sequence of safe control commands, assuming that, in the absence of connectivity, all detected objects move at a constant speed. The contingency plan is executed on the robot only when connectivity to the cloud is lost, without making use of subsequent sensor data in the robot’s on-board processor. Thus, through the proposed method, it is possible to maximize the movement time of the mobile robot in case of loss of connectivity with the cloud controller without compromising any safety constraints. In this context, two different approaches have been designed based on the possibility of deviating from the nominal trajectory. In the first, called “path following”, the mobile robot is constrained to stay on the reference path, but can vary its speed, performing a safety brake when there is a risk of collision. In contrast, in “trajectory following”, deviation from the reference path is allowed in an attempt to postpone the point at which the velocity must be reduced. The evaluation shows that the optimal approach depends on the application for which the mobile robot will be used. Furthermore, these approaches do not overload the network bandwidth, since contingency plans can be optimized by parameterizing the velocity sequences or by reducing the sending rate through event-triggered sending. / De senaste framstegen inom telekommunikation har introducerat nya verktyg inom robotikens område, där offloading är en av de mest relevanta. Således utförs beräkningsintensiva uppgifter på en server i molnet istället för på den mobila roboten, vilket minskar bearbetningskostnaderna för roboter och ökar deras effektivitet. En av de största utmaningarna med att offloada robotstyrning är dock att bibehålla funktionell säkerhet även när anslutningen till fjärrservern bryts. För att hantera sådana avbrott, har vi utvecklat en optimeringsbaserad metod för att beräkna en reservplan, anpassad till miljön runt roboten. Denna plan skickas från molnet till roboten tillsammans med varje styrkommando. Planeraren beaktar den aktuella kartan, baserad på all sensordata som samlats in fram till nu, och den nominella banan och beräknar en säker reservplan i form av en sekvens av styrkommandon. För säkerhets skull antar planeraren att i händelse av ett avbrott, kommer alla hinder i kartan att närma sig roboten med en konstant hastighet. Det gör det säkert att exekvera reservplanen om anslutningen till molnet går förlorad, utan att använda efterföljande sensordata för att uppdatera kartan. 
Den föreslagna metoden gör det alltså möjligt att maximera tiden som den mobila roboten kan fortsätta köra vid förlust av anslutning till molnservern, utan att göra avkall på säkerheten. I detta projekt har vi utformat två olika planeringsmetoder, som skiljer sig vad gäller möjligheten att avvika från den nominella banan. I den första, kallad “path following”, tillåts inte roboten att avvika från referensbanan och utför därför en säkerhetsbromsning när det finns risk för kollision. I den andra, kallad “trajectory following”, tillåts roboten avvika från referensbanan, genom att försöka fördröja det ögonblick då roboten behöver bromsa. Utvärderingen visar att vilken metod som är bäst, beror på tillämpningen som den mobila roboten används för. Dessutom överbelastar dessa tillvägagångssätt inte nätverksbandbredden, eftersom beredskapsplaner kan optimeras genom att parameterisera hastighetssekvenser eller genom att minska överföringshastigheten. / Los recientes avances en las telecomunicaciones han traído consigo nuevas herramientas en la robótica, siendo el offloading una de los desarrollos más significativos. Así, las tareas computacionalmente más costosas se realizan en un servidor en la nube en lugar de en el robot móvil, reduciendo los costos de procesamiento en el robot y mejorando su eficiencia. Sin embargo, uno de los mayores desafíos del offloading de control de robots es mantener la seguridad funcional incluso cuando la conexión con el servidor se interrumpe. Con el fin de mitigar las pérdidas de conectividad, se ha desarrollado un método basado en optimizacion que calcula un plan de contingencia dependiente del entorno. Este plan se envía desde la nube al robot junto con el comando de control correspondiente. El planificador tiene en cuenta el mapa del entorno actual, basado en todos los datos del sensor recopilados hasta el momento de la optimización, y la trayectoria nominal para proporcionar una secuencia de comandos de control seguros. En este sentido, el planificador asume que, en ausencia de conectividad, todos los objetos detectados se aproximarán al robot a una velocidad constante. Este plan de contingencia se ejecutaría en el robot solo cuando se pierde la conectividad con la nube, sin hacer uso de datos de sensor posteriores en el procesador a bordo del robot. Por lo tanto, mediante el método propuesto, se logra maximizar el tiempo de movimiento del robot móvil en caso de pérdida de conectividad con el controlador en la nube sin sacrificar las restricciones de seguridad. En este contexto, dos enfoques distintos según la posibilidad de desviarse o no de la trayectoria nominal han sido diseñados. En el primero, denominado “path following”, no se permite que se desvíe de la referencia, aplicando un frenado de seguridad cuando existe riesgo de colisión. En cambio, en “trajectory following”, se permite la desviación para tratar de prolongar el momento en el que se reduce la velocidad. La evaluación muestra que el enfoque óptimo depende de la aplicación para la cual se utilizará el robot móvil. Además, estos enfoques no sobrecargan el ancho de banda de la red, ya que los planes de contingencia pueden optimizarse parametrizando las secuencias de velocidad o reduciendo la velocidad de envío.
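A minimal sketch of the "path following" variant is given below, assuming a 2D point robot, constant-velocity obstacle prediction, and a simple stopping-distance test; the real planner solves an optimization problem over the full robot dynamics and map, so all constants and the collision check here are illustrative assumptions.

```python
# Minimal "path following" contingency sketch: the robot keeps the nominal
# path but plans, ahead of time, a speed for every step that brakes before any
# predicted conflict, assuming each detected obstacle keeps a constant velocity
# once the connection drops. Dynamics, margins, and the collision test are toy
# assumptions for illustration.
DT = 0.1          # control period [s]
A_MAX = 1.0       # maximum deceleration [m/s^2]
SAFETY = 0.5      # required clearance [m]

def predict_obstacle(obs, t):
    """Constant-velocity extrapolation of an obstacle given as (x, y, vx, vy)."""
    x, y, vx, vy = obs
    return x + vx * t, y + vy * t

def contingency_speeds(path, v_nominal, obstacles, horizon=50):
    """Return one speed per step; start braking once the stopping distance no
    longer fits inside the predicted clearance to the closest obstacle."""
    speeds, v, travelled = [], v_nominal, 0.0
    for k in range(horizon):
        t = k * DT
        px, py = path[min(int(travelled / 0.1), len(path) - 1)]  # path sampled every 0.1 m
        clearance = min(
            ((px - ox) ** 2 + (py - oy) ** 2) ** 0.5
            for ox, oy in (predict_obstacle(o, t) for o in obstacles)
        ) - SAFETY
        if v * v / (2 * A_MAX) >= clearance:    # cannot stop in time at speed v
            v = max(0.0, v - A_MAX * DT)        # brake
        speeds.append(v)
        travelled += v * DT
    return speeds

path = [(i * 0.1, 0.0) for i in range(200)]                  # straight 20 m reference path
plan = contingency_speeds(path, v_nominal=1.0,
                          obstacles=[(5.0, 0.0, -0.5, 0.0)])  # object approaching head-on
print([round(v, 1) for v in plan])
```

The "trajectory following" variant described above would additionally allow lateral deviation from the reference path, which is what lets it postpone the braking point.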
|
76 |
Análise da eficiência energética em navios mercantes e estudo de caso do consumo de combustível em navio aliviador do tipo Suezmax. / Analysis of merchant ships energy efficiency and case study of Suezmax shuttle tanker fuel consumption. Schiller, Rodrigo Achilles. 28 November 2016.
A necessidade de redução do consumo de combustíveis fósseis, devido ao cenário atual de tentar frear os efeitos do aquecimento global e de reduzir a poluição atmosférica, vem ditando uma série de transformações no setor de transporte naval. Este trabalho apresenta, inicialmente, as mudanças no âmbito normativo na questão do controle de emissões de poluentes e de eficiência de consumo de combustíveis em navios mercantes. Em seguida, com foco nas embarcações existentes, são apresentadas as principais técnicas operacionais com grande potencial de redução de consumo de combustível, destacando o método da redução da velocidade de navegação que, corretamente aplicado, tem impacto positivo tanto na redução dos custos operacionais, quanto no aumento expressivo de eficiência energética. Foi realizada uma análise numérica da variação do consumo de combustível em função da velocidade de um navio petroleiro Suezmax, adaptado para operações de alívio em plataformas do tipo FPSO em águas brasileiras. Com isso, estimou-se o potencial de aumento da eficiência energética da embarcação a partir de pequenas reduções de velocidade, e discutiu-se as possíveis aplicações desta melhoria, a partir do perfil operacional característico do navio tipo, de modo a não causar impacto econômico na operação. O estudo, ainda, avaliou a aplicação de duas metodologias numéricas diferentes, uma baseada apenas em equações de regressão, semi-empírica, e outra utilizando simulações de CFD para a estimativa de parâmetros sensíveis a forma do casco e de grande relevância para a determinação dos consumos característicos, analisando imprecisões e impactos no resultado final. / The need to reduce fossil fuel consumption, driven by the current effort to restrain the effects of global warming and to reduce air pollution, is dictating a series of transformations in shipping. This study first introduces the changes in the regulatory framework concerning pollutant emissions control and fuel consumption efficiency on merchant ships. Secondly, with a focus on existing vessels, the main operational techniques with a high potential for reducing fuel consumption are discussed, highlighting the ship speed reduction method, which, when correctly applied, has a positive impact both on reducing operating costs and on significantly increasing energy efficiency. Finally, a numerical analysis of the variation of fuel consumption with speed was carried out for a Suezmax-class oil tanker adapted for offloading operations at FPSO platforms in Brazilian waters. From this analysis, the potential increase in the vessel's energy efficiency from small speed reductions was estimated, and the possible applications of this improvement were discussed based on the typical operating profile of this type of ship, so as not to cause an economic impact on the operation. The study also evaluated the application of two different numerical methods: a semi-empirical one, based only on regression equations, and another using CFD simulations to estimate parameters that are sensitive to the hull shape and highly relevant for determining the characteristic fuel consumption, analyzing inaccuracies and their impact on the final results.
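As a back-of-the-envelope companion to the speed-reduction discussion, the sketch below applies the common cube-law approximation (propulsion power roughly proportional to speed cubed), which is a generic rule of thumb rather than the calibrated regression/CFD model developed in the study; the reference speed and daily consumption figures are assumed values.

```python
# Rough slow-steaming estimate using the cube-law rule of thumb: fuel per day
# scales with speed cubed, so fuel per nautical mile scales with speed squared.
# The 14 kn / 55 t-per-day reference point is an assumption for illustration.
def daily_fuel(v_knots, v_ref=14.0, fuel_ref_tpd=55.0):
    """Fuel per day (tonnes) scaled from a reference speed/consumption pair."""
    return fuel_ref_tpd * (v_knots / v_ref) ** 3

def voyage_fuel(distance_nm, v_knots, **kw):
    days = distance_nm / (v_knots * 24.0)
    return days * daily_fuel(v_knots, **kw)

base = voyage_fuel(1200, 14.0)    # nominal transit leg
slow = voyage_fuel(1200, 13.0)    # roughly a 7% speed reduction
print(f"fuel at 14 kn: {base:.1f} t, at 13 kn: {slow:.1f} t "
      f"({100 * (1 - slow / base):.1f}% saving)")
```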
|
77 |
Uso de comunicação V2V para o descarregamento de dados em redes celulares: uma estratégia baseada em clusterização geográfica para apoiar o sensoriamento veicular colaborativo / On the use of V2V communication for cellular data offloading: a geographic clustering-based strategy to support vehicular crowdsensing. Nunes, Douglas Fabiano de Sousa. 20 December 2018.
A incorporação das tecnologias de computação e de comunicação nos veículos modernos está viabilizando uma nova geração de automóveis conectados. Com a capacidade de se organizarem em rede, nas chamadas redes veiculares ad hoc (VANETs), eles poderão, num futuro próximo, (i) tornar o trânsito mais seguro para os motoristas, passageiros e pedestres e/ou (ii) promover uma experiência de transporte mais agradável, com maior conforto. É neste contexto que se destaca o Sensoriamento Veicular Colaborativo (VCS), um paradigma emergente e promissor que explora as tecnologias já embarcados nos próprios veículos para a obtenção de dados in loco. O VCS tem demonstrado ser um modelo auspicioso para o desenvolvimento e implantação dos Sistemas Inteligentes de Transporte (ITSs). Ocorre, todavia, que, em grandes centros urbanos, dependendo do fenômeno a ser monitorado, as aplicações de VCS podem gerar um tráfego de dados colossal entre os veículos e o centro de monitoramento. Considerando que as informações dos automóveis são geralmente enviadas para um servidor remoto usando as infraestruturas das redes móveis, o número massivo de transmissões geradas durante as atividades de sensoriamento pode sobrecarregá-las e degradar consideravelmente a Qualidade de Serviço (QoS) que elas oferecem. Este documento de tese descreve e analisa uma abordagem de clusterização geográfica que se apoia no uso de comunicações Veículo-para-Veículo (V2V) para promover o descarregamento de dados do VCS em redes celulares, de forma a minimizar os impactos supracitados. Os resultados experimentais obtidos mostraram que o uso das comunicações V2V como método complementar de aquisição de dados in loco foi capaz de diminuir consideravelmente a quantidade transmissões realizadas sobre as redes móveis, sem a necessidade de implantação de novas infraestruturas de comunicação no ambiente, e com um reduzido atraso médio adicional fim a fim na obtenção das informações. A abordagem desenvolvida também se apresenta como uma plataforma de software flexível sobre a qual podem ser incorporadas técnicas de agregação de dados, o que possibilitaria aumentar ainda mais a preservação dos recursos de uplink das redes celulares. Considerando que a era da Internet das Coisas (IoT) e das cidades inteligentes está apenas começando, soluções para o descarregamento de dados, tal como a tratada nesta pesquisa, são consideradas imprescindíveis para continuar mantendo a rede móvel de acesso à Internet operacional e capaz de suportar uma demanda de comunicação cada vez maior por parte das aplicações. / The incorporation of computing and communication technologies into modern vehicles is enabling a new generation of connected cars. With the ability to organize themselves into networks, the so-called vehicular ad hoc networks (VANETs), these vehicles might, in the near future, (i) make traffic safer for drivers, passengers, and pedestrians and/or (ii) promote a more pleasant transportation experience, with greater comfort. It is in this context that Vehicular Crowdsensing (VCS) emerges, a novel and promising paradigm for performing in loco data collection using the vehicles' embedded technologies. VCS has proved to be an auspicious scheme for the development and deployment of Intelligent Transport Systems (ITSs). However, in large urban areas, depending on the phenomenon to be monitored, VCS applications can generate colossal data traffic between vehicles and the monitoring center. 
Considering that vehicle information is generally sent to a remote server using mobile network infrastructures, the massive number of transmissions generated during the sensing activities can overload these networks and degrade the Quality of Service (QoS) they offer. This thesis describes and analyzes a geographic clustering approach that relies on Vehicle-to-Vehicle (V2V) communications to offload VCS data from cellular networks, in order to minimize the above impacts. The experimental results showed that the use of V2V communications as a complementary data acquisition method considerably reduced the number of transmissions carried out over the mobile networks, without requiring the deployment of new communication infrastructure in the environment, and with only a small additional average end-to-end delay in obtaining the information. The proposed approach also serves as a flexible software platform into which data aggregation techniques can be incorporated, further increasing the preservation of the cellular networks' uplink resources. Considering that the Internet of Things (IoT) and smart cities era is only beginning, data offloading solutions such as the one developed in this research are essential to keep the mobile Internet access network operational and able to support the ever-growing communication demand of applications.
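A minimal sketch of the geographic-clustering idea is shown below: vehicles are grouped into fixed grid cells, one cluster head per cell uploads an aggregated report over the cellular uplink, and the remaining vehicles deliver their readings to the head over V2V. The cell size, head-election rule, and vehicle positions are illustrative assumptions, not the thesis's actual protocol.

```python
from collections import defaultdict

CELL = 200.0   # geographic cell size in metres (illustrative)

def cluster(vehicles):
    """Group vehicles into fixed geographic grid cells and elect, per cell,
    the vehicle closest to the cell centre as cluster head."""
    cells = defaultdict(list)
    for vid, (x, y) in vehicles.items():
        cells[(int(x // CELL), int(y // CELL))].append(vid)
    heads = {}
    for (cx, cy), members in cells.items():
        centre = ((cx + 0.5) * CELL, (cy + 0.5) * CELL)
        heads[(cx, cy)] = min(
            members,
            key=lambda v: (vehicles[v][0] - centre[0]) ** 2
                        + (vehicles[v][1] - centre[1]) ** 2,
        )
    return cells, heads

vehicles = {f"v{i}": (37.0 * i % 900, 53.0 * i % 700) for i in range(40)}
cells, heads = cluster(vehicles)
cellular_tx = len(heads)                    # one aggregated upload per cluster
v2v_tx = len(vehicles) - len(heads)         # members report to their head via V2V
print(f"{cellular_tx} cellular uploads instead of {len(vehicles)}; "
      f"{v2v_tx} readings carried over V2V")
```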
|
78 |
Improving The Communication Performance Of I/O Intensive And Communication Intensive Application In Cluster Computer Systems. Kumar, V Santhosh. 10 1900.
Cluster computer systems assembled from commodity off-the-shelf components have emerged as a viable and cost-effective alternative to high-end custom parallel computer systems. In this thesis, we investigate how scalable performance can be achieved for database systems on clusters. In this context, we specifically consider database query processing to evaluate bottlenecks and suggest optimization techniques for obtaining scalable application performance.
First, we systematically demonstrate that in a large cluster with high disk bandwidth, the processing capability and the I/O bus bandwidth are the two major performance bottlenecks in database systems. To identify and assess bottlenecks, we developed a Petri net model of parallel query execution on a cluster. Once they are identified and assessed, we address the above two performance bottlenecks by offloading certain application-related tasks to the processor in the network interface card. Offloading application tasks to the processor in the network interface card shifts the bottleneck from the cluster processor to the I/O bus. Further, we propose a hardware scheme, network-attached disks, and a software scheme to achieve a balanced utilization of resources such as the host processor, the I/O bus, and the processor in the network interface card. The proposed schemes result in a speedup of up to 1.47 compared to the base scheme and ensure scalable performance up to 64 processors.
Encouraged by the benefits of offloading application tasks to network processors, we explore the possibility of performing Bloom filter operations in network processors. We combine the offloading of Bloom filter operations with the proposed hardware schemes to achieve up to a 50% reduction in execution time.
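For readers unfamiliar with the data structure, a plain Bloom filter is sketched below; the insert and membership-test operations shown are the kind of per-key work the thesis offloads to the network-interface processor during query processing, although the hash construction, sizing, and host-side Python here are purely illustrative.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter sketch: a compact bit array that answers membership
    queries with possible false positives but no false negatives."""

    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        # Derive k bit positions from one SHA-256 digest (illustrative choice).
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.k):
            chunk = int.from_bytes(digest[4 * i:4 * i + 4], "big")
            yield chunk % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

bf = BloomFilter()
for join_key in ("cust_17", "cust_42", "cust_99"):
    bf.add(join_key)
print("cust_42" in bf, "cust_07" in bf)   # True, and almost certainly False
```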
The latter part of the thesis presents introductory experiments conducted with the Community Atmospheric Model (CAM), a large-scale parallel application used for global weather and climate prediction. CAM is a communication-intensive application that involves collective communication of large messages. In our limited experiments, we selected CAM to study the effect of compression techniques and offloading techniques (as formulated for databases) on the performance of communication-intensive applications. Due to time constraints, we considered only compression techniques for improving application performance; offloading could be taken up as a full-fledged research problem for further investigation.
In our experiments, we found that compressing messages reduces message latencies and hence improves the execution time and scalability of the application. Without compression, the performance measured on a 64-processor cluster resulted in a speedup of only 15.6. While lossless compression retains the accuracy and correctness of the program, it does not achieve a high compression ratio. We therefore propose a lossy compression technique that achieves higher compression yet retains the accuracy and numerical stability of the application while delivering scalable performance. This leads to a speedup of 31.7 on 64 processors, compared to a speedup of 15.6 without message compression. We establish that the accuracy of CAM, within a prescribed limit of variation, and its numerical stability are retained under lossy compression.
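One common way to realize such lossy message compression is to truncate the low-order mantissa bits of the floating-point field values before applying a lossless compressor, which bounds the introduced error while exposing more redundancy. The sketch below illustrates this generic technique; it is not the specific scheme used in the thesis, and the data, kept-bit count, and compressor choice are assumptions.

```python
import random
import struct
import zlib

def truncate_mantissa(values, keep_bits=20):
    """Zero the low-order mantissa bits of IEEE-754 doubles so a subsequent
    lossless compressor finds far more redundancy; keep_bits sets the
    accuracy/compression trade-off (52 keeps the full mantissa)."""
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    out = []
    for v in values:
        bits = struct.unpack("<Q", struct.pack("<d", v))[0]
        out.append(struct.unpack("<d", struct.pack("<Q", bits & mask))[0])
    return out

random.seed(0)
field = [15.0 + random.gauss(0.0, 0.01) for _ in range(4096)]   # a smooth-ish model field
trunc = truncate_mantissa(field, keep_bits=20)

raw = struct.pack(f"<{len(field)}d", *field)
lossless = zlib.compress(raw)
lossy = zlib.compress(struct.pack(f"<{len(trunc)}d", *trunc))
max_err = max(abs(a - b) for a, b in zip(field, trunc))
print(f"raw {len(raw)} B, lossless {len(lossless)} B, "
      f"truncated+lossless {len(lossy)} B, max abs error {max_err:.1e}")
```

Because truncation only touches the least significant mantissa bits, the worst-case relative error is bounded by the number of bits dropped, which is what allows accuracy and numerical stability to stay within a prescribed limit.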
|
79 |
Gestion conjointe de ressources de communication et de calcul pour les réseaux sans fils à base de cloud / Joint communication and computation resources allocation for cloud-empowered future wireless networks. Oueis, Jessica. 12 February 2016.
Cette thèse porte sur le paradigme « Mobile Edge cloud» qui rapproche le cloud des utilisateurs mobiles et qui déploie une architecture de clouds locaux dans les terminaisons du réseau. Les utilisateurs mobiles peuvent désormais décharger leurs tâches de calcul pour qu’elles soient exécutées par les femto-cellules (FCs) dotées de capacités de calcul et de stockage. Nous proposons ainsi un concept de regroupement de FCs dans des clusters de calculs qui participeront aux calculs des tâches déchargées. A cet effet, nous proposons, dans un premier temps, un algorithme de décision de déportation de tâches vers le cloud, nommé SM-POD. Cet algorithme prend en compte les caractéristiques des tâches de calculs, des ressources de l’équipement mobile, et de la qualité des liens de transmission. SM-POD consiste en une série de classifications successives aboutissant à une décision de calcul local, ou de déportation de l’exécution dans le cloud.Dans un deuxième temps, nous abordons le problème de formation de clusters de calcul à mono-utilisateur et à utilisateurs multiples. Nous formulons le problème d’optimisation relatif qui considère l’allocation conjointe des ressources de calculs et de communication, et la distribution de la charge de calcul sur les FCs participant au cluster. Nous proposons également une stratégie d’éparpillement, dans laquelle l’efficacité énergétique du système est améliorée au prix de la latence de calcul. Dans le cas d’utilisateurs multiples, le problème d’optimisation d’allocation conjointe de ressources n’est pas convexe. Afin de le résoudre, nous proposons une reformulation convexe du problème équivalente à la première puis nous proposons deux algorithmes heuristiques dans le but d’avoir un algorithme de formation de cluster à complexité réduite. L’idée principale du premier est l’ordonnancement des tâches de calculs sur les FCs qui les reçoivent. Les ressources de calculs sont ainsi allouées localement au niveau de la FC. Les tâches ne pouvant pas être exécutées sont, quant à elles, envoyées à une unité de contrôle (SCM) responsable de la formation des clusters de calculs et de leur exécution. Le second algorithme proposé est itératif et consiste en une formation de cluster au niveau des FCs ne tenant pas compte de la présence d’autres demandes de calculs dans le réseau. Les propositions de cluster sont envoyées au SCM qui évalue la distribution des charges sur les différentes FCs. Le SCM signale tout abus de charges pour que les FCs redistribuent leur excès dans des cellules moins chargées.Dans la dernière partie de la thèse, nous proposons un nouveau concept de mise en cache des calculs dans l’Edge cloud. Afin de réduire la latence et la consommation énergétique des clusters de calculs, nous proposons la mise en cache de calculs populaires pour empêcher leur réexécution. Ici, notre contribution est double : d’abord, nous proposons un algorithme de mise en cache basé, non seulement sur la popularité des tâches de calculs, mais aussi sur les tailles et les capacités de calculs demandés, et la connectivité des FCs dans le réseau. L’algorithme proposé identifie les tâches aboutissant à des économies d’énergie et de temps plus importantes lorsqu’elles sont téléchargées d’un cache au lieu d’être recalculées. Nous proposons ensuite d’exploiter la relation entre la popularité des tâches et la probabilité de leur mise en cache, pour localiser les emplacements potentiels de leurs copies. 
La méthode proposée est basée sur ces emplacements, et permet de former des clusters de recherche de taille réduite tout en garantissant de retrouver une copie en cache. / Mobile Edge Cloud brings the cloud closer to mobile users by moving the cloud computational efforts from the Internet to the mobile edge. We adopt a local mobile edge cloud computing architecture, where small cells are empowered with computational and storage capacities. Mobile users' offloaded computational tasks are executed at the cloud-enabled small cells. We propose the concept of small cell clustering for mobile edge computing, where small cells cooperate in order to execute offloaded computational tasks. A first contribution of this thesis is the design of a multi-parameter computation offloading decision algorithm, SM-POD. The proposed algorithm consists of a series of low-complexity successive and nested classifications of computational tasks at the mobile side, leading either to local computation or to offloading to the cloud. To reach the offloading decision, SM-POD jointly considers computational task, handset, and communication channel parameters. In the second part of this thesis, we tackle the problem of small cell cluster setup for mobile edge cloud computing for both single-user and multi-user cases. The clustering problem is formulated as an optimization problem that jointly optimizes the computational and communication resource allocation, and the computational load distribution on the small cells participating in the computation cluster. We propose a cluster sparsification strategy, where we trade cluster latency for higher system energy efficiency. In the multi-user case, the optimization problem is not convex. In order to compute a clustering solution, we propose a convex reformulation of the problem, and we prove that both problems are equivalent. With the goal of finding a lower-complexity clustering solution, we propose two heuristic small cell clustering algorithms. The first algorithm is based on resource allocation on the serving small cells where tasks are received, as a first step. Then, in a second step, unserved tasks are sent to a small cell managing unit (SCM) that sets up computational clusters for the execution of these tasks. The main idea of this algorithm is task scheduling at both the serving small cell and SCM sides for higher resource allocation efficiency. The second proposed heuristic is an iterative approach in which serving small cells compute their desired clusters, without considering the presence of other users, and send their cluster parameters to the SCM. The SCM then checks for excess resource allocation at any of the network's small cells and reports any load excess to the serving small cells, which redistribute this load onto less loaded small cells. In the final part of this thesis, we propose the concept of computation caching for edge cloud computing. With the aim of reducing the edge cloud computing latency and energy consumption, we propose caching popular computational tasks to prevent their re-execution. Our contribution here is two-fold: first, we propose a caching algorithm based on request popularity, computation size, required computational capacity, and small cell connectivity. This algorithm identifies requests that, if cached and downloaded instead of being re-computed, will increase the computation caching energy and latency savings. Second, we propose a method for setting up a search cluster of small cells to find a cached copy of the requested computation. 
The clustering policy exploits the relationship between task popularity and the probability of a task being cached in order to identify possible locations of the cached copy. The proposed method reduces the search cluster size while guaranteeing a minimum cache hit probability.
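To illustrate the caching side of this work, the sketch below ranks candidate computations by the expected saving of serving them from cache (popularity times the gap between re-execution cost and download cost, normalized by result size) and fills a fixed-capacity cache greedily. The fields, numbers, and greedy policy are illustrative assumptions rather than the thesis's caching algorithm.

```python
# Toy computation-cache admission sketch. Each entry:
# name, popularity (req/h), exec_cost (J), download_cost (J), result_size (MB)
TASKS = [
    ("face_detect",  120, 4.0, 0.6, 1.2),
    ("map_render",    40, 9.0, 2.0, 8.0),
    ("ocr_page",      15, 2.5, 0.4, 0.5),
    ("video_filter",   5, 6.0, 3.0, 20.0),
]

def plan_cache(tasks, capacity_mb=10.0):
    def density(t):
        _, pop, exe, dl, size = t
        return pop * max(exe - dl, 0.0) / size   # expected saving per cached MB
    cached, used = [], 0.0
    for t in sorted(tasks, key=density, reverse=True):
        if used + t[4] <= capacity_mb:           # admit while the result still fits
            cached.append(t[0])
            used += t[4]
    return cached

print(plan_cache(TASKS))   # computations worth keeping in the 10 MB cache
```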
|