61

Numerical Analysis of a Floating Harbor System and Comparison with Experimental Results

Kang, Heonyong 2010 May (has links)
As a comparative study, the global performance of a floating harbor system is investigated numerically for two cases and compared with experimental results: a two-body case in which a floating quay is placed next to a fixed quay (a conventional harbor), and a three-body case in which a container ship is moored between the floating quay and the fixed quay. The numerical models are built from the experimental cases. The mooring system used in the experiments is simplified to sets of linear springs, and the gaps between adjacent bodies are remarkably narrow, 1.3 m to 1.6 m, relative to the large scale of the floating structures; the water plane of the fixed quay is 480 m × 160 m, and the ship is of 15,000 TEU (twenty-foot equivalent unit) class. With these experiment-based models, numerical analysis is carried out in two domains: the frequency domain, using the three-dimensional constant panel method WAMIT, and the time domain, using CHARM3D/HARP, a coupled dynamic analysis program for moored floating structures. Following the general procedures of the two main tools, two additional calibrations are applied where necessary: revision of the external stiffness and estimation of damping coefficients. The external stiffness is revised to match the natural frequencies of the simulation with those of the experiment; the natural frequencies are identified by comparing RAOs. Damping coefficients are then estimated in the time domain to match the simulated responses with the experimental ones. After this calibration, experimental results from regular wave tests are compared with RAOs in the frequency domain, and results from an irregular wave test are compared with simulated response histories in the time domain. In addition, fender forces are compared between simulation and experiment. Based on the response histories, the relative motions of the floating quay and the container ship are compared, and the floating harbor system (the three-body case) is compared with a conventional harbor system (a fixed quay on the port side of the container ship) in terms of container-ship motions. As an additional simulation, the three-body case is investigated for an operating sea-state condition. The experimental results agree well with the numerical results obtained from the simulation tools calibrated against the experiments. Moreover, the floating harbor system shows more stable container-ship motions than the conventional harbor system, and in the operating sea state the relative motions between the floating quay and the container ship remain small enough for operation.
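As an illustration of the stiffness-revision step described above — adjusting the external (mooring-spring) stiffness so that a simulated natural frequency matches the measured one — a minimal single-degree-of-freedom sketch follows. The mass, added mass, hydrostatic stiffness, and target frequency are placeholder values, not data from the thesis.

```python
import math

def external_stiffness_for_target(m, added_mass, k_hydrostatic, f_target_hz):
    """Return the external (mooring) stiffness that places the natural
    frequency of a single-DOF mode at f_target_hz.

    Single-DOF approximation: omega_n = sqrt((k_hyd + k_ext) / (m + a)).
    """
    omega_t = 2.0 * math.pi * f_target_hz
    k_ext = (m + added_mass) * omega_t ** 2 - k_hydrostatic
    if k_ext < 0.0:
        raise ValueError("target frequency below the hydrostatic-only natural frequency")
    return k_ext

# Placeholder values (illustrative only, not from the experiment):
m, a = 5.0e7, 2.0e7   # structural mass and added mass [kg]
k_hyd = 1.2e7         # hydrostatic restoring stiffness [N/m]
f_exp = 0.08          # natural frequency observed in the experiment [Hz]

k_springs = external_stiffness_for_target(m, a, k_hyd, f_exp)
print(f"revised linear-spring stiffness: {k_springs:.3e} N/m")
```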
62

Análise estrutural de mangotes de transferência utilizando materiais compósitos e poliméricos avançados / Structural analysis of offloading hoses using advanced composite and polymeric materials

Tonatto, Maikson Luiz Passaia January 2017 (has links)
Offloading hoses have been used extensively in oil offloading operations, especially in deep water, where variable static and cyclic loads arise from the working environment. Despite the great demand for these structures, their behavior is little known and scarcely discussed in the literature because of its complexity. In addition, the materials used in this equipment can lead to a high number of failures and are often over-dimensioned, making the hose excessively heavy. This work develops a methodology for analyzing advanced polymeric materials, specifically polyaramid fibers and carbon-fiber composites, as substitutes for traditional materials, using numerical models able to predict the burst pressure of the carcasses and the radial compression (crushing) strength of the hose, together with a fatigue evaluation of the polyaramid cords of these new structures. Meso-scale models were developed using hyperelasticity and composite failure criteria to predict local stresses and strains in critical regions of the hose. Numerical analyses were performed with commercial finite-element software to support model development and carry out the calculations. Experimental tests were performed to validate the numerical models and to characterize the static and fatigue behavior of the materials involved. Two models were developed: in the first, internal pressure is applied to the hose to predict the burst pressure of the carcasses and thereby evaluate the performance of the new polyaramid reinforcement cords; in the second, a radial load is applied at the central section of the hose to predict the crushing strength and evaluate the performance of the carbon-fiber composite load-bearing component. The numerical results showed good agreement with the experimental results in most of the analyses. The new materials also show great potential to replace traditional materials, with excellent behavior under the static and dynamic loads involved in the application, yielding a significant weight reduction and improved performance over traditional hoses.
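As a rough companion to the burst-pressure model described above, a thin-walled pressure-vessel estimate gives a first-order sanity check on the hoop load the reinforcement cords must carry. The diameter, wall thickness, and strength figures below are placeholders, not values from the thesis.

```python
def hoop_stress(p_internal, inner_diameter, wall_thickness):
    """Thin-wall estimate of hoop (circumferential) stress: sigma = p * r / t."""
    return p_internal * (inner_diameter / 2.0) / wall_thickness

def burst_pressure(sigma_allow, inner_diameter, wall_thickness):
    """Internal pressure at which the hoop stress reaches the allowable stress."""
    return sigma_allow * wall_thickness / (inner_diameter / 2.0)

# Placeholder values (illustrative only):
d_i = 0.5            # inner diameter [m]
t = 0.012            # effective reinforcement wall thickness [m]
sigma_cord = 250e6   # allowable stress of the cord layer [Pa]

print(f"hoop stress at 5 MPa internal pressure: {hoop_stress(5e6, d_i, t)/1e6:.1f} MPa")
print(f"first-order burst pressure: {burst_pressure(sigma_cord, d_i, t)/1e6:.1f} MPa")
```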
64

Estudo de aplicação de ferramentas numéricas ao problema de ressonância de ondas na operação de alívio lado a lado. / Study of the application of numerical tools to the wave resonance problem in side-by-side offloading operations.

Raul Dotta 30 March 2017 (has links)
This work presents a numerical study, based on previously conducted experimental tests, of the wave-field resonance problem in side-by-side offloading operations. The hydrodynamic interference effects drastically change the wave field in confined regions, amplifying first-order motions and bringing risk to the operation. The phenomenon appears in several areas of offshore exploration and production and has been a main object of study in recent years, especially in side-by-side offloading operations, where the proximity of the hulls raises serious concerns about collision, mooring-line breakage, and the structural integrity of the fenders. In this context, because of the complexity of the problem, numerical modeling of the resonance phenomenon in commercial software must be carried out with care: the direct use of such tools produces spurious amplification of the resonant free surface, since the solution is based on potential-flow theory. The differences observed between numerical and experimental tests are caused by neglecting the dissipation of part of the resonant wave energy through effects such as viscosity, vorticity, and flow turbulence. To analyze the phenomenon correctly in numerical tests, one approach is to include adaptations in the model, namely artificial methods such as "Generalized Modes" and "Numerical Damping Zones" applied to the region between the vessels in order to damp the unrealistic surface elevations. This work therefore addresses the gap wave resonance problem by investigating the performance of two numerical tools for its prediction: WAMIT (Wave Analysis Massachusetts Institute of Technology) and the TDRPM (Time Domain Rankine Panel Method). The results are compared with data obtained from a set of small-scale model tests previously performed at the USP Numerical Test Tank laboratory (TPN). The study of the resonance phenomena is thus discussed mainly in its numerical aspect, aiming to verify the performance of WAMIT and the TDRPM.
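The over-prediction described above can be illustrated with a single-degree-of-freedom analogy: near resonance the response amplitude is controlled almost entirely by damping, so a potential-flow model that omits viscous dissipation inflates the gap elevation. The sketch below is purely illustrative and uses made-up damping ratios, not values from the thesis or from WAMIT/TDRPM.

```python
import math

def amplification(freq_ratio, damping_ratio):
    """Dynamic amplification factor of a linear 1-DOF oscillator:
    |H| = 1 / sqrt((1 - r^2)^2 + (2*zeta*r)^2), with r = omega/omega_n."""
    r, z = freq_ratio, damping_ratio
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (2.0 * z * r) ** 2)

# At resonance (r = 1) the amplification is 1/(2*zeta): with almost no
# damping (as in a pure potential-flow model) the gap elevation blows up,
# while modest artificial damping brings it down to realistic levels.
for zeta in (0.005, 0.02, 0.10):   # made-up damping ratios
    print(f"zeta = {zeta:.3f} -> amplification at resonance = {amplification(1.0, zeta):.1f}")
```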
65

Novel Application Models and Efficient Algorithms for Offloading to Clouds

González Barrameda, José Andrés January 2017 (has links)
The application offloading problem for Mobile Cloud Computing aims at improving the mobile user experience by leveraging the resources of the cloud. The execution of the mobile application is offloaded to the cloud, saving energy at the mobile device or speeding up the execution of the application. We improve the accuracy and performance of application offloading solutions in three main directions. First, we propose a novel fine-grained application model that supports complex module dependencies such as sequential, conditional, and parallel module executions. The model also allows for multiple offloading decisions that are tailored to the current application, network, or user context. As a result, the model is more precise in capturing the structure of the application and supports more complex offloading solutions. Second, we propose three cost models, namely average-based, statistics-based, and interval-based, defined for the proposed application model. The average-based approach models each module cost by its expected value, and the expected cost of the entire application is estimated considering each of the three module dependencies. The novel statistics-based cost model employs Cumulative Distribution Functions (CDFs) to represent the costs of the modules and of the mobile application, which is estimated from the costs and dependencies of the modules. This cost model opens the door to new statistics-based optimization functions and constraints, whereas the state of the art only supports optimizations based on the average running cost of the application. Furthermore, this cost model can be used to perform statistical analysis of the performance of the application in different scenarios, such as varying network data rates. The last cost model, the interval-based one, represents module costs as intervals in order to address cost uncertainty while having lower data requirements and computational complexity than the statistics-based model. The cost of the application is estimated as an expected maximum cost via a linear optimization function. Finally, we present offloading decision algorithms for each cost model. For the average-based model, we present a fast optimal dynamic programming algorithm. For the statistics-based model, we present another fast optimal dynamic programming algorithm for the scenario where the optimization function satisfies specific properties. Finally, for the interval-based cost model, we present a robust formulation that solves a linear number of linear optimization problems. Our evaluations verify the accuracy of the models and show higher cost savings for our solutions when compared to the state of the art.
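A minimal sketch of the average-based idea described above — propagating expected module costs through sequential, conditional, and parallel dependencies — assuming illustrative module costs and branch probabilities that are not taken from the thesis.

```python
from typing import List, Tuple

# Expected-cost composition for the three dependency types in the application
# model: sequential, conditional (probability-weighted branches), and parallel.

def sequential(costs: List[float]) -> float:
    return sum(costs)

def conditional(branches: List[Tuple[float, float]]) -> float:
    """branches: list of (probability, expected cost) pairs."""
    return sum(p * c for p, c in branches)

def parallel(costs: List[float]) -> float:
    # Approximation: expected parallel cost taken as the largest branch cost.
    return max(costs)

# Illustrative module costs in milliseconds (placeholders):
app_cost = sequential([
    12.0,                                    # input preprocessing
    conditional([(0.7, 40.0), (0.3, 8.0)]),  # heavy vs. light branch
    parallel([25.0, 18.0, 22.0]),            # three modules run in parallel
])
print(f"expected application cost: {app_cost:.1f} ms")
```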
66

Modelování mechanizmů vícenásobného přístupu do mobilní sítě / Modelling of Mechanisms for Multiple Access to Mobile Network

Tinka, Zdeněk January 2014 (has links)
The diploma thesis „Mechanisms modelling of multi access into mobile wireless network“ focuses on wireless networks. It contains a basic network topology for the wireless standard 802.11g and, for simulation purposes, uses key indicators of a mobile node as a function of distance together with a collision-control function. The next part of the thesis builds an LTE mobile network topology, which serves to determine the corresponding key indicators. The last part creates an offloading topology containing both the 802.11g and LTE networks. As the result, offloading algorithms are implemented that switch data traffic based on a comparison of the 802.11g and LTE key indicators. The main outputs of the thesis are automatically generated figures presenting the key statistics.
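A minimal sketch of the kind of switching rule described above — offloading traffic to 802.11g when its measured indicators beat LTE's, otherwise staying on LTE. The thresholds and metric names are illustrative assumptions, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class LinkStats:
    throughput_mbps: float   # measured downlink throughput
    delay_ms: float          # measured end-to-end delay

def choose_network(wifi: LinkStats, lte: LinkStats,
                   min_wifi_mbps: float = 5.0, max_wifi_delay_ms: float = 100.0) -> str:
    """Offload to Wi-Fi only if it is usable and beats LTE on throughput."""
    wifi_usable = (wifi.throughput_mbps >= min_wifi_mbps
                   and wifi.delay_ms <= max_wifi_delay_ms)
    if wifi_usable and wifi.throughput_mbps > lte.throughput_mbps:
        return "802.11g"
    return "LTE"

print(choose_network(LinkStats(12.0, 40.0), LinkStats(9.0, 60.0)))   # -> 802.11g
print(choose_network(LinkStats(3.0, 150.0), LinkStats(9.0, 60.0)))   # -> LTE
```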
67

Transport intermodal de données massives pour le délestage des réseaux d'infrastructure / Intermodal data transport for massive offloading of conventional data networks

Baron, Benjamin 11 October 2016 (has links)
In this thesis, we exploit the daily mobility of vehicles to create an alternative ad hoc communication medium for deploying connected services. Our objective is to draw on the many everyday trips taken by cars or public transport to overcome the limitations of conventional data networks such as the Internet. In the first part, we take advantage of the bandwidth generated by the movement of vehicles equipped with storage to offload large amounts of delay-tolerant traffic from the Internet. Data is diverted to storage devices we refer to as offloading spots, installed where vehicles usually stop long enough to transfer large amounts of data. These devices act as data relays: they store data until it is loaded onto and carried by a vehicle to the next offloading spot, where it can be dropped off for later pick-up and delivery by another vehicle. We then propose two extensions of the offloading-spot concept, both still relying on vehicle mobility. In the first extension, we exploit the storage capabilities of the offloading spots to design a cloud-like file storage and sharing service for vehicle passengers. In the second extension, we dematerialize the offloading spots into pre-defined geographic areas where large numbers of vehicles meet long enough to transfer large amounts of data. The performance evaluations of the various works conducted in this thesis show that the everyday mobility of the entities around us enables innovative services with limited reliance on conventional data networks.
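To make the "bandwidth generated by vehicle movement" concrete, here is a back-of-the-envelope throughput estimate for a single vehicle carrying storage between two offloading spots. The capacity and travel-time figures are illustrative assumptions, not measurements from the thesis.

```python
def effective_throughput_gbps(storage_tb: float, travel_time_h: float) -> float:
    """Effective data rate of one vehicle trip: carried bits / travel time."""
    bits = storage_tb * 1e12 * 8          # TB -> bits
    seconds = travel_time_h * 3600.0
    return bits / seconds / 1e9           # Gbit/s

# Illustrative figures: a 2 TB drive carried on a one-hour commute.
print(f"{effective_throughput_gbps(2.0, 1.0):.2f} Gbit/s")   # ~4.4 Gbit/s
```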
68

Offline Task Scheduling in a Three-layer Edge-Cloud Architecture

Mahjoubi, Ayeh January 2023 (has links)
Internet of Things (IoT) devices are increasingly being used everywhere, from the factory to the hospital to the house to the car. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. Thus, many obstacles need to be overcome when offloading tasks to the cloud. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of the IoT devices.  In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and look for efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. To accomplish this, we use exact approaches such as simplex to find a solution to the MILP problem. Because exact techniques such as simplex require a large amount of processing resources and a considerable amount of time to solve the problem, we propose several heuristic and meta-heuristic methods and use the simplex results as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results. Meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We therefore propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing. Compared to the heuristic algorithms, the genetic-algorithm-based method yields more accurate solutions but requires more time and resources to solve the MILP, while the simulated-annealing-based method is a better fit for the problem, producing more accurate solutions in less time than the genetic-algorithm-based method. / Internet of Things (IoT) devices are increasingly being used everywhere. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of the IoT devices.  In this thesis, the offloading problem in an edge-cloud infrastructure is modeled as a Mixed-Integer Linear Programming (MILP) problem, and efficient optimization techniques seeking to minimize the total delay of the system are employed to address it. Exact approaches are used to find a solution to the MILP problem. Because exact techniques require a large amount of processing resources and a considerable amount of time to solve the problem, several heuristic and meta-heuristic methods are proposed. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results, while meta-heuristics are slower than heuristics and may require more computation, but are more generic and capable of handling a variety of problems.
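A minimal sketch of the simulated-annealing idea applied to task placement — assigning each task to a device, edge node, or cloud node to reduce total delay — with a made-up delay function and cooling schedule that are illustrative assumptions, not the formulation used in the thesis.

```python
import math
import random

LOCATIONS = ["device", "edge", "cloud"]

# Made-up per-task delay of running a task at each location [ms].
DELAY = {
    "device": lambda t: 30.0 * t["cycles"],
    "edge":   lambda t: 8.0 * t["cycles"] + 12.0 * t["data_mb"],
    "cloud":  lambda t: 3.0 * t["cycles"] + 45.0 * t["data_mb"],
}

def total_delay(tasks, assignment):
    return sum(DELAY[loc](t) for t, loc in zip(tasks, assignment))

def simulated_annealing(tasks, iters=5000, t0=100.0, alpha=0.999):
    current = [random.choice(LOCATIONS) for _ in tasks]
    cost = total_delay(tasks, current)
    best, best_cost = current[:], cost
    temp = t0
    for _ in range(iters):
        cand = current[:]
        cand[random.randrange(len(tasks))] = random.choice(LOCATIONS)  # neighbor move
        cand_cost = total_delay(tasks, cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            current, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = current[:], cost
        temp *= alpha
    return best, best_cost

tasks = [{"cycles": random.uniform(0.5, 3.0), "data_mb": random.uniform(0.1, 2.0)}
         for _ in range(20)]
assignment, delay = simulated_annealing(tasks)
print(f"total delay of best assignment: {delay:.1f} ms")
```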
69

Belief Rule-Based Workload Orchestration in Multi-access Edge Computing

Jamil, Mohammad Newaj January 2022 (has links)
Multi-access Edge Computing (MEC) is a standard network architecture for edge computing, proposed to handle the tremendous computation demands of emerging resource-intensive and latency-sensitive applications and services and to accommodate the Quality of Service (QoS) requirements of an ever-growing number of users through computation offloading. Since end-user demand is unknown in a rapidly changing, dynamic environment, processing offloaded tasks on a non-optimal server can deteriorate QoS through high latency and increasing task failures. To deal with this challenge in MEC, a two-stage Belief Rule-Based (BRB) workload orchestrator is proposed to distribute the workload of end users to the optimal computing units, support strict QoS requirements, ensure efficient utilization of computational resources, minimize task failures, and reduce the overall service time. The proposed BRB workload orchestrator decides the optimal execution location for each task offloaded from User Equipment (UE) within the overall MEC architecture based on network conditions, computational resources, and task requirements. The EdgeCloudSim simulator is used to conduct comprehensive simulation experiments evaluating the performance of the proposed BRB orchestrator against four workload orchestration approaches from the literature with different types of applications. In these experiments, the proposed workload orchestrator outperforms the state-of-the-art workload orchestration approaches and ensures efficient utilization of computational resources while minimizing task failures and reducing the overall service time.
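A highly simplified sketch in the spirit of the rule-based orchestration described above: each rule maps observed conditions to a belief distribution over execution locations, matched rules are weighted by how well their antecedents fit the current observation, and the aggregated beliefs pick the target. The rules, attributes, and weights are invented for illustration; the actual BRB orchestrator uses evidential-reasoning aggregation and trained parameters.

```python
# Each rule: antecedent reference values (wan_quality, edge_load, task_size in [0, 1])
# and a belief distribution over the execution locations {edge, cloud}.
RULES = [
    {"ref": (0.9, 0.2, 0.3), "belief": {"edge": 0.9, "cloud": 0.1}},
    {"ref": (0.9, 0.9, 0.3), "belief": {"edge": 0.2, "cloud": 0.8}},
    {"ref": (0.2, 0.5, 0.8), "belief": {"edge": 0.7, "cloud": 0.3}},
    {"ref": (0.2, 0.9, 0.8), "belief": {"edge": 0.3, "cloud": 0.7}},
]

def activation(obs, ref):
    """Matching degree of an observation to a rule's reference point
    (simple product of per-attribute similarities)."""
    w = 1.0
    for o, r in zip(obs, ref):
        w *= max(0.0, 1.0 - abs(o - r))
    return w

def decide(obs):
    weights = [activation(obs, rule["ref"]) for rule in RULES]
    total = sum(weights) or 1.0
    combined = {"edge": 0.0, "cloud": 0.0}
    for w, rule in zip(weights, RULES):
        for loc, b in rule["belief"].items():
            combined[loc] += (w / total) * b     # weighted-sum aggregation
    return max(combined, key=combined.get), combined

# Good WAN quality, lightly loaded edge, small task -> expect "edge".
print(decide((0.8, 0.3, 0.2)))
```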
70

Edge Compute Offloading Strategies using Heuristic and Reinforcement Learning Techniques.

Dikonimaki, Chrysoula January 2023 (has links)
The emergence of 5G alongside the distributed computing paradigm called edge computing has prompted a tremendous change in the industry by offering the opportunity to reduce network latency and energy consumption and to provide scalability. Edge computing extends the capabilities of users' resource-constrained devices by placing data centers at the edge of the network. Computation offloading enables edge computing by allowing the migration of users' tasks to edge servers. Deciding whether it is beneficial for a mobile device to offload a task, and to which server, while environmental variables such as availability, load, and network quality change dynamically, is a challenging problem that requires careful consideration to achieve better performance. This project focuses on proposing lightweight and efficient algorithms for taking offloading decisions from the mobile-device perspective to benefit the user. Heuristic techniques are first examined as a way to find quick but sub-optimal solutions. These techniques are then combined with a Multi-Armed Bandit algorithm called Discounted Upper Confidence Bound (DUCB) to take optimal decisions quickly. The findings indicate that the heuristic approaches alone cannot handle the dynamicity of the problem, whereas DUCB provides the ability to adapt to changing circumstances without having to keep adding extra parameters. Overall, the DUCB algorithm performs better in terms of local energy consumption and can improve service time most of the time.
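A compact sketch of the Discounted UCB policy mentioned above, applied to choosing among candidate offloading targets treated as bandit arms. The discount factor, exploration constant, and reward definition are illustrative choices, not the thesis's exact parameterization.

```python
import math
import random

class DiscountedUCB:
    """Discounted UCB: older observations are geometrically down-weighted so the
    policy can track non-stationary rewards (e.g., changing network or server load)."""

    def __init__(self, n_arms, gamma=0.98, xi=0.6, reward_bound=1.0):
        self.gamma, self.xi, self.b = gamma, xi, reward_bound
        self.counts = [0.0] * n_arms      # discounted pull counts
        self.sums = [0.0] * n_arms        # discounted reward sums

    def select(self):
        for i, c in enumerate(self.counts):   # pull each arm once first
            if c == 0.0:
                return i
        n_total = sum(self.counts)
        def index(i):
            mean = self.sums[i] / self.counts[i]
            pad = 2.0 * self.b * math.sqrt(self.xi * math.log(n_total) / self.counts[i])
            return mean + pad
        return max(range(len(self.counts)), key=index)

    def update(self, arm, reward):
        for i in range(len(self.counts)):     # discount everything seen so far
            self.counts[i] *= self.gamma
            self.sums[i] *= self.gamma
        self.counts[arm] += 1.0
        self.sums[arm] += reward

# Toy use: three offloading targets whose quality drifts over time.
policy = DiscountedUCB(n_arms=3)
for t in range(500):
    arm = policy.select()
    drift = 0.3 if t > 250 and arm == 2 else 0.0          # target 2 improves later
    reward = min(1.0, max(0.0, random.gauss(0.5 + drift - 0.1 * arm, 0.1)))
    policy.update(arm, reward)
print("discounted pull counts:", [round(c, 1) for c in policy.counts])
```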
