1

GePSeA: A General-Purpose Software Acceleration Framework for Lightweight Task Offloading

Singh, Ajeet 14 August 2009 (has links)
Hardware-acceleration techniques continue to be used to boost the performance of scientific codes. To do so, software developers identify portions of these codes that are amenable to offloading and map them to hardware accelerators. However, offloading such tasks to specialized hardware accelerators is non-trivial, and these accelerators can add significant cost to a computing system. Consequently, this thesis proposes a framework called GePSeA (General Purpose Software Acceleration Framework), which uses a small fraction of the computational power on multi-core architectures to offload complex application-specific tasks. Specifically, GePSeA provides a lightweight process that acts as a helper agent to the application by executing application-specific tasks asynchronously and efficiently. GePSeA is not meant to replace hardware accelerators but to extend them. GePSeA provides several utilities called core components that offload tasks onto the core, or to special-purpose hardware when available, in a way that is transparent to the application. Examples of such core components include a reliable communication service, distributed lock management, global memory management, dynamic load distribution, and network protocol processing. We then apply the GePSeA framework to two applications, namely mpiBLAST, an open-source computational biology application, and a Reliable Blast UDP (RBUDP)-based file transfer application. We observe significant speed-up for both applications. / Master of Science
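As an illustration of the helper-agent idea described in this abstract, the following Python sketch shows a lightweight process that consumes offloaded tasks asynchronously from a queue so the main computation never blocks on them. This is not the actual GePSeA API; the task type and checksum utility are hypothetical stand-ins for the framework's core components.

```python
# Illustrative sketch only (not the GePSeA API): a "helper agent" process
# executes application-specific tasks asynchronously; in GePSeA such an agent
# occupies a small fraction of the cores on a multi-core node.
import multiprocessing as mp
import zlib

def helper_agent(task_queue, result_queue):
    """Executes offloaded tasks until it receives the None sentinel."""
    while True:
        task = task_queue.get()
        if task is None:              # shutdown sentinel
            break
        kind, payload = task
        if kind == "checksum":        # hypothetical offloadable utility
            result_queue.put((kind, zlib.crc32(payload)))

if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    agent = mp.Process(target=helper_agent, args=(tasks, results))
    agent.start()
    tasks.put(("checksum", b"block of application data"))  # asynchronous offload
    # ... the main computation continues here without blocking ...
    print(results.get())              # collect the result when it is needed
    tasks.put(None)
    agent.join()
```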
2

Distributed Orchestration Framework for Fog Computing

Rahafrouz, Amir January 2019 (has links)
The rise of IoT-based systems is making an impact on our daily lives and environment. Fog Computing is a paradigm that processes IoT data at the first hop of the access network instead of in distant clouds, and it promises many new applications. However, a mature framework for fog computing is still lacking. In this study, we propose an approach for monitoring fog nodes in a distributed system using the FogFlow framework. We extend the functionality of FogFlow by adding monitoring of Docker containers using cAdvisor, and we use Prometheus to collect and aggregate the distributed data. The monitoring data of the entire distributed system of fog nodes is accessed via the Prometheus API. Furthermore, the monitoring data is used to rank fog nodes in order to choose where to place serverless functions (Fog Functions). The ranking mechanism uses the Analytic Hierarchy Process (AHP) to place a fog function according to the resource utilization and saturation of the fog nodes' hardware. Finally, an experimental test bed is set up with an image-processing application that detects faces. The effect of our ranking approach on the Quality of Service is measured and compared to the current FogFlow.
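To make the ranking step concrete, here is a minimal Python sketch of AHP-style scoring: criteria weights are derived from a pairwise comparison matrix and applied to per-node utilization metrics. The criteria, pairwise judgements, and metric values are illustrative assumptions, not the thesis's actual configuration.

```python
# Minimal AHP-style ranking sketch (assumed criteria and pairwise judgements).
import numpy as np

# Criteria: CPU utilization, memory utilization, saturation (lower is better).
# Pairwise comparison matrix on Saaty's 1-9 scale (hypothetical judgements).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# Approximate the principal eigenvector with the geometric-mean method.
w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
w /= w.sum()                                   # criteria weights

# Per-node metrics, e.g. scraped from cAdvisor via Prometheus (hypothetical values).
nodes = {"fog-1": [0.35, 0.50, 0.10],
         "fog-2": [0.80, 0.60, 0.40],
         "fog-3": [0.20, 0.30, 0.05]}

# Lower weighted cost = better placement candidate for the fog function.
scores = {name: float(np.dot(w, metrics)) for name, metrics in nodes.items()}
print(min(scores, key=scores.get))             # e.g. "fog-3"
```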
3

Network Resource Management Using Multi-Agent Deep Reinforcement Learning / マルチエージェント深層強化学習によるネットワーク資源管理

Suzuki, Akito 25 September 2023 (has links)
Kyoto University / New system, course-based doctorate / Doctor of Informatics / Dissertation No. 24940 (Kou) / Informatics No. 851 (Johaku) / Call number 新制||情||142 (University Library) / Department of Communications and Computer Engineering, Graduate School of Informatics, Kyoto University / (Chief Examiner) Professor Eiji Oki, Professor Hiroshi Harada, Professor Takayuki Ito / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
4

AI-enabled System Optimization with Carrier Aggregation and Task Offloading in 5G and 6G

Khoramnejad, Fahimeh 24 March 2023 (has links)
Fifth-Generation (5G) and Sixth-Generation (6G) are new global wireless standards that provide everyone and everything, machines, objects, and devices, with massive network capacity. Technological advances in wireless communication enable 5G and 6G networks to support resource- and computation-hungry services such as smart agriculture and smart city applications. Among these advances are two state-of-the-art technologies: Carrier Aggregation (CA) and Multi-access Edge Computing (MEC). CA unlocks new sources of spectrum in both the mid-band and high-band radio frequencies. It provides the unique capability of aggregating several frequency bands for higher peak rates and increases cell coverage. The latter is obtained by activating Component Carriers (CCs) in the low-band and mid-band frequencies (below 7 GHz), while the 5G high-band (above 24 GHz) delivers unprecedented peak rates but poorer Uplink (UL) coverage. MEC provides computing and storage resources with sufficient connectivity close to end users. These execution resources are typically located within or at the boundary of access networks and support application use cases such as Augmented Reality (AR) and Virtual Reality (VR). The key technology in MEC is task offloading, which enables a user to offload a resource-hungry application to MEC hosts to reduce the cost, in terms of energy and latency, of processing the application. This thesis focuses on using CA and task offloading in 5G and 6G wireless networks. These advanced infrastructures enable many broader use cases, e.g., autonomous driving and Internet of Things (IoT) applications. However, the pertinent resource management problems are high-dimensional and combinatorial, and the time-varying features of 5G/6G wireless networks, such as the stochastic nature of the wireless channel, must be handled concurrently. These challenges can be tackled with data-driven techniques and Machine Learning (ML) algorithms that derive intelligent and autonomous resource management schemes for 5G/6G wireless networks. The resource management problems in these networks are sequential decision-making problems with conflicting objectives. Therefore, among ML algorithms, we use those based on Reinforcement Learning (RL), which constitute a promising tool for trading off the conflicting objectives of resource management in 5G/6G wireless networks. This research considers the objective of maximizing the achievable rate and minimizing the users' transmit power levels in the MEC-enabled network. Additionally, we try to simultaneously maximize network capacity and improve network coverage by activating/deactivating CCs. Compared with schemes derived in the literature, our contributions are twofold: deriving distributed resource management schemes in 5G/6G wireless networks that efficiently manage the limited spectrum resources and meet the diverse requirements of resource-hungry applications, and developing intelligent and energy-aware algorithms that improve performance in terms of energy consumption, delay, and achievable rate.
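As a rough illustration of the kind of trade-off such an RL agent optimizes, the sketch below defines a toy reward that grows with the Shannon rate summed over aggregated component carriers and shrinks with transmit power. The weights, bandwidths, channel gain, and normalization constants are assumptions for illustration, not the thesis's formulation.

```python
# Hedged sketch of a rate-vs-power reward over aggregated component carriers
# (all numeric values are illustrative).
import math

def achievable_rate(active_ccs, p_tx_w, gain, noise_w=1e-12):
    """Shannon rate (bit/s) summed over active component carriers; bandwidths in Hz."""
    return sum(bw * math.log2(1.0 + p_tx_w * gain / noise_w) for bw in active_ccs)

def reward(active_ccs, p_tx_w, gain, w_rate=1.0, w_power=0.5,
           rate_ref=1e9, p_max_w=0.2):
    """Higher reward for more achievable rate, lower reward for more transmit power."""
    rate = achievable_rate(active_ccs, p_tx_w, gain)
    return w_rate * (rate / rate_ref) - w_power * (p_tx_w / p_max_w)

# Example: two aggregated 100 MHz carriers at 0.1 W transmit power.
print(reward(active_ccs=[100e6, 100e6], p_tx_w=0.1, gain=1e-7))
```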
5

Offline Task Scheduling in a Three-layer Edge-Cloud Architecture

Mahjoubi, Ayeh January 2023 (has links)
Internet of Things (IoT) devices are increasingly being used everywhere, from factories to hospitals to homes to cars. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks, and many obstacles must be overcome when offloading tasks to the cloud. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices.  In this thesis, we model the offloading problem in an edge-cloud infrastructure as a Mixed-Integer Linear Programming (MILP) problem and look for efficient optimization techniques to tackle it, aiming to minimize the total delay of the system after completing all tasks of all services requested by all users. To accomplish this, we use exact approaches such as the simplex method to find a solution to the MILP problem. Because exact techniques such as simplex require substantial processing resources and a considerable amount of time, we propose several heuristic and meta-heuristic methods to solve the problem and use the simplex results as a benchmark to evaluate them. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results. Meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems. We therefore propose two meta-heuristic approaches, one based on a genetic algorithm and the other on simulated annealing. Compared to the heuristic algorithms, the genetic-algorithm-based method yields a more accurate solution but requires more time and resources to solve the MILP, while the simulated-annealing-based method is a better fit for the problem since it produces more accurate solutions in less time than the genetic-algorithm-based method. / Internet of Things (IoT) devices are increasingly being used everywhere. IoT devices typically have limited processing resources, so they must rely on cloud servers to accomplish their tasks. In practice, an excessive amount of data must be transferred between IoT devices and the cloud, resulting in issues such as slow processing, high latency, and limited bandwidth. As a result, the concept of edge computing was developed to place compute nodes closer to the end users. Because of the limited resources available at the edge nodes, tasks must be optimally scheduled between IoT devices, edge nodes, and cloud nodes to meet the needs of IoT devices.  In this thesis, the offloading problem in an edge-cloud infrastructure is modeled as a Mixed-Integer Linear Programming (MILP) problem, and efficient optimization techniques seeking to minimize the total delay of the system are employed to address it. To accomplish this, exact approaches are used to find a solution to the MILP problem. Because exact techniques require substantial processing resources and a considerable amount of time, several heuristic and meta-heuristic methods are proposed. Heuristics are quick and generate workable solutions in certain circumstances, but they cannot guarantee optimal results, while meta-heuristics are slower than heuristics and may require more computation, but they are more generic and capable of handling a variety of problems.
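The following Python sketch illustrates the simulated-annealing idea on a toy version of the placement problem: assignments of tasks to the IoT, edge, and cloud layers are perturbed one task at a time, and worse assignments are accepted with a temperature-dependent probability. The delay model, capacities, and parameters are invented for illustration and are not the thesis's MILP formulation.

```python
# Toy simulated-annealing sketch for three-layer task placement
# (delay model and capacities are illustrative assumptions).
import math
import random

random.seed(0)
TASKS = 20
NODES = ["iot", "edge", "cloud"]
DELAY = {"iot": 1.0, "edge": 0.4, "cloud": 0.7}    # per-task delay in seconds (toy values)
CAPACITY = {"iot": 5, "edge": 8, "cloud": TASKS}   # max tasks each layer can host

def total_delay(assignment):
    """Objective: sum of per-task delays; infeasible assignments get infinite cost."""
    if any(assignment.count(n) > CAPACITY[n] for n in NODES):
        return float("inf")
    return sum(DELAY[n] for n in assignment)

def simulated_annealing(t0=1.0, cooling=0.995, steps=5000):
    current = ["cloud"] * TASKS                    # feasible starting point
    best = list(current)
    t = t0
    for _ in range(steps):
        candidate = list(current)
        candidate[random.randrange(TASKS)] = random.choice(NODES)   # neighbour move
        delta = total_delay(candidate) - total_delay(current)
        if delta < 0 or random.random() < math.exp(-delta / t):     # Metropolis rule
            current = candidate
            if total_delay(current) < total_delay(best):
                best = list(current)
        t *= cooling
    return best, total_delay(best)

assignment, delay = simulated_annealing()
print(delay)            # approaches the optimum of 11.6 s for these toy numbers
```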
6

Belief Rule-Based Workload Orchestration in Multi-access Edge Computing

Jamil, Mohammad Newaj January 2022 (has links)
Multi-access Edge Computing (MEC) is a standard network architecture for edge computing, proposed to handle the tremendous computation demands of emerging resource-intensive and latency-sensitive applications and services and to accommodate the Quality of Service (QoS) requirements of ever-growing numbers of users through computation offloading. Since the demand of end users is unknown in a rapidly changing dynamic environment, processing offloaded tasks on a non-optimal server can deteriorate QoS due to high latency and increasing task failures. To deal with this challenge in MEC, a two-stage Belief Rule-Based (BRB) workload orchestrator is proposed to distribute the workload of end users to the optimum computing units, support strict QoS requirements, ensure efficient utilization of computational resources, minimize task failures, and reduce the overall service time. The proposed BRB workload orchestrator decides the optimal execution location for each task offloaded from User Equipment (UE) within the overall MEC architecture based on network conditions, computational resources, and task requirements. The EdgeCloudSim simulator is used to conduct comprehensive simulation experiments that evaluate the performance of the proposed BRB orchestrator against four workload orchestration approaches from the literature with different types of applications. Based on the simulation experiments, the proposed workload orchestrator outperforms state-of-the-art workload orchestration approaches and ensures efficient utilization of computational resources while minimizing task failures and reducing the overall service time.
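As a rough sketch of how belief rules can drive an offloading decision, the toy Python example below activates rules by matching crisp inputs (WAN delay, edge utilization) against referential values and aggregates their belief distributions over execution locations. This is a heavily simplified stand-in for the two-stage BRB orchestrator with evidential reasoning used in the thesis; all rule values and thresholds are hypothetical.

```python
# Simplified belief-rule sketch (hypothetical rules; not the thesis's rule base).
# Each rule maps referential values of the antecedents to a belief distribution
# over execution locations.
RULES = [
    # (wan_delay_ref, edge_util_ref, {location: belief})
    ("low",  "low",  {"edge": 0.9, "cloud": 0.1}),
    ("low",  "high", {"edge": 0.2, "cloud": 0.8}),
    ("high", "low",  {"edge": 0.6, "cloud": 0.4}),
    ("high", "high", {"edge": 0.1, "cloud": 0.9}),
]

def match(value, ref, midpoint):
    """Degree to which a crisp input matches a 'low'/'high' referential value."""
    degree = min(max(value / midpoint, 0.0), 1.0)       # 0 -> fully low, 1 -> fully high
    return degree if ref == "high" else 1.0 - degree

def decide(wan_delay_ms, edge_util, delay_mid=100.0, util_mid=0.5):
    belief = {"edge": 0.0, "cloud": 0.0}
    for d_ref, u_ref, consequent in RULES:
        # Activation weight of the rule = product of antecedent matching degrees.
        weight = match(wan_delay_ms, d_ref, delay_mid) * match(edge_util, u_ref, util_mid)
        for location, b in consequent.items():
            belief[location] += weight * b
    return max(belief, key=belief.get)

print(decide(wan_delay_ms=30.0, edge_util=0.2))   # expected: "edge"
print(decide(wan_delay_ms=180.0, edge_util=0.8))  # expected: "cloud"
```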
7

Safety-Oriented Task Offloading for Human-Robot Collaboration : A Learning-Based Approach / Säkerhetsorienterad Uppgiftsavlastning för Människa-robotkollaboration : Ett Inlärningsbaserat Tillvägagångssätt

Ruggeri, Franco January 2021 (has links)
In Human-Robot Collaboration scenarios, safety must be ensured by a risk management process that requires the execution of computationally expensive perception models (e.g., based on computer vision) in real time. However, robots usually have constrained hardware resources that hinder timely responses, resulting in unsafe operations. Although Multi-access Edge Computing allows robots to offload complex tasks to servers on the network edge to meet real-time requirements, this might not always be possible due to dynamic changes in the network that can cause congestion or failures. This work proposes a safety-based task offloading strategy to address this problem. The goal is to use edge resources intelligently to reduce delays in the risk management process and consequently enhance safety. More specifically, depending on safety and network metrics, a Reinforcement Learning (RL) solution is implemented to decide whether a less accurate model should run locally on the robot or a more complex one should run remotely on the network edge. A third possibility is to reuse the previous output after verifying its temporal coherence. Experiments are performed in a simulated warehouse scenario where humans and robots interact closely. Results show that the proposed RL solution outperforms the baselines in several respects. First, the edge is used only when network performance is good, reducing the number of failures (by up to 47%). Second, the latency is adapted to the safety requirements (risk × latency reduced by up to 48%), avoiding unnecessary network congestion in safe situations and letting other robots in hazardous situations use the edge. Overall, the latency of the risk management process is largely reduced (by up to 68%), and this positively affects safety (time in the safe zone increased by up to 3.1%). / I scenarier med människa-robotkollaboration måste säkerheten säkerställas via en riskhanteringsprocess. Denna process kräver exekvering av beräkningstunga uppfattningsmodeller (t.ex. datorseende) i realtid. Robotar har vanligtvis begränsade hårdvaruresurser vilket förhindrar att respons uppnås i tid, vilket resulterar i osäkra operationer. Även om Multi-access Edge Computing tillåter robotar att avlasta komplexa uppgifter till servrar på edge, för att möta realtidskraven, så är detta inte alltid möjligt på grund av dynamiska förändringar i nätverket som kan skapa överbelastning eller fel. Detta arbete föreslår en säkerhetsbaserad uppgiftsavlastningsstrategi för att hantera detta problem. Målet är att intelligent använda edge-resurser för att minska förseningar i riskhanteringsprocessen och följaktligen öka säkerheten. Mer specifikt, beroende på säkerhet och nätverksmätvärden, implementeras en Reinforcement Learning (RL) lösning för att avgöra om en modell med mindre noggrannhet ska köras lokalt eller om en mer komplex ska köras avlägset på edge. En tredje möjlighet är att återanvända sista utmatningen genom verifiering av tidsmässig koherens. Experimenten utförs i ett simulerat varuhusscenario där människor och robotar har nära interaktioner. Resultaten visar att den föreslagna RL-lösningen överträffar baslinjerna i flera aspekter. För det första används edge bara när nätverkets prestanda är bra, vilket reducerar antal fel (upp till 47%). För det andra anpassas latensen också till säkerhetskraven (risk X latens reducering upp till 48%), undviker onödig överbelastning i nätverket i säkra situationer och låter andra robotar i farliga situationer använda edge. 
I det stora hela reduceras latensen av riskhanterings processen kraftigt (upp till 68%) och påverkar på ett positivt sätt säkerheten (tiden i säkerhetszonen ökas upp till 4%).
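To illustrate the learning-based decision described in this abstract, the following Python sketch trains a tabular, contextual-bandit-style Q-table over the three actions (run the lighter model locally, offload to the edge, or reuse the previous output) across a handful of discretized network/risk states. The state encoding, latency and accuracy numbers, and reward shaping are assumptions for illustration, not the thesis's formulation.

```python
# Toy Q-value sketch for the local / edge / reuse offloading decision
# (all numeric values are illustrative assumptions).
import random

random.seed(0)
ACTIONS = ["local", "edge", "reuse"]      # lighter model locally, offload, reuse previous output
STATES = [(net, risk) for net in ("good", "bad") for risk in ("low", "high")]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    """Toy reward: penalise latency, and value perception accuracy more when risk is high."""
    net, risk = state
    latency = {"local": 0.3, "edge": 0.1 if net == "good" else 0.9, "reuse": 0.05}[action]
    accuracy = {"local": 0.7, "edge": 0.95 if net == "good" else 0.0, "reuse": 0.6}[action]
    risk_weight = 2.0 if risk == "high" else 0.5
    return -latency + risk_weight * accuracy

alpha, epsilon = 0.1, 0.2
for _ in range(20000):                     # epsilon-greedy, contextual-bandit-style updates
    s = random.choice(STATES)
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))   # learned decision per state
```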
