51 |
CONTENT TRADING AND PRIVACY-AWARE PRICING FOR EFFICIENT SPECTRUM UTILIZATION. Alotaibi, Faisal F. January 2019 (has links)
No description available.
|
52 |
A Delay- and Power-optimized Task Offloading using Genetic Algorithm. Nygren, Christoffer, Hellkvist, Oskar January 2022 (has links)
The Internet of Things (IoT) has ushered in the Big Data era, as IoT devices produce massive amounts of data daily. Since IoT devices have limited computational and processing capabilities, processing the data at the edge is challenging; for example, power consumption becomes problematic if data is processed on the IoT device itself. Thus, this massive data must be fed into a cloud platform for analysis. However, uploading data from IoT devices to the cloud platform introduces delay, which is a significant issue for delay-sensitive applications. This tradeoff between delay and power calls for a policy that decides whether each task should be allocated to the edge or to the cloud processing platform. Research on this subject addresses the issue frequently, and various methods have been proposed to mitigate the problem; previous studies usually focus on the edge-to-cloud computing platform, i.e., they efficiently offload computational tasks between the IoT devices and the cloud. This thesis proposes a balanced task allocation between edge and cloud computing with respect to power consumption and delay. We accomplish this by comparing different task allocation methods, benchmarking them in different scenarios, and evaluating them through a proposed mathematical model.
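A minimal sketch of the kind of genetic algorithm this abstract describes: a chromosome assigns each task to the edge (0) or the cloud (1), and fitness is a weighted sum of delay and power. The cost constants, weights, and GA parameters below are illustrative assumptions, not values from the thesis.

```python
import random

N_TASKS = 12
EDGE_DELAY, CLOUD_DELAY = 1.0, 4.0   # offloading to the cloud adds upload latency
EDGE_POWER, CLOUD_POWER = 5.0, 1.0   # local processing drains the device battery
W_DELAY, W_POWER = 0.5, 0.5          # trade-off weights between the two objectives

def cost(chromosome):
    """Weighted delay + power cost of one edge/cloud task assignment."""
    delay = sum(CLOUD_DELAY if g else EDGE_DELAY for g in chromosome)
    power = sum(CLOUD_POWER if g else EDGE_POWER for g in chromosome)
    return W_DELAY * delay + W_POWER * power

def evolve(pop_size=30, generations=60, mutation_rate=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]       # elitist selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_TASKS)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mutation_rate else g
                     for g in child]           # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
```

With these symmetric weights, each task offloaded to the cloud lowers the cost, so the GA converges toward an all-cloud assignment; shifting the weights shifts the balance.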
|
53 |
Network Resource Management Using Multi-Agent Deep Reinforcement Learning / マルチエージェント深層強化学習によるネットワーク資源管理. Suzuki, Akito 25 September 2023 (has links)
京都大学 / 新制・課程博士 / 博士(情報学) / 甲第24940号 / 情博第851号 / 新制||情||142(附属図書館) / 京都大学大学院情報学研究科通信情報システム専攻 / (主査)教授 大木 英司, 教授 原田 博司, 教授 伊藤 孝行 / 学位規則第4条第1項該当 / Doctor of Informatics / Kyoto University / DFAM
|
54 |
Computation Offloading for Real-Time Applications : Server Time Reservation for Periodic Tasks / Beräkningsavlastning för realtidsapplikationer. Tengana Hurtado, Lizzy January 2023 (has links)
Edge computing is a distributed computing paradigm where computing resources are located physically closer to the data source than in the traditional cloud computing paradigm. Edge computing enables computation offloading from resource-constrained devices to more powerful servers in the edge and cloud. To offer edge and cloud support to real-time industrial applications, the communication to the servers and the server-side computation need to be predictable. However, the predictability of offloading cannot be guaranteed in an environment where multiple devices compete for the same edge and cloud resources, due to potential server-side scheduling conflicts. To the best of our knowledge, no offloading scheme has been proposed that provides highly predictable real-time task scheduling when multiple devices offload to a set of heterogeneous edge/cloud servers. Hence, this thesis approaches the problem of predictable offloading in real-time environments by proposing a centralized server time reservation system to schedule the offloading of real-time tasks to edge and cloud servers. Our reservation system allows end-devices to request external execution time in advance for real-time tasks that will be generated in the future, so that when such a task is created, it already has a designated offloading server that guarantees its timely execution. Furthermore, this centralized reservation system can optimize the reservation scheduling strategy with the goal of minimizing the energy consumption of edge servers while meeting the stringent deadline constraints of real-time applications.
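The reservation idea above can be sketched as an admission-control scheduler: for each requested execution window, pick the lowest-energy server that can still meet the deadline, or reject the request. The greedy policy, server parameters, and energy model here are my simplifying assumptions, not the thesis's actual optimization.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    speed: float            # work units per millisecond
    power: float            # energy per millisecond of busy time
    busy_until: float = 0.0

@dataclass
class Reservation:
    task: str
    server: str
    start: float
    finish: float

def reserve(servers, task, release, deadline, work):
    """Greedily pick the lowest-energy server that can still meet the deadline."""
    best, best_energy, plan = None, float("inf"), None
    for s in servers:
        start = max(release, s.busy_until)   # server may hold earlier reservations
        runtime = work / s.speed
        finish = start + runtime
        energy = runtime * s.power
        if finish <= deadline and energy < best_energy:
            best, best_energy, plan = s, energy, (start, finish)
    if best is None:
        return None                          # admission control: reject the request
    best.busy_until = plan[1]
    return Reservation(task, best.name, plan[0], plan[1])

servers = [Server("edge-1", speed=1.0, power=1.0),
           Server("cloud-1", speed=4.0, power=5.0)]
r1 = reserve(servers, "t1", release=0, deadline=10, work=8)  # edge: cheaper, fast enough
r2 = reserve(servers, "t2", release=0, deadline=4, work=8)   # only the cloud makes this deadline
```

Because the reservation is made before the task is released, the device already knows its designated server (and guaranteed finish time) when the task is actually created.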
|
55 |
AI-enabled System Optimization with Carrier Aggregation and Task Offloading in 5G and 6G. Khoramnejad, Fahimeh 24 March 2023 (has links)
Fifth-Generation (5G) and Sixth-Generation (6G) are new global wireless standards providing everyone and everything, machines, objects, and devices, with massive network capacity. The technological advances in wireless communication enable 5G and 6G networks to support resource- and computation-hungry services such as smart agriculture and smart city applications. Among these advances are two state-of-the-art technologies: Carrier Aggregation (CA) and Multi-Access Edge Computing (MEC). CA unlocks new sources of spectrum in both the mid-band and high-band radio frequencies. It provides the unique capability of aggregating several frequency bands for higher peak rates and increases cell coverage. The latter is obtained by activating Component Carriers (CCs) in the low-band and mid-band frequencies (below 7 GHz), while the 5G high-band (above 24 GHz) delivers unprecedented peak rates with poorer Uplink (UL) coverage. MEC provides computing and storage resources with sufficient connectivity close to end users. These execution resources typically sit within or at the boundary of access networks, supporting application use cases such as Augmented Reality (AR)/Virtual Reality (VR). The key technology in MEC is task offloading, which enables a user to offload a resource-hungry application to MEC hosts to reduce the cost (in terms of energy and latency) of processing the application. This thesis focuses on using CA and task offloading in 5G and 6G wireless networks. These advanced infrastructures enable many broader use cases, e.g., autonomous driving and Internet of Things (IoT) applications. However, the pertinent problems are high-dimensional ones with combinatorial characteristics. Furthermore, the time-varying features of 5G/6G wireless networks, such as the stochastic nature of the wireless channel, must be handled concurrently.
The above challenges can be tackled by using data-driven techniques and Machine Learning (ML) algorithms to derive intelligent and autonomous resource management techniques for 5G/6G wireless networks. The resource management problems in these networks are sequential decision-making problems with conflicting objectives. Therefore, among ML algorithms, we use those based on Reinforcement Learning (RL), which constitute a promising tool for trading off the conflicting objectives of resource management in 5G/6G wireless networks. This research considers the objective of maximizing the achievable rate and minimizing the users' transmit power levels in the MEC-enabled network. Additionally, we try to simultaneously maximize network capacity and improve network coverage by activating/deactivating the CCs. Compared with schemes derived in the literature, our contributions are twofold: deriving distributed resource management schemes in 5G/6G wireless networks to efficiently manage the limited spectrum resources and meet the diverse requirements of resource-hungry applications, and developing intelligent, energy-aware algorithms to improve performance in terms of energy consumption, delay, and achievable rate.
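As a toy illustration of the RL-based rate/power trade-off described above: a tabular Q-learner picks a transmit power level per channel state, rewarded by a Shannon-style rate minus a power penalty. The channel model, power levels, and reward weight are invented for this sketch; the thesis uses far richer 5G/6G models and deep RL.

```python
import math
import random

POWERS = [0.5, 1.0, 2.0, 4.0]          # candidate transmit power levels
GAINS = {"good": 4.0, "bad": 0.5}      # channel gain in each channel state
LAMBDA = 0.4                            # weight on the power (energy) penalty

def reward(state, p):
    """Achievable rate (Shannon-style) minus a weighted power cost."""
    rate = math.log2(1.0 + GAINS[state] * p)
    return rate - LAMBDA * p

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning over (channel state, power) pairs."""
    rng = random.Random(seed)
    q = {(s, i): 0.0 for s in GAINS for i in range(len(POWERS))}
    for _ in range(episodes):
        s = rng.choice(list(GAINS))                    # channel state varies over time
        if rng.random() < eps:
            i = rng.randrange(len(POWERS))             # explore
        else:
            i = max(range(len(POWERS)), key=lambda a: q[(s, a)])  # exploit
        q[(s, i)] += alpha * (reward(s, POWERS[i]) - q[(s, i)])
    return {s: POWERS[max(range(len(POWERS)), key=lambda a: q[(s, a)])]
            for s in GAINS}

policy = train()
```

The learned policy spends more power when the channel is good and backs off when it is bad, the water-filling-like behavior one expects from this trade-off.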
|
56 |
Computational Offloading for Sequentially Staged Tasks: A Dynamic Approach Demonstrated on Aerial Imagery Analysis. Veltri, Joshua 02 February 2018 (has links)
No description available.
|
57 |
Multipath transport protocol offloading. Alfredsson, Rebecka January 2022 (has links)
Recently, we have seen an evolution of programmable network devices, making it possible to customize packet processing inside the data plane at an unprecedented level. This contrasts with traditional approaches, where networking device functionality is fixed by the ASIC and customers may wait years before vendors release new versions with the features they require. Vendors in the industry have adapted, shifting their focus to new types of network devices such as the SmartNIC, IPU, and DPU. Another major paradigm shift in networking is the move toward protocols that encrypt parts of packet headers and contents, such as QUIC. In addition, many devices such as smartphones support multiple access networks, which requires efficient multipath protocols to leverage the capabilities of several networks at once. However, for in-network protocols that require encryption, such as QUIC or Multipath QUIC, the packet processing operations for en/decryption are very resource intensive. Consequently, network vendors and operators need to accelerate and offload crypto operations to dedicated hardware in order to free CPU cycles for business-critical operations. Therefore, the aim of this study is to investigate how Multipath QUIC can be offloaded or hardware-accelerated to reduce CPU utilization on the server. Our contributions are an evaluation of frameworks, programming languages, and hardware devices in terms of crypto offloading functionality. Two packet processing offloading prototypes were designed using the DPDK framework and the programming language P4. The DPDK design was implemented and evaluated on a BlueField-2 DPU. The offloading prototype handles a major part of the packet processing and the crypto operations in order to reduce the load on the user application running on the host.
The evaluation shows that throughput decreases only slightly when larger keys are used, and it gives important insights into the need for high-performance crypto engines and/or CPUs when offloading.
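A back-of-envelope model of why such offloading frees host CPU cycles: per-packet cost splits into plain I/O processing and AEAD crypto, and moving the crypto to a DPU removes its share from the host budget. The cycle counts and rates below are illustrative assumptions, not measurements from this thesis or the BlueField-2.

```python
def host_utilization(pps, cycles_io, cycles_crypto, cpu_hz, offload):
    """Fraction of one host core consumed at a given packet rate."""
    cycles_per_packet = cycles_io + (0 if offload else cycles_crypto)
    return pps * cycles_per_packet / cpu_hz

PPS = 2_000_000            # packets per second through the server
CPU_HZ = 3_000_000_000     # one 3 GHz host core
IO_CYCLES = 300            # assumed parse/forward cost per packet
CRYPTO_CYCLES = 900        # assumed AEAD en/decrypt cost per packet

u_host = host_utilization(PPS, IO_CYCLES, CRYPTO_CYCLES, CPU_HZ, offload=False)
u_off = host_utilization(PPS, IO_CYCLES, CRYPTO_CYCLES, CPU_HZ, offload=True)
print(f"without offload: {u_host:.0%} of a core, with offload: {u_off:.0%}")
```

Under these assumptions crypto dominates the per-packet budget, so offloading it cuts host utilization from 80% to 20% of a core, which is exactly the kind of headroom the thesis aims to reclaim for the user application.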
|
58 |
An Integrated End-User Data Service for HPC Centers. Monti, Henry Matthew 16 January 2013 (has links)
The advent of extreme-scale computing systems, e.g., Petaflop supercomputers, High Performance Computing (HPC) cyber-infrastructure, Enterprise databases, and experimental facilities such as large-scale particle colliders, is pushing the envelope on dataset sizes. Supercomputing centers routinely generate and consume ever increasing amounts of data while executing high-throughput computing jobs. These are often result-datasets or checkpoint snapshots from long-running simulations, but can also be input data from experimental facilities such as the Large Hadron Collider (LHC) or the Spallation Neutron Source (SNS). These growing datasets are often processed by a geographically dispersed user base across multiple HPC installations. Moreover, end-user workflows are increasingly distributed in nature, with massive input, output, and even intermediate data often being transported to and from several HPC resources or end-users for further processing or visualization.
The growing data demands of applications, coupled with the distributed nature of HPC workflows, have the potential to place significant strain on both the storage and network resources at HPC centers. Despite this potential impact, rather than stringently managing HPC center resources, a common practice is to leave application-associated data management to the end-user, as the user is intimately aware of the application's workflow and data needs. This means end-users must frequently interact with the local storage in HPC centers, the scratch space, which is used for job input, output, and intermediate data. Scratch is built using a parallel file system that supports very high aggregate I/O throughput, e.g., Lustre, PVFS, and GPFS. To ensure efficient I/O and faster job turnaround, use of scratch by applications is encouraged. Consequently, job input and output data are required to be moved in and out of the scratch space by end-users before and after the job runs, respectively. In practice, end-users arbitrarily stage and offload data as and when they deem fit, without any consideration for the center's performance, often leaving data on the scratch long after it is needed. HPC centers resort to "purge" mechanisms that sweep the scratch space to remove files found to be no longer in use, based on their not having been accessed within a preselected time threshold, called the purge window, which commonly ranges from a few days to a week. This ad-hoc data management ignores the interactions between different users' data storage and transmission demands, and their impact on center serviceability, leading to suboptimal use of precious center resources.
To address the issues of exponentially increasing data sizes and ad-hoc data management, we present a fresh perspective on scratch storage management by fundamentally rethinking the manner in which scratch space is employed. Our approach is twofold. First, we re-design the scratch system as a "cache" and build "retention", "population", and "eviction" policies that are tightly integrated from the start, rather than being add-on tools. Second, we aim to provide and integrate the necessary end-user data delivery services, i.e., timely offloading (eviction) and just-in-time staging (population), so that the center's scratch space usage can be optimized through coordinated data movement. Together, these two combined approaches create our Integrated End-User Data Service, wherein data transfer and placement on the scratch space are scheduled with job execution. This strategy allows us to couple job scheduling with cache management, thereby bridging the gap between system software tools and scratch storage management. It enables the retention of only the relevant data for the duration it is needed. Redesigning the scratch as a cache captures the current HPC usage pattern more accurately, and better equips the scratch storage system to serve the growing datasets of workloads. This is a fundamental paradigm shift in the way scratch space has been managed in HPC centers, and outweighs providing simple purge tools to serve a caching workload. / Ph. D.
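The "scratch as a cache" idea above can be sketched as a small cache with explicit population and eviction tied to job windows: a file is retained only while a job needs it and offloaded as soon as its window passes, instead of waiting for a multi-day purge sweep. The class, its policy, and the capacity figures are my simplification of the approach, not the dissertation's implementation.

```python
class ScratchCache:
    """Toy model of scratch space managed as a cache of job data."""

    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0.0
        self.files = {}                    # name -> (size_gb, needed_until)

    def offload(self, name):
        """Eviction: move a file off scratch (e.g. back to archival storage)."""
        size, _ = self.files.pop(name)
        self.used -= size

    def tick(self, now):
        """Offload every file whose job window has already passed."""
        for name in [n for n, (_, t) in self.files.items() if t <= now]:
            self.offload(name)

    def stage(self, name, size_gb, needed_until, now):
        """Population: admit a job's input just in time, evicting to make room."""
        self.tick(now)
        # If still over capacity, evict files needed furthest in the future
        # (a Belady-like choice); a real policy would also weigh refetch cost.
        while self.used + size_gb > self.capacity and self.files:
            victim = max(self.files, key=lambda n: self.files[n][1])
            self.offload(victim)
        if self.used + size_gb > self.capacity:
            return False                   # dataset larger than the whole scratch
        self.files[name] = (size_gb, needed_until)
        self.used += size_gb
        return True
```

Coupling `needed_until` to the job scheduler's knowledge of run windows is what lets retention track actual need rather than a fixed purge threshold.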
|
59 |
SLAM-as-a-Service : An explorative study for outdoor AR applications. Ström, Felix, Fallberg, Filip January 2024 (has links)
This study investigates the feasibility and performance of SLAM (Simultaneous Localization and Mapping) as a service (SLAM-as-a-Service) for outdoor augmented reality (AR) applications. Given the rapid advancements in AR technology, integrating lightweight AR glasses with real-time SLAM capabilities poses significant challenges, particularly due to the computational demands of SLAM algorithms and the limited hardware capacity of AR devices. This study proposes a scalable SLAM-as-a-Service framework that offloads intensive computational tasks to remote servers, leveraging cloud and edge computing resources. The ORB-SLAM3 algorithm, known for its robustness and real-time processing capabilities, was adapted and implemented in a service-oriented architecture. The framework was evaluated using the EuRoC dataset to benchmark processing speed, accuracy, and round-trip time. The results indicate that while the proposed SLAM-as-a-Service model shows promise in handling high computational loads, several obstacles need to be addressed to achieve minimal round-trip time and ensure a seamless AR experience. This thesis contributes to the development of scalable and efficient AR solutions by addressing the limitations of on-device processing and highlighting the potential of cloud-based services in enhancing the performance and feasibility of AR applications in dynamic outdoor environments.
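The round-trip constraint the abstract mentions can be made concrete with a simple budget check: a frame must be uploaded, processed on the server, and its pose returned before the next frame arrives. All numbers below are illustrative assumptions, not measurements from this thesis.

```python
def round_trip_ms(frame_kb, result_kb, uplink_mbps, downlink_mbps, server_ms):
    """Total offloading latency for one frame, in milliseconds."""
    # 1 Mbit/s moves 1 kbit per ms, so (kilobytes * 8) / Mbps gives milliseconds
    up_ms = frame_kb * 8 / uplink_mbps       # upload the camera frame
    down_ms = result_kb * 8 / downlink_mbps  # return the pose/map update
    return up_ms + server_ms + down_ms

def realtime_feasible(fps, rtt_ms):
    """The round trip must finish before the next frame is captured."""
    return rtt_ms <= 1000 / fps

rtt = round_trip_ms(frame_kb=50, result_kb=1, uplink_mbps=20,
                    downlink_mbps=50, server_ms=15)
```

With these assumed numbers the round trip is about 35 ms: fine for a 20 fps pipeline (50 ms budget) but already over budget at 30 fps (33 ms), which illustrates why minimizing round-trip time is the study's central obstacle.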
|
60 |
Offloading devices for the prevention of heel pressure ulcers: A realist evaluation. Greenwood, C., Nixon, J., Nelson, E.A., McGinnis, E., Randell, Rebecca 21 June 2023 (has links)
Heel pressure ulcers can cause pain, reduce mobility, lead to longer hospital stays and, in severe cases, can lead to sepsis, amputation, and death. Offloading boots are marketed as heel pressure ulcer prevention devices, working by relieving pressure on the heel, yet there is little good-quality evidence about their clinical effectiveness. Given that evidence is not guiding the use of these devices, this study aims to explore how, when, and why these devices are used in hospital settings.
To explore how offloading devices are used to prevent heel pressure ulcers, for whom and in what circumstances.
A realist evaluation was undertaken to explore the contexts, mechanisms, and outcomes that might influence how offloading devices are implemented and used in clinical practice for the prevention of heel pressure ulcers in hospitals. Eight Tissue Viability Nurse Specialists from across the UK (England, Wales, and Northern Ireland) were interviewed. Questions sought to elicit whether, and in what ways, initial theories about the use of offloading devices fitted with interviewees' experiences.
Thirteen initial theories were refined into three programme theories about how offloading devices are used by nurses 'proactively' to prevent heel pressure ulcers and 'reactively' to treat and minimise deterioration of early-stage pressure ulcers, and about the patient factors that influence how these devices are used.
Offloading devices were used in clinical practice by all the interviewees. However, the devices were viewed as neither suitable for every patient at every point in the inpatient journey nor financially viable for universal use. The interviewees thought that identifying suitable 'at risk' patient groups who can maintain use of the devices could lead to proactive and cost-effective use. This understanding of the contexts and mechanisms that influence the effective use of offloading devices has implications for clinical practice and for the design of clinical trials of offloading devices.
How, for whom, and in what circumstances do offloading devices work to prevent heel pressure ulcers? Tissue viability nurses' perspectives. / CG conducted this review as part of her PhD at the University of Leeds, which was funded by a Charitable Grant from Leeds Hospitals Charity (https://www.leedshospitalscharity.org.uk/) and the Smith and Nephew Foundation.
|