  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

Energy efficient cloud computing based radio access networks in 5G : design and evaluation of an energy aware 5G cloud radio access networks framework using base station sleeping, cloud computing based workload consolidation and mobile edge computing

Sigwele, Tshiamo January 2017 (has links)
Fifth Generation (5G) cellular networks will experience a thousand-fold increase in data traffic, with over 100 billion connected devices by 2020. To support this skyrocketing traffic demand, smaller base stations (BSs) are deployed to increase capacity. However, more BSs increase energy consumption, which contributes to operational expenditure (OPEX) and CO2 emissions. In addition, the plethora of 5G applications running on mobile devices causes significant energy consumption in the devices themselves. This thesis presents a novel framework for energy efficiency in 5G cloud radio access networks (C-RAN) by leveraging cloud computing technology. Energy efficiency is achieved in three ways: (i) at the radio side of the H-C-RAN (Heterogeneous C-RAN), a dynamic BS switching-off algorithm is proposed to minimise energy consumption while maintaining Quality of Service (QoS); (ii) in the BS cloud, baseband workload consolidation schemes based on simulated annealing and genetic algorithms are proposed to minimise energy consumption in the cloud, where an advanced fuzzy-based admission control scheme with pre-emption is also implemented to improve QoS and resource utilisation; and (iii) at the mobile device side, Mobile Edge Computing (MEC) is used, whereby compute-intensive tasks from the mobile device are executed on the MEC server in the cloud. The simulation results show that the proposed framework reduced energy consumption by up to 48% within the RAN and 57% in the mobile devices, and improved network energy efficiency by a factor of 10, network throughput by a factor of 2.7 and resource utilisation by 54%, while maintaining QoS.
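The cloud-side consolidation step can be illustrated with a generic simulated-annealing sketch: pack baseband workloads onto as few active servers as possible so the rest can sleep. This is not the thesis's implementation; the loads, capacity, and cooling schedule below are invented for illustration.

```python
import math
import random

def active_servers(assign):
    # objective: number of distinct servers still powered on
    return len(set(assign))

def feasible(assign, loads, cap):
    # no server may exceed its normalised capacity
    used = {}
    for srv, load in zip(assign, loads):
        used[srv] = used.get(srv, 0.0) + load
    return all(u <= cap + 1e-9 for u in used.values())

def consolidate(loads, n_servers, cap, t0=5.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    assign = list(range(len(loads)))      # start with one workload per server
    best, t = assign[:], t0
    for _ in range(steps):
        cand = assign[:]
        cand[rng.randrange(len(loads))] = rng.randrange(n_servers)  # move one workload
        if feasible(cand, loads, cap):
            delta = active_servers(cand) - active_servers(assign)
            # accept improvements always; accept worsening moves with Boltzmann probability
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                assign = cand
                if active_servers(assign) < active_servers(best):
                    best = assign[:]
        t *= cooling
    return best

loads = [0.5, 0.4, 0.3, 0.2, 0.2, 0.1]    # illustrative normalised baseband workloads
packed = consolidate(loads, n_servers=6, cap=1.0)
print(active_servers(packed))             # servers left active; the rest can sleep
```

Starting from one workload per server, downhill merges are always accepted, so the schedule converges to a packing using only two or three servers for this load set.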
92

Adaptive Two-Stage Edge-Centric Architecture for Deeply-Learned Embedded Real-Time Target Classification in Aerospace Sense-and-Avoidance Applications

Speranza, Nicholas A. 26 May 2021 (has links)
No description available.
93

EDGE COMPUTING APPROACH TO INDOOR TEMPERATURE PREDICTION USING MACHINE LEARNING

Hyemin Kim (11565625) 22 November 2021 (has links)
<p>This paper presents a novel approach to real-time indoor temperature forecasting under building energy consumption constraints, utilizing computing resources available at the edge of a network, close to the data sources. This work was inspired by the irreversible effects of global warming, accelerated by greenhouse gas emissions from burning fossil fuels. Since human activities have a heavy impact on global energy use, it is of utmost importance to reduce the amount of energy consumed in every possible scenario where humans are involved. According to the US Environmental Protection Agency (EPA), one of the biggest greenhouse gas sources is commercial and residential buildings, which accounted for 13 percent of 2019 greenhouse gas emissions in the United States. In this context, it is assumed that information about the building environment, such as indoor temperature and humidity, and predictions based on that information can contribute to more accurate and efficient regulation of indoor heating and cooling systems. When it comes to indoor temperature, distributed IoT devices in buildings can enable more accurate temperature forecasting and eventually help building administrators regulate the temperature in an energy-efficient way, without damaging indoor environment quality. While IoT technology shows potential as a complement to HVAC control systems, the majority of existing IoT systems integrate a remote cloud to transfer and process all data from IoT sensors. Instead, the proposed IoT system incorporates the concept of edge computing by utilizing small computing devices in close proximity to the sensors where the data are generated, to overcome the problems of the traditional cloud-centric IoT architecture. In addition, since the microcontroller at the edge supports computing, the machine learning based prediction of indoor temperature is performed on the microcomputer and transferred to the cloud for further processing. The machine learning algorithm used for prediction, an artificial neural network (ANN), is evaluated based on error metrics and compared with simple prediction models.</p>
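As a rough illustration of the edge-side prediction idea, the sketch below trains a single linear autoregressive neuron (a minimal stand-in for the paper's ANN) on synthetic temperature readings and forecasts the next value, alongside a naive persistence baseline. All data and hyperparameters are invented.

```python
import random

def make_series(n=200, seed=1):
    # synthetic indoor temperature readings: slow drift plus noise
    # (a stand-in for real sensor data)
    rng = random.Random(seed)
    temp, series = 21.0, []
    for _ in range(n):
        temp += rng.uniform(-0.05, 0.06)
        series.append(temp)
    return series

def forecast_next(series, lags=3, lr=0.05, epochs=200):
    # one linear autoregressive neuron trained by SGD: dev[k] ~ w . dev[k-lags:k] + b
    mu = sum(series) / len(series)
    dev = [s - mu for s in series]        # centre the data so SGD stays stable
    w, b = [0.0] * lags, 0.0
    for _ in range(epochs):
        for i in range(lags, len(dev)):
            x, y = dev[i - lags:i], dev[i]
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    x = dev[-lags:]
    return mu + b + sum(wi * xi for wi, xi in zip(w, x))

series = make_series()
pred = forecast_next(series)
naive = series[-1]                        # simple baseline: persistence (last reading)
print(round(pred, 2), round(naive, 2))
```

In a real edge deployment, `forecast_next` would run on the microcontroller and only the forecast would be sent upstream, rather than the full sensor stream.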
94

Flexible duplexing and resource optimization in small cell networks

Elbamby, M. S. (Mohammed S.) 22 November 2019 (has links)
Abstract Next-generation networks are set to support high data rates, low latency, high reliability, and diverse types of services and use cases. These requirements come at the expense of more complex network management and asymmetric, time-varying traffic dynamics. Accordingly, future networks will operate with different duplexing modes and multiple access techniques. This thesis proposes novel transmission strategies and methodologies to dynamically optimize the duplexing modes and allocate resources in small cell based cellular networks. The first part of the thesis studies dynamic time-division duplex (TDD) operation under dynamic and asymmetric uplink (UL) and downlink (DL) traffic conditions. In this regard, we propose a dynamic TDD framework that optimizes the UL and DL frame configuration and power allocation. Due to the high interference coupling between neighboring small cells, we propose a load-aware clustering method that groups the small cell base stations (SBSs) based on their spatial and load similarities. To balance the UL and DL loads within each cluster, we study the potential of load-based UL/DL decoupled user association. In the second part, we study the problem of half-duplex (HD)/full-duplex (FD) mode selection and UL/DL resource and power optimization in small cell networks. Therein, SBSs operate with non-orthogonal multiple access (NOMA) in both UL and DL to schedule multiple users on the same time-frequency resource. The goal of the study is to select the optimal duplexing and multiple access scheme, based on the traffic load and interference conditions, such that users' data rates are maximized while traffic queues are stabilized. Finally, the last part of the thesis looks beyond rate maximization and focuses on ensuring low latency and high reliability in small cell networks providing edge computing services.
The problem of distributing wireless resources to users requesting edge computing tasks is cast as a delay minimization problem under stringent reliability constraints. The study investigates the role of proactive computing in ensuring low-latency edge computing, while the concept of hedged requests is presented as an enabler for computing service reliability.
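The hedged-request idea mentioned at the end — duplicating a slow request to a second replica and taking whichever answer arrives first — can be sketched as follows; the replica names and latencies are invented for illustration.

```python
import queue
import threading
import time

def compute_replica(name, latency, result_q):
    # stand-in for an edge-compute task running on one replica
    time.sleep(latency)
    result_q.put(name)

def hedged_request(replicas, hedge_after=0.05):
    """Fire the primary; if no reply within hedge_after seconds, fire a backup too."""
    result_q = queue.Queue()
    primary = replicas[0]
    threading.Thread(target=compute_replica, args=(*primary, result_q),
                     daemon=True).start()
    try:
        return result_q.get(timeout=hedge_after)  # primary answered fast enough
    except queue.Empty:
        backup = replicas[1]
        threading.Thread(target=compute_replica, args=(*backup, result_q),
                         daemon=True).start()
        return result_q.get()                     # first of the two to finish wins

# primary is slow (0.2 s), backup is fast (0.01 s): the hedge wins
winner = hedged_request([("edge-a", 0.2), ("edge-b", 0.01)])
print(winner)
```

The hedge delay trades extra load for tail-latency reduction: the duplicate is only sent when the primary is already slower than the expected common case.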
95

Research on Dynamic Offloading Strategy of Satellite Edge Computing Based on Deep Reinforcement Learning

Geng, Rui January 2021 (has links)
Nowadays, more and more data is generated at the edge of the network, and people are beginning to consider decentralizing computing tasks to the network edge. The network architecture of edge computing differs from the traditional architecture: its distributed configuration can make up for shortcomings of traditional networks, such as data congestion, increased delay, and limited capacity. With the continuous development of 5G technology, satellite communication networks are also facing many new business challenges. Using idle computing power and storage space on satellites and integrating edge computing technology into satellite communication networks will greatly improve satellite communication service quality and enhance satellite task-processing capabilities, thereby improving the performance of the satellite edge computing system. The primary problem limiting the computing performance of satellite edge networks is how to obtain a more effective dynamic service offloading strategy. To study this problem, this thesis monitors the status information of satellite nodes over different periods, such as service load and distance to the ground, uses a Markov decision process to model the dynamic offloading problem of the satellite edge computing system, and finally derives service offloading strategies based on deep reinforcement learning algorithms. We mainly study the performance of the Deep Q-Network (DQN) algorithm and two improved DQN algorithms, Double DQN (DDQN) and Dueling DQN (DuDQN), under different service request types and system scenarios. Compared with existing service deployment algorithms, the deep reinforcement learning algorithms take the long-term service quality of the system into account and form more reasonable offloading strategies.
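As a toy illustration of the reinforcement-learning approach, the sketch below uses tabular Q-learning (a simplified stand-in for the thesis's DQN/DDQN/DuDQN agents) on an invented two-state offloading MDP in which offloading pays off only when the satellite is heavily loaded.

```python
import random

# Toy offloading MDP: state = satellite load level; actions: 0 = process locally,
# 1 = offload. The rewards are invented for illustration only.
STATES, ACTIONS = ["low", "high"], [0, 1]

def reward(state, action):
    if state == "high":
        return 1.0 if action == 1 else -1.0   # congested: offloading is worthwhile
    return 1.0 if action == 0 else -0.2       # lightly loaded: keep the task on board

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = "low"
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(state, x)])
        r = reward(state, a)
        nxt = rng.choice(STATES)              # load evolves randomly in this toy model
        # standard Q-learning temporal-difference update
        q[(state, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS)
                                  - q[(state, a)])
        state = nxt
    return q

q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

A DQN replaces the table `q` with a neural network so the same update works over large, continuous state spaces (load, distance to ground, queue lengths).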
96

Design Space Exploration and Architecture Design for Inference and Training Deep Neural Networks

Qi, Yangjie January 2021 (has links)
No description available.
97

Building Energy-efficient Edge Systems

Tumkur Ramesh Babu, Naveen January 2020 (has links)
No description available.
98

Energy and Delay-aware Communication and Computation in Wireless Networks

Masoudi, Meysam January 2020 (has links)
Power conservation has become a pressing issue in devices, since advances in battery capability are not keeping pace with the swift development of other technologies, such as processing. The issue becomes critical when both the number of resource-intensive applications and the number of connected devices are growing rapidly: the former increases the power consumption per device, and the latter increases the total power consumption of devices. Mobile edge computing (MEC) and low power wide area networks (LPWANs) have emerged as two important research areas in wireless networks that can help devices save power. On the one hand, devices are being considered as a platform to run resource-intensive applications while having limited resources such as battery and processing capabilities. On the other hand, LPWANs have emerged as an important enabler for massive IoT (Internet of Things), providing long-range and reliable connectivity for low-power devices. The scope of this thesis spans these two main research areas: (1) MEC, where devices can use radio resources to offload their processing tasks to the cloud to save energy, and (2) LPWANs with grant-free radio access, where devices from different technologies transmit their packets without any handshaking process. In particular, we consider a MEC network where the processing resources are distributed in the proximity of the users. Hence, devices can save energy by transmitting the data to be processed to the edge cloud, provided that the delay requirement is met and the transmission power consumption is less than the local processing power consumption. This thesis addresses the question of whether or not to offload so as to minimize the uplink power consumption in a multi-cell multi-user MEC network. We consider the maximum acceptable delay as the QoS metric to be satisfied. We formulate the problem as a mixed-integer nonlinear program, which is converted into a convex form using D.C. approximation. To solve the converted optimization problem, we propose centralized and distributed algorithms for joint power allocation and channel assignment, together with decision-making on job offloading. Our results show that there exists a region in which offloading can save power at mobile devices and increase battery lifetime. Another focus of this thesis is LPWANs, which are becoming more and more popular due to the limited battery capacity of devices and the ever-increasing need for durable battery lifetimes in IoT networks. Most studies evaluate system performance assuming a single radio access technology deployment. In this thesis, we study the impact of coexisting competing radio access technologies on system performance. We consider K technologies, defined by time and frequency activity factors, bandwidth, and power, which share a set of radio resources. Leveraging tools from stochastic geometry, we derive closed-form expressions for the successful transmission probability, expected battery lifetime, experienced delay, and expected number of retransmissions. Our analytical model, validated by simulation results, provides a tool to evaluate coexistence scenarios and analyze how the introduction of a new coexisting technology may degrade system performance in terms of success probability, delay, and battery lifetime. We further investigate the interplay between traffic load, the density of access points, and the reliability/delay of communications, and examine the bounds beyond which the mean delay becomes infinite.
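The offload-or-not question studied in the MEC part reduces, in its simplest form, to comparing the device's transmission energy against its local processing energy under a delay budget. The sketch below illustrates that comparison with invented numbers; it is not the thesis's D.C.-approximation formulation, which additionally optimizes power and channel assignment jointly across users.

```python
def should_offload(bits, cycles, p_tx, rate, p_cpu, f_local, f_cloud, t_max):
    """Offload iff it saves device energy and still meets the delay budget.

    bits: task input size (bit), cycles: CPU cycles required,
    p_tx / p_cpu: transmit and local CPU power (W), rate: uplink rate (bit/s),
    f_local / f_cloud: CPU speeds (cycles/s), t_max: delay budget (s).
    All parameter values below are illustrative.
    """
    e_local = p_cpu * cycles / f_local          # energy to compute on the device
    t_offload = bits / rate + cycles / f_cloud  # upload delay + remote execution
    e_offload = p_tx * bits / rate              # device only pays for transmission
    if t_offload > t_max:
        return False                            # QoS (delay) requirement violated
    return e_offload < e_local

# 1 MB task, 1e9 cycles: 1 GHz device CPU vs a 10 Mbit/s uplink to a 10 GHz edge server
print(should_offload(bits=8e6, cycles=1e9, p_tx=0.5, rate=1e7,
                     p_cpu=0.9, f_local=1e9, f_cloud=1e10, t_max=1.0))
```

With these numbers, offloading costs 0.4 J against 0.9 J locally and takes 0.9 s, inside the 1 s budget, so the device offloads; tightening the budget to 0.5 s flips the decision.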
99

Building Distributed Systems for Fresh and Low-latency Data Delivery for Internet of Things

Toutounji Alkallas, Adnan January 2019 (has links)
The Internet of Things (IoT) is a system of interrelated computing devices that transfer data over the network, where the data are collected by applications that rely on fresh information. The freshness of data can be measured by a metric called Age of Information (AoI): the time elapsed since the data was generated at the source, as measured by the receiving node. It is an important metric for many IoT applications, such as collecting data from temperature sensors or monitoring pollution rates in a specific city. However, a bottleneck occurs at the sensors, because they are energy-constrained (battery-powered) devices and also have limited memory and computational power. Therefore, they cannot serve many requests at the same time, which decreases information quality, i.e., leads to unnecessary aging. As a solution, we suggest a distributed system that takes into account the AoI of the data transmitted by the sensors, so that IoT applications receive the expected information quality. This thesis describes three algorithms that can be used to build and test three different topologies. The first algorithm builds a Random graph, while the second and third algorithms build Clustered and Hybrid graphs, respectively. For testing, we use the Python-based SimPy package, a process-based discrete-event simulation framework. Finally, we compare the results for the Random, Clustered and Hybrid graphs. Overall, the Hybrid graph delivers fresher information than the other graphs.
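The AoI metric itself is easy to compute from timestamps: between receptions the age grows linearly, and each reception resets it to that update's own delivery delay, giving a sawtooth curve. A minimal sketch with invented timestamps:

```python
def age_of_information(events):
    """Average and peak Age of Information at a monitor.

    events: (generation_time, reception_time) pairs, ordered by reception time.
    The average is taken over the interval from the first to the last reception.
    """
    gen0, rx0 = events[0]
    age, last_rx = rx0 - gen0, rx0            # age right after the first update
    area, peak, t0 = 0.0, rx0 - gen0, rx0
    for gen, rx in events[1:]:
        grown = age + (rx - last_rx)          # age just before this update lands
        area += (age + grown) / 2.0 * (rx - last_rx)  # trapezoid under the sawtooth
        peak = max(peak, grown)
        age, last_rx = rx - gen, rx           # reception resets age to rx - gen
    return area / (last_rx - t0), peak

# three updates: generated at t=0, 2, 5 and received at t=1, 3, 6 (seconds)
avg, peak = age_of_information([(0.0, 1.0), (2.0, 3.0), (5.0, 6.0)])
print(avg, peak)
```

Note that AoI depends on both delivery delay and update spacing: the last update here has the same 1 s delay as the others, but the 3 s gap before it drives the peak age up to 4 s.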
100

Agile Network Security for Software Defined Edge Clouds

Osman, Amr 07 March 2023 (has links)
Today's Internet is seeing a massive shift from traditional client-server applications towards real-time, context-sensitive, and highly immersive applications. The fusion of cyber-physical systems, the Internet of Things (IoT), Augmented/Virtual Reality (AR/VR), and the Tactile Internet with Human-in-the-Loop (TaHIL) means that Ultra-Reliable Low Latency Communication (URLLC) is a key functional requirement. Mobile Edge Computing (MEC) has emerged as a network architectural paradigm to address such ever-increasing resource demands. MEC leverages networking and computational resource pools that are closer to the end users at the far edge of the network, eliminating the need to send and process large volumes of data over multiple distant hops at central cloud computing data centers. Multiple 'cloudlets' are formed at the edge, and access to resources is shared and federated across them over multiple network domains distributed over various geographical locations. However, this federated access comes at the cost of a fuzzy and dynamically changing network security perimeter, because there are multiple sources of mobility: not only are the end users mobile, but the applications themselves virtually migrate over multiple network domains and cloudlets to serve the end users, bypassing statically placed network security middleboxes and firewalls. This work aims to address this problem by proposing adaptive network security measures that can be changed dynamically at runtime and are decoupled from the ever-changing network topology.
In particular, we: 1) use the state of the art in programmable networking to protect MEC networks from internal adversaries that can adapt and move laterally; 2) automatically infer application security contexts and device vulnerabilities, then evolve the network access control policies to segment the network in a way that minimizes the attack surface with minimal impact on its utility; and 3) propose new metrics to assess the susceptibility of edge nodes to a new class of stealthy attacks that bypasses traditional statically placed Intrusion Detection Systems (IDS), together with a probabilistic approach to protect them proactively.
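The microsegmentation idea in contribution 2 can be illustrated by counting reachable ordered device pairs before and after an access-control policy splits the network into segments; the device names and policies below are invented.

```python
def attack_surface(devices, allowed):
    """Count ordered device pairs that can reach each other.

    devices: {device_name: segment}; allowed: set of permitted
    (source_segment, destination_segment) pairs. Traffic inside a
    segment is always permitted. Fewer reachable pairs means less
    room for lateral movement by an internal adversary.
    """
    pairs = 0
    for a, seg_a in devices.items():
        for b, seg_b in devices.items():
            if a != b and (seg_a == seg_b or (seg_a, seg_b) in allowed):
                pairs += 1
    return pairs

# illustrative smart-home inventory: a flat network vs. a microsegmented one
devices = {"camera": "iot", "bulb": "iot", "laptop": "trusted", "phone": "trusted"}
flat = attack_surface(devices, {("iot", "trusted"), ("trusted", "iot")})
microseg = attack_surface(devices, {("trusted", "iot")})  # IoT may not reach trusted
print(flat, microseg)   # smaller count = smaller lateral-movement surface
```

Here the one-directional policy removes every path from a compromised IoT device into the trusted segment while leaving management access from trusted devices intact.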
