51. Edge Computing for Mixed Reality / Blandad virtuell verklighet med stöd av edge computing. Lindqvist, Johan, January 2019.
Mixed reality, or augmented reality, where the real and the virtual worlds are combined, has seen an increase in interest in recent years with the release of tools like Google ARCore and Apple ARKit. Edge computing, where distributed computing resources are located near the end device at the edge of the network, is a paradigm that enables offloading of computing tasks with latency requirements to dedicated servers. This thesis studies how edge computing can be used to bring mixed reality capabilities to mobile end devices that lack native support for them. It presents a working prototype for delivering mixed reality, evaluates its constituent technologies with respect to stability, responsiveness, and resource usage, and studies the requirements on the end and edge devices. The experimental evaluation revealed that transmission time is the largest component of end-to-end latency for the developed application, so reducing that delay will have a significant impact on future deployments of such systems. The thesis also presents other bottlenecks and best practices found during the prototype's development, and how to proceed from here.
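To make the latency claim concrete, here is a minimal sketch that decomposes the end-to-end latency of an edge-assisted mixed-reality pipeline into capture, transmission, processing, and rendering stages. Every figure (frame size, link rates, stage times) and the helper name `e2e_latency_ms` are illustrative assumptions, not measurements from the thesis.

```python
def e2e_latency_ms(frame_bytes, uplink_mbps, downlink_mbps,
                   processing_ms, capture_ms=5.0, render_ms=8.0):
    """End-to-end latency of offloading one camera frame to an edge server."""
    uplink_ms = frame_bytes * 8 / (uplink_mbps * 1e6) * 1e3
    result_bytes = 2_000  # pose/annotation result is tiny compared to the frame
    downlink_ms = result_bytes * 8 / (downlink_mbps * 1e6) * 1e3
    parts = {
        "capture": capture_ms,
        "uplink transmission": uplink_ms,
        "edge processing": processing_ms,
        "downlink transmission": downlink_ms,
        "render": render_ms,
    }
    return sum(parts.values()), parts

total, parts = e2e_latency_ms(frame_bytes=200_000,  # ~200 kB camera frame
                              uplink_mbps=20, downlink_mbps=50,
                              processing_ms=15)
for name, ms in parts.items():
    print(f"{name:>22}: {ms:6.1f} ms ({ms / total:5.1%})")
```

With these assumed values the uplink transfer alone accounts for roughly three quarters of the total, mirroring the thesis's finding that transmission time dominates.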
52. On the impact and applicability of network edge computing to reduce network latencies of worldwide client applications. Horsthemke, Stephan, January 2020.
This project evaluates the applicability of network edge computing to reduce the global latencies of client applications. It quantifies the latency reduction that network edge computing can provide compared to common cloud computing architectures. Furthermore, it examines whether Compute@Edge, an exemplary modern edge computing service, can replace many latency-sensitive cloud systems by offering adequate versatility at a reasonable cost-benefit ratio. Compute@Edge is a new, serverless edge computing platform by Fastly built on WebAssembly. A prototype that replicates a globally utilized Spotify server was implemented on Compute@Edge. To compare the latencies of cloud and edge computing, an experiment captured the latencies of the prototype and the original system using a Spotify client that generated almost 26 million data points from all over the world. Beyond the experiment, the implementation of the prototype gives accurate insights into the possibilities of Compute@Edge and into whether WebAssembly is a promising approach for edge computing. The results show that network edge computing can reduce network latency by at least 38% compared to cloud computing, which argues for ramping up the use of edge computing, WebAssembly, and Compute@Edge for applications that require low latencies. The lower latencies, combined with the versatility and feasibility of Compute@Edge, show that modern edge platforms enable much broader utilization for applications like Spotify.
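The headline comparison can be reproduced in spirit by comparing percentiles of the two measured latency distributions. The sketch below uses synthetic lognormal samples as stand-ins for the roughly 26 million real measurements; the distribution parameters are assumptions chosen only to illustrate the analysis.

```python
import random

random.seed(0)
# Synthetic round-trip latencies (ms): cloud ~100 ms median, edge ~55 ms median.
cloud_ms = [random.lognormvariate(4.6, 0.4) for _ in range(100_000)]
edge_ms = [random.lognormvariate(4.0, 0.4) for _ in range(100_000)]

def percentile(sample, p):
    """p-th percentile by sorting; adequate for a one-off analysis."""
    s = sorted(sample)
    return s[min(len(s) - 1, int(p / 100 * len(s)))]

for p in (50, 90, 99):
    c, e = percentile(cloud_ms, p), percentile(edge_ms, p)
    print(f"p{p:>2}: cloud {c:6.1f} ms, edge {e:6.1f} ms, "
          f"reduction {(c - e) / c:5.1%}")
```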
53. Planning of Mobile Edge Computing Resources in 5G Based on Uplink Energy Efficiency. Singh, Navjot, 19 November 2018.
The increasing number of devices demanding low-latency, high-speed data transmission requires that computation resources be moved closer to users. The emerging Mobile Edge Computing (MEC) technology aims to bring the advantages of cloud computing, namely computation, storage, and networking capabilities, into close proximity to the user. MEC servers are also integrated with cloud servers, which gives them the flexibility to reach vast computational power whenever needed. In this thesis, leveraging the idea of Mobile Edge Computing, we propose algorithms for cost-efficient and energy-efficient placement of Mobile Edge nodes. We focus on uplink energy efficiency, which is essential for certain applications including augmented reality and connected vehicles, and which extends the battery life of user equipment, a benefit for all applications. The experimental results show that our proposed schemes significantly reduce the uplink energy of devices and minimize the number of edge nodes required in the network.
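A hedged sketch of the kind of placement logic this implies (not the thesis's actual algorithm): model a device's uplink transmit energy as growing with the distance to its nearest open edge node raised to a path-loss exponent, then greedily open candidate sites while each new node still yields a meaningful reduction in total uplink energy. The user layout, candidate grid, exponent, and stopping threshold are all assumptions.

```python
import math
import random

random.seed(1)
users = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(200)]
sites = [(x, y) for x in range(100, 1000, 200) for y in range(100, 1000, 200)]
ALPHA = 3.5  # assumed path-loss exponent

def total_energy(open_sites):
    """Relative uplink energy: each user transmits to its nearest open node."""
    return sum(min(math.dist(u, s) for s in open_sites) ** ALPHA for u in users)

open_sites, remaining = [], list(sites)
while remaining:
    best = min(remaining, key=lambda s: total_energy(open_sites + [s]))
    before = total_energy(open_sites) if open_sites else float("inf")
    gain = before - total_energy(open_sites + [best])
    if open_sites and gain < 0.02 * before:
        break  # marginal saving too small to justify another edge node
    open_sites.append(best)
    remaining.remove(best)

print(f"opened {len(open_sites)} edge nodes, "
      f"relative uplink energy {total_energy(open_sites):.3g}")
```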
54. Study of Knowledge Transfer Techniques for Deep Learning on Edge Devices. January 2018.
With the emergence of the edge computing paradigm, many applications such as image recognition and augmented reality need to perform machine learning (ML) and artificial intelligence (AI) tasks on edge devices. Most AI and ML models are large and computationally heavy, whereas edge devices are usually equipped with limited computational and storage resources. Such models can be compressed and reduced in order to fit on edge devices, but they may lose their capability and may not generalize and perform as well as large models. Recent works have used knowledge transfer techniques to transfer information from a large network (termed the teacher) to a small one (termed the student) in order to improve the performance of the latter. This approach seems promising for learning on edge devices, but a thorough investigation of its effectiveness is lacking.
The purpose of this work is to provide an extensive study of the performance (both in terms of accuracy and convergence speed) of knowledge transfer, considering different student-teacher architectures, datasets, and different techniques for transferring knowledge from teacher to student.
A good performance improvement is obtained by transferring knowledge from both the intermediate layers and the last layer of the teacher to a shallower student. Other architectures and transfer techniques do not fare so well, however, and some of them even have a negative impact on performance. For example, a smaller and shorter network trained with knowledge transfer on Caltech 101 achieved a significant improvement of 7.36% in accuracy and converged 16 times faster compared to the same network trained without knowledge transfer. On the other hand, a smaller network that is thinner than the teacher network performed worse, with an accuracy drop of 9.48% on Caltech 101, even with knowledge transfer. / Dissertation/Thesis / Masters Thesis Computer Science 2018
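For reference, the following is a minimal sketch of the classic soft-target distillation loss (Hinton et al.) that teacher-student knowledge transfer typically builds on. The intermediate-layer ("hint") transfer the study also evaluates is omitted for brevity, and the temperature and mixing weight are assumed values, not the thesis's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Blend softened teacher targets with the ordinary hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale gradients as in the original paper
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors standing in for real model outputs.
student = torch.randn(8, 101)  # Caltech 101 has 101 object categories
teacher = torch.randn(8, 101)
labels = torch.randint(0, 101, (8,))
print(distillation_loss(student, teacher, labels))
```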
55. Distributed Intelligence-Assisted Autonomic Context-Information Management: A context-based approach to handling vast amounts of heterogeneous IoT data. Rahman, Hasibur, January 2018.
As an implication of the rapid growth in Internet-of-Things (IoT) data, the current focus has shifted towards utilizing and analysing the data in order to make sense of it. The aim is to make instantaneous, automated, and informed decisions that will drive the future IoT. This corresponds to extracting and applying knowledge from IoT data, which brings both a substantial challenge and high value. Context plays an important role in reaping value from data and is capable of countering the IoT data challenges. Managing heterogeneous contextualized data is infeasible with existing solutions, which mandates new ones. Research until now has mostly concentrated on providing cloud-based IoT solutions which, among other issues, hamper real-time and fast decision-making. In view of this, this dissertation undertakes a study of a context-based approach entitled Distributed intelligence-assisted Autonomic Context Information Management (DACIM), the purpose of which is to efficiently (i) utilize and (ii) analyse IoT data. To address the challenges and solutions with respect to enabling DACIM, the dissertation starts by proposing a logical-clustering approach for proper IoT data utilization. The environment in which the growing number of Things is immersed changes rapidly and becomes dynamic. To this end, self-organization has been supported by proposing self-* algorithms, which achieved 10 organized Things per second and a high accuracy rate for Things joining. IoT contextualized data further requires scalable dissemination, which has been addressed by a Publish/Subscribe model; it has been shown that a high publication rate and fast subscription matching are realisable. The dissertation ends with the proposal of a new approach that distributes intelligence for analysing context information, in order to enhance the intelligence of Things. The approach brings some of the application of knowledge from the cloud to the edge; the edge-based solution is equipped with intelligence that enables faster responses and reduced dependency on rules by leveraging artificial intelligence techniques. To infer knowledge for different IoT applications closer to the Things, a multi-modal reasoner has been proposed which demonstrates fast response. The evaluations of the designed and developed DACIM give promising results, distributed over seven publications; from this, it can be concluded that it is feasible to realize a distributed intelligence-assisted context-based approach that contributes towards autonomic context information management in the ever-expanding IoT realm. / At the time of the doctoral defense, the following paper was unpublished: Paper 7: Submitted.
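The Publish/Subscribe dissemination can be illustrated with a minimal topic-plus-predicate broker: subscribers register a context predicate, and each published context item is forwarded to every subscriber whose predicate it satisfies. This is a simplification for illustration; the class, topic, and field names are hypothetical, not DACIM's actual interfaces.

```python
from collections import defaultdict

class ContextBroker:
    """Toy topic-based broker with content filtering via predicates."""

    def __init__(self):
        self.subs = defaultdict(list)  # topic -> [(predicate, callback)]

    def subscribe(self, topic, predicate, callback):
        self.subs[topic].append((predicate, callback))

    def publish(self, topic, context):
        for predicate, callback in self.subs[topic]:
            if predicate(context):
                callback(context)

broker = ContextBroker()
broker.subscribe("room/temperature",
                 predicate=lambda c: c["celsius"] > 30,
                 callback=lambda c: print("overheating:", c))
broker.publish("room/temperature", {"sensor": "t-17", "celsius": 34.2})
```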
56. An evaluation of how edge computing is enabling the opportunities for Industry 4.0. Svensson, Wictor, January 2020.
Connecting factories to the internet and enabling them to talk to each other autonomously is called the Industrial Internet of Things (IIoT), referred to as Industry 4.0 in terms of the industrial revolutions. The machines collect data through many different sensors and need to share these values with each other and with the cloud. This places a large load on the cloud and the internet, and the latency becomes large. To evaluate how the workload and the latency can be reduced while still getting the same result as with the cloud, two different systems are implemented: one that uses the cloud and one that uses edge computing. Edge computing means that the processing of the data is decentralized to the edge of the network. This thesis aims to find out when it is more favorable to use an edge solution and when a cloud solution is preferable. The first system is implemented with an edge platform, Crosser; the second system is implemented with a cloud platform, Azure. Both implementations give the same outputs, but they differ in where the data is processed. The systems are measured on latency, bandwidth, and CPU usage. The measurements show that the Crosser system has lower latency and uses less bandwidth, but needs more computational power on the device at the edge of the network. The conclusion is that it depends on the demands of the system: if low latency and low bandwidth usage are required, Crosser is preferable, but if a very heavy machine-learning algorithm is to be executed and latency and bandwidth are not a problem, then the cloud reference system is preferable.
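The concluding trade-off can be condensed into a small decision helper. The thresholds and the `choose_deployment` signature below are illustrative assumptions, not values measured in the thesis.

```python
def choose_deployment(latency_budget_ms, uplink_budget_kbps,
                      workload_gflops, edge_capacity_gflops=5.0):
    """Pick edge or cloud processing from rough system requirements."""
    if workload_gflops > edge_capacity_gflops:
        return "cloud"  # the edge device lacks the compute for this workload
    if latency_budget_ms < 50 or uplink_budget_kbps < 500:
        return "edge"   # processing locally avoids WAN latency and traffic
    return "cloud"      # neither constraint binds; the cloud is simpler to run

print(choose_deployment(latency_budget_ms=20, uplink_budget_kbps=100,
                        workload_gflops=0.5))  # -> edge
print(choose_deployment(latency_budget_ms=200, uplink_budget_kbps=5000,
                        workload_gflops=50))   # -> cloud
```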
57. Reinforcement Learning Based Fair Edge-User Allocation for Delay-Sensitive Edge Computing Applications. Alchalabi, Alaa Eddin, 15 November 2021.
Cloud gaming systems are among the most challenging networked applications, since they deal with streaming high-quality, bulky video in real time to players' devices. While all industry solutions today are centralized, we introduce an AI-assisted hybrid networking architecture that, in addition to the central cloud servers, also uses some players' computing resources as additional points of service. We describe the problem, its mathematical formulation, and a potential solution strategy.
Edge computing is a promising paradigm that brings servers closer to users, leading to lower latencies and enabling latency-sensitive applications such as cloud gaming, virtual/augmented reality, telepresence, and telecollaboration. Due to the high number of possible edge servers and incoming user requests, the optimal choice of user-server matching has become a difficult challenge, especially in the 5G era, where the network can offer very low latencies. In this thesis, we introduce the problem of fair server selection, which requires not only complying with an application's latency threshold but also reducing the variance of the latency among users in the same session. Due to the dynamic and rapidly evolving nature of such an environment and the capacity limitations of the servers, we propose as a solution a Reinforcement Learning method in the form of a Quadruple Q-Learning model with action suppression, Q-value normalization, and a reward function that minimizes the variance of the latency. Our evaluations in the context of a cloud gaming application show that, compared to existing methods, our proposed method not only better meets the application's latency threshold but is also fairer, with a reduction of up to 35% in the standard deviation of the latencies when using geo-distance, and improvements in fairness of up to 18.7% over existing solutions using RTT delay, especially during resource scarcity. Additionally, the RL solution can act as a heuristic algorithm even when it is not fully trained.
While designing this solution, we also introduced action suppression, Quadruple Q-Learning, and normalization of the Q-values, leading to a more scalable and implementable RL system. We focus on algorithms for distributed applications, especially esports, but the principles we discuss apply to other domains and applications where fairness is a crucial aspect to be optimized.
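As an illustration of the core allocation idea, the sketch below runs a tabular, bandit-style Q-learning loop with action suppression (masking servers that are full or would violate the latency threshold) and a reward that penalizes the variance of latencies within a session. It does not reproduce the thesis's Quadruple Q-Learning or Q-value normalization, and every parameter is an assumption.

```python
import random
import statistics

random.seed(2)
N_USERS, N_SERVERS, CAPACITY, THRESHOLD_MS = 10, 4, 3, 80.0
# Hypothetical per-user, per-server latencies (ms), e.g. from geo-distance.
latency = [[random.uniform(10, 120) for _ in range(N_SERVERS)]
           for _ in range(N_USERS)]

Q = [[0.0] * N_SERVERS for _ in range(N_USERS)]  # Q[user][server]
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    load = [0] * N_SERVERS
    session = []  # latencies of users allocated so far in this session
    for user in range(N_USERS):
        # Action suppression: mask full servers and threshold violators.
        allowed = [s for s in range(N_SERVERS)
                   if load[s] < CAPACITY and latency[user][s] <= THRESHOLD_MS]
        if not allowed:  # nothing survives the mask; fall back to capacity only
            allowed = [s for s in range(N_SERVERS) if load[s] < CAPACITY]
        s = (random.choice(allowed) if random.random() < epsilon
             else max(allowed, key=lambda a: Q[user][a]))
        load[s] += 1
        session.append(latency[user][s])
        # Reward trades raw latency against within-session variance (fairness).
        var = statistics.pvariance(session) if len(session) > 1 else 0.0
        reward = -latency[user][s] - 0.5 * var
        Q[user][s] += alpha * (reward - Q[user][s])  # bandit-style update

print("allocation:", [max(range(N_SERVERS), key=lambda a: Q[u][a])
                      for u in range(N_USERS)])
```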
58. Kriging Methods to Exploit Spatial Correlations of EEG Signals for Fast and Accurate Seizure Detection in the IoMT. Olokodana, Ibrahim Latunde, 08 1900.
Epileptic seizures present a formidable threat to the lives of their sufferers, leaving them unconscious within seconds of onset. With a mortality rate at least twice that of the general population, epilepsy is a true cause for concern and has gained ample attention from various research communities. About 800 million people in the world will have at least one seizure experience in their lifespan. Injuries sustained during a seizure are one of the leading causes of death in epilepsy; these can be prevented by early detection of the seizure accompanied by a timely intervention mechanism. The research presented in this dissertation explores Kriging methods that exploit the spatial correlations of electroencephalogram (EEG) signals from the brain for fast and accurate seizure detection in the Internet of Medical Things (IoMT), using edge computing paradigms and modeling the brain as a three-dimensional spatial object, similar to a geographical panorama. This dissertation proposes basic, hierarchical, and distributed Kriging models, with a deep neural network (DNN) wrapper in some instances. Experimental results from the models are highly promising for real-time seizure detection, with excellent performance in seizure detection latency and training time, as well as accuracy, sensitivity, and specificity that compare well with other notable seizure detection research projects.
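A minimal sketch of the ordinary kriging predictor underlying this idea: EEG values at known electrode positions are combined with weights obtained from a semivariogram-based linear system to estimate the signal at an unmonitored scalp location. The exponential variogram, its parameters, and the electrode layout are assumptions for illustration; the dissertation's hierarchical and distributed variants (and the DNN wrapper) are not shown.

```python
import numpy as np

def variogram(h, sill=1.0, rng=40.0, nugget=0.05):
    """Exponential semivariogram gamma(h) with assumed parameters."""
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(coords, values, target):
    """Solve [[Gamma, 1], [1^T, 0]] [w, mu]^T = [gamma_0, 1] for the weights."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=-1))
    w = np.linalg.solve(A, b)[:n]  # weights sum to 1 via the Lagrange row
    return float(w @ values)

# Hypothetical electrode positions (mm) and one EEG sample per electrode (uV).
coords = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
values = np.array([12.0, 15.0, 9.0, 14.0])
print(ordinary_kriging(coords, values, np.array([15.0, 15.0])))
```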
59. Mobile Crowd Sensing in Edge Computing Environment. January 2019.
Mobile crowdsensing (MCS) applications leverage user data to derive useful information through data-driven evaluation of innovative user contexts and the gathering of information at a high data rate. Such access to context-rich data can potentially enable computationally intensive crowd-sourcing applications such as tracking a missing person or capturing a highlight video of an event. Using snippets and pictures captured from multiple mobile phone cameras with specific contexts can improve the data acquired in such applications. These MCS applications require efficient processing and analysis to generate results in real time. The human user, the mobile device, and their interactions cause changes in context on the mobile device, affecting the quality of the contextual data that is gathered. Using MCS data in real-time mobile applications is challenging due to the complex inter-relationship between: a) availability of context, which resides on the mobile phones and not in the cloud; b) the cost of data transfer to remote cloud servers, both in terms of communication time and energy; and c) the availability of local computational resources on the mobile phone, where computation may lead to rapid battery drain or increased response time. The resource-constrained mobile devices need to offload some of their computation.
This thesis proposes ContextAiDe, an end-to-end architecture for data-driven distributed applications that is aware of human-mobile interactions and uses edge computing. Edge processing supports real-time applications by reducing communication costs. The goal is to optimize the quality and the cost of acquiring the data using a) modeling and prediction of mobile user contexts, b) efficient strategies for scheduling application tasks on heterogeneous devices, including multi-core devices such as GPUs, and c) power-aware scheduling of virtual machine (VM) applications in cloud infrastructure, e.g. elastic VMs. The ContextAiDe middleware is integrated into the mobile application via an Android API. The evaluation consists of overhead and cost analysis in the scenario of a "perpetrator tracking" application on the cloud, fog servers, and mobile devices. LifeMap data sets containing actual sensor data traces from mobile devices are used to simulate the application run for large-scale evaluation. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
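The offload-or-not decision at the heart of such middleware can be sketched as a cost comparison between local execution (slower CPU, battery drain) and remote execution (transfer time plus radio energy). The constants, the one-joule-per-second weighting, and the `offload_is_better` helper are illustrative assumptions, not ContextAiDe's calibrated model.

```python
def offload_is_better(input_mb, local_gflops, task_gflop,
                      uplink_mbps, server_gflops=100.0,
                      radio_j_per_mb=0.8, cpu_j_per_gflop=1.5):
    """Compare a weighted time+energy cost of local vs. offloaded execution."""
    local_time = task_gflop / local_gflops       # seconds on the phone
    local_energy = task_gflop * cpu_j_per_gflop  # joules of CPU energy
    remote_time = input_mb * 8 / uplink_mbps + task_gflop / server_gflops
    remote_energy = input_mb * radio_j_per_mb    # joules spent on the radio
    # Assumed weighting: one second costs as much as one joule.
    return remote_time + remote_energy < local_time + local_energy

# A 50 MB video snippet with heavy processing: offloading wins on a fast link.
print(offload_is_better(input_mb=50, local_gflops=2,
                        task_gflop=40, uplink_mbps=100))
```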
60. Non-Intrusive Load Extraction of Electric Vehicle Charging Loads for Edge Computing. Hyeonae Jang, 01 May 2020.
The accelerated urbanization of countries has led to the adoption of the smart power grid, along with an explosion in high power usage. The emergence of non-intrusive load monitoring (NILM), also referred to as energy disaggregation, has followed the recent worldwide adoption of smart meters in smart grids. NILM is a convenient process for analyzing a composite electrical energy load and determining electrical energy consumption.

A number of state-of-the-art NILM (energy disaggregation) algorithms have been proposed recently to detect individual appliances from one aggregated signal observation. Classification methods such as Hidden Markov Models (HMM), Support Vector Machines (SVM), neural networks, fuzzy logic, Naive Bayes, k-Nearest Neighbors (kNN), and many hybrid approaches have been used to classify the estimated power consumption of electrical appliances from extracted appliance signatures. This study proposes an end-to-end edge computing system with an NILM algorithm that focuses on recognizing Electric Vehicle (EV) charging. The system consists of three main components: (1) data acquisition and preprocessing, (2) extraction of the EV charging load via an NILM algorithm (load identification) on the NILMTK framework, and (3) result reporting to a cloud server platform.

Monitoring energy consumption through the proposed system is remarkably beneficial for demand response and energy efficiency. It helps improve the understanding and prediction of power grid stress and enhances the reliability and resilience of the power grid. Furthermore, it is highly advantageous for the integration of more renewable energies, which are under rapid development. As a result, countless potential NILM use cases are expected from monitoring and identifying energy consumption in a power grid: smarter power consumption plans for residents as well as more flexible grid management for electric utility companies, such as Duke Energy and ComEd.
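The EV-extraction step can be illustrated with a simple event-based detector: find step changes in the aggregate power signal whose magnitude matches a Level 2 charger, then pair on/off edges into charging sessions. This is a hedged sketch of the idea only; the thesis's actual pipeline builds on the NILMTK framework, and the thresholds below are assumptions.

```python
import numpy as np

def extract_ev_sessions(aggregate_w, min_step_w=3000, max_step_w=7500):
    """Return (start, end) sample indices of likely EV charging sessions."""
    delta = np.diff(aggregate_w)
    on = np.where((delta >= min_step_w) & (delta <= max_step_w))[0] + 1
    off = np.where((delta <= -min_step_w) & (delta >= -max_step_w))[0] + 1
    sessions, pending = [], None
    for i in sorted(np.concatenate([on, off])):
        if i in on and pending is None:
            pending = i  # charger switched on
        elif i in off and pending is not None:
            sessions.append((int(pending), int(i)))  # charger switched off
            pending = None
    return sessions

# Synthetic aggregate: 500 W base load with an EV drawing 6.6 kW mid-trace.
signal = np.full(100, 500.0)
signal[30:70] += 6600.0
print(extract_ev_sessions(signal))  # -> [(30, 70)]
```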