  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
631

Distributed Orchestration Framework for Fog Computing

Rahafrouz, Amir January 2019
The rise of IoT-based systems is making an impact on our daily lives and environment. Fog computing is a paradigm that processes IoT data at the first hop of the access network instead of in distant clouds, and it promises a range of new applications. However, a mature framework for fog computing is still lacking. In this study, we propose an approach for monitoring fog nodes in a distributed system using the FogFlow framework. We extend the functionality of FogFlow by adding monitoring of Docker containers using cAdvisor, and we use Prometheus to collect and aggregate the distributed data. The monitoring data of the entire distributed system of fog nodes is accessed via the Prometheus API. Furthermore, the monitoring data is used to rank fog nodes in order to choose where to place serverless functions (Fog Functions). The ranking mechanism uses the Analytic Hierarchy Process (AHP) to place a fog function according to the resource utilization and saturation of the fog nodes' hardware. Finally, an experimental test-bed is set up with an image-processing application that detects faces. The effect of our ranking approach on the Quality of Service is measured and compared to the current FogFlow.
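To illustrate the kind of ranking step described above, here is a minimal AHP sketch in Python: criteria weights are derived from a pairwise comparison matrix and then used to score candidate fog nodes by their monitored utilization. The criteria, comparison judgments, and node values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Hypothetical criteria: CPU utilization, memory utilization, I/O saturation.
# Pairwise comparison matrix (illustrative judgments): CPU is judged 2x as
# important as memory and 4x as important as I/O saturation.
A = np.array([
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
])

# AHP criteria weights = principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Monitoring samples per fog node (fractions in [0, 1]; lower is better).
nodes = {
    "fog-node-1": [0.80, 0.60, 0.30],
    "fog-node-2": [0.40, 0.50, 0.20],
    "fog-node-3": [0.90, 0.85, 0.70],
}

# Score each node: lower weighted utilization -> better placement candidate.
scores = {name: float(np.dot(w, util)) for name, util in nodes.items()}
best = min(scores, key=scores.get)
```

In a real deployment the per-node utilization vectors would come from the Prometheus aggregation described in the abstract rather than being hard-coded.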
632

Partitionnement réparti basé sur les sommets / Distributed edge partitioning

Mykhailenko, Hlib 14 June 2017
Pour traiter un graphe de manière répartie, le partitionnement est une étape préliminaire importante car elle influence de manière significative le temps final d’exécution. Dans cette thèse nous étudions le problème du partitionnement réparti de graphe. Des travaux récents ont montré qu’une approche basée sur le partitionnement des sommets plutôt que des arêtes offre de meilleures performances pour les graphes de type power-law qui sont courants dans les données réelles. Dans un premier temps nous avons étudié les différentes métriques utilisées pour évaluer la qualité d’un partitionnement. Ensuite nous avons analysé et comparé plusieurs logiciels d’analyse de grands graphes (Hadoop, Giraph, Giraph++, Distributed GraphLab et PowerGraph), les comparant à une solution très populaire actuellement, Spark et son API de traitement de graphe appelée GraphX. Nous présentons les algorithmes de partitionnement les plus récents et introduisons une classification. En étudiant les différentes publications, nous arrivons à la conclusion qu’il n’est pas possible de comparer la performance relative de tous ces algorithmes. Nous avons donc décidé de les implémenter afin de les comparer expérimentalement. Les résultats obtenus montrent qu’un partitionneur de type Hybrid-Cut offre les meilleures performances. Dans un deuxième temps, nous étudions comment il est possible de prédire la qualité d’un partitionnement avant d’effectivement traiter le graphe. Pour cela, nous avons effectué de nombreuses expérimentations avec GraphX et effectué une analyse statistique précise des résultats en utilisant un modèle de régression linéaire. Nos expérimentations montrent que les métriques de communication sont de bons indicateurs de la performance. Enfin, nous proposons un environnement de partitionnement réparti basé sur du recuit simulé qui peut être utilisé pour optimiser une large partie des métriques de partitionnement. 
Nous fournissons des conditions suffisantes pour assurer la convergence vers l’optimum et discutons des métriques pouvant être effectivement optimisées de manière répartie. Nous avons implémenté cet algorithme dans GraphX et comparé ses performances avec JA-BE-JA-VC. Nous montrons que notre stratégie amène à des améliorations significatives. / In distributed graph computation, graph partitioning is an important preliminary step because the computation time can significantly depend on how the graph has been split among the different executors. In this thesis we explore the graph partitioning problem. Recently, the edge partitioning approach has been advocated as a better way to process graphs with a power-law degree distribution, which are very common in real-world datasets. That is why we focus on the edge partitioning approach. We start with an overview of existing metrics used to evaluate the quality of a graph partitioning. We briefly study existing graph processing systems (Hadoop, Giraph, Giraph++, Distributed GraphLab, and PowerGraph) with their key features. Next, we compare them to Spark, a popular big-data processing framework, and its graph processing API, GraphX. We provide an overview of existing edge partitioning algorithms and introduce a partitioner classification. We conclude that, based only on published work, it is not possible to draw a clear conclusion about the relative performance of these partitioners. For this reason, we have experimentally compared all the edge partitioners currently available for GraphX. Results suggest that the Hybrid-Cut partitioner provides the best performance. We then study how it is possible to evaluate the quality of a partition before running a computation. To this purpose, we carry out experiments with GraphX and perform an accurate statistical analysis using a linear regression model. 
Our experimental results show that communication metrics like vertex-cut and communication cost are effective performance predictors in most cases. Finally, we propose a framework for distributed edge partitioning based on distributed simulated annealing which can be used to optimize a large family of partitioning metrics. We provide sufficient conditions for convergence to the optimum and discuss which metrics can be efficiently optimized in a distributed way. We implemented our framework with GraphX and performed a comparison with JA-BE-JA-VC, a state-of-the-art partitioner that inspired our approach. We show that our approach can provide significant improvements.
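As an illustration of the communication metrics mentioned above, the following sketch computes the replication factor of an edge (vertex-cut) partition: the average number of partitions to which each vertex is copied. The toy graph and its two-way split are assumptions for the example, not data from the thesis.

```python
from collections import defaultdict

def replication_factor(partition):
    """Average number of partitions in which a vertex is replicated.

    `partition` maps a partition id to a list of edges (u, v). In edge
    (vertex-cut) partitioning, a vertex is cut whenever the edges incident
    to it are spread over several partitions, and each extra copy implies
    communication to keep the vertex state synchronized.
    """
    copies = defaultdict(set)          # vertex -> partitions holding a copy
    for pid, edges in partition.items():
        for u, v in edges:
            copies[u].add(pid)
            copies[v].add(pid)
    return sum(len(p) for p in copies.values()) / len(copies)

# Toy 4-vertex cycle split over two partitions: vertices 1 and 3 are cut.
parts = {
    0: [(1, 2), (2, 3)],
    1: [(3, 4), (4, 1)],
}
rf = replication_factor(parts)
```

A lower replication factor generally means less synchronization traffic, which is why such metrics can predict computation performance before the job runs.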
633

Nasazení aplikací zohledňující komunikační zpoždění v prostředí tzv. edge-cloud / Latency aware deployment in the edge-cloud environment

Filandr, Adam January 2020
The goal of this thesis is to propose a layer on top of the edge-cloud in order to provide soft real-time guarantees on the execution time of applications, thereby satisfying the soft real-time requirements set by the developers of latency-sensitive applications. The proposed layer uses a predictor of execution time to find combinations of processes which satisfy the soft real-time requirements when collocated. To implement the predictor, we are provided with information about the resource usage of processes and the execution times of collocated combinations. We utilize similarity between processes, cluster analysis, and regression analysis to form four prediction methods. We also provide a boundary system of resource usage used to filter out combinations exceeding the capacity of a computer. Because the metrics indicating the resource usage of a process can vary in their usefulness, we also add a system of weights which estimates the importance of each metric. We experimentally analyze the accuracy of each prediction method, the influence of the boundary detection system, and the effects of the weights.
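One of the prediction methods described above relies on regression over resource-usage metrics. A minimal sketch of that idea follows; the training data, feature set, and soft real-time bound are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Hypothetical training data: per-combination resource usage of collocated
# processes (CPU share, memory share, disk I/O share) and the measured
# execution time of the probed process, in seconds.
X = np.array([
    [0.2, 0.1, 0.0],
    [0.5, 0.3, 0.1],
    [0.8, 0.6, 0.4],
    [0.9, 0.7, 0.6],
])
y = np.array([1.0, 1.4, 2.1, 2.5])

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(usage):
    """Predicted execution time for a candidate collocation."""
    return float(coef[0] + np.dot(coef[1:], usage))

# Keep only collocations predicted to meet an assumed 2-second soft bound.
candidates = [[0.3, 0.2, 0.1], [0.85, 0.65, 0.5]]
feasible = [c for c in candidates if predict(c) <= 2.0]
```

The weighting scheme from the abstract would correspond to scaling the feature columns by their estimated importance before fitting.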
634

Vyhodnocování výkonnosti cloudových aplikací / Performance assessment of cloud applications

Sándor, Gábor January 2020
Modern CPS and mobile applications, such as augmented reality or coordinated driving, are envisioned to combine edge-cloud processing with real-time requirements. The real-time requirements, however, create a brand new challenge for cloud processing, which has traditionally been best-effort. A key to guaranteeing real-time requirements is understanding how services sharing resources in the cloud interact at the performance level. The objective of the thesis is to design a mechanism which helps to categorize cloud applications based on the type of their workload. This should result in the specification of a model defining a set of applications which can be deployed on a single node while guaranteeing a certain quality of service. It should also be able to find the optimal node where the application could be deployed.
635

RESOURCE MANAGEMENT IN EDGE COMPUTING FOR INTERNET OF THINGS APPLICATIONS

Galanis, Ioannis 01 December 2020
The Internet of Things (IoT) computing paradigm has connected smart objects (“things”) and brought new services to the proximity of the user. Edge Computing (EC), a natural evolution of the traditional IoT, has been proposed to deal with the ever-increasing (i) number of IoT devices and (ii) amount of data traffic produced by the IoT endpoints. EC promises to significantly reduce the unwanted latency imposed by multi-hop communication delays and suggests that, instead of uploading all the data to the remote cloud for further processing, it is beneficial to perform computation at the “edge” of the network, close to where the data is produced. However, bringing computation to the edge has created numerous challenges, as edge devices struggle to keep up with growing application requirements (e.g. neural networks or video-based analytics). In this thesis, we adopt the EC paradigm and aim at addressing its open challenges. Our goal is to bridge the performance gap caused by the increased requirements of IoT applications with respect to IoT platform capabilities and to provide latency- and energy-efficient computation at the edge. Our first step is to study the performance of IoT applications that are based on Deep Neural Networks (DNNs). The exploding need to deploy DNN-based applications on resource-constrained edge devices has created several challenges, mainly due to the complex nature of DNNs. DNNs are becoming deeper and wider in order to fulfill users’ expectations of high accuracy, while they also become power hungry. For instance, executing a DNN on an edge device can drain the battery within minutes. Our solution to make DNNs more energy- and inference-friendly is a hardware-aware method that re-designs a given DNN architecture. Instead of proxy metrics, we measure the DNN performance on real edge devices and capture their energy and inference time. 
Our method manages to find alternative DNN architectures that consume up to 78.82% less energy and are up to 35.71% faster than the reference networks. In order to achieve end-to-end optimal performance, we also need to manage the edge device resources that will execute a DNN-based application. Due to their unique characteristics, we distinguish the edge devices into two categories: (i) a neuromorphic platform that is designed to execute Spiking Neural Networks (SNNs), and (ii) a general-purpose edge device that is suitable to host a DNN. For the first category, we train a traditional DNN and then convert it to a spiking representation. We target the SpiNNaker neuromorphic platform and develop a novel technique that efficiently configures the platform-dependent parameters in order to achieve the highest possible SNN accuracy. Experimental results show that our technique is 2.5× faster than an exhaustive approach and can reach up to 0.8% higher accuracy compared to a CPU-based simulation method. Regarding the general-purpose edge devices, we show that a DNN-unaware platform can result in sub-optimal DNN performance in terms of power and inference time. Our approach configures the frequency of the device components (GPU, CPU, memory) and manages to achieve an average of 33.4% and up to 66.3% inference time improvement, and an average of 42.8% and up to 61.5% power savings, compared to the predefined configuration of an edge device. The last part of this thesis is the offloading optimization between the edge devices and the gateway. The offloaded tasks create contention effects on the gateway, which can lead to application slowdown. Our proposed solution configures (i) the number of application stages that are executed on each edge device, and (ii) the achieved utility in terms of Quality of Service (QoS) on each edge device. 
Our technique manages to (i) maximize the overall QoS and (ii) simultaneously satisfy network constraints (bandwidth) and user expectations (execution time). In the case of multi-gateway deployments, we tackle the problem of unequal workload distribution. In particular, we propose a workload-aware management scheme that performs intra- and inter-gateway optimizations. The intra-gateway mechanism provides a balanced execution environment for the applications and achieves up to 95% performance deviation improvement compared to un-optimized systems. The presented inter-gateway method balances the workload among multiple gateways and is able to achieve a global performance threshold.
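The offloading optimization above can be pictured as a small constrained search: choose, per edge device, how many application stages to offload so that total QoS is maximized under a gateway bandwidth budget. The device options and numbers below are illustrative assumptions, and the brute-force search merely stands in for the thesis's actual technique.

```python
from itertools import product

# Hypothetical per-device options: (offloaded stages, QoS utility, bandwidth cost).
options = {
    "cam-1": [(0, 0.2, 0.0), (1, 0.5, 2.0), (2, 0.9, 5.0)],
    "cam-2": [(0, 0.1, 0.0), (1, 0.4, 3.0), (2, 0.85, 6.0)],
}
BANDWIDTH = 8.0   # gateway uplink budget, illustrative units

# Enumerate every combination of per-device choices and keep the feasible
# one (total bandwidth within budget) with the highest total QoS.
best_qos, best_plan = -1.0, None
for combo in product(*options.values()):
    bw = sum(c[2] for c in combo)
    qos = sum(c[1] for c in combo)
    if bw <= BANDWIDTH and qos > best_qos:
        best_qos, best_plan = qos, dict(zip(options, combo))
```

Brute force is only viable for a handful of devices; the point is the shape of the problem (discrete per-device choices, shared constraint, additive utility), not the search strategy.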
636

Energy-Efficient Bandwidth Allocation for Integrating Fog with Optical Access Networks

Helmy, Ahmed 03 December 2019
Access networks have been going through many reformations to adapt to arising traffic trends and become better suited for many new demanding applications. To that end, incorporating fog and edge computing has become a necessity for supporting many emerging applications as well as alleviating network congestion. At the same time, energy efficiency has become a strong imperative for access networks to reduce both their operating costs and carbon footprint. In this dissertation, we address these two challenges in long-reach optical access networks. We first study the integration of fog and edge computing with optical access networks, which is believed to form a highly capable access network by combining the huge fiber capacity with closer-to-the-edge computing and storage resources. In our study, we examine the offloading performance under different cloudlet placements when the underlying bandwidth allocation is either centralized or decentralized. We combine analytical modeling with simulation results in order to identify the different factors that affect the offloading performance within each paradigm. To address the energy efficiency requirement, we introduce novel enhancements and modifications to both allocation paradigms that aim to enhance their network performance while conserving energy. We consider this work to be one of the first to explore the integration of fog and edge computing with optical access networks from both bandwidth allocation and energy efficiency perspectives, in order to identify which allocation paradigm would be able to meet the requirements of next-generation access networks.
637

UTILIZING PHOSPHORUS BUDGETS AND ISOTOPIC TRACERS TO EVALUATE PHOSPHORUS FATE IN SOILS WITH LONG TERM POULTRY LITTER APPLICATION

Janae H Bos (9153470) 24 July 2020
Converting a nutrient management plan from commercial fertilizers to poultry litter helps effectively utilize waste from the nearly 10 billion broiler birds across the United States. Nine field-scale watersheds from the USDA ARS Grassland, Soil and Water Research Laboratory near Riesel, TX were evaluated for P inputs and P outputs to determine phosphorus budgets for 15 years of annual poultry litter application ranging from 75–219 kg P ha⁻¹ yr⁻¹ on cultivated and pasture/grazed fields. The cumulative net P continued to increase regardless of the application rate and had a positive relationship with soil P levels (Mehlich-3 P) and the flow-weighted mean concentration (FWMC) of dissolved reactive P for both cultivated and pasture-managed fields. We assessed hydrological connectivity within two nested watersheds using the before-after-control-impact (BACI) design. Results showed hydrological connectivity during high-rainfall years, whereas low-rainfall years had minimal connectivity compared to the controls. These results suggest the P contributions from upstream fields receiving poultry litter, even at high application rates, did not exhibit a treatment effect during the low-rainfall years at downslope monitoring stations.

As nutrient source variability increases in nutrient management plans, improving our ability to differentiate P sources and their fate in soils is critical. We evaluated soils with unique P inputs (inorganic P, poultry litter, and cattle grazing) for isotopic signatures by forming silver phosphate and determining δ¹⁸Oₚ. Isotopic signatures of the oxygen atoms strongly bound to P provided signatures of 17.09‰, 18.00‰, and 17.20‰ for fields receiving commercial fertilizer, poultry manure, and cattle grazing, respectively. Significant effort was made to determine the critical steps in the method to successfully precipitate Ag₃PO₄ for analysis. Results show that adding a cation removal step, as well as monitoring and adjusting pH throughout the method, increases the probability of successful Ag₃PO₄ precipitation. Findings from this study provide a valuable framework for future analyses to confirm unique δ¹⁸Oₚ signatures which can be used to differentiate the fate of different phosphorus sources in agricultural systems.
638

Adaptive Video Streaming : Adapting video quality to radio links with different characteristics

Eklöf, William January 2008
During the last decade, the data rates provided by mobile networks have improved to the point that it is now feasible to provide richer services, such as streaming multimedia, to mobile users. However, due to factors such as radio interference and cell load, the throughput available to a client varies over time. If the throughput available to a client decreases below the media’s bit rate, the client’s buffer will eventually become empty. This causes the client to enter a period of rebuffering, which degrades user experience. In order to avoid this, a streaming server may provide the media at different bit rates, thereby allowing the media’s bit rate (and quality) to be modified to fit the client’s bandwidth. This is referred to as adaptive streaming. The aim of this thesis is to devise an algorithm to find the media quality most suitable for a specific client, focusing on how to detect that the user is able to receive content at a higher rate. The goal for such an algorithm is to avoid depleting the client buffer, while utilizing as much of the bandwidth available as possible without overflowing the buffers in the network. In particular, this thesis looks into the difficult problem of how to do adaptation for live content and how to switch to a content version with higher bitrate and quality in an optimal way. This thesis examines if existing adaptation mechanisms can be improved by considering the characteristics of different mobile networks. In order to achieve this, a study of mobile networks currently in use has been conducted, as well as experiments with streaming over live networks. The experiments and study indicate that the increased available throughput can not be detected by passive monitoring of client feedback. Furthermore, a higher data rate carrier will not be allocated to a client in 3G networks, unless the client is sufficiently utilizing the current carrier. 
This means that a streaming server must modify its sending rate in order to find its maximum throughput and to force allocation of a higher data rate carrier. Different methods for achieving this are examined and discussed, and an algorithm based upon these ideas was implemented and evaluated. It is shown that increasing the transmission rate by introducing stuffed packets in the media stream allows the server to find the optimal bit rate for live video streams without switching up to a bit rate which the network cannot support. This thesis was carried out during the summer and autumn of 2008 at Ericsson Research, Multimedia Technologies in Kista, Sweden. / Under det senaste decenniet har överföringshastigheterna i mobilnätet ökat så pass mycket att det nu är möjligt att erbjuda användarna mer avancerade tjänster, som till exempel strömmande multimedia. I mobilnäten varierar dock klientens bandbredd med avseende på tiden på grund av faktorer som störningar på radiolänken och lasten i cellen. Om en klients överföringshastighet sjunker till mindre än mediets bithastighet, kommer klientens buffert till slut att bli tom. Detta leder till att klienten inleder en period av ombuffring, vilket försämrar användarupplevelsen. För att undvika detta kan en strömmande server erbjuda mediet i flera olika bithastigheter, vilket gör det möjligt för servern att anpassa bithastigheten (och därmed kvalitén) till klientens bandbredd. Denna metod kallas för adaptiv strömning. Syftet för detta examensarbete är att utveckla en algoritm som hittar den bithastighet som är bäst lämpad för en specifik användare, med fokus på att upptäcka att en klient kan ta emot media av högre kvalité. Målet för en sådan algoritm är att undvika att klientens buffert blir tom och samtidigt utnyttja så mycket av bandbredden som möjligt utan att fylla nätverksbuffertarna. Mer specifikt undersöker denna rapport det svåra problemet med hur adaptering för direktsänd media kan utföras. 
Examensarbetet undersöker om existerande adapteringsmekanismer kan förbättras genom att beakta de olika radioteknologiernas egenskaper. I detta arbete ingår både en studie av radioteknologier som för tillfället används kommersiellt, samt experiment med strömmande media över dessa. Resultaten från studien och experimenten tyder på att ökad bandbredd inte kan upptäckas genom att passivt övervaka ”feedback” från klienten. Vidare kommer inte användaren att allokeras en radiobärare med högre överföringshastighet i 3G-nätverk, om inte den nuvarande bäraren utnyttjas maximalt. Detta innebär att en strömmande server måste variera sin sändningshastighet både för att upptäcka om mer bandbredd är tillgänglig och för att framtvinga allokering av en bärare med högre hastighet. Olika metoder för att utföra detta undersöks och diskuteras och en algoritm baserad på dessa idéer utvecklas. Detta examensarbete utfördes under sommaren och hösten 2008 vid Ericsson Research, Multimedia Technologies i Kista, Sverige.
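The rate-selection idea in the abstract above — switch up only once probing (e.g. via stuffed packets) shows the link sustains a higher rate, with a safety margin against buffer underrun — can be sketched as follows. The bitrate ladder and margin are assumed values, not from the thesis.

```python
# Available encodings of the media, in kbit/s (illustrative ladder).
LADDER_KBPS = [64, 128, 256, 384]

def choose_bitrate(probed_kbps, margin=0.8):
    """Highest encoding the probed throughput sustains with headroom.

    `probed_kbps` is the throughput observed while the server temporarily
    raises its sending rate (e.g. with stuffing packets); `margin` keeps a
    safety gap so short throughput dips do not drain the client buffer.
    Falls back to the lowest rate when nothing fits.
    """
    sustainable = [r for r in LADDER_KBPS if r <= margin * probed_kbps]
    return max(sustainable) if sustainable else LADDER_KBPS[0]

next_rate = choose_bitrate(200)   # 0.8 * 200 = 160 -> best rate <= 160 is 128
```

The probing step matters in 3G-style networks: as the abstract notes, a higher data rate carrier is only allocated when the current one is being fully utilized, so the server cannot rely on passive feedback alone.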
639

Machine Vision Inspection of the Lapping Process in the Production of Mass Impregnated High Voltage Cables

Nilsson, Jim, Valtersson, Peter January 2018
Background. Mass impregnated high voltage cables are used in, for example, submarine electric power transmission. One of the production steps of such cables is the lapping process, in which several hundred layers of special-purpose paper are wrapped around the conductor of the cable. It is important for the mechanical and electrical properties of the finished cable that the paper is applied correctly; however, there currently exists no reliable way of continuously ensuring this. Objective. The objective of this thesis is to develop a prototype of a cost-effective machine vision system which monitors the lapping process and detects and records any errors that may occur during the process, with an accuracy of at least one tenth of a millimetre. Methods. The requirements of the system are specified and suitable hardware is identified. Using a method where the images are projected down to one axis, together with other signal processing methods, the errors are measured. Experiments are performed where the accuracy and performance of the system are tested in a controlled environment. Results. The results show that the system is able to detect and measure errors accurately down to one tenth of a millimetre while operating at a frame rate of 40 frames per second. The hardware cost of the system is less than €200. Conclusions. A cost-effective machine vision system capable of performing measurements accurate down to one tenth of a millimetre can be implemented using the inexpensive Raspberry Pi 3 and Raspberry Pi Camera Module V2.
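The projection method mentioned under Methods — collapsing each frame onto one axis before locating edges — can be sketched with NumPy on a synthetic frame. The tape position, threshold, and mm-per-pixel scale below are assumptions for illustration, not the thesis's calibration.

```python
import numpy as np

# Synthetic grayscale frame: dark paper tape on a bright background.
frame = np.full((120, 200), 220, dtype=np.uint8)
frame[:, 80:130] = 40          # tape covers columns 80..129

# Project the image down to one axis by averaging each column, then
# threshold the resulting 1-D profile to locate the tape edges.
profile = frame.mean(axis=0)   # one value per column
dark = profile < 128           # True where the tape is
cols = np.flatnonzero(dark)
left_edge, right_edge = int(cols[0]), int(cols[-1])

# With a calibrated scale (0.05 mm/pixel here, an assumed value),
# pixel positions convert directly to millimetres.
MM_PER_PIXEL = 0.05
width_mm = (right_edge - left_edge + 1) * MM_PER_PIXEL
```

Reducing each frame to a 1-D profile is what makes sub-millimetre measurement feasible at 40 fps on hardware as modest as a Raspberry Pi: the per-frame work drops from a 2-D image to a single vector.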
640

Urban Walls

Bhawsar, Priya 10 July 2013
"Edge. a. The line of intersection of two surfaces. b. A rim or brink. c. The point at which something is likely to begin. d. The area or part away from the middle; an extremity. e. A dividing line; a border." Edges are linear elements that create boundaries between two entities and linear breaks in continuity: shores, railroad cuts, walls. They act as lateral references rather that coordinate axes. "Those edges seem strongest which are not only visually prominent but also continuous in form and impenetrable to cross movement. An edge may be more than simply a dominant barrier if some visual or motion penetration is allowed through it then it becomes a seam rather than a barrier, a line of exchange along which two areas are sewn together." In our built environment an edge is defined and made permanent by the presence of a wall just as a line defines an edge on paper. Walls are the physical as well as the metaphorical representation of an edge. This thesis will examine the edge at the urban-suburban threshold of a city and private-public threshold of a neighborhood. / Master of Architecture
