331

Interactive Visual Analysis of Hypergraphs

Chen, Ningrui January 2021 (has links)
Access to and understanding of data play an essential role in the increasingly digital world. Representation and analysis of relations between various data entities, i.e., graph and network structures in the data, is an important problem for various industries. In contrast to simple graphs, whose edges have exactly two endpoints, a hypergraph provides a natural way to represent multi-way interactions with an arbitrary number of endpoints per edge, and it can be a better alternative than a bipartite graph for comparable applications. However, traditional approaches for visually representing hypergraphs are purely static diagrams without support for interaction; they can be difficult to perceive, do not scale well with the number of nodes and edges, and are not adequate for the representation and interactive exploration of the large or dense hypergraph data sets found in real-world applications. The ISOVIS (Information and Software Visualisation) research group at Linnaeus University has previously introduced a novel radial visualization approach for undirected hypergraphs called Onion. The Onion tool focuses on solving the issues of edge clutter, overlaps, and edge crossings. However, several open challenges and suggestions for improvement were identified for that implementation, and there is an opportunity to fill a gap in hypergraph visualization research by building upon the original Onion study. In this thesis project, we implement a new version of the Onion approach based on the principles and challenges established previously. The contributions of this work include evidence regarding the effectiveness and efficiency of a hypergraph comparison technique, the usability of edge bundling in the context of hypergraph exploration tasks, and the scalability of the interactive visualization through an entirely new web-based version of the Onion approach. To obtain these results, the new implementation is applied in two case studies involving real-world data sets and further validated through a user study with several participants. The results of this work can be helpful for researchers in network visualization and for practitioners in need of approaches for representing and exploring data that can be modeled as hypergraphs.
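As an illustration of the hypergraph structure described in this abstract (not code from the thesis or the Onion tool), a minimal Python sketch in which each hyperedge connects an arbitrary set of nodes:

```python
# Minimal illustration of the difference between a simple graph and a hypergraph:
# a hyperedge may connect any number of nodes, not just two.
from typing import Dict, Set, Hashable

class Hypergraph:
    def __init__(self) -> None:
        # Each hyperedge name maps to the set of nodes it connects.
        self.edges: Dict[Hashable, Set[Hashable]] = {}

    def add_edge(self, name: Hashable, nodes: Set[Hashable]) -> None:
        self.edges[name] = set(nodes)

    def incident_edges(self, node: Hashable) -> Set[Hashable]:
        # All hyperedges that contain the given node.
        return {name for name, members in self.edges.items() if node in members}

h = Hypergraph()
h.add_edge("e1", {"a", "b", "c"})   # a 3-way interaction
h.add_edge("e2", {"b", "d"})        # an ordinary pairwise edge is a special case
print(h.incident_edges("b"))        # {'e1', 'e2'}
```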
332

Edge Processing of Image for UAS Sense and Avoidance

Rave, Christopher J. 26 August 2021 (has links)
No description available.
333

Will the Telco survive to an ever changing world? Technical considerations leading to disruptive scenarios

Minerva, Roberto 12 June 2013 (has links)
The telecommunications industry is going through a difficult phase because of profound technological changes, mainly driven by the development of the Internet. These changes have a major impact on the telecommunications industry as a whole and, consequently, on the future deployment of new networks, platforms and services. The evolution of the Internet has a particularly strong impact on telecommunications operators (Telcos). In fact, the telecommunications industry is on the verge of major changes due to many factors, such as the gradual commoditization of connectivity, the dominance of web services companies (Webcos), and the growing importance of software-based solutions and the flexibility they introduce (compared to the static systems of telecom operators). This thesis develops, proposes and compares plausible future scenarios based on solutions and approaches that will be technologically feasible and viable. The identified scenarios cover a wide range of possibilities: 1) Traditional Telco; 2) Telco as Bit Carrier; 3) Telco as Platform Provider; 4) Telco as Service Provider; 5) Telco Disappearance. For each scenario, a viable platform (from the point of view of telecom operators) is described, highlighting the enabled service portfolio and its potential benefits.
334

Measuring the responsiveness of WebAssembly in edge network applications

Scolati, Remo January 2023 (has links)
Edge computing facilitates applications of cyber-physical systems that require low latencies by moving compute and storage resources closer to the end application. Whilst the edge network benefits such systems in terms of responsiveness, it increases the systems' complexity due to edge devices' often heterogeneous and resource-constrained nature. In this work, we evaluate whether WebAssembly can be used as a lightweight and portable abstraction layer for such applications. Through the implementation of an edge network robot control scenario, we benchmark and compare the performance of WebAssembly against its native equivalent. We measure WebAssembly's overhead and assess the impact of different placement options in the network. We further compare the overall application responsiveness against the latency requirements of an industrial application to evaluate its performance. We find that WebAssembly satisfies the portability and performance requirements of the selected industrial use case. Our empirical results show that WebAssembly doubles the execution latency in a localized setting, but does not excessively impact the overall responsiveness of a cyber-physical system.
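The abstract does not include the benchmark code; as a rough, hypothetical sketch of the kind of latency comparison it describes, a simple harness might look like the following. The workload functions are placeholders, and the Wasm backend is stubbed rather than invoking a real runtime.

```python
# Hypothetical latency-comparison harness: run a workload under two backends
# and report median and tail latency. Workloads here are stand-ins only.
import time
import statistics
from typing import Callable, List

def measure(workload: Callable[[], None], runs: int = 1000) -> List[float]:
    """Return per-run execution latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append((time.perf_counter() - start) * 1000.0)
    return samples

def summarize(label: str, samples: List[float]) -> None:
    ordered = sorted(samples)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    print(f"{label}: median={statistics.median(ordered):.3f} ms, p99={p99:.3f} ms")

def native_workload() -> None:   # stand-in for the native implementation
    sum(i * i for i in range(10_000))

def wasm_workload() -> None:     # stand-in for the same code run via a Wasm runtime
    sum(i * i for i in range(10_000))

summarize("native", measure(native_workload))
summarize("wasm", measure(wasm_workload))
```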
335

Dynamic container orchestration for a device-cloud continuum

Alfonso Rodriguez Garzon, Camilo January 2023 (has links)
Edge computing has emerged as a paradigm to support the growing demand for real-time processing of data generated at the edge of the network. As the devices at the edge are resource-constrained, one of the challenges in the area is how to schedule workloads. The scheduling problem is difficult to tackle due to the multitude of sources from which variables originate, the diversity of algorithms and execution methods, and tasks involving information dissemination and action execution. This project aims to explore the problem and implement a system that simplifies the construction of a scheduler for edge computing, reducing the cognitive load on developers working in the area and letting them focus on their area of expertise. To construct the solution, a literature review is conducted, a set of functional and non-functional requirements is proposed, an implementation using a Kubernetes operator and a Python application is carried out, and the solution is evaluated and validated against the requirements as well as a use case and a test case. The results demonstrate that the system generates customized instances capable of receiving any number of inputs, outsources the execution of the scheduling logic, and interacts with different outputs. This allows developers to rapidly deploy instances for their own needs while focusing on their domain of expertise.
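The abstract does not describe the scheduler's internals; purely as an illustrative sketch of the idea of separating the scheduling logic from the plumbing that feeds it (all names and numbers are hypothetical, not the thesis implementation):

```python
# Hypothetical pluggable edge scheduler: the scoring policy is supplied by the
# developer, while gathering node state and applying the decision is handled
# elsewhere, mirroring the goal of reducing developer cognitive load.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Node:
    name: str
    free_cpu_millicores: int
    free_memory_mib: int
    latency_ms_to_source: float

ScoreFn = Callable[[Node], float]

def pick_node(nodes: List[Node], score: ScoreFn) -> Node:
    # The framework selects the highest-scoring node under the given policy.
    return max(nodes, key=score)

# Example policy: prefer low latency, break ties by free CPU.
def latency_first(node: Node) -> float:
    return -node.latency_ms_to_source + node.free_cpu_millicores / 1e6

nodes = [
    Node("edge-a", 500, 1024, 4.0),
    Node("edge-b", 2000, 2048, 12.0),
]
print(pick_node(nodes, latency_first).name)   # edge-a
```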
336

Design, development and evaluation of the ruggedized edge computing node (RECON)

Patel, Sahil Girin 09 December 2022 (has links)
The increased quality and quantity of sensors provide an ever-increasing capability to collect large quantities of high-quality data in the field. Research devoted to translating that data is progressing rapidly; however, translating field data into usable information can require high performance computing (HPC) capabilities. While HPC resources are available in centralized facilities, bandwidth, latency, security and other limitations inherent to edge locations in field sensor applications may prevent HPC resources from being used in the timely fashion necessary for potential United States Army Corps of Engineers (USACE) field applications. To address these limitations, the design requirements for RECON are established and derived from a review of edge computing, in order to develop and evaluate a novel high-power, field-deployable HPC platform capable of operating in austere environments at the edge.
337

Optimization of Data Propagation Algorithm for Conflict-Free Replicated Data Type-based Datastores in Geo-Distributed Edge Environment

Tejankar, Vinayak Prabhakar January 2020 (has links)
Replication primarily provides data availability by maintaining multiple copies over different systems and is exploited to make distributed systems scalable in numbers and geographical areas. Placing a replica closer to the source of a request can also significantly reduce the time required to service the request, improving applications' performance. However, modifications made to a single copy need to be propagated to all the standing copies to maintain the data's consistency. Over the years, numerous strategies have been proposed for handling the tradeoff between consistency and availability, the majority of which provide either strong consistency or eventual consistency. These models do not provide sufficient compatibility for developing modern applications for geo-distributed (edge) environments. Conflict-Free Replicated Data Types (CRDTs) provide a new model of consistency referred to as strong eventual consistency. In principle, CRDTs guarantee conflict-free merges even when updates arrive out of order, using simple mathematical properties. Lasp is a coordination-free distributed programming model for building modern distributed applications using CRDTs. Lasp uses a gossip protocol for disseminating state changes to all replicas in the system. The current implementation of gossip in Lasp is agnostic to the application's behavior in propagating updates efficiently to critical replicas in the system. In this thesis, we introduce an application-specific feature to optimize the dissemination of updates in Lasp. The proposed algorithm propagates updates by catering to the different consistency requirements of the replicas in the system. The experimental results on a topology of 100 replicas show that the update latency at critical replicas with high consistency requirements is reduced by 40–50%, and the total bandwidth consumption in the system is reduced by 4–8%, without significant repercussions for other replicas in the system.
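As an example of the simple mathematical properties that make CRDT merges conflict-free (a textbook grow-only counter, not Lasp's implementation):

```python
# Illustrative grow-only counter (G-Counter) CRDT. Merging takes the
# element-wise maximum, which is commutative, associative, and idempotent,
# so replicas converge regardless of update and merge order.
from typing import Dict

class GCounter:
    def __init__(self, replica_id: str) -> None:
        self.replica_id = replica_id
        self.counts: Dict[str, int] = {}

    def increment(self, amount: int = 1) -> None:
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        for rid, count in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), count)

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)                       # merge order does not matter
assert a.value() == b.value() == 5
```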
338

Retrospective Dosimetric Comparison of MLC Defined Conformal Arc to Stereotactic Cone Plans for Single Fraction SRS on the Varian Edge (TM)

Yates, Justin 19 December 2018 (has links)
No description available.
339

Distributed effects in power transistors and the optimization of the layouts of AlGaN/GaN HFETs

Lee, Sunyoung 08 August 2006 (has links)
No description available.
340

Design and Evaluation of a Microservice Testing Tool for Edge Computing Environments

Tanfener, Ozan January 2020 (has links)
Edge computing can provide decentralized computation and storage resources with low latency and high bandwidth. It is a promising infrastructure for hosting services with stringent latency requirements for customers, such as autonomous driving, cloud gaming, and telesurgery. Because of the structural complexity associated with edge computing applications, research topics like service placement gain great importance. To provide a realistic and efficient general environment for evaluating service placement solutions that can be used to analyze the latency requirements of services at scale, a new testing tool for the mobile edge cloud is designed and implemented in this thesis. The proposed tool is implemented as a cloud-native application and allows deploying applications in an edge computing infrastructure that consists of Kubernetes and Istio; it can easily be scaled up to several hundred microservices, and deployment into the edge clusters is automated. With the help of the designed tool, two different microservice placement algorithms are evaluated in an emulated edge computing environment based on Federated Kubernetes. The results show how the performance of the algorithms varies when the parameters of the environment and of the applications instantiated and deployed by the tool are changed. For example, increasing the request rate by 200% can increase the delay by 100% for different algorithms. Moreover, making the mobile network more complex can improve latency performance by up to 20%, depending on the microservice placement algorithm.
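The abstract does not name the two placement algorithms that were compared; as a hypothetical illustration of the kind of latency-aware placement heuristic such a tool evaluates (all names and numbers invented):

```python
# Hypothetical greedy latency-aware placement: assign each microservice to the
# feasible edge cluster with the lowest user latency. This only illustrates the
# class of algorithm a testing tool like the one above would exercise.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Cluster:
    name: str
    free_cpu: float            # available CPU cores
    user_latency_ms: float     # RTT from the users this cluster serves

@dataclass
class Service:
    name: str
    cpu: float                 # requested CPU cores

def place_greedy(services: List[Service], clusters: List[Cluster]) -> Dict[str, str]:
    placement: Dict[str, str] = {}
    # Place the most demanding services first.
    for svc in sorted(services, key=lambda s: s.cpu, reverse=True):
        feasible = [c for c in clusters if c.free_cpu >= svc.cpu]
        if not feasible:
            raise RuntimeError(f"no capacity left for {svc.name}")
        best = min(feasible, key=lambda c: c.user_latency_ms)
        best.free_cpu -= svc.cpu
        placement[svc.name] = best.name
    return placement

clusters = [Cluster("edge-1", 4.0, 5.0), Cluster("edge-2", 8.0, 15.0)]
services = [Service("frontend", 1.0), Service("detector", 3.5)]
print(place_greedy(services, clusters))   # {'detector': 'edge-1', 'frontend': 'edge-2'}
```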
