101 |
Design, development and evaluation of the ruggedized edge computing node (RECON). Patel, Sahil Girin, 09 December 2022 (has links)
Improvements in the quality and quantity of sensors provide an ever-increasing capability to collect large volumes of high-quality data in the field. Research devoted to translating that data is progressing rapidly; however, turning field data into usable information can require high performance computing (HPC) capabilities. While HPC resources are available in centralized facilities, bandwidth, latency, security, and other limitations inherent to edge locations in field sensor applications may prevent those resources from being used in the timely fashion necessary for potential United States Army Corps of Engineers (USACE) field applications. To address these limitations, the design requirements for RECON are established and derived from a review of edge computing, in order to develop and evaluate a novel high-power, field-deployable HPC platform capable of operating in austere environments at the edge.
|
102 |
Belief Rule-Based Workload Orchestration in Multi-access Edge Computing. Jamil, Mohammad Newaj, January 2022 (has links)
Multi-access Edge Computing (MEC) is a standard edge computing network architecture proposed to handle the enormous computation demands of emerging resource-intensive, latency-sensitive applications and services and to meet the Quality of Service (QoS) requirements of an ever-growing user base through computation offloading. Since end-user demand is unknown in a rapidly changing, dynamic environment, processing offloaded tasks on a non-optimal server can deteriorate QoS through high latency and increased task failures. To deal with this challenge in MEC, a two-stage Belief Rule-Based (BRB) workload orchestrator is proposed to distribute end-user workload to the optimal computing units, support strict QoS requirements, ensure efficient utilization of computational resources, minimize task failures, and reduce overall service time. The proposed BRB workload orchestrator decides the optimal execution location for each task offloaded from User Equipment (UE) within the overall MEC architecture, based on network conditions, computational resources, and task requirements. The EdgeCloudSim simulator is used to conduct comprehensive experiments that evaluate the proposed BRB orchestrator against four workload orchestration approaches from the literature, across different types of applications. In these experiments the proposed orchestrator outperforms the state-of-the-art approaches and ensures efficient utilization of computational resources while minimizing task failures and reducing overall service time.
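As a rough illustration of the kind of two-stage decision such an orchestrator makes, a sketch in Python follows; the rules, thresholds, and inputs are hypothetical and do not reproduce the thesis's belief-rule inference.

```python
# Illustrative two-stage orchestration sketch (hypothetical rules and thresholds,
# not the actual BRB inference): stage 1 decides edge vs. cloud, stage 2 picks a server.
def orchestrate(task, edge_servers, wan_delay_ms):
    # Stage 1: latency-sensitive tasks, large uploads, or a slow WAN favour the edge
    prefer_edge = (task["max_latency_ms"] < 100
                   or task["input_kb"] > 500
                   or wan_delay_ms > 80)

    candidates = [s for s in edge_servers if s["utilization"] < 0.9] if prefer_edge else []
    if not candidates:
        return "cloud"                              # fall back to the remote cloud
    # Stage 2: the least-utilized edge server wins
    return min(candidates, key=lambda s: s["utilization"])["name"]

servers = [{"name": "edge-1", "utilization": 0.72},
           {"name": "edge-2", "utilization": 0.35}]
print(orchestrate({"max_latency_ms": 50, "input_kb": 40}, servers, wan_delay_ms=120))  # edge-2
```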
|
103 |
A neuromorphic approach for edge user allocation. Petersson Steenari, Kim, January 2022 (has links)
This paper introduces a new way of solving the edge user allocation problem using a network of spiking neurons. The network should quickly, and at low energy cost, solve the optimization problem of allocating users to servers while minimizing the number of servers hired and thus the associated hiring cost. The demonstrated method is a simulation of an approach that could be implemented on neuromorphic hardware. It is written in Python using the Brian2 spiking neural network simulator. The core of the method is to realize an energy function through circuit motifs whose dynamics mimic a search for the lowest point in an energy landscape, corresponding to a valid solution of the edge user allocation problem. The paper also reports the results of testing this network within the Brian2 environment.
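A minimal Brian2 sketch of the general flavour of such networks, using a lateral-inhibition (winner-take-all) motif so that the most suitable of three candidate servers "wins" for a single user; the neuron model, drive values, and inhibition strength are illustrative assumptions, not the thesis's energy-function circuits.

```python
from brian2 import NeuronGroup, Synapses, SpikeMonitor, run, ms

# Leaky integrate-and-fire units, one per candidate server (hypothetical setup).
eqs = '''
dv/dt = (I - v) / (10*ms) : 1
I : 1
'''
servers = NeuronGroup(3, eqs, threshold='v > 1', reset='v = 0', method='euler')
servers.I = [1.4, 1.2, 0.9]   # drive ~ how well each server fits the user (made up)

# All-to-all lateral inhibition: whichever server spikes suppresses the others.
inhibition = Synapses(servers, servers, on_pre='v_post -= 0.5')
inhibition.connect(condition='i != j')

spikes = SpikeMonitor(servers)
run(200*ms)
print(spikes.count[:])  # the most strongly driven server should spike most often
```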
|
104 |
Federated Learning for edge computing: Real-Time Object Detection. Memia, Ardit, January 2023 (has links)
In domains where data is sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. Federated Learning (FL) has recently emerged as a promising solution to collaborative machine learning challenges while maintaining data privacy. With FL, multiple entities, whether cross-device or cross-silo, can jointly train models without compromising the locality or privacy of their data. Instead of moving data to a central storage system or cloud for model training, code is moved to the data owners' local sites, and incremental local updates are combined into a global model. In this way FL enhances data privacy and, to a certain extent, reduces the probability of eavesdropping. In this thesis we apply Federated Learning to a Real-Time Object Detection (RTOB) model in order to investigate its performance and privacy awareness compared with a traditional centralized ML environment. Several object detection models were built using the YOLO framework and trained on a custom dataset for indoor object detection. Local tests were performed and the best model was chosen by evaluating training and testing metrics; an NVIDIA Jetson Nano external device was then used to train the model and integrate it into a Federated Learning environment using an open-source FL framework. Experiments were conducted along the way to choose the most suitable YOLO model (YOLOv8) and the FL framework best fitted to our study (FEDn). We observed a gradual enhancement in balancing the APC factors (Accuracy-Privacy-Communication) as we transitioned from basic local models to the YOLOv8 implementation within the FEDn system, both locally and on the SSC Cloud production environment. Although we encountered technical challenges deploying the YOLOv8-FEDn system on the SSC Cloud, preventing it from reaching a finalized state, our preliminary findings indicate its potential as a robust foundation for FL applications in RTOB models at the edge.
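For intuition, a minimal federated-averaging round in plain Python/NumPy; this is the generic FedAvg idea, not FEDn's API, and the layer shapes and client sizes are made up.

```python
import numpy as np

# Minimal FedAvg sketch (illustrative only): each client trains locally and only
# model weights travel to the aggregator, never the raw images.
def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter lists, one array per layer."""
    total = float(sum(client_sizes))
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy round: two clients, one weight matrix and one bias vector each.
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [np.zeros((2, 2)), np.ones(2)]
global_model = federated_average([client_a, client_b], client_sizes=[300, 100])
print(global_model[0])   # 0.75s: client A holds three times more data
print(global_model[1])   # 0.25s
```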
|
105 |
Heterogeneous IoT Network Architecture Design for Age of Information Minimization. Xia, Xiaohao, 01 February 2023 (has links) (PDF)
Timely data collection and execution in heterogeneous Internet of Things (IoT) networks, in which different protocols and spectrum bands such as WiFi, RFID, Zigbee, and LoRa coexist, requires further investigation. This thesis studies the problem of age-of-information (AoI) minimization in heterogeneous IoT networks consisting of heterogeneous IoT devices, an intermediate layer of multi-protocol mobile gateways (M-MGs) that collect and relay data from IoT objects and perform computing tasks, and heterogeneous access points (APs). A federated matching framework is presented to model the collaboration between different service providers (SPs) in deploying and sharing M-MGs and to minimize the average weighted sum of the age-of-information and energy consumption. Further, we develop a two-level multi-protocol multi-agent actor-critic (MP-MAAC) method to solve the optimization problem, where M-MGs and SPs learn collaborative strategies from their own observations. The M-MGs' strategies include selecting IoT objects for data collection, execution, relaying, and/or offloading to SPs' access points, while SPs decide on spectrum allocation. Finally, to improve the convergence of the learning process, we incorporate federated learning into the multi-agent collaborative framework. The numerical results show that our Fed-Match algorithm reduces the AoI by a factor of four, collects twice as many packets as existing approaches, reduces the penalty by a factor of five when relaying is enabled, and establishes design principles for the stability of the training process.
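As a reminder of the metric being minimized, a small sketch that computes the time-average Age of Information of a single source from (generation time, delivery time) pairs; it assumes a fresh update at time zero and uses illustrative numbers.

```python
# Illustrative AoI computation (not from the thesis). AoI at time t is t minus
# the generation time of the freshest update delivered so far.
def average_aoi(updates, horizon):
    """updates: list of (generated, delivered) tuples, sorted by delivery time."""
    age_area, t_prev, freshest = 0.0, 0.0, 0.0   # assume a fresh update at t = 0
    for generated, delivered in updates:
        # age grows linearly between deliveries: integrate the trapezoid
        age_area += (delivered - t_prev) * ((t_prev - freshest) + (delivered - freshest)) / 2
        t_prev, freshest = delivered, generated
    age_area += (horizon - t_prev) * ((t_prev - freshest) + (horizon - freshest)) / 2
    return age_area / horizon

print(average_aoi([(0.0, 1.0), (2.0, 2.5), (4.0, 6.0)], horizon=8.0))  # 2.125
```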
|
106 |
Edge Compute Offloading Strategies using Heuristic and Reinforcement Learning Techniques. Dikonimaki, Chrysoula, January 2023 (has links)
The emergence of 5G, alongside the distributed computing paradigm called edge computing, has prompted a tremendous change in the industry by offering reduced network latency and energy consumption as well as scalability. Edge computing extends the capabilities of users' resource-constrained devices by placing data centers at the edge of the network. Computation offloading enables edge computing by allowing users' tasks to migrate to edge servers. Deciding whether it is beneficial for a mobile device to offload a task, and to which server, while environmental variables such as availability, load, and network quality change dynamically, is a challenging problem that requires careful consideration to achieve good performance. This project focuses on proposing lightweight and efficient algorithms that make offloading decisions from the mobile device's perspective, to the benefit of the user. Heuristic techniques are first examined as a way to find quick but sub-optimal solutions. These techniques are then combined with a Multi-Armed Bandit algorithm, Discounted Upper Confidence Bound (DUCB), to make good decisions quickly. The findings indicate that the heuristic approaches alone cannot handle the dynamicity of the problem, whereas DUCB can adapt to changing circumstances without having to keep adding extra parameters. Overall, the DUCB algorithm performs better in terms of local energy consumption and can improve service time most of the time.
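A compact sketch of Discounted UCB applied to the offloading choice (arm 0 could be local execution, the remaining arms edge servers); the discount factor, exploration constant, and reward model are assumptions for illustration, not the thesis's exact formulation.

```python
import math, random

# Illustrative Discounted UCB (DUCB) sketch for choosing where to run a task.
class DiscountedUCB:
    def __init__(self, n_arms, gamma=0.95, c=1.0):
        self.gamma, self.c = gamma, c
        self.counts = [0.0] * n_arms      # discounted pull counts N_i
        self.rewards = [0.0] * n_arms     # discounted reward sums S_i

    def select(self):
        for i, n in enumerate(self.counts):
            if n == 0:
                return i                  # play each arm once before exploiting
        total = sum(self.counts)
        ucb = [self.rewards[i] / self.counts[i]
               + self.c * math.sqrt(math.log(total) / self.counts[i])
               for i in range(len(self.counts))]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def update(self, arm, reward):
        # discount all past statistics, then credit the chosen arm
        self.counts = [self.gamma * n for n in self.counts]
        self.rewards = [self.gamma * s for s in self.rewards]
        self.counts[arm] += 1.0
        self.rewards[arm] += reward

# Toy usage: reward could be the negative normalized service time of the choice.
bandit = DiscountedUCB(n_arms=3)
for _ in range(100):
    arm = bandit.select()
    bandit.update(arm, reward=random.random())
```

The discounting is what lets the policy "forget" stale observations when server load or network quality drifts, which is exactly the dynamicity the heuristics alone could not track.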
|
107 |
An Edge-Based Blockchain-Enabled Framework for Preventing Insider Attacks in Internet of Things (IoT). Tukur, Yusuf M., January 2021 (has links)
The IoT offers enormous potential thanks to its widespread adoption by many industries, individuals, and governments, leading to explosive growth and remarkable breakthroughs that have made it a technology with seemingly boundless applications. However, the far-reaching applications of IoT, together with its characteristic heterogeneity and ubiquity, come at the price of more security vulnerabilities, making deployed IoT systems increasingly susceptible to, and prime targets of, many different physical and cyber-attacks, including insider attacks, thereby increasing the overall security risk to these systems.
This research, which focuses on addressing insider attacks on IoT, studies the likelihood that malicious insiders' activities compromise parts of the security triad of Confidentiality, Integrity and Availability (CIA) in a supposedly secure IoT system with security mechanisms in place. To establish the vulnerability of IoT systems to the insider attack investigated in our research, we first produced a research output that emphasized the need for multi-layer security of the overall system and proposed implementing security mechanisms on components at all layers of the IoT system to safeguard it and ensure its CIA. However, as our experimental investigation of a working IoT system prototype found, those conventional measures do not safeguard against insider attacks.
The outcome of the investigation therefore motivates our proposed solution, which integrates distributed edge computing with the decentralized Ethereum blockchain to provide countermeasures that preserve the integrity of IoT system data and improve the effectiveness of the system. We employ Ethereum smart contracts to perform logical integrity checks on the system data and to make risk management decisions. We consider the downstream petroleum sector as the industry use case for our solution. The solution was evaluated using datasets from different experimental settings and achieved an accuracy of up to 86%. / Government of the Federal Republic of Nigeria through the Petroleum Technology Development Fund (PTDF) Overseas Scholarship Scheme (OSS)
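The integrity property that the smart contracts provide can be illustrated, in much simplified form, with a plain hash chain over sensor readings; this sketch is illustrative only and is not the framework's Ethereum code.

```python
import hashlib, json

# Illustrative hash chain over sensor readings: later tampering with any stored
# reading breaks the chain, which is the kind of integrity guarantee the
# blockchain provides on-chain in the proposed framework.
def chain_readings(readings):
    prev_hash, chained = "0" * 64, []
    for reading in readings:
        record = {"reading": reading, "prev": prev_hash}
        prev_hash = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        chained.append({**record, "hash": prev_hash})
    return chained

def verify(chained):
    prev_hash = "0" * 64
    for block in chained:
        record = {"reading": block["reading"], "prev": block["prev"]}
        recomputed = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev_hash or recomputed != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

log = chain_readings([{"temp": 21.5}, {"temp": 21.7}, {"temp": 35.0}])
log[2]["reading"]["temp"] = 22.0        # an insider silently edits a stored value
print(verify(log))                      # False: the tampering is detected
```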
|
108 |
Edge-based blockchain enabled anomaly detection for insider attack prevention in Internet of Things. Tukur, Yusuf M., Thakker, Dhaval, Awan, Irfan U., 31 March 2022
Internet of Things (IoT) platforms are responsible for the overall data processing in an IoT system, ranging from analytics and big data processing to gathering all sensor data over time to analyze it and produce long-term trends. However, this comes with a prohibitively high demand for resources such as memory, computing power, and bandwidth, which the highly resource-constrained IoT devices lack in order to send data to the platforms and achieve efficient operation. The result is poor availability and a risk of data loss due to a single point of failure should the cloud platform suffer an attack. The integrity of the data can also be compromised by an insider, such as a malicious system administrator, without leaving traces of their actions. To address these issues, we propose an edge-based, blockchain-enabled anomaly detection technique to prevent insider attacks in IoT. The technique first employs edge computing to reduce latency and bandwidth requirements by moving processing closer to the IoT nodes, thereby improving availability and avoiding a single point of failure. It then applies sequence-based anomaly detection while integrating the distributed edge with a blockchain whose smart contracts detect and correct abnormalities in incoming sensor data. Evaluation of our technique on real IoT system datasets showed that it achieves the intended purpose while ensuring the integrity and availability of the data, which is critical to IoT success. / Petroleum Technology Development Fund (PTDF) Nigeria, Grant/Award Number: PTDF/ED/PHD/TYM/858/16
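A toy sketch of sequence-based detection and correction on incoming sensor readings, in the spirit of the technique described above; the window size, threshold, and median-substitution rule are illustrative assumptions, not the paper's exact method.

```python
from collections import deque

# Flag a reading as anomalous when it deviates too far from the recent window,
# and "correct" it by substituting the window median (hypothetical parameters).
def detect_and_correct(stream, window=5, threshold=3.0):
    recent, output = deque(maxlen=window), []
    for value in stream:
        if len(recent) == window:
            ordered = sorted(recent)
            median = ordered[window // 2]
            spread = max(ordered[-1] - ordered[0], 1e-6)
            if abs(value - median) > threshold * spread:
                output.append((value, median, True))   # anomaly: corrected value kept
                recent.append(median)
                continue
        output.append((value, value, False))
        recent.append(value)
    return output

readings = [21.1, 21.3, 21.2, 21.4, 21.3, 98.7, 21.5, 21.4]
for raw, kept, anomalous in detect_and_correct(readings):
    print(raw, "->", kept, "(anomaly)" if anomalous else "")
```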
|
109 |
Design and implementation of simulation tools, protocols and architectures to support service platforms on vehicular networks. Báguena Albaladejo, Miguel, 18 July 2017 (has links)
Thesis by compendium / Products related to Intelligent Transportation Systems (ITS) are becoming a reality on our roads. All car manufacturers are starting to include Internet access in their vehicles and to integrate smartphones directly into the dashboard, and more and more services will be introduced in the near future. Connectivity through "vehicular networks" will become a cornerstone of every new proposal, and offering an adequate quality of service is obviously desirable. However, a lot of work is needed for vehicular networks to offer performance similar to that of wired networks.

Vehicular networks are characterized by two main features: high variability, due to mobility levels that can reach up to 250 kilometers per hour, and heterogeneity, since various competing versions from different vendors have been, and will be, released. Therefore, to make the deployment of efficient services possible, an extensive study must be carried out and adequate tools must be proposed and developed. This PhD thesis addresses the service deployment problem in these networks at three different levels: (i) the physical and link layer, presenting an exhaustive analysis of the physical channel and derived models; (ii) the network layer, proposing a forwarding protocol for IP packets; and (iii) the transport layer, where protocols are proposed to improve data delivery.

First, the two main wireless technologies used in vehicular networks were studied and modeled: the 802.11 family of standards, particularly 802.11p, and cellular networks, focusing on LTE. Since 802.11p is a fairly mature standard, we defined (i) a propagation and attenuation model capable of replicating the transmission range and fading behavior of real 802.11p devices, both in line-of-sight conditions and when obstructed by small obstacles, and (ii) a visibility model able to handle large obstacles, such as buildings and houses, in a realistic manner. Additionally, we proposed a model based on high-level performance indicators (bandwidth and delay) for LTE, which makes application validation and evaluation easier.

At the network layer, a hybrid protocol called AVE is proposed for packet forwarding by switching among a set of standard routing strategies. Depending on the specific scenario, AVE selects one of four routing solutions: a) two-hop direct delivery, b) Dynamic MANET On-demand (DYMO), c) greedy georouting, or d) a store-carry-and-forward technique, dynamically adapting its behavior to the situation at hand.
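A simplified sketch of the kind of strategy switch AVE performs; the decision inputs and thresholds below are hypothetical, and the real protocol logic is considerably richer.

```python
# Illustrative dispatcher in the spirit of AVE's hybrid forwarding: pick one of
# the four routing strategies named above from the local view of the neighbourhood.
def choose_strategy(neighbours, dest_known, dest_reachable, dest_position=None):
    if dest_reachable and len(neighbours) <= 2:
        return "two-hop direct delivery"          # destination is basically next door
    if dest_known and dest_reachable:
        return "DYMO"                             # a topology route can be built on demand
    if dest_position is not None and neighbours:
        return "greedy georouting"                # forward to the neighbour closest to dest
    return "store-carry-and-forward"              # no useful neighbour: carry and wait

print(choose_strategy(neighbours=["car12"], dest_known=False,
                      dest_reachable=False, dest_position=(39.48, -0.34)))
```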
At the transport layer, we proposed a content delivery protocol for reliable, bidirectional unicast communication over lossy links that improves content delivery in situations where the wireless network is the bottleneck. It has been designed, validated, and optimized, and its performance has been analyzed in terms of throughput and resource efficiency.

Finally, at the system level, we propose an edge-assisted computing model that reduces the response latency of several queries by placing a computing unit at the network edge, so that traffic does not traverse the Internet when this is not needed. This scheme could be used in both 802.11p and cellular networks; in this thesis we focus on its evaluation using LTE networks.

The platform presented in this thesis combines all the individual efforts into a single efficient platform. This new environment could be used by any provider to improve the quality of the user experience obtainable through the proposed vehicular network-based services.
/ Báguena Albaladejo, M. (2017). Design and implementation of simulation tools, protocols and architectures to support service platforms on vehicular networks [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/85333
|
110 |
Building A More Efficient Mobile Vision System Through Adaptive Video Analytics. Junpeng Guo (20349582), 17 December 2024
<p dir="ltr">Mobile vision is becoming the norm, transforming our daily lives. It powers numerous
applications, enabling seamless interactions between the digital and physical worlds, such as
augmented reality, real-time object detection, and many others. The popularity of mobile vision
has spurred advancements from both computer vision (CV) and mobile edge computing
(MEC) communities. The former focuses on improving analytics accuracy through the use
of proper deep neural networks (DNNs), while the latter addresses the resource limitations
of mobile environments by coordinating tasks between mobile and edge devices, determining
which data to transmit and process to enable real-time performance. </p><p dir="ltr">
Despite recent advancements, existing approaches typically integrate the functionalities
of the two camps at a basic task level. They rely on a uniform on-device processing scheme
that streams the same type of data and uses the same DNN model for identical CV tasks,
regardless of the analytical complexity of the current input, input size, or latency requirements.
This lack of adaptability to dynamic contexts limits their ability to achieve optimal
efficiency in scenarios involving diverse source data, varying computational resources, and
differing application requirements.
</p><p dir="ltr">Our approach seeks to move beyond task-level adaptation by emphasizing customized
optimizations tailored to dynamic use scenarios. This involves three key adaptive strategies:
dynamically compressing source data based on contextual information, selecting the
appropriate computing model (e.g., DNN or sub-DNN) for the vision task, and establishing
a feedback mechanism for context-aware runtime tuning. Additionally, for scenarios involving
movable cameras, the feedback mechanism guides the data capture process to further
enhance performance. These innovations are explored across three use cases categorized by
the capture device: one stationary camera, one moving camera, and cross-camera analytics.
</p><p dir="ltr">My dissertation begins with a stationary camera scenario, where we improve efficiency
by adapting to the use context on both the device and edge sides. On the device side, we
explore a broader compression space and implement adaptive compression based on data
context. Specifically, we leverage changes in confidence scores as feedback to guide on-device
compression, progressively reducing data volume while preserving the accuracy of visual analytics. On the edge side, instead of training a specialized DNN for each deployment
scenario, we adaptively select the best-fit sub-network for the given context. A shallow sub-network
is used to “test the waters”, accelerating the search for a deep sub-network that
maximizes analytical accuracy while meeting latency requirements.</p><p dir="ltr">
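A toy rendering of the confidence-feedback compression loop described above for the stationary-camera case; the quality ladder, tolerance, and the stand-in detector are assumptions, not the dissertation's implementation.

```python
# Keep lowering the encoding quality while the detector's confidence on the
# compressed frame stays close to its confidence at the highest quality.
def pick_quality(frame, detect, qualities=(90, 70, 50, 30), max_drop=0.05):
    baseline = detect(frame, quality=qualities[0])        # confidence at top quality
    chosen = qualities[0]
    for q in qualities[1:]:
        if baseline - detect(frame, quality=q) > max_drop:
            break                                         # confidence fell too far: stop
        chosen = q                                        # still accurate enough, keep shrinking
    return chosen

# `detect` stands in for running the edge DNN on a re-encoded frame and returning
# its top detection confidence; here we fake it so the demo is runnable.
fake_scores = {90: 0.91, 70: 0.90, 50: 0.88, 30: 0.74}
print(pick_quality(frame=None, detect=lambda f, quality: fake_scores[quality]))  # -> 50
```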
Next, we explore scenarios involving a moving camera, such as those mounted on drones. These introduce new challenges, including increased data encoding demands due to camera movement and degraded analytics performance (e.g., tracking) caused by changing perspectives. To address these issues, we leverage drone-specific domain knowledge to optimize compression for object detection by applying global motion compensation and assigning different resolutions at a tile-granularity level based on the far-near effect. Furthermore, we tackle the more complex task of object tracking and following, where the analytics results directly influence the drone's navigation. To enable effective target following with minimal processing overhead, we design an adaptive frame rate tracking mechanism that dynamically adjusts based on changing contexts.
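A minimal sketch of an adaptive frame-rate controller in that spirit; the pixel thresholds and interval bounds are hypothetical and not the dissertation's controller.

```python
# Track less often when the target barely moves between processed frames,
# and ramp back up to every frame when it starts moving fast.
def next_interval(prev_box, new_box, interval, min_i=1, max_i=8):
    (x0, y0), (x1, y1) = prev_box, new_box
    shift = max(abs(x1 - x0), abs(y1 - y0))       # displacement in pixels
    if shift < 5:
        return min(interval * 2, max_i)           # target is slow: skip more frames
    if shift > 30:
        return min_i                              # target is fast: track every frame
    return interval

interval = 1
for prev, new in [((100, 80), (101, 81)), ((101, 81), (102, 82)), ((102, 82), (150, 90))]:
    interval = next_interval(prev, new, interval)
    print("process every", interval, "frame(s)")  # 2, 4, then back to 1
```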
Last but not least, we extend the work to cross-camera analytics, focusing on coordination between one stationary ground-based camera and one moving aerial camera. The primary challenge lies in addressing significant misalignments (e.g., scale, rotation, and lighting variations) between the two perspectives. To overcome these issues, we propose a multi-exit matching mechanism that prioritizes local feature matching while incorporating global features and additional cues, such as color and location, to refine matches as needed. This approach ensures accurate identification of the same target across viewpoints while minimizing computational overhead by dynamically adapting to the complexity of the matching task.
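A toy multi-exit matcher illustrating the early-exit idea; the scores, fusion weights, and thresholds are made up for this sketch.

```python
# Exit early when cheap local-feature matching is already confident; only fall
# back to global features plus colour/location cues when it is not.
def match_identity(local_score, global_score=None, cue_score=None, hi=0.8, lo=0.5):
    if local_score >= hi:
        return True, "exit-1: local features"
    if global_score is not None:
        fused = 0.6 * local_score + 0.4 * global_score
        if fused >= hi:
            return True, "exit-2: + global features"
        if cue_score is not None:
            fused = 0.5 * fused + 0.5 * cue_score
            return fused >= lo, "exit-3: + colour/location cues"
    return False, "no match"

print(match_identity(0.85))                                    # confident early exit
print(match_identity(0.55, global_score=0.6, cue_score=0.7))   # resolved at the last exit
```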
</p><p dir="ltr">While the current work primarily addresses ideal conditions, assuming favorable weather,
optimal lighting, and reliable network performance, it establishes a solid foundation for future
innovations in adaptive video processing under more challenging conditions. Future efforts
will focus on enhancing robustness against adversarial factors, such as sensing data drift
and transmission losses. Additionally, we plan to explore multi-camera coordination and
multimodal data integration, leveraging the growing potential of large language models to
further advance this field.</p>
|