31

Training Multi-Agent Collaboration using Deep Reinforcement Learning in Game Environment / Träning av sambarbete mellan flera agenter i spelmiljö med hjälp av djup förstärkningsinlärning

Deng, Jie January 2018 (has links)
Deep Reinforcement Learning (DRL) is a new research area that integrates deep neural networks into reinforcement learning algorithms. It is revolutionizing the field of AI with high performance on traditional challenges such as natural language processing and computer vision. Current deep reinforcement learning algorithms enable end-to-end learning that utilizes deep neural networks to produce effective actions in complex environments from high-dimensional sensory observations, such as raw images. The applications of deep reinforcement learning algorithms are remarkable: for example, a trained agent playing Atari video games performs comparably to, or even better than, a human player. Current studies mostly focus on training a single agent and its interaction with dynamic environments. However, in order to cope with complex real-world scenarios, it is necessary to look into multiple interacting agents and their collaboration on certain tasks. This thesis studies state-of-the-art deep reinforcement learning algorithms and techniques. Through experiments conducted in several 2D and 3D game scenarios, we investigate how DRL models can be adapted to train multiple agents that cooperate with one another, through communication and physical navigation, to achieve their individual goals on complex tasks. / Deep Reinforcement Learning (DRL) is a new research domain that integrates deep neural networks into learning algorithms. It has revolutionized the field of AI and raised high expectations for solving the traditional problems of AI research. In this thesis, a thorough study of state-of-the-art DRL algorithms and techniques is carried out. Through experiments with several 2D and 3D game scenarios, we investigate how agents can cooperate with one another and reach their goals through communication and physical navigation.
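As a rough illustration of the communication-based cooperation studied above, the following sketch (an assumption for illustration, not code from the thesis) shows two agents whose policies consume both their own observation and a message from their teammate; the layer sizes, message length, and discrete action space are invented for the example.

    import torch
    import torch.nn as nn

    class CommAgent(nn.Module):
        """Maps (own observation, teammate message) to action logits and an outgoing message."""
        def __init__(self, obs_dim, msg_dim, n_actions):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(obs_dim + msg_dim, 64), nn.ReLU())
            self.action_head = nn.Linear(64, n_actions)   # logits over discrete actions
            self.message_head = nn.Linear(64, msg_dim)    # message broadcast to the teammate

        def forward(self, obs, incoming_msg):
            h = self.encoder(torch.cat([obs, incoming_msg], dim=-1))
            return self.action_head(h), torch.tanh(self.message_head(h))

    # Two agents acting on toy observations; each receives the other's last message.
    agents = [CommAgent(obs_dim=8, msg_dim=4, n_actions=5) for _ in range(2)]
    msgs = [torch.zeros(4), torch.zeros(4)]
    obs = [torch.randn(8), torch.randn(8)]
    for i, agent in enumerate(agents):
        logits, msgs[i] = agent(obs[i], msgs[1 - i])
        action = torch.argmax(logits).item()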
32

Deep Reinforcement Learning of IoT System Dynamics  for Optimal Orchestration and Boosted Efficiency

Haowei Shi (16636062) 30 August 2023 (has links)
This thesis targets the orchestration challenge of wearable Internet of Things (IoT) systems: finding optimal configurations of the system in terms of energy efficiency, computing, and data transmission activities. We first investigated reinforcement learning on simulated IoT environments to demonstrate its effectiveness, and afterwards studied the algorithm on real-world wearable motion data to show its practical promise. More specifically, the first challenge arises in complex massive-device orchestration: it is essential to configure and manage the massive number of devices and the gateway/server. On the wearable IoT devices, the complexity lies in the diverse energy budgets, computing efficiency, etc.; on the phone or server side, it lies in how the global diversity can be analyzed and how the system configuration can be optimized. We therefore propose a new reinforcement learning architecture, called boosted deep deterministic policy gradient, with enhanced actor-critic co-learning and multi-view state transformation. The proposed actor-critic co-learning allows for enhanced dynamics abstraction through a shared neural network component. Evaluated on a simulated massive-device task, the proposed deep reinforcement learning framework achieves much more efficient system configurations with enhanced computing capabilities and improved energy efficiency. Secondly, we leveraged real-world motion data to demonstrate the potential of using reinforcement learning to optimally configure the motion sensors. We used paradigms in sequential data estimation to obtain estimated data for some sensors, allowing energy savings since these sensors no longer need to be activated to collect data during estimation intervals. We then introduced the Deep Deterministic Policy Gradient algorithm to learn to control the estimation timing. This study provides a real-world demonstration of maximizing the energy efficiency of wearable IoT applications while maintaining data accuracy. Overall, this thesis advances wearable IoT system orchestration for optimal system configurations.
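To make the actor-critic co-learning idea concrete, here is a minimal sketch, not the thesis implementation, of a DDPG-style actor and critic that share one state-encoder module so that gradients from both update the same dynamics abstraction; the layer sizes, state and action dimensions, and batch size are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        def __init__(self, state_dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())

        def forward(self, state):
            return self.net(state)

    class Actor(nn.Module):
        def __init__(self, encoder, hidden, action_dim):
            super().__init__()
            self.encoder = encoder                      # shared with the critic
            self.head = nn.Linear(hidden, action_dim)

        def forward(self, state):
            return torch.tanh(self.head(self.encoder(state)))  # bounded configuration actions

    class Critic(nn.Module):
        def __init__(self, encoder, hidden, action_dim):
            super().__init__()
            self.encoder = encoder                      # same instance -> co-learning
            self.head = nn.Sequential(nn.Linear(hidden + action_dim, hidden),
                                      nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, state, action):
            return self.head(torch.cat([self.encoder(state), action], dim=-1))

    enc = SharedEncoder(state_dim=16)
    actor, critic = Actor(enc, 128, action_dim=4), Critic(enc, 128, action_dim=4)
    s = torch.randn(32, 16)                             # a batch of device states
    q = critic(s, actor(s))                             # Q-values for the actor's actions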
33

MMF-DRL: Multimodal Fusion-Deep Reinforcement Learning Approach with Domain-Specific Features for Classifying Time Series Data

Sharma, Asmita 01 June 2023 (has links) (PDF)
This research focuses on addressing two pertinent problems in machine learning (ML): (a) the supervised classification of time series and (b) the need for large amounts of labeled images for training supervised classifiers. The novel contributions are two-fold. The first problem, time series classification, is addressed by transforming time series into domain-specific 2D features such as scalograms and recurrence plot (RP) images. The second problem, the need for large amounts of labeled image data, is tackled by proposing a new way of using a reinforcement learning (RL) technique as a supervised classifier on multimodal (joint-representation) scalogram and RP images. The motivation for using such domain-specific features is that they provide additional information to the ML models by capturing domain-specific patterns, and they make it possible to take advantage of state-of-the-art image classifiers for learning these patterns from textured images. Thus, this research proposes a multimodal fusion (MMF) - deep reinforcement learning (DRL) approach as an alternative to traditional supervised image classifiers for the classification of time series. The proposed MMF-DRL approach produces improved accuracy over state-of-the-art supervised learning models while needing less training data. Results show the merit of using multiple modalities and RL in achieving improved performance compared to training on a single modality. Moreover, the proposed approach yields accuracies of 90.20% and 89.63% on two physiological time series datasets with less training data, in contrast to the state-of-the-art supervised learning model ChronoNet, which gave 87.62% and 88.02% accuracy on the same datasets with more training data.
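As an illustration of one of the domain-specific 2D features mentioned above, the sketch below computes a recurrence-plot image from a 1D time series; the embedding dimension, delay, and distance threshold are illustrative assumptions rather than the settings used in the thesis.

    import numpy as np

    def recurrence_plot(x, dim=3, delay=1, eps=0.1):
        # Time-delay embedding: each row is a short delayed window of the series.
        n = len(x) - (dim - 1) * delay
        emb = np.stack([x[i:i + n] for i in range(0, dim * delay, delay)], axis=1)
        # Pairwise distances between embedded points; 1 where the trajectory recurs.
        dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        return (dists <= eps).astype(np.uint8)

    t = np.linspace(0, 8 * np.pi, 400)
    rp = recurrence_plot(np.sin(t))      # 2D binary image that an image classifier can consume
    print(rp.shape)                      # (398, 398)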
34

Power Dispatch and Storage Configuration Optimization of an Integrated Energy System using Deep Reinforcement Learning and Hyperparameter Tuning

Katikaneni, Sravya January 2022 (has links)
No description available.
35

Cost-Effective Large-Scale Digital Twins Notification System with Prioritization Consideration

Vrbaski, Mira 19 December 2023 (has links)
A Large-Scale Digital Twins Notification System (LSDTNS) monitors a Digital Twin (DT) cluster for a predefined critical state and, once it detects such a state, sends a Notification Event (NE) to a predefined recipient. Additionally, the time from producing the DT's Complex Event (CE) to sending an alarm has to be less than a predefined deadline. However, addressing scalability and multiple objectives, such as deployment cost, resource utilization, and meeting the deadline, on top of process scheduling, presents a complex challenge. This thesis therefore presents a methodology consisting of three contributions that address system scalability, multi-objectivity, and the scheduling of CE processes using Reinforcement Learning (RL). The first contribution proposes the IoT Notification System Architecture, based on a microservice notification methodology that allows for running, and seamlessly switching between, various CE reasoning algorithms. Our proposed IoT Notification System architecture addresses the scalability issue in state-of-the-art CE recognition systems. The second contribution proposes a novel methodology for multi-objective optimization for cloud provisioning (MOOP). MOOP is the first work dealing with multiple optimization objectives for microservice notification applications, where the notification load is variable and depends on the results of previous microservice subtasks. MOOP provides a multi-objective mathematical cloud resource deployment model and demonstrates its effectiveness through a case study. Finally, the thesis presents SCN-DRL, a Scheduler for large-scale Critical Notification applications based on Deep Reinforcement Learning, a scheduling approach for LSDTNS. SCN-DRL is the first work dealing with multi-objective optimization for critical microservice notification applications using RL. During the performance evaluation, SCN-DRL demonstrates better performance than state-of-the-art heuristics and shows steady performance when the notification workload increases from 10% to 90%. In addition, SCN-DRL, tested with three neural networks, is shown to be resilient to a sudden 10% drop in container resources. Such resilience to resource container failures is an important attribute of a distributed system.
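A rough sketch of the kind of multi-objective signal a scheduler such as the one above could optimize is shown below; the three objectives (deployment cost, resource utilization, and meeting the deadline) come from the abstract, while the weights, scaling, and function shape are purely illustrative assumptions.

    def scheduling_reward(cost, utilization, latency, deadline,
                          w_cost=0.4, w_util=0.3, w_deadline=0.3):
        cost_term = -cost                                     # cheaper deployments preferred
        util_term = utilization                               # higher cluster utilization preferred
        deadline_term = 1.0 if latency <= deadline else -1.0  # hard penalty for a missed deadline
        return w_cost * cost_term + w_util * util_term + w_deadline * deadline_term

    # Example: a placement that meets the deadline with moderate cost and utilization.
    r = scheduling_reward(cost=0.5, utilization=0.7, latency=0.08, deadline=0.1)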
36

The Development of Real-Time Fouling Monitoring and Control Systems for Reverse Osmosis Membrane Cleaning using Deep Reinforcement Learning

Titus Glover, Kyle Ian Kwartei 11 July 2023 (has links)
This dissertation investigates potential applications for Machine Learning (ML) and real-time fouling monitors in Reverse Osmosis (RO) desalination. The main objective was to develop a framework that minimizes the cost of membrane fouling by deploying AI-generated cleaning patterns and real-time fouling monitoring. Membrane manufacturers and researchers typically recommend cleaning (standard operating procedure, SOP) when normalized permeate flow, a performance metric tracking the decline of permeate flow/output from its initial baseline with respect to operating pressure, reaches 0.85-0.90 of baseline values. This study used estimates of production cost, internal profitability metrics, and permeate volume output to evaluate and compare the impact of time selection for cleaning intervention. Cleanings initiated when the normalized permeate flow reached 0.85 served as the control for cleaning intervention times. To decide optimal times for cleaning intervention, a Deep Reinforcement Learning (RL) agent was trained to signal cleaning between 0.85 and 0.90 normalized permeate flow with a cost-based reward system. A laboratory-scale RO flat-membrane desalination system platform was developed as a model plant, and data from the platform were used to train the model and to examine both simulated and actual control of when to trigger membrane cleaning, replacing the control operator's 0.85 cleaning threshold. Compared to SOP, the intelligent operator showed consistent savings in production costs at the expense of total permeate volume output. The simulated operation using RL-initiated cleaning yielded 9% less permeate water but reduced the cost per unit volume ($/m3) by 12.3%. When the RL agent was used to initiate cleaning on the laboratory-scale RO desalination system platform, the system produced 21% less permeate water but reduced production cost ($/m3) by 16.0%. These results are consistent with an RL agent that prioritizes production cost savings over product volume output. / Doctor of Philosophy / The decreasing supply of freshwater sources has made desalination technology an attractive solution. Desalination, the removal of salt from water, provides an opportunity to produce more freshwater by treating saline sources and recycled water. One prominent form of desalination is Reverse Osmosis (RO), an energy-intensive process in which freshwater is forced from a pressurized feed through a semipermeable membrane. A significant limiting cost factor for RO desalination is the maintenance and replacement of semipermeable RO membranes. Over time, unwanted particles accumulate on the membrane surface in a process known as membrane fouling. Significant levels of fouling can drive up costs, negatively affect product quality (permeate water), and decrease the useful lifetime of the membrane. As a result, operators employ various fouling control techniques, such as membrane cleaning, to mitigate its effects on production and minimize damage to the membrane. This dissertation investigates potential applications for Machine Learning (ML) and real-time fouling monitors in Reverse Osmosis (RO) desalination. The main objective was to develop a framework that minimizes the cost of membrane fouling by deploying AI-generated cleaning patterns and real-time fouling monitoring.
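The decision problem described above can be sketched as follows; this is an illustrative assumption of the cost-based reward and the 0.85-0.90 trigger band, not the dissertation's actual model, and the cost constants and agent interface are placeholders.

    CLEAN_COST = 5.0          # placeholder cost of one cleaning cycle (chemicals, downtime)
    PERMEATE_VALUE = 1.0      # placeholder value of permeate produced in one step at full flow

    def step_reward(normalized_flow, clean_now):
        revenue = PERMEATE_VALUE * normalized_flow          # fouling reduces output
        return revenue - (CLEAN_COST if clean_now else 0.0)

    def sop_policy(normalized_flow):
        # Standard operating procedure described above: clean at the 0.85 threshold.
        return normalized_flow <= 0.85

    def learned_policy(normalized_flow, agent):
        # `agent` is a placeholder for a trained policy object with an act() method (assumption);
        # the trained agent signals cleaning somewhere inside the 0.85-0.90 band.
        return agent.act(normalized_flow) if 0.85 <= normalized_flow <= 0.90 else False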
37

Towards Anatomically Plausible Streamline Tractography with Deep Reinforcement Learning / Mot anatomiskt plausibel strömlinje-traktografi med djup förstärkningsinlärning

Bengtsdotter, Erika January 2022 (has links)
Tractography is a tool that is often used to study structural brain connectivity from diffusion magnetic resonance imaging data. Despite its ability to visualize fibers in the brain's white matter, it produces a high number of invalid streamlines. For both research and clinical applications, it is of great interest to reduce this number and so improve the quality of tractography. Over the years, many solutions have been proposed, often requiring ground truth data. As such data for tractography is very difficult to generate even with expertise, it is worthwhile to instead use methods like reinforcement learning, which do not have such a requirement. In 2021 a deep reinforcement learning tractography network was published: Track-To-Learn. There is, however, still room for improvement in the framework's reward function, and this is the focus of this thesis. First, we successfully reproduced some of the published results of Track-To-Learn and observed that almost 20 % of the streamlines were anatomically plausible. We then modified the reward function by giving a reward boost to streamlines that started or terminated within a specified mask. This addition resulted in a small increase in plausible streamlines for a more realistic dataset. Lastly, we attempted to include anatomical filtering in the reward function, but the results were not sufficient to draw any valid conclusions about the influence of this modification. Nonetheless, the work of this thesis showed that including further fiber-specific anatomical constraints in the reward function of Track-To-Learn could improve the quality of the generated tractograms and would be of interest in both research and clinical settings.
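A minimal sketch of the reward modification described above is given below: streamlines whose first or last point falls inside a specified binary mask receive an additional reward. The boost value, the voxel-space assumption, and the toy mask are illustrative, not the thesis settings.

    import numpy as np

    def boosted_reward(base_reward, streamline, mask, boost=0.5):
        # streamline: (N, 3) array of point coordinates already in voxel space (assumption).
        start = np.round(streamline[0]).astype(int)
        end = np.round(streamline[-1]).astype(int)
        in_mask = mask[tuple(start)] or mask[tuple(end)]
        return base_reward + (boost if in_mask else 0.0)

    mask = np.zeros((10, 10, 10), dtype=bool)
    mask[2:4, 2:4, 2:4] = True                       # e.g. a seeding/termination region of interest
    sl = np.array([[2.1, 2.3, 2.0], [3.0, 3.0, 3.0], [7.0, 7.0, 7.0]])
    print(boosted_reward(1.0, sl, mask))             # 1.5: the start point lies inside the mask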
38

Autonomous Mobile Robot Navigation in Dynamic Real-World Environments Without Maps With Zero-Shot Deep Reinforcement Learning

Sivashangaran, Shathushan 04 June 2024 (has links)
Operating Autonomous Mobile Robots (AMRs) of all forms, including wheeled ground vehicles, quadrupeds, and humanoids, in dynamically changing, GPS-denied environments without a priori maps, exclusively using onboard sensors, is an unsolved problem with the potential to transform the economy and vastly improve humanity's capabilities in agriculture, manufacturing, disaster response, military operations, and space exploration. Conventional AMR automation approaches are modularized into perception, motion planning, and control, which is computationally inefficient and requires explicit feature extraction and engineering, inhibiting generalization and deployment at scale. Few works have focused on real-world end-to-end approaches that directly map sensor inputs to control outputs, because supervised Deep Learning (DL) requires large amounts of well-curated training data that are time-consuming and labor-intensive to collect and label, while Deep Reinforcement Learning (DRL) suffers from sample inefficiency and the challenge of bridging the simulation-to-reality gap. This dissertation presents a novel method to efficiently train DRL with significantly fewer samples in a constrained racetrack environment at physical limits in simulation, transferred zero-shot to the real world for robust end-to-end AMR navigation. The representation, learned in a compact parameter space with two fully connected layers of 64 nodes each, is demonstrated to exhibit emergent behavior for Out-of-Distribution (OOD) generalization to navigation in new environments, including unstructured terrain without maps, dynamic obstacle avoidance, and navigation to objects of interest from vision input, encompassing low-light scenarios with the addition of a night-vision camera. The learned policy outperforms conventional navigation algorithms while consuming a fraction of the computation resources, enabling execution on a range of AMR forms with varying embedded computer payloads. / Doctor of Philosophy / Robots with wheels or legs that move around environments improve humanity's capabilities in many applications such as agriculture, manufacturing, and space exploration. Reliable, robust mobile robots have the potential to significantly improve the economy. A key component of mobility is navigation, either to explore the surrounding environment or to travel to a goal position or object of interest while avoiding stationary and dynamic obstacles. This is a complex problem that has no reliable solution, which is one of the main reasons robots are not present everywhere, assisting people in various tasks. Past and current approaches involve first mapping an environment, then planning a collision-free path, and finally executing motor signals to traverse along the path. This has several limitations due to the lack of detailed pre-made maps and the inability to operate in previously unseen, dynamic environments. Furthermore, these modular methods require substantial computation resources, due to the large number of calculations needed at each step, which prevents high real-time speed and precludes use in small robots with limited weight capacity for onboard computers, robots that are beneficial for reconnaissance and exploration tasks. This dissertation presents a novel Artificial Intelligence (AI) method for robot navigation that is more computationally efficient than current approaches, with better performance.
The AI model is trained to race in simulation at multiple times real-time speed for cost-effective, accelerated training, and is then transferred to a physical mobile robot, where it retains its training experience and generalizes to navigation in new environments without maps, with exploratory behavior and dynamic obstacle-avoidance capabilities.
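A minimal sketch of the compact policy described in the abstract above, two fully connected layers of 64 nodes mapping onboard sensor readings to a drive command, is shown below; the input size and the two-dimensional steering/throttle output are illustrative assumptions, not the dissertation's exact configuration.

    import torch
    import torch.nn as nn

    policy = nn.Sequential(
        nn.Linear(36, 64), nn.ReLU(),     # e.g. a downsampled LiDAR scan plus a goal vector (assumption)
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 2), nn.Tanh(),      # normalized steering and throttle commands
    )

    obs = torch.randn(36)                 # one sensor observation
    steer, throttle = policy(obs).tolist()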
39

Information Freshness: How To Achieve It and Its Impact On Low-Latency Autonomous Systems

Choudhury, Biplav 03 June 2022 (has links)
In the context of wireless communications, low-latency autonomous systems continue to grow in importance. Some applications of autonomous systems where low-latency communication is essential are (i) vehicular networks, whose safety performance depends on how recently the vehicles are updated on their neighboring vehicles' locations, (ii) IoT networks, where updates from IoT devices need to be aggregated appropriately at the monitoring station, before the information gets stale, to extract temporal and spatial information from them, and (iii) smart grids, where sensors and controllers need to track the most recent state of the system to tune system parameters dynamically. Each of the above-mentioned applications differs based on the connectivity between the source and the destination. First, vehicular networks involve a broadcast network where each vehicle broadcasts its packets to all the other vehicles. Secondly, in the case of UAV-assisted IoT networks, packets generated at multiple IoT devices are transmitted to a final destination via relays. Finally, for the smart grid, and for distributed systems in general, each source can have varying and unique destinations. Therefore, in terms of connectivity, they can be categorized into one-to-all, all-to-one, and a variable relationship between the number of sources and destinations. Additionally, some of the other major differences between the applications are the impact of mobility, the importance of a reduced AoI, a centralized versus distributed manner of measuring AoI, etc. This wide variety of application requirements makes it challenging to develop scheduling schemes that universally address minimizing the AoI. All these applications involve generating time-stamped status updates at a source, which are then transmitted to their destination over a wireless medium. The timely reception of these updates at the destination decides the operating state of the system, because the fresher the information at the destination, the better its awareness of the system state for making better control decisions. This freshness of information is not the same as maximizing the throughput or minimizing the delay. While throughput can ideally be maximized by sending data as fast as possible, this may saturate the receiver, resulting in queuing, contention, and other delays. On the other hand, these delays can be minimized by sending updates slowly, but this may cause high inter-arrival times. Therefore, a new metric called the Age of Information (AoI) has been proposed to measure the freshness of information and account for the many facets that influence data availability. In simple terms, AoI is measured at the destination as the time elapsed since the generation time of the most recently received update. AoI therefore incorporates both the delay and the inter-packet arrival time, making it a much better metric for measuring end-to-end latency and hence for characterizing the performance of such time-sensitive systems. These basic characteristics of AoI are explained in detail in Chapter 1. Overall, the main contribution of this dissertation is developing scheduling and resource allocation schemes targeted at improving the AoI of various autonomous systems having different types of connectivity, namely vehicular networks, UAV-assisted IoT networks, and smart grids, and then characterizing and quantifying the benefits of a reduced AoI from the application perspective.
In the first contribution, we look into minimizing AoI for broadcast networks with one-to-all connectivity between the source and destination devices, considering the case of vehicular networks. While vehicular networks have been studied in terms of AoI minimization, the impact of mobility and the benefit of a reduced AoI from the application perspective have not been investigated. The mobility of the vehicles is realistically modeled using the Simulation of Urban Mobility (SUMO) software to account for overtaking, lane changes, etc. We propose a safety metric that indicates the collision risk of a vehicle and conduct a simulation-based study on the ns-3 simulator to examine its relation to AoI. We see that the broadcast rate in a Dedicated Short Range Communication (DSRC) network that minimizes the system AoI also yields the least collision risk, signifying that reducing AoI improves the on-road safety of the vehicles. However, we also show that this relationship is not universally true, and the mobility of the vehicles becomes a crucial aspect. Therefore, we propose a new metric called the Trackability-aware AoI (TAoI), which ensures that vehicles with unpredictable mobility broadcast at a faster rate while vehicles that are predictable broadcast at a reduced rate. The results obtained show that minimizing TAoI provides much better on-road safety than plain AoI minimization, which points to the importance of mobility in such applications. In the second contribution, we focus on networks with all-to-one connectivity, where packets from multiple sources are transmitted to a single destination, taking the example of IoT networks. Here, multiple IoT devices measure a physical phenomenon and transmit these measurements to a central base station (BS). However, under certain scenarios, the BS and IoT devices are unable to communicate directly, which necessitates the use of UAVs as relays. This creates a two-hop scenario that has not been studied for AoI minimization in UAV networks: in the first hop, the packets have to be sampled from the IoT devices to the UAV, and in the second hop they are updated from the UAVs to the BS. Such networks are called UAV-assisted IoT networks. We show that under ideal conditions, with a generate-at-will traffic generation model and lossless wireless channels, the Maximal Age Difference (MAD) scheduler is the optimal AoI-minimizing scheduler. When the ideal conditions are not applicable and more practical conditions are considered, a reinforcement learning (RL) based scheduler that can account for packet generation patterns and channel qualities is desirable. We therefore propose a Deep Q-Network (DQN)-based scheduler, which outperforms MAD and all other schedulers under general conditions. However, the DQN-based scheduler suffers from scalability issues in large networks, so another type of RL algorithm, Proximal Policy Optimization (PPO), is proposed for larger networks. Additionally, the PPO-based scheduler can account for changes in the network conditions, which the DQN-based scheduler could not. This ensures the trained model can be deployed in environments that might differ from the training environment. In the final contribution, AoI is studied in networks with varying connectivity between the source and destination devices. A typical example of such a distributed network is the smart grid, where multiple devices exchange state information to ensure the grid operates in a stable state.
To investigate AoI minimization and its impact on the smart grid, a co-simulation platform is designed in which the 5G network is modeled in Python and the smart grid is modeled in PSCAD/MATLAB. In the first part of the study, the suitability of 5G in supporting smart grid operations is investigated. Based on the encouraging results that 5G can support a smart grid, we focus on the schedulers at the 5G RAN to minimize the AoI. It is seen that the AoI-based schedulers provide much better stability compared to traditional 5G schedulers like proportional fairness and round-robin. However, the MAD scheduler, which has been shown to be optimal for a variety of scenarios, is no longer optimal, as it cannot account for the connectivity among the devices. Additionally, distributed networks with heterogeneous sources will, in addition to the varying connectivity, have different-sized packets requiring different numbers of resource blocks (RBs) to transmit, different packet generation patterns, channel conditions, etc. This motivates an RL-based approach. Hence, we propose a DQN-based scheduler that can take these factors into account, and results show that it outperforms all other schedulers in all considered conditions. / Doctor of Philosophy / Age of Information (AoI) is an exciting new metric as it is able to characterize the freshness of information, where freshness means how representative the information is of the current system state. It is therefore being actively investigated for a variety of autonomous systems that rely on having the most up-to-date information on the current state. Some examples are vehicular networks, UAV networks, and smart grids. Vehicular networks need the real-time locations of their neighboring vehicles to make maneuver decisions, UAVs have to collect the most recent information from IoT devices for monitoring purposes, and devices in a smart grid need to ensure that they have the most recent information on the desired system state. From a communication point of view, each of these scenarios presents a different type of connectivity between the source and the destination. First, the vehicular network is a broadcast network where each vehicle broadcasts its packets to every other vehicle. Secondly, in the UAV network, multiple devices transmit their packets to a single destination via a relay. Finally, with the smart grid, and with distributed networks in general, every source can have different and unique destinations. In these applications, AoI becomes a natural choice to measure the system performance: the fresher the information at the destination, the better its awareness of the system state, which allows it to make better control decisions to reach the desired objective. Therefore, in this dissertation, we use mathematical analysis and simulation-based approaches to investigate different scheduling and resource allocation policies to improve the AoI for the above-mentioned scenarios. We also show that the reduced AoI improves the system performance, i.e., better on-road safety for vehicular networks and better stability for smart grid applications. The results obtained in this dissertation show that when designing communication and networking protocols for time-sensitive applications requiring low latency, they have to be optimized to improve AoI. This is in contrast to most modern-day communication protocols, which are targeted at improving the throughput or minimizing the delay.
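The two quantities at the heart of the dissertation above, the Age of Information at a destination and the Maximal Age Difference (MAD) scheduling rule, can be sketched as follows; the data structures and toy numbers are illustrative assumptions, and the MAD rule is rendered here in one common reading, selecting the source whose delivery would yield the largest age reduction.

    def age_of_information(now, last_received_generation_time):
        # AoI = time elapsed since the generation time of the freshest received update.
        return now - last_received_generation_time

    def mad_schedule(now, last_received_gen_time, latest_available_gen_time):
        # Pick the source with the largest gap between its current AoI at the
        # destination and the age its freshest available packet would have on delivery.
        def age_gain(src):
            current_age = now - last_received_gen_time[src]
            age_after_update = now - latest_available_gen_time[src]
            return current_age - age_after_update
        return max(last_received_gen_time, key=age_gain)

    # Three IoT sources; source "b" is the stalest at the destination relative to
    # the fresh sample it holds, so MAD serves it first.
    last_rx = {"a": 9.0, "b": 4.0, "c": 7.0}
    latest = {"a": 9.5, "b": 9.8, "c": 8.0}
    print(mad_schedule(10.0, last_rx, latest))   # prints: b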
40

Enabling IoV Communication through Secure Decentralized Clustering using Federated Deep Reinforcement Learning

Scott, Chandler 01 August 2024 (has links) (PDF)
The Internet of Vehicles (IoV) holds immense potential for revolutionizing transportation systems by facilitating seamless vehicle-to-vehicle and vehicle-to-infrastructure communication. However, challenges such as congestion, pollution, and security persist, particularly in rural areas with limited infrastructure. Existing centralized solutions are impractical in such environments due to latency and privacy concerns. To address these challenges, we propose a decentralized clustering algorithm enhanced with Federated Deep Reinforcement Learning (FDRL). Our approach enables low-latency communication, competitive packet delivery ratios, and cluster stability while preserving data privacy. Additionally, we introduce a trust-based security framework for IoV environments, integrating a central authority and trust engine to establish secure communication and interaction among vehicles and infrastructure components. Through these innovations, we contribute to safer, more efficient, and trustworthy IoV deployments, paving the way for widespread adoption and realizing the transformative potential of IoV technologies.
