1. Real Time Identification of Road Traffic Control Measures. Almejalli, Khaled A.; Dahal, Keshav P.; Hossain, M. Alamgir. January 2007.
The operator of a traffic control centre has to select the most appropriate traffic control action, or combination of actions, in a short time to manage the traffic network when non-recurrent road traffic congestion occurs. This is a complex task, which requires expert knowledge, much experience and fast reaction. A large number of factors related to the traffic state, as well as a large number of possible control actions, need to be considered during the decision-making process. Identifying suitable control actions for a given non-recurrent traffic congestion can be difficult even for experienced operators, so simulation models are used in many cases. However, simulating different traffic actions for a number of control measures in a complicated situation is very time-consuming. This chapter presents an intelligent method for the real-time identification of road traffic actions which assists the human operator of the traffic control centre in managing the current traffic state. The proposed system combines three soft-computing approaches, namely fuzzy logic, neural networks, and genetic algorithms. The system employs a fuzzy-neural network tool with a self-organization algorithm for initializing the membership functions, a genetic algorithm (GA) for identifying fuzzy rules, and the back-propagation neural network algorithm for fine-tuning the system parameters. The proposed system has been tested on a case study of a small section of the ring road around Riyadh, Saudi Arabia. The results obtained for the case study are promising and demonstrate that the proposed approach can provide effective support for real-time traffic control.
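For illustration, the fuzzification-and-inference step such a system relies on might look like the sketch below. The membership parameters, linguistic terms, and candidate actions here are invented for illustration and are not taken from the thesis; in the proposed system the rules would be identified by a GA and the parameters tuned by back-propagation.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def congestion_memberships(occupancy):
    """Fuzzify a lane-occupancy percentage into linguistic terms (assumed shapes)."""
    return {
        "low":    tri(occupancy, -1, 0, 50),
        "medium": tri(occupancy, 20, 50, 80),
        "high":   tri(occupancy, 50, 100, 101),
    }

def score_action(occupancy, rules):
    """Weight each candidate control action by the firing strength of its rules."""
    mu = congestion_memberships(occupancy)
    scores = {}
    for term, action, weight in rules:
        scores[action] = scores.get(action, 0.0) + mu[term] * weight
    return max(scores, key=scores.get)

# Hypothetical rule base: (linguistic term, control action, rule weight).
rules = [
    ("low",    "no_action",     1.0),
    ("medium", "ramp_metering", 1.0),
    ("high",   "reroute",       1.0),
]
```

With these assumed shapes, a low occupancy fires only the "no action" rule, while a heavily occupied lane fires the rerouting rule.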
2. Systém řízení dopravy / Traffic Control System. Kačic, Matej. January 2010.
The main goal of this thesis is to create an application that simulates a realistic model of a road traffic system and manages signal control using the proposed algorithm, so that the road traffic performs well and the system achieves maximum throughput. The thesis describes a simulation model and different approaches to designing an algorithm for managing the road traffic system, and describes in detail an evolutionary approach to optimizing the control of crossroads.
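An evolutionary approach of the kind mentioned above can be sketched as a simple genetic algorithm evolving green-time splits for a crossroad. The fitness function, parameter ranges, and operators below are invented for illustration and are not the thesis's actual algorithm.

```python
import random

def evolve(fitness, n_genes, pop_size=20, generations=50, rng=None):
    """Evolve a vector of green times (seconds) to maximize a throughput score."""
    rng = rng or random.Random(0)
    # Initial population: random green times between 5 and 60 seconds (assumed range).
    pop = [[rng.uniform(5, 60) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        parents = pop[: pop_size // 2]             # keep the top half (elitism)
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            # Arithmetic crossover plus small Gaussian mutation.
            children.append([(x + y) / 2 + rng.gauss(0, 1) for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: throughput is assumed best when each phase's green time is near 30 s.
best = evolve(lambda g: -sum((x - 30) ** 2 for x in g), n_genes=2)
```

With this toy objective the evolved green times converge close to 30 seconds for both phases.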
3. An intelligent automatic vehicle traffic flow monitoring and control system. Marie, Theko Emmanuel. 01 1900.
M. Tech. (Information Technology), Faculty of Applied and Computer Sciences, Vaal University of Technology / Traffic congestion is a concern on the main arteries that link Johannesburg to Pretoria. In this study, the MATLAB function randperm is used to generate random vehicle speeds on a simulated highway, mimicking vehicle speed sensors capturing vehicle traffic on the highway.
Java sockets are used to send vehicle speeds to the Road Traffic Control Centre (RTCC) database server over a wireless medium. The RTCC database server uses MySQL to store vehicle speed data. A domain controller with Active Directory, together with a certificate server, is used to manage and provide secure access control to network resources. The wireless link used by the speed sensors to transmit vehicle speed data is protected using PEAP with EAP-TLS, which employs digital certificates during authentication.
A Java Database Connectivity (JDBC) driver is used to retrieve data from MySQL, and a multilayer perceptron (MLP) model is used to predict the traffic status on the monitored highway for the next 5 minutes from the previous 5 minutes of captured data. A dataset of 402 instances was divided as follows: 66 percent was used to train the MLP model, 15 percent was used for validation, and the remaining 19 percent was used to test the trained model. An Excel spreadsheet was used to present novel data (the 19 percent not used during training) to the trained MLP model for prediction. Assuming that the spreadsheet data represent highway vehicle data captured in the last 5 minutes, the model showed 100 percent accuracy in predicting the four classes: congested, out congested, into congested, and normal traffic flow.
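The 66/15/19 percent partition of the 402 instances described above can be sketched as follows; this reproduces only the split proportions, not the thesis's actual MLP training, and the class labels are copied from the abstract.

```python
CLASSES = ["congested", "out congested", "into congested", "normal traffic flow"]

def split_dataset(n, train_frac=0.66, val_frac=0.15):
    """Return contiguous index ranges for train/validation/test partitions."""
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = range(0, n_train)
    val = range(n_train, n_train + n_val)
    test = range(n_train + n_val, n)   # remainder (~19 percent) held out for testing
    return train, val, test

train, val, test = split_dataset(402)
```

For 402 instances this yields 265 training, 60 validation, and 77 test instances, matching the stated percentages.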
The predicted traffic status is displayed so that motorists on the highway are informed. The ability of the proposed model to continuously capture (monitor) the traffic pattern on the highway helps in redirecting (controlling) highway traffic during periods of congestion.
Implementation of this project is expected to decrease traffic congestion across the main arteries of Johannesburg. Pollution normally experienced when cars idle for long periods during congestion would be reduced by free-flowing highway traffic, and motorists would require less frequent vehicle servicing. Furthermore, the economy of Gauteng, and of South Africa as a whole, would benefit from increased productivity, and consumers would benefit from the competitive prices offered by organisations that depend on haulage services.
4. Feature Adaptation Algorithms for Reinforcement Learning with Applications to Wireless Sensor Networks and Road Traffic Control. Prabuchandran, K J. January 2016.
Many sequential decision-making problems under uncertainty arising in engineering, science and economics are often modelled as Markov Decision Processes (MDPs). In the setting of MDPs, the goal is to find a state-dependent optimal sequence of actions that minimizes a certain long-term performance criterion. The standard dynamic programming approach to solving an MDP for the optimal decisions requires a complete model of the MDP and is computationally feasible only for MDPs with small state-action spaces. Reinforcement learning (RL) methods, on the other hand, are model-free simulation-based approaches for solving MDPs. In many real-world applications, one is often faced with MDPs that have large state-action spaces, whose model is unknown but whose outcomes can be simulated. In order to solve such (large) MDPs, one either resorts to the technique of function approximation in conjunction with RL methods or develops application-specific RL methods. A solution based on RL methods with function approximation comes with the associated problem of choosing the right features for approximation, while a solution based on application-specific RL methods primarily relies on exploiting the problem structure. In this thesis, we investigate the problem of choosing the right features for RL methods based on function approximation, and develop novel RL algorithms that adaptively obtain the best features for approximation. Subsequently, we also develop problem-specific RL methods for applications arising in the areas of wireless sensor networks and road traffic control.
In the first part of the thesis, we consider the problem of finding the best features for value function approximation in reinforcement learning under the long-run discounted cost objective. We quantify the approximation error for any given choice of features and approximation parameters by the mean square Bellman error (MSBE) objective, and develop an online algorithm to optimize the MSBE.
Subsequently, we propose the first online actor-critic scheme with adaptive bases to find a locally optimal (control) policy for an MDP under the weighted discounted cost objective. The actor performs a gradient search in the space of policy parameters using simultaneous perturbation stochastic approximation (SPSA) gradient estimates. This gradient computation, however, requires estimates of the value function of the policy. The value function is approximated using a linear architecture, and its estimate is obtained from the critic. The error in approximating the value function, however, results in sub-optimal policies. We therefore obtain the best features by performing a gradient descent on the Grassmannian of features to minimize the MSBE objective. We provide a proof of convergence of our control algorithm to a locally optimal policy and present numerical results illustrating its performance.
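The SPSA estimate used by such an actor can be sketched generically as the standard two-measurement estimator below; this is textbook SPSA, not the thesis's exact actor update, and the objective J stands in for the (simulated) policy performance.

```python
import random

def spsa_gradient(J, theta, c=0.1, rng=random):
    """Estimate the gradient of objective J at theta from one pair of evaluations.

    All coordinates of theta are perturbed simultaneously by +/- c along a
    random Rademacher direction, so only two evaluations of J are needed
    regardless of the dimension of theta.
    """
    delta = [rng.choice([-1.0, 1.0]) for _ in theta]   # Rademacher perturbations
    j_plus = J([t + c * d for t, d in zip(theta, delta)])
    j_minus = J([t - c * d for t, d in zip(theta, delta)])
    diff = (j_plus - j_minus) / (2.0 * c)
    return [diff / d for d in delta]
```

For the one-dimensional quadratic J(theta) = theta**2, the estimate at theta = 3 is exactly 6 whichever perturbation sign is drawn, since the finite difference of a quadratic is exact.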
In our next work, we develop an online actor-critic control algorithm with adaptive feature tuning for MDPs under the long-run average cost objective. In this setting, a gradient search in the policy parameters is performed using policy gradient estimates to improve the performance of the actor. The computation of the aforementioned gradient however requires estimates of the differential value function of the policy. In order to obtain good estimates of the differential value function, the critic adaptively tunes the features to obtain the best representation of the value function using gradient search in the Grassmannian of features. We prove that our actor-critic algorithm converges to a locally optimal policy. Experiments on two different MDP settings show performance improvements resulting from our feature adaptation scheme.
In the second part of the thesis, we develop problem specific RL solution methods for the two aforementioned applications. In both the applications, the size of the state-action space in the formulated MDPs is large. However, by utilizing the problem structure we develop scalable RL algorithms.
In the wireless sensor networks application, we develop RL algorithms to find optimal energy management policies (EMPs) for energy harvesting (EH) sensor nodes. First, we consider the case of a single EH sensor node and formulate the problem of finding an optimal EMP in the discounted cost MDP setting. We then propose two RL algorithms to maximize network performance. Through simulations, our algorithms are seen to outperform the algorithms in the literature. Our RL algorithms for the single EH sensor node do not scale when there are multiple sensor nodes. In our second work, we consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. We develop efficient energy sharing algorithms, namely a Q-learning algorithm with exploration mechanisms based on the ε-greedy method as well as on upper confidence bounds (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle the state-action space explosion in the MDP. We also develop a cross-entropy based method that incorporates policy parameterization in order to find near-optimal energy sharing policies. Through numerical experiments, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method.
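The tabular Q-learning with ε-greedy exploration mentioned above can be sketched generically as follows; the state and action names are invented placeholders for the energy-sharing MDP, not the thesis's actual encoding.

```python
import random

def epsilon_greedy(Q, state, actions, eps, rng=random):
    """Pick a random action with probability eps, otherwise the greedy one."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update for the transition (s, a, r, s_next)."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

# Hypothetical usage: an agent rewarded for sharing energy while its buffer is high.
Q = {}
actions = ["share", "hold"]
q_update(Q, "high_energy", "share", 1.0, "low_energy", actions)
```

After this single update with an empty table, the value of sharing in the high-energy state moves halfway toward the observed reward.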
In the context of road traffic control, optimal control of traffic lights at junctions, or traffic signal control (TSC), is essential for reducing the average delay experienced by road users. This problem is hard to solve when all the junctions in the road network are considered simultaneously. We therefore propose a decentralized multi-agent reinforcement learning (MARL) algorithm that treats each junction in the road network as a separate agent (controller) to obtain dynamic TSC policies, and we propose two approaches to minimizing the average delay. In the first approach, each agent decides the signal duration of its phases in a round-robin (RR) manner using the multi-agent Q-learning algorithm. We show through simulations over VISSIM (a microscopic traffic simulator) that our round-robin MARL algorithms perform significantly better than both the standard fixed signal timing (FST) algorithm and the saturation balancing (SAT) algorithm on two real road networks. In the second approach, instead of optimizing the green light duration, each agent optimizes the order of the phase sequence. We then employ our MARL algorithms by suitably changing the state-action space and cost structure of the MDP. We show through simulations over VISSIM that our non-round-robin MARL algorithms perform significantly better than the FST, SAT and round-robin MARL algorithms based on the first approach. However, our round-robin MARL algorithms are more practically viable, as they conform to the psychology of road users.
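The round-robin scheme of the first approach can be sketched as below: each junction agent visits its phases in a fixed cyclic order and only chooses a green duration for the current phase. The phase names and candidate durations are illustrative assumptions, and the learned duration choice is replaced by a trivial stand-in.

```python
from itertools import cycle

DURATIONS = [10, 20, 30]  # candidate green times in seconds (assumed values)

def run_round_robin(phases, choose_duration, steps):
    """Return (phase, duration) decisions, visiting phases in fixed cyclic order."""
    order = cycle(phases)
    schedule = []
    for _ in range(steps):
        phase = next(order)                        # round-robin: order is never learned
        schedule.append((phase, choose_duration(phase)))
    return schedule

# A degenerate stand-in "agent" that always picks the shortest green time; in the
# thesis this choice would come from the multi-agent Q-learning policy.
sched = run_round_robin(["NS", "EW"], lambda phase: DURATIONS[0], 4)
```

The second approach would instead fix the durations and let the agent reorder the phase sequence itself.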