1 |
The effect of geometric design on the capacity of isolated highway traffic signal approaches
Al-Mojel, A. H. S. January 1987 (has links)
No description available.
|
2 |
New technology for smart traffic lights based on the Arduino microcontroller
AL Anbagi, Bassam January 2023 (has links)
Society has changed immensely due to the revolution in information technology, and the question is how this technology can best serve people. My interest lies in applying radar to traffic. Radar is generally associated with military matters, whereas the focus here is on traffic accidents: when accidents happen, police and ambulances need to reach the site as quickly as possible. This master thesis presents a new design that combines hardware and software in one device, a combination that is essential in traffic technology. I designed a new, smart traffic light that can operate effectively within a traffic system. The radar is designed, discussed and examined up to the laboratory stage. It is based on the Doppler effect and includes a sound sensor, so it can identify a police car and also detect vehicle speed. The status of the traffic lights can then be changed to give priority to the police car, subject to speed limits and other restrictions. Difficulties as well as advantages of the design are discussed.
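As background (a standard relation, not taken from the thesis, and assuming the vehicle moves along the radar's line of sight), a continuous-wave Doppler radar recovers vehicle speed from the measured frequency shift:

\[ f_d = \frac{2\, v\, f_0}{c} \quad\Longrightarrow\quad v = \frac{c\, f_d}{2 f_0}, \]

where \(f_0\) is the transmitted frequency, \(f_d\) the measured Doppler shift, \(v\) the radial speed of the vehicle and \(c\) the speed of light.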
|
3 |
Experimental Investigation of Wind-induced Response of Span-wire Traffic Signal Systems
Matus, Manuel A., Mr. 27 March 2018 (has links)
The purpose of this investigation was to identify key design parameters that might significantly affect the response of span wire traffic light systems during extreme wind events. The performance of these systems was assessed through physical testing in an effort to quantify the effect of sag ratio, wire tension and wire clearance. The Wall of Wind experimental facility at Florida International University was utilized for testing the systems at different wind speeds and wind directions.
The findings showed that, at all tested wind directions, lift, drag and tension forces increased with increasing wind speed. At the same time, higher wind speeds produced greater inclination of the traffic lights, lower drag coefficients and higher lift coefficients. Overall, increased drag coefficients were recorded when the wind approached from the rear face of the traffic signals, and lower drag coefficients were observed when the sag was set at 7%.
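For context (standard aerodynamic definitions, not specific to this study; the reference area convention used in the experiments is an assumption here), the drag and lift coefficients normalize the measured forces by the dynamic pressure and a reference area, which is why the coefficients can decrease even while the forces themselves grow with wind speed:

\[ C_D = \frac{F_D}{\tfrac{1}{2}\rho V^2 A}, \qquad C_L = \frac{F_L}{\tfrac{1}{2}\rho V^2 A}, \]

where \(F_D\) and \(F_L\) are the measured drag and lift forces, \(\rho\) the air density, \(V\) the mean wind speed and \(A\) a reference area of the signal assembly.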
|
4 |
Contribution à la modélisation et à la commande des feux de signalisation par réseaux de Petri hybrides / Contribution to the modeling and control of traffic lights with hybrid Petri nets
Sammoud, Bassem 04 September 2015 (has links)
Road traffic causes many adverse effects, including pollution, lack of safety and congestion. Most of the methods developed for regulating urban traffic at intersections seek to reduce waiting times and queue lengths, mainly by optimizing the signal cycles over a finite horizon. To describe the traffic, we adopt a model based on hybrid Petri nets (RdPH), which combines two complementary levels of representation: the continuous evolution of the queues and the discrete evolution of the traffic lights. These two levels are built, respectively, on variable-speed continuous Petri nets and on timed discrete Petri nets. We further develop a new strategy for urban traffic regulation that acts adaptively at the level of the traffic signals. The aim is, first, to avoid congestion and oversaturation of the queues, which must not exceed the capacities of the intersection approaches, and, second, to reduce the evacuation time of vehicles at the intersection and, above all, the waiting time of drivers. In this context, a first algorithm is developed to compute queue lengths, using an approach based on a simplified model of the intersection. To optimize the average waiting time and the total evacuation time, a control heuristic and both fixed-time and variable-time regulation strategies are applied successfully, based on the determination of the green duration corresponding to the real-time state of the intersection. We plan to generalize these results, exploiting the RdPH model, to more complex intersections and to real situations in a network of intersections.
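As an illustration of the kind of continuous dynamics such a hybrid model captures (a simplified sketch, not the exact equations of the thesis), the queue on approach i of a signalized intersection can be written as a conservation law driven by the discrete light state:

\[ \dot{x}_i(t) = a_i(t) - s_i\, u_i(t), \qquad u_i(t) \in \{0,1\}, \quad x_i(t) \ge 0, \]

where \(x_i\) is the queue length in vehicles, \(a_i\) the arrival flow, \(s_i\) the saturation (discharge) flow during green, and \(u_i\) the green/red state produced by the timed discrete Petri net; the discharge term is active only while the queue is non-empty.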
|
5 |
Rapid development of problem-solvers with HeurEAKA! - a heuristic evolutionary algorithm and incremental knowledge acquisition approach
Bekmann, Joachim Peter, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
A new approach for the development of problem-solvers for combinatorial problems is proposed in this thesis. The approach combines incremental knowledge acquisition and probabilistic search algorithms, such as evolutionary algorithms, to allow a human to rapidly develop problem-solvers in new domains in a framework called HeurEAKA. The approach addresses a known problem, that is, adapting evolutionary algorithms to the search domain by the introduction of domain knowledge. The development of specialised problem-solvers has historically been labour intensive. Implementing a problem-solver from scratch is very time consuming. Another approach is to adapt a general-purpose search strategy to the problem domain. This is motivated by the observation that, in order to scale an algorithm to solve complex problems, domain knowledge is needed. At present there is no systematic approach allowing one to efficiently engineer a special-purpose search strategy for a given search problem. This means that, for example, adapting evolutionary algorithms (which are general-purpose algorithms) is often very difficult and has led some people to refer to their use as a "black art". In the HeurEAKA approach, domain knowledge is introduced by incrementally building a knowledge base that controls parts of the evolutionary algorithm, such as the fitness function and the mutation operators in a genetic algorithm. An evolutionary search algorithm is monitored by a human who makes recommendations on search strategy based on individual solution candidates. It is assumed that the human has a reasonable intuition of the search problem. The human adds rules to a knowledge base describing how candidate solutions can be improved, or why they are desirable or undesirable in the search for a good solution. The incremental knowledge acquisition approach is inspired by the idea of (Nested) Ripple Down Rules, in which a human provides exception rules to rules already existing in the knowledge base, using concrete examples of inappropriate performance of the existing knowledge base. The Nested Ripple Down Rules (NRDR) approach allows humans to compose rules using concepts that are natural and intuitive to them. In HeurEAKA, NRDR are significantly adapted to form part of a probabilistic search algorithm. The probabilistic search algorithms used in the presented system are a genetic algorithm and a hierarchical Bayesian optimization algorithm. The success of the HeurEAKA approach is demonstrated in experiments undertaken on industrially relevant domains. Problem-solvers were developed for detailed channel and switchbox routing in VLSI design and traffic light optimisation for urban road networks. The problem-solvers were developed in a short amount of time, in domains where a large amount of effort has gone into developing existing algorithms. Experiments show that the chosen benchmark problems are solved as well as or better than by existing approaches. Particularly in the traffic light optimisation domain, excellent results are achieved.
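To make the idea of a knowledge base controlling parts of an evolutionary algorithm concrete, here is a minimal sketch (my own illustration, not the HeurEAKA implementation): a plain genetic algorithm over bit strings in which hand-written condition/action rules, standing in for the incrementally acquired knowledge, choose which mutation operator to apply to a given candidate.

```python
import random

# Toy fitness: maximise the number of 1-bits (a stand-in for a real combinatorial objective).
def fitness(candidate):
    return sum(candidate)

# Two mutation operators the rules can choose between.
def flip_random_bit(c):
    c = c[:]
    i = random.randrange(len(c))
    c[i] ^= 1
    return c

def flip_first_zero(c):
    c = c[:]
    for i, bit in enumerate(c):
        if bit == 0:
            c[i] = 1
            break
    return c

# "Knowledge base": ordered (condition, operator) rules. In HeurEAKA such rules are
# acquired incrementally from a human as Nested Ripple Down Rules; here they are
# hard-coded purely for illustration.
RULES = [
    (lambda c: sum(c) < len(c) // 2, flip_first_zero),   # weak candidates: repair greedily
    (lambda c: True,                 flip_random_bit),   # default rule
]

def mutate(candidate):
    for condition, operator in RULES:
        if condition(candidate):
            return operator(candidate)
    return candidate

def evolve(pop_size=30, length=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        offspring = [mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```

In the real framework the rule base also shapes the fitness function and is refined whenever the human sees the search behave inappropriately; the sketch only shows the mutation-selection part of that idea.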
|
6 |
Investigações no campo da programação semafórica / Research on the signal programming field
Fornaciari, Isabela Aparecida 29 October 2010 (has links)
In this research, some aspects related to the timing of isolated traffic signals are investigated. The main results are as follows. The values obtained in the city of São Carlos are: total average lost time (at the start and end of green plus yellow) per vehicular phase equal to 3.12 s (at-grade intersection, flow without turning movements); average pedestrian crossing speed at signals equal to 1.28 m/s, and the speed corresponding to the 85th percentile equal to 1.00 m/s. Except for some special cases, the delay values provided by the Webster method, the HCM-2000 method, the Integration simulator and the Corsim simulator are of the same order of magnitude and are therefore perfectly feasible for use in practical studies. In determining the times that make up the pedestrian crossing stage at signals, the Ferraz and MUTCD methods are more suitable than the Webster/Denatran and CET-SP methods, since they provide adequate safety without "scaring" pedestrians and with minimal loss of vehicular flow capacity. The use of an exclusive pedestrian stage at signals with two vehicular stages leads to the following approximate increases in average vehicle delay: 40% for vehicle flows up to 1000 veh/h, 25% for flows around 1100 veh/h, and 20% for flows around 1200 veh/h.
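Since the abstract compares Webster's method with HCM-2000 and simulation-based estimates, a minimal sketch of the classical Webster timing calculation may help place it (standard textbook formulas, not code from the thesis; the lost-time and flow-ratio values below are made-up examples):

```python
def webster_timings(crit_flow_ratios, lost_time):
    """Webster's optimum cycle length and effective green splits.

    crit_flow_ratios: critical flow ratios y_i = q_i / s_i, one per stage
    lost_time:        total lost time per cycle L (s)
    """
    Y = sum(crit_flow_ratios)
    if Y >= 1.0:
        raise ValueError("intersection is oversaturated (Y >= 1)")
    # Webster's optimum cycle: C0 = (1.5 L + 5) / (1 - Y)
    cycle = (1.5 * lost_time + 5.0) / (1.0 - Y)
    # Effective green is shared in proportion to the critical flow ratios.
    greens = [(cycle - lost_time) * y / Y for y in crit_flow_ratios]
    return cycle, greens

if __name__ == "__main__":
    # Hypothetical two-stage intersection: y1 = 0.35, y2 = 0.25, L = 10 s.
    cycle, greens = webster_timings([0.35, 0.25], lost_time=10.0)
    print(f"optimum cycle: {cycle:.1f} s, effective greens: "
          + ", ".join(f"{g:.1f} s" for g in greens))
```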
|
7 |
Control of large scale traffic network / Contrôle de vaste réseau de trafic
Grandinetti, Pietro 11 September 2017 (has links)
The thesis focuses on traffic light control in large-scale urban networks. It starts with a study of macroscopic modeling based on the Cell Transmission model. We formulate a signalized version of this model in order to include the traffic lights in the dynamics. Moreover, we introduce two simplifications of the signalized model oriented towards control design: one based on averaging theory, which considers the duty cycles of the traffic lights, and a second one that describes the traffic light trajectories through the activation and deactivation instants of a binary signal. We use numerical simulations to validate the models against the signalized Cell Transmission model, and microsimulations (with the software Aimsun) to validate them against realistic vehicle behavior. We propose two control approaches based on the two models mentioned above. The first one, which uses the averaged Cell Transmission model, takes the traffic lights' duty cycles as controlled variables and is formulated as an optimization problem over standard traffic measures. We analyze this problem and show that it is equivalent to a convex optimization problem, which ensures its computational efficiency. We analyze its performance with respect to a best-practice control scheme both in MatLab simulations and in Aimsun simulations that emulate a large portion of Grenoble, France. The second proposed approach is an optimization problem in which the decision variables are the activation and deactivation instants of every traffic light. We employ the Big-M modeling technique to reformulate this problem as a mixed-integer linear program, and we show via numerical simulations that its greater expressiveness can improve the traffic dynamics, at the price of the computational efficiency of the control scheme. To pursue the scalability of the proposed control techniques, we develop two iterative distributed approaches to the traffic light control problem. The first, based on the convex optimization mentioned above, uses the dual descent technique and is provably optimal, that is, it yields the same solution as the centralized optimization. The second, based on the mixed-integer problem, is a suboptimal algorithm that brings substantial gains in computational efficiency with respect to the related centralized problem. We analyze via numerical simulations the convergence speed of the iterative algorithms, their computational burden and their performance in terms of traffic metrics. The thesis concludes with a study of the traffic light control algorithm that is deployed at several large intersections in Grenoble. We present the working principle of this algorithm, detailing the technological and methodological differences with respect to our proposed approaches. We build in Aimsun the scenario representing the relevant part of the city, reproduce the control algorithm, and compare its performance with that of one of our approaches on the same scenario.
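As a rough illustration of the first (duty-cycle) formulation, and not the thesis's actual model, the sketch below optimizes the green duty cycles of a single two-approach intersection over a short horizon with an averaged queue model; all numerical values, and the use of the cvxpy library, are my own assumptions.

```python
import cvxpy as cp
import numpy as np

# Averaged queue model over a horizon of K steps of length T (seconds):
#   x[i, k+1] = x[i, k] + T * (arrival[i] - saturation[i] * u[i, k])
# where u[i, k] is the fraction of step k during which approach i has green.
K, T = 10, 15.0
arrival    = np.array([0.15, 0.10])   # veh/s arriving on each approach (assumed)
saturation = np.array([0.50, 0.50])   # veh/s discharged while green (assumed)
x0         = np.array([12.0, 8.0])    # initial queues (veh)

x = cp.Variable((2, K + 1), nonneg=True)   # queue lengths (kept non-negative)
u = cp.Variable((2, K), nonneg=True)       # green duty cycles

constraints = [x[:, 0] == x0]
for k in range(K):
    constraints += [
        x[:, k + 1] == x[:, k] + T * (arrival - cp.multiply(saturation, u[:, k])),
        cp.sum(u[:, k]) <= 0.9,   # the two greens share the cycle, minus lost time
        u[:, k] <= 1.0,
    ]

# Minimize the total time spent queuing (sum of queue lengths over the horizon).
problem = cp.Problem(cp.Minimize(cp.sum(x)), constraints)
problem.solve()

print("optimal duty cycles per step:")
print(np.round(u.value, 2))
```

The linearity of the objective and constraints is what makes this kind of duty-cycle formulation attractive for large networks; the thesis's binary activation/deactivation formulation instead requires Big-M constraints and integer variables, which is where the computational cost grows.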
|
8 |
歩行者の交差点における信号無視行動とその態度との関連について: 公的・私的自己意識も踏まえて / The relationship between pedestrians' signal-violation behavior at intersections and their attitudes: taking public and private self-consciousness into account
北折, 充隆 (Kitaori, Mitsutaka) 27 December 1999 (has links)
This record uses content digitized by the National Institute of Informatics (NII).
|