About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

The effect of geometric design on the capacity of isolated highway traffic signal approaches

Al-Mojel, A. H. S. January 1987 (has links)
No description available.

Experimental Investigation of Wind-induced Response of Span-wire Traffic Signal Systems

Matus, Manuel A. 27 March 2018 (has links)
The purpose of this investigation was to identify key design parameters that significantly affect the response of span-wire traffic signal systems during extreme wind events. System performance was assessed through physical testing in order to quantify the effects of sag ratio, wire tension, and wire clearance. The Wall of Wind experimental facility at Florida International University was used to test the systems at different wind speeds and wind directions. The findings showed that, at all tested wind directions, lift, drag, and tension forces increased with wind speed. Higher wind speeds also produced greater inclination of the traffic lights, lower drag coefficients, and higher lift coefficients. Overall, when the wind approached from the rear face of the traffic signals, higher drag coefficients were recorded; with the sag set at 7%, lower drag coefficients were observed.

Rapid development of problem-solvers with HeurEAKA! - a heuristic evolutionary algorithm and incremental knowledge acquisition approach

Bekmann, Joachim Peter, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
A new approach for the development of problem-solvers for combinatorial problems is proposed in this thesis. The approach combines incremental knowledge acquisition and probabilistic search algorithms, such as evolutionary algorithms, to allow a human to rapidly develop problem-solvers in new domains in a framework called HeurEAKA. It addresses a known problem: adapting evolutionary algorithms to the search domain through the introduction of domain knowledge. The development of specialised problem-solvers has historically been labour-intensive. Implementing a problem-solver from scratch is very time-consuming; the alternative is to adapt a general-purpose search strategy to the problem domain, motivated by the observation that domain knowledge is needed to scale an algorithm to complex problems. At present there is no systematic approach that allows one to efficiently engineer a special-purpose search strategy for a given search problem. This means that, for example, adapting evolutionary algorithms (which are general-purpose algorithms) is often very difficult and has led some people to refer to their use as a "black art". In the HeurEAKA approach, domain knowledge is introduced by incrementally building a knowledge base that controls parts of the evolutionary algorithm, for example the fitness function and the mutation operators of a genetic algorithm. An evolutionary search algorithm is monitored by a human, who makes recommendations on search strategy based on individual solution candidates. It is assumed that the human has a reasonable intuition of the search problem. The human adds rules to a knowledge base describing how candidate solutions can be improved, or why they are desirable or undesirable in the search for a good solution.
The incremental knowledge acquisition approach is inspired by the idea of (Nested) Ripple Down Rules, in which a human provides exception rules to rules already in the knowledge base, using concrete examples of inappropriate performance of the existing knowledge base. The Nested Ripple Down Rules (NRDR) approach allows humans to compose rules using concepts that are natural and intuitive to them. In HeurEAKA, NRDR are significantly adapted to form part of a probabilistic search algorithm. The probabilistic search algorithms used in the presented system are a genetic algorithm and a hierarchical Bayesian optimization algorithm. The success of the HeurEAKA approach is demonstrated in experiments on industrially relevant domains: problem-solvers were developed for detailed channel and switchbox routing in VLSI design and for traffic light optimisation in urban road networks. The problem-solvers were developed in a short amount of time, in domains where a large amount of effort has gone into developing existing algorithms. Experiments show that the chosen benchmark problems are solved as well as or better than by existing approaches; particularly in the traffic light optimisation domain, excellent results are achieved.
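The core mechanism described above — a knowledge base of human-authored rules steering the mutation operator of a genetic algorithm — can be sketched roughly as follows. This is an illustrative toy (the rule, fitness function, and all names are invented here), not Bekmann's actual HeurEAKA implementation:

```python
import random

def default_mutation(candidate):
    # Blind mutation: flip one random bit.
    candidate = candidate.copy()
    candidate[random.randrange(len(candidate))] ^= 1
    return candidate

# Human-authored domain rule (toy example): "if two adjacent bits are
# equal, flip the second one" — a (condition, action) pair as a stand-in
# for an incrementally acquired knowledge base.
def has_adjacent_equal(c):
    return any(c[i] == c[i + 1] for i in range(len(c) - 1))

def split_adjacent_equal(c):
    c = c.copy()
    for i in range(len(c) - 1):
        if c[i] == c[i + 1]:
            c[i + 1] ^= 1
            break
    return c

KNOWLEDGE_BASE = [(has_adjacent_equal, split_adjacent_equal)]

def kb_mutation(candidate):
    # Knowledge-base-controlled mutation: apply the first matching rule,
    # falling back to blind mutation when no rule fires.
    for condition, action in KNOWLEDGE_BASE:
        if condition(candidate):
            return action(candidate)
    return default_mutation(candidate)

def fitness(c):
    # Toy objective: maximise alternation (optimum is len(c) - 1).
    return sum(c[i] != c[i + 1] for i in range(len(c) - 1))

def evolve(pop_size=20, length=16, generations=100, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [kb_mutation(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically near the optimum of 15 for length 16
```

In the thesis the rules are far richer (NRDR with nested exceptions, applied to VLSI routing and traffic lights); the sketch only shows how a rule base can bias mutation toward domain-sensible moves.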

Contribution to the modeling and control of traffic lights with hybrid Petri nets

Sammoud, Bassem 04 September 2015 (has links)
Road traffic has many adverse effects, including pollution, safety hazards, and congestion. Most methods developed to regulate urban traffic at intersections seek to reduce waiting times and queue lengths; their main objective is to optimize signal cycles over a finite horizon. To describe the traffic, we adopt a model based on hybrid Petri nets, which combines two complementary levels of representation: the continuous evolution of the queues and the discrete evolution of the traffic lights. These two levels are built, respectively, on variable-speed Petri nets and on discrete timed Petri nets. We further develop a new strategy for the urban traffic control problem that acts adaptively on the traffic lights. We aim, first, to avoid congestion and oversaturation of the queues, which must not exceed the capacity of the intersection's approaches, and, second, to reduce the evacuation time of vehicles at the intersection and, above all, drivers' waiting times. To this end, a first algorithm is developed to compute queue lengths, using an approach based on a simplified model of an intersection. To optimize the average waiting time and the total evacuation time, a control heuristic and both fixed-time and variable-time signal control strategies are successfully applied, based on determining the green duration corresponding to the state of the intersection in real time. We plan to generalize these results, which exploit the hybrid Petri net model, to more complex intersections and to real situations in a network of intersections.
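The two-level structure described in the abstract — a continuous (fluid) queue coupled to a discrete, timed light — can be illustrated with a minimal simulation. All rates and durations below are invented for illustration and are not taken from the thesis:

```python
# Minimal sketch of a hybrid model: the queue evolves continuously
# (vehicles/second), while the traffic light is a discrete timed signal.

def simulate(arrival=0.3, saturation=1.0, green=30.0, red=30.0,
             horizon=600.0, dt=0.1):
    """Queue length over time for one approach with a fixed-cycle light."""
    q, t, history = 0.0, 0.0, []
    cycle = green + red
    while t < horizon:
        is_green = (t % cycle) < green               # discrete signal state
        outflow = saturation if (is_green and q > 0) else 0.0
        q = max(0.0, q + (arrival - outflow) * dt)   # continuous queue dynamics
        history.append(q)
        t += dt
    return history

hist = simulate()
print(max(hist))  # peak queue length (vehicles), reached at the end of red
```

With these numbers the queue builds to roughly arrival × red ≈ 9 vehicles during each red and empties during green; a controller in the spirit of the thesis would pick the green duration from the measured queue state rather than fix it in advance.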

Pedestrians' signal-violating behavior at intersections and its relation to their attitudes: taking public and private self-consciousness into account

北折, 充隆, Kitaori, Mitsutaka 27 December 1999 (has links)
This record uses content digitized by the National Institute of Informatics.

Vision Algorithms for Rain and Traffic Lights in Driver Assistance Systems

De Charette, Raoul 17 September 2012 (has links)
Vision algorithms can be used to expand the working range of driver assistance systems to urban scenes and degraded weather. To this end, three novel applications are investigated in this thesis, covering both rain and traffic lights. Rain is the most frequent degraded weather condition. We review the various physical and photometric models for rain and raindrops and highlight some of their misuses. When driving in daytime rain, raindrops on the windscreen lower the driver's visibility; to a standard on-board camera these drops appear unfocused. We therefore investigate the detection of unfocused raindrops using either blur maps or a photometry-based lack-of-gradients approach. For night-time driving in rain, the headlights paradoxically reduce visibility, because their light is reflected off the raindrops back toward the driver. Relying on a physics-based simulator, we propose an illumination device that would light the scene without shining on the falling drops. The performance of the simulator and a proof-of-concept prototype show that the idea can efficiently improve overall scene visibility. Fast, reactive raindrop detection and tracking is also investigated. For urban scenes, traffic lights play a key role. Although traffic light recognition has been attempted in the past, existing algorithms cannot handle complex scenarios. We therefore developed a traffic light recognition algorithm that combines grayscale spot-light detection with template-matching classification. Our approach is modular and capable of detecting various kinds of traffic lights, even with a low-dynamic-range camera. We evaluated the algorithm on sequences acquired in France, China, and Switzerland.
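The two-stage recognition pipeline mentioned above (bright-spot detection in grayscale, then classification against a light template) can be sketched in miniature. Everything here — the tiny image, the threshold, the position-based classifier — is a hypothetical stand-in for the thesis's actual detector:

```python
# Toy sketch: (1) find bright connected regions in a grayscale image,
# (2) classify a spot by its vertical position inside a three-lamp
# traffic-light housing (top = red, middle = yellow, bottom = green).

def find_spots(image, threshold=200):
    """Return centroids of connected bright regions (4-connectivity)."""
    h, w = len(image), len(image[0])
    seen, spots = set(), []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and (y, x) not in seen:
                stack, pixels = [(y, x)], []
                seen.add((y, x))
                while stack:                      # flood fill one region
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                spots.append((sum(p[0] for p in pixels) / len(pixels),
                              sum(p[1] for p in pixels) / len(pixels)))
    return spots

def classify(spot_y, housing_top, housing_height):
    """Map vertical position inside the housing to a light state."""
    rel = (spot_y - housing_top) / housing_height
    return "red" if rel < 1/3 else "yellow" if rel < 2/3 else "green"

# A 9x3 grayscale 'image' with one bright pixel in the bottom third.
img = [[0] * 3 for _ in range(9)]
img[7][1] = 255
spots = find_spots(img)
print(classify(spots[0][0], housing_top=0, housing_height=9))  # green
```

A real detector must also locate the housing, reject other light sources (headlights, reflections), and track spots over time; the sketch only shows why grayscale spot detection plus a geometric template is robust to camera dynamics.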

Research in the field of traffic signal programming

Isabela Aparecida Fornaciari 29 October 2010 (has links)
In this work, several aspects of the timing of isolated traffic signals are investigated; the main results are summarized below. The values obtained in the city of São Carlos are: a total average lost time (at the start and end of green plus yellow) of 3.12 s per vehicular phase (level intersection, flow without turning movements); an average pedestrian crossing speed at signals of 1.28 m/s, with an 85th-percentile speed of 1.00 m/s. Except in some special cases, the delay values given by the Webster method, the HCM-2000 method, and the Integration and Corsim simulators are of the same magnitude and are therefore perfectly feasible for use in practical studies. For determining the times that make up the pedestrian crossing phase, the Ferraz and MUTCD methods are more suitable than the Webster/Denatran and CET-SP methods, since they provide adequate safety without "scaring" pedestrians and with minimal loss of vehicular flow capacity. The use of an exclusive pedestrian phase at signals with two vehicular phases leads to the following approximate increases in average vehicle delay: 40% for vehicular flows up to 1000 veh/h, 25% for flows around 1100 veh/h, and 20% for flows around 1200 veh/h.
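For reference alongside the comparison above, Webster's classic delay and optimal-cycle formulas can be computed directly. The code below uses the commonly cited textbook form of the formulas (including Webster's 0.9 empirical correction); the numeric example is illustrative and not taken from the thesis:

```python
# Webster's two-term average-delay formula and optimal cycle length,
# in their standard textbook form.

def webster_delay(cycle, green_ratio, flow, saturation_flow):
    """Average delay per vehicle (s) for one approach.

    cycle: cycle length C (s); green_ratio: effective green / cycle;
    flow: arrival rate q (veh/s); saturation_flow: s (veh/s).
    """
    x = flow / (green_ratio * saturation_flow)   # degree of saturation
    uniform = cycle * (1 - green_ratio) ** 2 / (2 * (1 - green_ratio * x))
    random_term = x ** 2 / (2 * flow * (1 - x))
    return 0.9 * (uniform + random_term)         # Webster's ~10% correction

def webster_optimal_cycle(lost_time, flow_ratio_sum):
    """C0 = (1.5 L + 5) / (1 - Y), Webster's optimum cycle length (s)."""
    return (1.5 * lost_time + 5) / (1 - flow_ratio_sum)

# Example: 60 s cycle, 50% effective green, 700 veh/h arrivals,
# 1800 veh/h saturation flow, 10 s lost time, Y = 0.7.
d = webster_delay(cycle=60, green_ratio=0.5,
                  flow=700 / 3600, saturation_flow=1800 / 3600)
print(round(d, 1))                                         # 17.3 s
print(round(webster_optimal_cycle(10, 0.7), 1))            # 66.7 s
```

Comparing such closed-form estimates against HCM-2000 and microsimulation output is exactly the kind of cross-check the abstract reports.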

Control of large scale traffic network

Grandinetti, Pietro 11 September 2017 (has links)
This thesis focuses on traffic light control in large-scale urban networks. It starts with a study of macroscopic modeling based on the Cell Transmission Model, for which we formulate a signalized version that includes the traffic lights in the dynamics. We then introduce two control-oriented simplifications of the signalized model: one based on averaging theory, which considers the duty cycles of the traffic lights, and a second that describes traffic light trajectories by the rising and falling instants of a binary signal. We use numerical simulations to validate these models against the signalized Cell Transmission Model, and microsimulations (with the software Aimsun) to validate them against realistic vehicle behavior.
We propose two control algorithms based on these models. The first, which uses the averaged Cell Transmission Model, takes the traffic lights' duty cycles as controlled variables and is formulated as an optimization of standard traffic measures. We show that this problem is equivalent to a convex optimization problem, ensuring its computational efficiency, and we analyze its performance against a best-practice control scheme both in MatLab simulations and in Aimsun simulations that emulate a large portion of Grenoble, France. The second approach is an optimization problem whose decision variables are the activation and deactivation instants of every traffic light. We employ the Big-M modeling technique to reformulate it as a mixed-integer linear program, and we show via numerical simulations that its expressivity can improve the traffic dynamics, at the price of the control scheme's computational efficiency.
To pursue scalability, we develop two iterative distributed approaches to the traffic light control problem. The first, based on the convex optimization above, uses the dual-descent technique and is provably optimal, that is, it yields the same solution as the centralized optimization. The second, based on the mixed-integer problem, is a suboptimal algorithm that substantially improves computational efficiency with respect to the centralized problem. We analyze via numerical simulations the convergence speed of the iterative algorithms, their computational burden, and their performance on traffic metrics.
The thesis concludes with a study of the traffic light control algorithm deployed at several large intersections in Grenoble. We present its working principle, detailing the technological and methodological differences from our approaches. We build the corresponding part of the city in Aimsun, reproduce the deployed control algorithm, and compare its performance with that of one of our approaches on the same scenario.
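The averaging idea in the abstract — replacing the on/off light by its duty cycle u in [0, 1], which then scales the maximum outflow of the signalized cell in a Cell Transmission Model — can be sketched as follows. The triangular fundamental diagram and all parameter values are illustrative, not the thesis's:

```python
# Averaged Cell Transmission Model for one road: the downstream traffic
# light appears only through its duty cycle u, which scales the outflow.

def ctm_step(densities, u, dt=1.0, length=100.0, v=10.0, w=5.0,
             rho_max=0.2, q_max=0.5, inflow=0.3):
    """One CTM step (triangular fundamental diagram) on a row of cells.

    u: duty cycle of the light at the downstream end (fraction of green).
    """
    n = len(densities)
    demand = [min(v * r, q_max) for r in densities]            # what a cell can send
    supply = [min(w * (rho_max - r), q_max) for r in densities]  # what it can receive
    flows = [min(inflow, supply[0])]                 # upstream boundary
    for i in range(n - 1):
        flows.append(min(demand[i], supply[i + 1]))  # inter-cell flows
    flows.append(u * demand[-1])                     # light scales the outflow
    return [r + dt / length * (flows[i] - flows[i + 1])
            for i, r in enumerate(densities)]

rho = [0.0] * 4
for _ in range(200):
    rho = ctm_step(rho, u=0.5)
print([round(r, 3) for r in rho])   # densities after 200 s with a 50% duty cycle
```

Because the dynamics are now smooth in u, a controller can optimize the duty cycles directly — the convexity result in the thesis concerns an optimization of exactly this kind of averaged model; here, u = 0.5 caps the outflow below the inflow, so congestion builds, while u = 1.0 keeps the road in free flow.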
