41 |
Applications of vehicle location and communication technology in fleet management systems
Wong, Chi-tak, Keith. January 2001 (has links)
Thesis (M.A.)--University of Hong Kong, 2001. / Includes bibliographical references (leaves 62). Also available in print.
|
42 |
Incident detection on arterials using neural network data fusion of simulated probe vehicle and loop detector data /
Thomas, Kim. January 2005 (has links) (PDF)
Thesis (M.Phil.) - University of Queensland, 2005. / Includes bibliography.
|
43 |
A wireless sensor network for smart roadbeds and intelligent transportation systems /
Knaian, Ara N. January 1900 (has links) (PDF)
Thesis (Master of Engineering in Electrical Engineering and Computer Science)--Massachusetts Institute of Technology, 2000. / Cover title. Also available online via the MIT website (www.media.mit.edu). Includes bibliographical references (leaves 36-38).
|
44 |
Effectiveness of the Statewide Deployment and Integration of Advanced Traveler Information Systems
Belz, Nathan P. January 2008 (has links) (PDF)
No description available.
|
45 |
An Axiomatic Categorisation Framework for the Dynamic Alignment of Disparate Functions in Cyber-Physical Systems
Byrne, Thomas J., Doikin, Aleksandr, Campean, Felician, Neagu, Daniel 04 April 2019 (has links)
Advancing Industry 4.0 concepts by mapping the product of the automotive industry on the spectrum of Cyber Physical Systems, we immediately recognise the convoluted processes involved in the design of new generation vehicles. New technologies developed around the communication core (IoT) enable novel interactions with data. Our framework employs previously untapped data from vehicles in the field for intelligent vehicle health management and knowledge integration into design. Firstly, the concept of an inter-disciplinary artefact is introduced to support the dynamic alignment of disparate functions, so that cyber variables change when physical variables change. Secondly, the axiomatic categorisation (AC) framework simulates functional transformations from artefact to artefact, to monitor and control automotive systems rather than components. Herein, an artefact is defined as a triad of the physical and engineered component, the information processing entity, and communication devices at their interface. Variable changes are modelled using AC, in conjunction with the artefacts, to aggregate functional transformations within the conceptual boundary of a physical system of systems. / Jaguar Land Rover funded research “Intelligent Personalised Powertrain Healthcare” 2016-2019
|
46 |
Contributions to Lane Marking Based Localization for Intelligent Vehicles
Lu, Wenjie 09 February 2015 (has links)
Autonomous Vehicle (AV) applications and Advanced Driving Assistance Systems (ADAS) rely on scene understanding processes that allow high-level systems to carry out decision making. For such systems, the localization of a vehicle evolving in a structured dynamic environment constitutes a complex problem of crucial importance. Our research addresses scene structure detection, localization and error modeling. Taking into account the large functional spectrum of vision systems, the accessibility of Open Geographical Information Systems (GIS) and the wide presence of Global Positioning Systems (GPS) onboard vehicles, we study the performance and the reliability of a vehicle localization method combining these information sources. Monocular vision-based lane marking detection provides key information about the scene structure. Using an enhanced multi-kernel framework with hierarchical weights, the proposed parametric method performs, in real time, the detection and tracking of the ego-lane marking. A self-assessment indicator quantifies the confidence of this information source. We conduct our investigations in a localization system which tightly couples GPS, GIS and lane markings in the probabilistic framework of a Particle Filter (PF). To this end, we propose the use of lane markings not only during the map-matching process but also to model the expected ego-vehicle motion. The reliability of the localization system, in the presence of unusual errors from the different information sources, is enhanced by taking into account different confidence indicators. Such a mechanism is later employed to identify error sources. This research concludes with an experimental validation of the proposed methods in real driving situations; their performance was quantified using an experimental vehicle and publicly available datasets.
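The abstract above does not give implementation details; as an illustration of the fusion scheme it describes (a particle filter tightly coupling GPS, map, and lane-marking observations), below is a minimal sketch of one update step. All names, the 2D state, the map lane at y = 0, and the noise parameters are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, gps_fix, lane_offset,
                         gps_sigma=3.0, lane_sigma=0.2):
    """One update of a simplified particle filter fusing a GPS fix (x, y)
    with a camera-measured lateral offset to the ego-lane marking.

    Assumes the map lane runs along y = 0, so the lane observation
    constrains the particles' y coordinate directly (a toy map model)."""
    # Predict: propagate particles with process noise (motion model omitted).
    particles = particles + rng.normal(0.0, 0.5, particles.shape)
    # Update: weight each particle by the GPS likelihood ...
    d = np.linalg.norm(particles - gps_fix, axis=1)
    w = weights * np.exp(-0.5 * (d / gps_sigma) ** 2)
    # ... and by agreement between the map lane and the measured offset.
    w = w * np.exp(-0.5 * ((particles[:, 1] - lane_offset) / lane_sigma) ** 2)
    w = w / w.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(w ** 2) < len(w) / 2:
        idx = rng.choice(len(w), size=len(w), p=w)
        particles, w = particles[idx], np.full(len(w), 1.0 / len(w))
    return particles, w
```

The confidence indicators mentioned in the abstract would, in a fuller version, scale `gps_sigma` and `lane_sigma` per measurement so that a distrusted source contributes less to the weights.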
|
47 |
Place recognition based visual localization in changing environments
Qiao, Yongliang 03 April 2017 (has links)
In many applications, it is crucial that a robot or vehicle localizes itself within the world, especially for autonomous navigation and driving. The goal of this thesis is to improve place recognition performance for visual localization in changing environments. The approach is as follows: in an off-line phase, geo-referenced images of each location are acquired and features are extracted and saved; in the on-line phase, the vehicle localizes itself by identifying a previously visited location through image or sequence retrieval. However, visual localization is challenging due to drastic appearance and illumination changes caused by weather conditions or seasonal change. This thesis addresses the challenge of improving place recognition techniques by strengthening the ability to describe and recognize places. Several approaches are proposed: 1) Multi-feature combination of CSLBP (extracted from the gray-scale image and the disparity map) and HOG features is used for visual localization. By taking advantage of depth, texture and shape information, visual recognition performance can be improved. In addition, locality-sensitive hashing (LSH) is used to speed up the process of place recognition; 2) Visual localization across seasons is proposed based on sequence matching and feature combination of GIST and CSLBP. Matching places by considering sequences and feature combination shows high robustness to extreme perceptual changes; 3) All-environment visual localization is proposed based on automatically learned Convolutional Network (ConvNet) features and localized sequence matching. To improve computational efficiency, LSH is used to achieve real-time visual localization with minimal accuracy degradation.
|
48 |
Development of a public transit information system using GIS and ITS technologies /
Riley, Sarah J. January 1900 (has links)
Thesis (M. App. Sc.)--Carleton University, 2002. / Includes bibliographical references (p. 205-210). Also available in electronic format on the Internet.
|
49 |
Cooperative perception : Application in the context of outdoor intelligent vehicle systems
Li, Hao 21 September 2012 (links)
The research theme of this dissertation is multi-vehicle cooperative perception (or cooperative perception) applied in the context of intelligent vehicle systems. The general goal of the presented works is to realize cooperative perception among multiple intelligent vehicles, aiming to provide better perception results than single-vehicle perception (or non-cooperative perception). Instead of focusing our research on the absolute performance of cooperative perception, we focus on the general mechanisms which enable the realization of cooperative localization and cooperative mapping (including moving objects detection), considering that localization and mapping are the two most fundamental tasks for an intelligent vehicle system. We also exploit the possibility of realizing certain augmented reality effects with the help of basic cooperative perception functionalities; we name this kind of practice cooperative augmented reality. Naturally, the contributions of the presented works consist of three aspects: cooperative localization, cooperative local mapping and moving objects detection, and cooperative augmented reality.
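The abstract above does not name a specific fusion algorithm. One common building block for cooperative localization, when two vehicles exchange pose estimates whose cross-correlation is unknown, is covariance intersection; the sketch below is an illustrative assumption, not the dissertation's actual method.

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, omega=0.5):
    """Fuse two state estimates (x_a, P_a) and (x_b, P_b) by covariance
    intersection, which stays consistent even when the cross-correlation
    between the two estimates is unknown (e.g. estimates exchanged
    between cooperating vehicles). `omega` in [0, 1] weights the sources."""
    Pa_inv = np.linalg.inv(P_a)
    Pb_inv = np.linalg.inv(P_b)
    # Fused information matrix is a convex combination of the inputs'.
    P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ x_a + (1 - omega) * Pb_inv @ x_b)
    return x, P
```

In practice `omega` is often chosen to minimize the trace or determinant of the fused covariance rather than fixed at 0.5.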
|
50 |
An Effective Framework of Autonomous Driving by Sensing Road/motion Profiles
Zheyuan Wang (11715263) 22 November 2021 (has links)
With more and more videos taken from dash cams on thousands of cars, retrieving these videos and searching for important information is a daunting task. The purpose of this work is to mine key road and vehicle motion attributes in a large-scale driving video data set for traffic analysis, sensing algorithm development and autonomous driving test benchmarks. Current sensing and control of autonomous cars based on full-view identification makes it difficult to maintain a high sensing frequency on a fast-moving vehicle, since increasing computation is needed to cope with driving environment changes.

A big challenge in video data mining is how to deal with huge amounts of data. We use a compact representation called the road profile system to visualize the road environment in long 2D images. It reduces the data from each frame of video to one line, thereby compressing a video clip to a single image. This dimensionality reduction method has several advantages. First, the data size is greatly compressed: the data is reduced from a video to an image, each frame in the video is compressed into a line, and the data size shrinks hundreds of times. While the size and dimensionality of the data are greatly compressed, the useful information in the driving video is completely preserved, and motion information is represented even more intuitively. Because of the data and dimensionality reduction, the identification algorithm is computationally more efficient than full-view identification methods, making real-time identification on the road possible. Second, the data is easier to visualize: since the three-dimensional video data is compressed into two-dimensional data, the reduction is more conducive to visualization and mutual comparison of the data. Third, continuously changing attributes are easier to show and capture. Due to the more convenient visualization of two-dimensional data, the position, color and size of the same object across a few frames are easier to compare and capture. At the same time, in many cases, the trouble caused by tracking and matching can be eliminated. Based on the road profile system, three tasks in autonomous driving are achieved using the road profile images.

The first application is road edge detection under different weather and appearance conditions for road following in autonomous driving, using the road profile image and linearity profile image in the road profile system. This work uses naturalistic driving video data mining to study the appearance of roads, covering large-scale road data and changes. A large number of naturalistic driving video sets were mined to sample the light-sensitive area for color feature distribution. The effective road contour image is extracted from long driving videos, thereby greatly reducing the amount of video data. Then, the weather and lighting type can be identified. For each weather and lighting condition, distinctive features are identified at the edge of the road to distinguish the road edge.

The second application is detecting vehicle interactions in driving videos via motion profile images, using the motion profile image in the road profile system. This work uses visual actions recorded in driving videos taken by a dashboard camera to identify such interactions. The motion profile images of the video are filtered at key locations, thereby reducing the complexity of object detection, depth sensing, target tracking and motion estimation. The purpose of this reduction is decision making for vehicle actions such as lane changing, vehicle following, and cut-in handling.

The third application is motion planning based on vehicle interactions and driving video. Taking note of the fact that a car travels in a straight line, we identify a few sample lines in the view to constantly scan the road, vehicles, and environment, generating a portion of the entire video data. Without redundant data processing, we perform semantic segmentation on streaming road profile images. We plan the vehicle's path/motion using the smallest possible data set that contains all necessary information for driving.

The results are obtained efficiently with acceptable accuracy, and can be used for driving video mining, traffic analysis, driver behavior understanding, etc.
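The core data reduction described above (each frame contributes one scanline; the lines stack over time into a long 2D profile image) can be sketched in a few lines. The function name and the fixed-row sampling are illustrative assumptions; the thesis's actual system samples specific road, motion, and linearity profile lines.

```python
import numpy as np

def road_profile(frames, row):
    """Build a road-profile image from a video clip: keep a single fixed
    scanline (`row`) from each frame and stack the lines over time.
    A clip of T frames of shape (H, W, 3) becomes one (T, W, 3) image,
    a reduction by a factor of H while preserving how the sampled line
    changes from frame to frame (i.e., the motion information)."""
    return np.stack([frame[row] for frame in frames], axis=0)
```

For a 240-line frame this is a 240x reduction in data volume, which is why detection on the profile image can run at a much higher rate than full-view identification.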
|