  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Programming methodologies for ADAS applications in parallel heterogeneous architectures

Dekkiche, Djamila 10 November 2017
Computer Vision (CV) is crucial for understanding and analyzing the driving scene in order to build more intelligent Advanced Driver Assistance Systems (ADAS). However, implementing CV-based ADAS in a real automotive environment is not straightforward: CV algorithms combine the challenges of high computing performance and algorithmic accuracy. To meet these requirements, new heterogeneous circuits have been developed. They consist of several processing units with different parallel computing technologies, such as GPUs and dedicated accelerators.
To better exploit the performance of such architectures, different languages are required depending on the underlying parallel execution model. In this work, we investigate various parallel programming methodologies based on a complex case study of stereo vision. We introduce the relevant features and limitations of each approach. We evaluate the programming tools mainly in terms of computing performance and programming productivity. The feedback from this research is crucial for the development of future CV algorithms that are well matched to parallel architectures, with a better compromise between computing performance, algorithmic accuracy, and programming effort.
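The stereo-vision case study at the heart of this work typically reduces to a dense block-matching kernel. The following is a minimal, hypothetical sketch (pure Python, sum-of-absolute-differences over a single scanline, toy data) of the per-pixel workload that such parallel architectures are asked to accelerate; it is illustrative only, not the thesis's implementation:

```python
def sad(left, right, x, d, w):
    """Sum of absolute differences between a window centered at x in the
    left scanline and the window shifted by disparity d in the right one."""
    return sum(abs(left[x + k] - right[x - d + k]) for k in range(-w, w + 1))

def disparity_row(left, right, max_d=4, w=1):
    """Winner-takes-all disparity for one scanline (lists of intensities).
    Every output pixel is computed independently, which is what makes this
    kernel a natural fit for GPUs and other data-parallel accelerators."""
    out = []
    for x in range(w + max_d, len(left) - w):
        costs = [sad(left, right, x, d, w) for d in range(max_d + 1)]
        out.append(costs.index(min(costs)))
    return out

# Toy scanline pair: the right view sees the left pattern shifted by 2 pixels.
left = [0, 0, 10, 50, 90, 50, 10, 0, 0, 0, 0, 0]
right = left[2:] + [0, 0]
print(disparity_row(left, right))
```

Around the textured region the estimated disparity is 2, matching the synthetic shift; in the flat zero region the matcher defaults to the first zero-cost candidate.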
12

Development of an Automation Test Setup for Navigation Data Processing

Bhonsle, Dhruvjit Vilas 18 February 2016
With the development of Advanced Driver Assistance Systems (ADAS), vehicles have seen improvements in safety, driving comfort, and overall vehicle systems. Today these systems are among the fastest growing in the automotive domain. Physical parameters such as map data, vehicle position, and speed are crucial for the functionalities implemented in ADAS. Navigation map databases are stored in proprietary formats, so an appropriate interface has to be defined for ADAS applications to access this data. Defining this interface is the main aim of the Advanced Driver Assistance Systems Interface Specifications (ADASIS) consortium; the specification allows a coordinated effort across industries to improve comfort and fuel efficiency. My research during this master thesis focuses on two stages of the ADASIS Test Environment developed at our company: the XML Comparator and the CAN stream generation stages. In this test environment, our company's ADASIS Reconstructor is tested against the Reference Reconstructor provided by the ADASIS consortium; the aim is to develop a Reconstructor that adheres to all ADASIS specifications. Prior to my work, these two stages lacked in-depth research and usability features.
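The core idea of an XML Comparator stage — walk two XML trees in parallel and report where they diverge — can be sketched with the standard library. This is a toy stand-in under assumed data (the element names and values below are invented; the real ADASIS formats are proprietary):

```python
import xml.etree.ElementTree as ET

def xml_diff(a, b, path=""):
    """Recursively compare two XML elements and collect human-readable
    differences in tag, attributes, text, and child count."""
    diffs = []
    here = f"{path}/{a.tag}"
    if a.tag != b.tag:
        return [f"{here}: tag {a.tag!r} != {b.tag!r}"]
    if a.attrib != b.attrib:
        diffs.append(f"{here}: attributes {a.attrib} != {b.attrib}")
    if (a.text or "").strip() != (b.text or "").strip():
        diffs.append(f"{here}: text {a.text!r} != {b.text!r}")
    if len(a) != len(b):
        diffs.append(f"{here}: child count {len(a)} != {len(b)}")
    # Compare children pairwise; extra children were already flagged above.
    for ca, cb in zip(a, b):
        diffs.extend(xml_diff(ca, cb, here))
    return diffs

# Hypothetical reference output vs. output of the Reconstructor under test.
ref = ET.fromstring('<position lat="48.1" lon="11.5"><speed>13.9</speed></position>')
out = ET.fromstring('<position lat="48.1" lon="11.6"><speed>14.2</speed></position>')
for d in xml_diff(ref, out):
    print(d)
```

A real comparator would additionally tolerate ordering differences and numeric round-off, but the recursive pairwise walk above is the skeleton.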
13

Effectiveness of Intersection Advanced Driver Assistance Systems in Preventing Crashes and Injuries in Left Turn Across Path / Opposite Direction Crashes in the United States

Bareiss, Max January 2019
Intersection crashes represent one-fifth of all police-reported traffic crashes and one-sixth of all fatal crashes in the United States each year. Active safety systems have the potential to reduce crashes and injuries across all crash modes by partially or fully controlling the vehicle when a crash is imminent. The objective of this thesis was to evaluate crash and injury reduction in a future United States fleet equipped with intersection advanced driver assistance systems (I-ADAS). To evaluate this, injury risk modeling was performed. The dataset used to evaluate injury risk was the National Automotive Sampling System / Crashworthiness Data System (NASS/CDS). An injured occupant was defined as a vehicle occupant who experienced an injury of maximum Abbreviated Injury Scale (AIS) 2 or greater, or who was fatally injured; this was referred to as MAIS2+F injury. Cases were selected in which front-row occupants of late-model vehicles were exposed to a frontal, near-side, or far-side crash. Logistic regression was used to develop an injury model with occupant, vehicle, and crash parameters as predictor variables. For the frontal and near-side impact models, New Car Assessment Program (NCAP) test results were used as a predictor variable. This work quantitatively described the injury risk for a wide variety of crash modes, informing effectiveness estimates. This work reconstructed 501 vehicle-to-vehicle left turn across path / opposite direction (LTAP/OD) crashes in the United States which had originally been investigated in the National Motor Vehicle Crash Causation Survey (NMVCCS). The performance of thirty different I-ADAS system variations was evaluated for each crash. These variations were the combinations of five Time to Collision (TTC) activation thresholds, three latency times, and two intervention types (automated braking and driver warning).
In addition, two sightline assumptions were modeled for each crash: one where the turning vehicle was visible long before the intersection, and one where the turning vehicle was only visible after entering the intersection. For resimulated crashes which were not avoided by I-ADAS, a new crash delta-v was computed for each vehicle, and the probability of MAIS2+F injury to each front-row occupant was computed. Depending on the system design, sightline assumption, I-ADAS variation, and fleet penetration, an I-ADAS system that automatically applies emergency braking could avoid 18%-84% of all LTAP/OD crashes and prevent 44%-94% of front-row occupants from receiving MAIS2+F injuries. I-ADAS crash and injured-person reduction effectiveness was higher when both vehicles were equipped with I-ADAS. This study presented the simulated effectiveness of a hypothetical intersection active safety system on real crashes which occurred in the United States, showing strong potential for these systems to reduce crashes and injuries. However, this crash and injury reduction effectiveness rested on the idealized assumption of full installation in all vehicles of a future fleet. To evaluate I-ADAS effectiveness in the United States fleet, the proportion of new vehicles with I-ADAS was modeled using Highway Loss Data Institute (HLDI) fleet penetration predictions. The number of potential LTAP/OD conflicts was modeled as increasing year over year due to a predicted increase in Vehicle Miles Traveled (VMT). Finally, the combined effect of these changes was used to predict the number of LTAP/OD crashes each year from 2019 to 2060. In 2060, we predicted that 70,439 NMVCCS-type LTAP/OD crashes would occur, with 3,836 MAIS2+F injured front-row occupants. This analysis shows that even with long-term fleet penetration of intersection active safety systems, many injuries will continue to occur.
This underscores the importance of maintaining passive safety performance in future vehicles. / M.S. / Future vehicles will have electronic systems that can avoid crashes in some cases where a human driver is unable, unaware, or reacts insufficiently to avoid the crash without assistance. The objective of this work was to determine, on a national scale, how many crashes and injuries could be avoided by Intersection Advanced Driver Assistance Systems (I-ADAS), a hypothetical version of one of these emerging systems. This work focused on crashes where one car is turning left at an intersection and the other car is driving straight through the intersection. The I-ADAS system has sensors which continuously search for other vehicles; when the system determines that a crash may happen, it either applies the brakes automatically or alerts the driver to apply the brakes. Rather than conducting actual crash tests, a large number of variations of the I-ADAS system were simulated on a computer. The basis for the simulations was real crashes that happened from 2005 to 2007 across the United States. The simulated variations changed the time at which the I-ADAS system triggered the brakes (or alert) and the amount of computation time required for the I-ADAS system to make a decision. In some turning crashes, the car cannot see the other vehicle because of obstructions, such as a line of vehicles waiting to turn left across the road. Because of this, simulations were conducted both with and without the visual obstruction. For comparison, we performed a simulation of the original crash as it happened in real life. Finally, since there are two cars in each crash, there are simulations where either car has the I-ADAS system and where both cars have it. Each simulation either ends in a crash or not, and these outcomes are tallied for each system variation.
The number of crashes avoided, divided by the number of simulations run, gives the crash effectiveness. Crash effectiveness ranged from 1% to 84% depending on the system variation. For each crash that still occurred, there is another simulation of the time immediately after impact to determine how severe the impact was. This is used to determine how many injuries are avoided, because often the crashes which still happened were made less severe by the I-ADAS system. In order to determine how many injuries can be avoided by making the crash less severe, the first chapter focuses on injury modeling. This analysis was based on crashes from 2008 to 2015 which were severe enough that one of the vehicles was towed, filtered down to crashes where the front or sides were damaged. Then, we compared the outcome (injury as reported by the hospital) to the circumstances (crash severity, age, gender, seat belt use, and others) to estimate how each of these crash circumstances affected the injury experienced by each driver and front-row passenger. A second goal of this chapter was to evaluate whether federal government crash ratings, commonly referred to as "star ratings", are related to whether the driver and passengers are injured. In frontal crashes (where a vehicle hits something going forward), the star rating does not seem to be related to the injury outcome. In near-side crashes (where the side next to the occupant is hit), a higher star rating is better. For frontal crashes, the government test is more extreme than all but a few crashes observed in real life, and this might be why the injury outcomes measured in this study are not related to the frontal star rating. Finally, these crash and injury effectiveness values will only ever be achieved if every car has an I-ADAS system. The objective of the third chapter was to evaluate how the crash and injury effectiveness numbers change each year as new cars are purchased and older cars are scrapped.
Early on, few cars will have I-ADAS and crashes and injuries will likely still occur at roughly the rate they would without the system. This means that crashes and injuries will continue to increase each year because the United States drives more miles each year. Eventually, as consumers buy new cars and replace older ones, left turn intersection crashes and injuries are predicted to be reduced. Long into the future (around 2050), the increase in crashes caused by miles driven each year outpaces the gains due to new cars with the I-ADAS system, since almost all of the old cars without I-ADAS have been removed from the fleet. In 2025, there will be 173,075 crashes and 15,949 injured persons that could be affected by the I-ADAS system. By 2060, many vehicles will have I-ADAS and there will be 70,439 crashes and 3,836 injuries remaining. Real cars will not have a system identical to the hypothetical I-ADAS system studied here, but systems like it have the potential to significantly reduce crashes and injuries.
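The Time to Collision thresholds that gate the simulated I-ADAS interventions can be illustrated with a toy straight-line model. All geometry and threshold values below are hypothetical, chosen only to show the warn-then-brake structure, not the thesis's simulation code:

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Straight-line TTC: time until the gap to the conflict point closes
    at the current closing speed. None when the vehicles are not closing."""
    if closing_speed_mps <= 0:
        return None
    return gap_m / closing_speed_mps

def i_adas_decision(gap_m, closing_speed_mps, warn_ttc=2.5, brake_ttc=1.0):
    """Toy intervention logic: warn the driver first, and trigger
    automated emergency braking only below a lower TTC threshold."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if ttc is None or ttc > warn_ttc:
        return "none"
    if ttc > brake_ttc:
        return "warn"
    return "brake"

# A through vehicle 20 m from the conflict point, closing at 10 m/s (TTC = 2 s):
print(i_adas_decision(20.0, 10.0))   # inside the warning band
print(i_adas_decision(8.0, 10.0))    # TTC = 0.8 s: automated braking
```

Sweeping `warn_ttc`, `brake_ttc`, and an added sensing latency over a set of reconstructed trajectories is, in miniature, what the thirty I-ADAS variations in the study amount to.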
14

WiFi-Based Driver Activity Recognition Using CSI Signal

Bai, Yunhao January 2020
No description available.
15

Intersection Collision Avoidance For Autonomous Vehicles Using Petri Nets

Shankar Kumar, Valli Sanghami 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Autonomous vehicles currently dominate the automotive field because of their impact on society. Connected and Automated Vehicles (CAVs) use different communication technologies to communicate with other vehicles, infrastructure, the cloud, etc. Using the information received from their sensors, the vehicles analyze the situation and take the steps necessary for smooth, collision-free driving. This thesis describes a cruise control system together with an intersection collision avoidance system, both based on Petri net models. The system consists of two internal controllers, for velocity and distance control respectively, and three external controllers for collision avoidance. Fault-tolerant redundant controllers are designed to keep these three controllers in check. The model is built using a Petri net (PN) toolbox and tested for various scenarios. The model is also validated, and its distinct properties are analyzed.
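The marking/firing semantics on which such Petri net controllers rest fits in a few lines. A minimal, hypothetical sketch (the places and transition below are invented, not the thesis's model): a transition is enabled when every input place holds a token, and firing it moves tokens from inputs to outputs:

```python
class PetriNet:
    """Minimal Petri net: marking maps place -> token count,
    transitions map name -> (input places, output places)."""
    def __init__(self, marking, transitions):
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy collision-avoidance fragment: braking becomes possible only when the
# vehicle is cruising AND an obstacle has been detected.
net = PetriNet(
    marking={"cruising": 1, "obstacle_detected": 1, "braking": 0},
    transitions={"apply_brakes": (["cruising", "obstacle_detected"], ["braking"])},
)
print(net.enabled("apply_brakes"))  # True
net.fire("apply_brakes")
print(net.marking["braking"])       # 1
```

Properties such as boundedness or deadlock-freedom, which the thesis analyzes with a PN toolbox, are statements about which markings are reachable through sequences of such firings.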
16

Development of Personalized Lateral and Longitudinal Driver Behavior Models for Optimal Human-Vehicle Interactive Control

Schnelle, Scott C. January 2016
No description available.
17

Track and Screen Evaluation of the Mobileye ADAS Camera System

Bartholomew, Meredith Carol 09 August 2022
No description available.
18

Bio-inspired visual sensors for robotic and automotive applications

Mafrica, Stefano 12 July 2016
Thanks to advances in the fields of robotics and intelligent transportation systems (ITS), the autonomous vehicles of the future are gradually becoming a reality.
As autonomous vehicles will have to behave safely in the presence of other vehicles, pedestrians, and other fixed and moving objects, one of the most important things they need to do is to effectively perceive both their own motion and the environment around them. In this thesis, we first investigated how bio-inspired visual sensors, which measure 1-D optic flow using only a few pixels based on findings about the fly's visual system, could be used to improve automatic parking maneuvers. We then worked on a novel bio-inspired silicon retina, showing that the new pixel, called M2APix, can auto-adapt over a 7-decade range and respond appropriately to step changes of up to ±3 decades, while remaining sensitive to contrasts as low as 2%. We lastly developed and tested a novel optic flow sensor based on this auto-adaptive retina and a new method for computing optic flow that is robust to the lighting levels, textures, and vibrations encountered on the road. We also built a car-like robot, called BioCarBot, which estimates its velocity and steering angle by means of an extended Kalman filter (EKF), using only the optic flow measurements delivered by two downward-facing sensors of this kind.
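The 1-D optic flow principle borrowed from the fly's eye can be illustrated as a time-of-travel measurement: a contrast seen by one photoreceptor is seen by its neighbor a delay Δt later, and the angular speed is the inter-receptor angle divided by Δt. A minimal sketch under assumed values (the signals, threshold, and 4° inter-receptor angle below are all hypothetical):

```python
def time_of_travel(sig_a, sig_b, dt, threshold=0.5):
    """Delay (s) between threshold crossings of two neighboring
    photoreceptor signals sampled every dt seconds."""
    def first_crossing(sig):
        for i, v in enumerate(sig):
            if v >= threshold:
                return i * dt
        return None
    ta, tb = first_crossing(sig_a), first_crossing(sig_b)
    if ta is None or tb is None or tb <= ta:
        return None  # no crossing, or motion in the other direction
    return tb - ta

def optic_flow_deg_per_s(sig_a, sig_b, dt, inter_receptor_angle_deg=4.0):
    """Optic flow magnitude = inter-receptor angle / time of travel."""
    delay = time_of_travel(sig_a, sig_b, dt)
    return None if delay is None else inter_receptor_angle_deg / delay

# A contrast edge crosses receptor A, then receptor B 20 ms later (dt = 1 ms).
sig_a = [0.0] * 10 + [1.0] * 40
sig_b = [0.0] * 30 + [1.0] * 20
print(optic_flow_deg_per_s(sig_a, sig_b, 0.001))  # 4 deg / 0.02 s = 200 deg/s
```

On a downward-facing sensor at known height, such angular speeds translate into ground speed, which is what lets BioCarBot's EKF recover velocity and steering angle from optic flow alone.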
19

Benchmarking of Vision-Based Prototyping and Testing Tools

Balasubramanian, ArunKumar 08 November 2017
The demand for Advanced Driver Assistance System (ADAS) applications is increasing rapidly, and their development requires efficient prototyping and real-time testing. ADTF (Automotive Data and Time-Triggered Framework) is a software tool from Elektrobit used for the development, validation, and visualization of vision-based applications, mainly for ADAS and autonomous driving. With ADTF, image or video data can be recorded and visualized, and data can be tested both online and offline. The development of ADAS applications requires image and video processing, and the algorithms have to be highly efficient and satisfy real-time requirements. The main objective of this research is to integrate the OpenCV library with the cross-platform ADTF. OpenCV provides efficient image processing algorithms which can be used with ADTF for quick benchmarking and testing. An ADTF filter framework has been developed in which OpenCV algorithms can be used directly; the framework is tested with .DAT and image files using a modular approach. CMake is also covered in this thesis, to build the system with ease. The ADTF filters are developed in C++ in Microsoft Visual Studio 2010, and the OpenMP API is used for parallel programming.
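The quick-benchmarking idea behind such a filter framework can be sketched independently of ADTF and OpenCV. Below is a hypothetical, pure-Python harness (the filter and frame data are stand-ins) that times an image-processing callback over a stream of frames and reports per-frame statistics against a real-time budget:

```python
import time

def benchmark_filter(filter_fn, frames, warmup=2):
    """Run filter_fn on each frame, discard the first warmup timings, and
    return (mean_ms, max_ms) per frame - the figures one would compare
    against a real-time frame budget such as 33 ms at 30 fps."""
    timings = []
    for i, frame in enumerate(frames):
        start = time.perf_counter()
        filter_fn(frame)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i >= warmup:
            timings.append(elapsed_ms)
    return sum(timings) / len(timings), max(timings)

def threshold_filter(frame, cutoff=128):
    """Stand-in workload: binarize a row-major grayscale frame."""
    return [[255 if px >= cutoff else 0 for px in row] for row in frame]

# Twelve small synthetic grayscale frames (48 rows x 64 columns).
frames = [[[(x * y + t) % 256 for x in range(64)] for y in range(48)]
          for t in range(12)]
mean_ms, max_ms = benchmark_filter(threshold_filter, frames)
print(f"mean {mean_ms:.3f} ms, worst {max_ms:.3f} ms per frame")
```

The warmup runs matter in practice: caches, JITs, and lazy allocations make the first frames unrepresentative, and a real-time verdict should rest on the worst case, not the mean.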
20

Methodologies and tools for embedding image processing algorithms on heterogeneous architectures

Saussard, Romain 03 July 2017
Car manufacturers increasingly provide Advanced Driver Assistance Systems (ADAS) based on cameras and image processing algorithms. To embed ADAS applications, semiconductor companies propose heterogeneous architectures. These Systems-on-Chip (SoCs) are composed of several processors with different capabilities on the same chip. However, with the increasing complexity of such systems, it becomes more and more difficult for an automotive actor to choose a SoC that can execute a given ADAS application while meeting real-time constraints.
In addition, embedding algorithms on this type of hardware is not trivial: one needs to determine how to spread the computational load between the different processors, in other words, the mapping of the computational load. In response to this issue, we defined during this thesis a global methodology for studying the embeddability of image processing algorithms for real-time execution. This methodology predicts the embeddability of a given image processing algorithm on several heterogeneous SoCs by automatically exploring the possible mappings. It is based on three major contributions: the modeling of an algorithm and its real-time constraints, the characterization of a heterogeneous SoC, and a performance prediction approach which can address different types of architectures.
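The automatic exploration of mappings described above can be sketched as a brute-force search: assign each kernel of the algorithm to a processor, predict the per-processor load from characterized cost figures, and keep the mapping with the smallest makespan. Everything below (kernel names, cost table, the two-processor SoC) is hypothetical:

```python
from itertools import product

# Predicted cost (ms) of each image-processing kernel on each processor,
# as a characterization step might produce. All numbers are made up.
COST_MS = {
    "filter":   {"cpu": 8.0,  "gpu": 2.0},
    "gradient": {"cpu": 6.0,  "gpu": 1.5},
    "matching": {"cpu": 20.0, "gpu": 4.0},
    "decision": {"cpu": 1.0,  "gpu": 5.0},  # control-heavy: poor GPU fit
}

def best_mapping(cost, processors=("cpu", "gpu")):
    """Exhaustively try every kernel-to-processor assignment and return
    (mapping, makespan), minimizing the most-loaded processor's total time.
    Assumes kernels on different processors can run concurrently."""
    kernels = list(cost)
    best = (None, float("inf"))
    for assignment in product(processors, repeat=len(kernels)):
        load = {p: 0.0 for p in processors}
        for kernel, proc in zip(kernels, assignment):
            load[proc] += cost[kernel][proc]
        makespan = max(load.values())
        if makespan < best[1]:
            best = (dict(zip(kernels, assignment)), makespan)
    return best

mapping, makespan = best_mapping(COST_MS)
print(mapping, f"makespan = {makespan} ms")
```

With these made-up costs the search offloads the data-parallel kernels to the GPU but keeps the control-heavy decision step on the CPU. Real methodologies replace the exhaustive loop with smarter exploration and add communication costs, but the objective, minimizing the predicted makespan subject to real-time constraints, is the same.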
