11

Avaliação e proposta de sistemas de câmeras estéreo para detecção de pedestres em veículos inteligentes / Stereo cameras systems evaluation and proposal for pedestrian detection on intelligent vehicles

Angelica Tiemi Mizuno Nakamura 06 December 2017 (has links)
Pedestrian detection is an important area in computer vision with the potential to save lives when applied on vehicles. This application requires accurate detections and real-time operation, keeping the number of false positives to a minimum. Over the past few years, several ideas have been explored, including approaches with deep network architectures, which have reached considerably better performance. However, detecting pedestrians far from the camera is still challenging due to their small size in images, making it necessary to evaluate the effectiveness of existing approaches at avoiding or reducing traffic accidents that involve pedestrians. Thus, as the first proposal of this work, a study was conducted to verify the applicability of state-of-the-art methods for collision avoidance in urban scenarios. For this, the speed and dynamics of the vehicle, the reaction time, and the performance of the detection methods were considered. The results from this study show that it is still not possible to use a vision-based pedestrian detector for driver assistance on urban roads with fast-moving traffic, since none of the evaluated methods can detect pedestrians far from the vehicle while operating in real time. However, for large-scale pedestrians in images, methods based on the sliding-window approach already perform reliably well with fast inference times. Thus, in order to restrict the operation of detectors to pedestrians at larger scales and thereby enable the application of vision-based methods in vehicles, a camera setup was proposed that captures images over a larger range of distances in front of the vehicle, with pedestrian resolution almost twice as large as that of a commercial camera. Experimental results reveal a considerable improvement in detection performance, overcoming the difficulty caused by the small scale of distant pedestrians in images.
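As a rough illustration of the scale problem this abstract describes, the pinhole camera model gives a pedestrian's image height as h_px = H · f / Z, so image size shrinks inversely with distance and doubling the effective focal length roughly doubles the pedestrian's resolution at a given range. The sketch below uses assumed values for focal length, pixel pitch, and pedestrian height; it is not the camera configuration proposed in the thesis.

```python
# Rough pinhole-camera estimate of a pedestrian's height in pixels at a
# given distance. All parameters below are assumed for illustration only.

def pedestrian_pixel_height(distance_m,
                            pedestrian_height_m=1.7,
                            focal_length_mm=6.0,
                            pixel_pitch_um=4.2):
    """Project a pedestrian of known height onto the image plane."""
    focal_length_px = focal_length_mm * 1e3 / pixel_pitch_um  # mm -> pixels
    return pedestrian_height_m * focal_length_px / distance_m

if __name__ == "__main__":
    for d in (10, 25, 50, 80):
        h = pedestrian_pixel_height(d)
        print(f"{d:3d} m -> ~{h:5.1f} px")  # at 80 m the pedestrian is only ~30 px tall
```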
12

Night Pedestrian Detection System Based On Fuzzy Reasoning

Chang, Shun-Kai 16 August 2012 (has links)
No description available.
13

Pedestrian detection and driver attention : cues needed to determine risky pedestrian behaviour in traffic

Larsson, Annika January 2005 (has links)
The purpose of this thesis was to determine which perceptual cues drivers use to identify pedestrians that may constitute a risk in traffic. The methods chosen were recordings of pedestrian behaviour in Linköping, by means of a stationary video camera as well as a video camera mounted in a car. Interviews about the recordings from the mobile camera were conducted with taxi drivers and driving instructors. The results show that drivers not only react to pedestrians they believe will behave in a dangerous way, but also to pedestrians who probably will not behave in such a way, but where the possibility still exists. The study concluded that it is not possible to determine how risky a pedestrian is considered to be by using only behavioural factors such as trajectory, position on the sidewalk, and distance. It is also necessary to include environmental factors, mainly where the pedestrian and car are positioned in relation to the side of the road, so that the behaviour of the pedestrian can be interpreted.
14

Reliability evaluation and error mitigation in pedestrian detection algorithms for embedded GPUs / Validação da confiabilidade e tolerância a falhas em algoritmos de detecção de pedestres para GPUs embarcadas

Santos, Fernando Fernandes dos January 2017 (has links)
Pedestrian detection reliability is a fundamental problem for autonomous or aided driving. Methods that use object detection algorithms such as Histogram of Oriented Gradients (HOG) or Convolutional Neural Networks (CNN) are today very popular in automotive applications. Embedded Graphics Processing Units (GPUs) are exploited to perform object detection very efficiently. Unfortunately, GPU architectures have been shown to be particularly vulnerable to radiation-induced failures. This work presents an experimental evaluation and analytical study of the reliability of two types of object detection algorithms: HOG and CNNs. The aim of this research is not just to quantify but also to qualify radiation-induced errors in object detection applications executed on embedded GPUs. HOG experimental results were obtained using two different embedded GPU architectures (Tegra and AMD APU), each exposed for about 100 hours to a controlled neutron beam at Los Alamos National Laboratory (LANL). Precision and Recall metrics are used to evaluate error criticality. The analysis shows that, while HOG is intrinsically resilient (65% to 85% of output errors only slightly impact detection), it experienced some particularly critical errors that could result in undetected pedestrians or unnecessary vehicle stops. This work also evaluates the reliability of two Convolutional Neural Networks for object detection: You Only Look Once (YOLO) and Faster R-CNN. Three different GPU architectures (Kepler, Maxwell, and Pascal) were exposed to controlled neutron beams while detecting objects in both the Caltech and Visual Object Classes data sets. By analyzing the corrupted output of the neural networks, it is possible to distinguish between tolerable errors and critical errors, i.e., errors that could impact detection. Additionally, extensive fault-injection campaigns at the application level (GDB) and at the architectural level (SASSIFI) were performed to identify the critical procedures of HOG and YOLO. Results show that not all stages of the object detection algorithms are critical to the reliability of the final classification. Thanks to the fault-injection analysis, it is possible to identify the portions of HOG and Darknet that, if hardened, are most likely to increase reliability without introducing unnecessary overhead. The proposed HOG hardening strategy is able to detect up to 70% of errors with a 12% execution-time overhead.
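For reference, the HOG pipeline evaluated in this work follows the classic HOG + linear SVM sliding-window scheme. The sketch below shows a minimal CPU baseline using OpenCV's built-in people detector; it is a generic example, not the GPU implementation that was irradiated, and the input file name and detection parameters are assumptions.

```python
import cv2

# Minimal HOG + linear SVM pedestrian detector (CPU baseline).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("frame.png")   # hypothetical input frame
rects, weights = hog.detectMultiScale(
    image,
    winStride=(8, 8),   # sliding-window stride
    padding=(8, 8),
    scale=1.05,         # image-pyramid scale factor
)

# Draw every detected pedestrian window on the frame.
for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detections.png", image)
```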
16

Motion based vision methods and their applications / Méthodes de vision à la motion et leurs applications

Wang, Yi January 2017 (has links)
Motion detection is a basic video analytics operation on which many high-level computer vision tasks are built, e.g., pedestrian detection, anomaly detection, scene understanding, and object tracking. Even though a large number of motion detection methods have been proposed in the last decades, some important questions are still unanswered, including: (1) how to separate the foreground from the background accurately, even under extremely challenging circumstances? (2) how to evaluate different motion detection methods? And (3) how to use the motion information extracted by motion detection to help improve high-level computer vision tasks? In this thesis, we address four problems related to motion detection: 1. How can we benchmark motion detection methods, and on which videos? Current datasets are either too small, with a limited number of scenarios, or only provide bounding-box ground truth that indicates the rough location of foreground objects. As a solution, we built the largest and most objective motion detection dataset in the world, with pixel-accurate ground truth, to evaluate and compare motion detection methods, and organized an international competition on it (CVPR 2014). We also explore various evaluation metrics as well as different combination strategies. 2. Providing pixel-accurate ground truth is a huge challenge when building a motion detection dataset. While automatic labeling methods suffer from a false detection rate too large for them to serve as ground truth, manual labeling of hundreds of thousands of frames is extremely time consuming. To solve this problem, we propose an interactive deep learning method for segmenting moving objects from videos. The proposed method reaches human-level accuracy while lowering the labeling time by a factor of 40. 3. Pedestrian detectors suffer from either false positive or false negative detections, depending on the parameter tuning. Unfortunately, manual adjustment of parameters for a large number of videos is not feasible in practice. In order to make pedestrian detectors more robust on a large variety of videos, we combine motion detection with various state-of-the-art pedestrian detectors. This is done by a novel motion-based nonlinear filtering process that improves the detectors by a significant margin. 4. Scene background initialization is the process by which a method tries to recover the RGB background image of a video without foreground objects in it. One of the reasons background modeling is challenging is that there is no good dataset and benchmarking framework to estimate the performance of background modeling methods. To fix this problem, we propose an extensive survey as well as a novel benchmarking framework for scene background initialization, together with the largest dataset of its kind, used in an international competition (ICPR 2016).
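The motion-based filtering idea in point 3 can be illustrated by gating pedestrian detections with a foreground mask from background subtraction: a detection is kept only if enough of its bounding box overlaps moving pixels. The sketch below is a simplified stand-in for the thesis's nonlinear filter; the background subtractor settings and the overlap threshold are assumed choices.

```python
import cv2
import numpy as np

# Keep a pedestrian detection only if enough of its bounding box is covered by
# foreground (moving) pixels. Simplified stand-in for the thesis's filter.
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def filter_by_motion(frame, boxes, min_overlap=0.3):
    fg_mask = backsub.apply(frame)          # 255 = foreground, 127 = shadow
    kept = []
    for (x, y, w, h) in boxes:
        roi = fg_mask[y:y + h, x:x + w]
        overlap = np.count_nonzero(roi == 255) / float(max(w * h, 1))
        if overlap >= min_overlap:
            kept.append((x, y, w, h))
    return kept
```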
17

Empirical Study of Pedestrian Detection using Deep Learning

Kapkic, Ahmet 11 May 2021 (has links)
No description available.
18

Synthetic Data for Training and Evaluation of Critical Traffic Scenarios

Collin, Sofie January 2021 (has links)
Modern camera-based vehicle safety systems rely heavily on machine learning and consequently require large amounts of training data to perform reliably. However, collecting and annotating the needed data is an extremely expensive and time-consuming process. In addition, it is exceptionally difficult to collect data that covers critical scenarios. This thesis investigates to what extent synthetic data can replace real-world data for these scenarios. Since only a limited amount of data consisting of such real-world scenarios is available, this thesis instead makes use of proxy scenarios, e.g. situations where pedestrians are located close in front of the vehicle (for example at a crosswalk). The presented approach involves training a detector on real-world data from which all samples of these proxy scenarios have been removed, and comparing it to other detectors trained on data where the removed samples have been replaced with varying amounts of synthetic data. A method for generating synthetic data and annotating it automatically and accurately, using features in the CARLA simulator, is presented. The domain gap between the synthetic and real-world data is also analyzed, and methods in domain adaptation and data augmentation are reviewed. The presented experiments show that aligning statistical properties between the synthetic and real-world datasets distinctly mitigates the domain gap. There are also clear indications that synthetic data can help detect pedestrians in critical traffic situations. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
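One simple way to "align statistical properties" between domains is to match the per-channel mean and standard deviation of the synthetic images to statistics computed on the real-world training set. The sketch below illustrates this idea only; the alignment actually used in the thesis may differ, and the statistics shown are assumed values.

```python
import numpy as np

# Shift each synthetic image's per-channel mean/std towards statistics computed
# on the real-world training set. The statistics below are assumed values.
real_mean = np.array([0.41, 0.39, 0.38])   # assumed per-channel mean (RGB, 0..1)
real_std = np.array([0.22, 0.21, 0.23])    # assumed per-channel std

def align_to_real(synthetic_img):
    """synthetic_img: float array in [0, 1] with shape (H, W, 3)."""
    syn_mean = synthetic_img.reshape(-1, 3).mean(axis=0)
    syn_std = synthetic_img.reshape(-1, 3).std(axis=0) + 1e-6
    aligned = (synthetic_img - syn_mean) / syn_std * real_std + real_mean
    return np.clip(aligned, 0.0, 1.0)
```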
19

Investigation regarding the Performance of YOLOv8 in Pedestrian Detection / Undersökning angående YOLOv8s prestanda att detektera fotgängare

Jönsson Hyberg, Jonatan, Sjöberg, Adam January 2023 (has links)
Autonomous cars have become a trending topic as cars become better and better at driving autonomously. One of the big changes that has allowed autonomous cars to progress is the improvement in machine learning. Machine learning has made autonomous cars able to detect and react to obstacles on the road in real time. As in all machine learning, no single solution works better than all others; each solution has different strengths and weaknesses. That is why this study has tried to find the strengths and weaknesses of the object detector You Only Look Once v8 (YOLOv8) in autonomous cars. YOLOv8 was tested for how fast and accurately it could detect pedestrians in traffic in normal daylight images and in light-augmented images. The trained YOLOv8 model was able to learn to detect pedestrians with high accuracy in daylight images, achieving a mean Average Precision 50 (mAP50) of 0.874 at 67 frames per second (FPS). However, the model struggled especially when the images got darker, which means that YOLOv8 at its current stage might not be suitable as the main detector for autonomous cars, since the detector loses accuracy at night. More tests with other datasets are needed to find all the strengths and weaknesses of YOLOv8.
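To make the reported mAP50/FPS figures concrete, the sketch below shows minimal YOLOv8 pedestrian inference with the ultralytics package; the weight file, image name, dataset configuration file, and confidence threshold are assumptions, not the study's own setup.

```python
from ultralytics import YOLO  # assumes the ultralytics package is installed

# Pretrained COCO weights; file names and threshold are assumed for illustration.
model = YOLO("yolov8n.pt")

# Keep only COCO class 0 ("person") when predicting on a hypothetical image.
results = model.predict("street.jpg", classes=[0], conf=0.25)
for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"pedestrian at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
          f"conf={float(box.conf):.2f}")

# Validation on an annotated set (hypothetical config) reports the mAP50 metric
# cited above:
# metrics = model.val(data="pedestrians.yaml")
# print(metrics.box.map50)
```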
20

Multi-modal, Multi-Domain Pedestrian Detection and Classification : Proposals and Explorations in Visible over StereoVision, FIR and SWIR

Miron, Alina Dana 16 July 2014 (has links) (PDF)
The main purpose of constructing Intelligent Vehicles is to increase the safety of all traffic participants. The detection of pedestrians, as one of the most vulnerable categories of road users, is paramount for any Advanced Driver Assistance System (ADAS). Although this topic has been studied for almost fifty years, a perfect solution does not yet exist. This thesis focuses on several aspects of pedestrian classification and detection, with the objective of exploring and comparing multiple light spectra (Visible, Short Wave Infrared, Far Infrared) and modalities (Intensity, Depth by Stereo Vision, Motion). Among the variety of imagers, Far Infrared (FIR) cameras, capable of measuring the temperature of the scene, are particularly interesting for detecting pedestrians, who will usually have a higher temperature than their surroundings. Due to the lack of suitable public datasets containing thermal images, we have acquired and annotated a database, which we name RIFIR, containing both Visible and Far-Infrared images. This dataset has allowed us to compare the performance of different state-of-the-art features in the two domains. Moreover, we have proposed a new feature adapted to FIR images, called Intensity Self Similarity (ISS). The ISS representation is based on the relative intensity similarity between different sub-blocks within a pedestrian region of interest. The experiments performed on different image sequences have shown that, in general, the FIR spectrum performs better than the Visible domain. Nevertheless, the fusion of the two domains provides the best results. The second domain that we have studied is the Short Wave Infrared (SWIR), a light spectrum that had never been used before for the task of pedestrian classification and detection. Unlike FIR cameras, SWIR cameras can image through the windshield, and can thus be mounted in the vehicle's cabin. In addition, SWIR imagers can see clearly at long distances, making them suitable for vehicle applications. We have acquired and annotated a database, which we name RISWIR, containing both Visible and SWIR images. This dataset has allowed us to compare the performance of different pedestrian classification algorithms, along with a comparison between Visible and SWIR. Our tests have shown that SWIR might be promising for ADAS applications, performing better than the Visible domain on the considered dataset. Even if FIR and SWIR have provided promising results, the Visible domain is still widely used due to the low cost of the cameras. The classical monocular imagers used for object detection and classification can lead to a computational time well beyond real time. Stereo Vision provides a way of reducing the hypothesis search space through the use of the depth information contained in the disparity map. Therefore, a robust disparity map is essential in order to generate good hypotheses about the location of pedestrians. In this context, in order to compute the disparity map, we have proposed different cost functions that are robust to radiometric distortions. Moreover, we have shown that some simple post-processing techniques can have a great impact on the quality of the obtained depth images. The use of the disparity map is not strictly limited to the generation of hypotheses; it can also be used for feature computation, providing information complementary to color images. We have studied and compared the performance of features computed from different modalities (Intensity, Depth, and Flow) and in two domains (Visible and FIR). The results have shown that the most robust systems are the ones that take all three modalities into consideration, especially when dealing with occlusions.
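A classic example of a stereo matching cost that is robust to radiometric distortions is the census transform, which encodes only the ordering of intensities around each pixel rather than their absolute values. The sketch below shows a simple census descriptor and Hamming-distance cost as a representative example; it is not one of the cost functions proposed in the thesis.

```python
import numpy as np

# Census transform: for each pixel, record which neighbours in a window are
# darker than the centre. Matching cost is then the Hamming distance between
# descriptors, which ignores absolute intensity. Borders wrap around here; a
# real implementation would pad the image instead.

def census_transform(img, window=5):
    """img: 2D grayscale array; returns a 24-bit descriptor per pixel."""
    r = window // 2
    desc = np.zeros(img.shape, dtype=np.uint32)
    bit = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            desc |= (shifted < img).astype(np.uint32) << bit
            bit += 1
    return desc

def hamming_cost(desc_left, desc_right):
    """Per-pixel Hamming distance between two census-descriptor images."""
    diff = desc_left ^ desc_right
    bytes_view = diff.view(np.uint8).reshape(*diff.shape, 4)
    return np.unpackbits(bytes_view, axis=-1).sum(axis=-1)
```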
