1 |
Pedestrian Detection and Recognition System Using Support Vector Machines
Wang, Sz-bo, 03 September 2010 (has links)
This study considers both a dynamic and a static pedestrian detection system using a single camera. For the static system, a static pedestrian database is reconstructed; HOG features combined with an SVM classifier are used for detection, and experimental results show that the algorithm detects people across several scenes. For the dynamic system, the motivation is that the population of older and disabled persons is steadily increasing and crossing an intersection is a challenge for them, so this study develops single-camera dynamic pedestrian detection for assisting autonomous transport robots: the system detects people at the intersection to assist older and disabled persons while they cross. A foot detection algorithm is used to detect moving pedestrians. According to the experimental results, lighting and clothing affect detection in both the dynamic and the static system, while the dynamic system maintains real-time performance in both the longitudinal and the lateral directions.
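As a rough illustration of the HOG + SVM pipeline described in this abstract, the sketch below uses OpenCV's built-in HOG descriptor with its stock people detector; the window parameters and file names are assumptions standing in for the thesis' own reconstructed database and trained classifier.

```python
import cv2

# HOG + linear-SVM pedestrian detection sketch. OpenCV's pretrained people
# detector stands in for the thesis' own database-trained classifier.
hog = cv2.HOGDescriptor()  # default 64x128 detection window, 9 orientation bins
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("scene.jpg")            # hypothetical test scene
rects, _weights = hog.detectMultiScale(
    image,
    winStride=(8, 8),    # sliding-window step in pixels
    padding=(8, 8),
    scale=1.05,          # image-pyramid scale factor
)

for (x, y, w, h) in rects:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(rects)} pedestrian windows detected")
cv2.imwrite("detections.jpg", image)
```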
|
2 |
Color Features for Boosted Pedestrian Detection / Färgsärdrag för boostingbaserad fotgängardetektering
Hansson, Niklas, January 2015 (has links)
The car has become increasingly intelligent over the years. Today's radar- and vision-based safety systems can warn the driver and brake the vehicle automatically if obstacles are detected. Research projects such as the Google Car have even succeeded in creating fully autonomous cars. The demands for obtaining the highest rating in safety tests such as Euro NCAP are also steadily increasing, and as a result the development of these systems has become more attractive for car manufacturers. In the near future, a car must have a system for detecting, and automatically braking for, pedestrians in order to receive the highest safety rating of five stars. The prospect is that the volume of active safety systems will increase drastically when car manufacturers start installing them not only in luxury cars but also in regularly priced ones. The use of automatic braking places high demands on the performance of active safety systems: false positives must be avoided at all costs. Dollar et al. [2014] introduced Aggregated Channel Features (ACF), which is based on a 10-channel LUV+HOG feature map. The method uses decision trees learned through boosting and has been shown to outperform previous algorithms in object detection tasks. The rediscovery of neural networks, and especially Convolutional Neural Networks (CNN), has increased performance in almost every field of machine learning, including pedestrian detection. Recently, Yang et al. [2015] combined the two approaches by using the feature maps from a CNN as input to a decision-tree-based boosting framework, which resulted in state-of-the-art performance on the challenging Caltech pedestrian data set. This thesis presents an approach to improving the performance of a cascade of boosted classifiers by investigating the impact of using color information for pedestrian detection. The color self-similarity feature introduced by Walk et al. [2010] is used to create a version better adapted for boosting, and this feature is then combined with a gradient-based feature at the last step of a cascade. The presented feature increases performance compared to classifiers currently used at Autoliv, both on data recorded by Autoliv and on the benchmark Caltech pedestrian data set.
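As a rough illustration of the color self-similarity idea that the thesis adapts for boosting, the sketch below tiles a detection window into cells, computes an HS colour histogram per cell, and uses histogram intersection between every pair of cells as the feature vector; the cell size, bin counts, and intersection measure are assumptions, not the exact variant developed in the thesis.

```python
import itertools
import cv2
import numpy as np

def color_self_similarity(window_bgr, cell=8, bins=(8, 8)):
    """Color self-similarity (CSS)-style features for one detection window.

    The window is tiled into cell x cell blocks, an HS colour histogram is
    computed per block, and histogram intersection between every pair of
    blocks forms the feature vector (one value per pair).
    """
    hsv = cv2.cvtColor(window_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    hists = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            block = hsv[y:y + cell, x:x + cell]
            hist = cv2.calcHist([block], [0, 1], None, list(bins), [0, 180, 0, 256])
            hists.append(cv2.normalize(hist, None).flatten())
    feats = [np.minimum(a, b).sum()             # histogram intersection
             for a, b in itertools.combinations(hists, 2)]
    return np.asarray(feats, dtype=np.float32)

# Hypothetical 64x128 pedestrian window; the resulting vector would be fed,
# together with gradient channels, to the boosted decision trees.
window = np.random.randint(0, 256, (128, 64, 3), dtype=np.uint8)
print(color_self_similarity(window).shape)
```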
|
3 |
Pedestrian Detection on FPGA
Qureshi, Kamran, January 2014 (has links)
Image processing emerges from the curiosity of human vision. Translating what we see in everyday life, and how we differentiate between objects, into robotic vision is a challenging and modern research topic. This thesis focuses on detecting a pedestrian within a standard-format image, and the efficiency of the algorithm is evaluated after its implementation on an FPGA. The algorithm for pedestrian detection was developed using MATLAB as a base. To detect a pedestrian, a histogram of oriented gradients (HOG) of an image was computed; the study indicates that the HOG is distinctive for different objects within an image. The HOG of a series of images was computed to train a binary classifier, and new images were then fed to the classifier in order to test its efficiency. Within the time frame of the thesis, the algorithm was partially translated to a hardware description using VHDL. The proficiency of the hardware implementation was noted and the results exported to MATLAB for further processing. A hybrid model was created in which the pre-processing steps were computed on the FPGA and the classification was performed in MATLAB. The outcome of the thesis shows that HOG is a very efficient and effective way to classify and differentiate objects within an image. Given its efficiency, the algorithm may even be extended to video.
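The pre-processing stage moved to the FPGA corresponds to the gradient and orientation-binning front end of HOG. The NumPy sketch below is a software reference model of that step, of the kind one might check a VHDL implementation against; the 8x8 cell size and 9 orientation bins are the common defaults, assumed here rather than taken from the thesis.

```python
import numpy as np

def hog_cell_histograms(gray, cell=8, nbins=9):
    """Reference model of the HOG front end: gradients, orientation binning,
    and magnitude-weighted per-cell histograms."""
    gray = gray.astype(np.float32)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]      # centred [-1, 0, 1] filter
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation

    h, w = gray.shape
    cells_y, cells_x = h // cell, w // cell
    hist = np.zeros((cells_y, cells_x, nbins), dtype=np.float32)
    bin_idx = (ang / (180.0 / nbins)).astype(int) % nbins
    for cy in range(cells_y):
        for cx in range(cells_x):
            sl = (slice(cy * cell, (cy + 1) * cell),
                  slice(cx * cell, (cx + 1) * cell))
            # each pixel votes its gradient magnitude into its orientation bin
            np.add.at(hist[cy, cx], bin_idx[sl].ravel(), mag[sl].ravel())
    return hist

window = np.random.randint(0, 256, (128, 64)).astype(np.uint8)  # hypothetical window
print(hog_cell_histograms(window).shape)   # (16, 8, 9)
```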
|
4 |
Multiple Human Body Detection in Crowds
Feng, Weinan, January 2012 (has links)
The objective of this project is to use digital imaging devices to monitor a delineated area of public space and to register statistics about people moving across this area. A feasible detection approach based on background subtraction was developed and tested on 39 images. Individual pedestrians in the images can be detected and counted. The approach is suited to detecting and counting pedestrians that do not overlap, with a detection accuracy above 80%.
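A minimal version of the background-subtraction counting pipeline described above can be assembled with OpenCV as sketched below; the MOG2 subtractor, the blob-area threshold, and the video file name are illustrative assumptions.

```python
import cv2

# Background-subtraction pedestrian counting sketch: foreground mask,
# morphological clean-up, then one count per sufficiently large blob.
subtractor = cv2.createBackgroundSubtractorMOG2(history=50, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
MIN_AREA = 500  # assumed minimum blob area (pixels) for one pedestrian

cap = cv2.VideoCapture("monitored_area.mp4")   # hypothetical footage
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    count = sum(1 for c in contours if cv2.contourArea(c) >= MIN_AREA)
    print("pedestrians in frame:", count)
cap.release()
```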
|
5 |
Keypoint-Based Binocular Distance Measurement for Pedestrian Detection System on Vehicle
Zhao, Mingchang, January 2014 (has links)
The Pedestrian Detection System (PDS) has become a significant area of research aimed at protecting pedestrians. Despite the large body of research, most current PDSs are designed to detect pedestrians without knowing their distance from the car. In fact, a priori knowledge of the distance between the car and a pedestrian allows the system to make the appropriate decision in order to avoid a collision. Typical methods of distance measurement require additional equipment (e.g., radar) which, unfortunately, cannot identify objects, and traditional stereo-vision methods have poor precision in long-range conditions. In this thesis, we use a keypoint-based feature extraction method to generate the parallax of a detectable object in a binocular vision system, instead of relying on a disparity map. Our method enhances tolerance to the instability of a moving vehicle, and it also enables binocular measurement systems to be equipped with a zoom lens and to have a greater distance between the cameras. In addition, we designed a crossover re-detection and tracking method to reinforce the robustness of the system (one camera helps the other reduce detection errors). Our system is able to measure the distance between the car and pedestrians, and it can also be used to measure the distance between the car and other objects such as traffic signs or animals. In a real-world experiment, the system showed a 7.5% margin of error in outdoor, long-range conditions.
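The ranging idea described above (matching sparse keypoints between the left and right views and converting their horizontal parallax into distance) can be sketched as follows; the ORB detector, the cross-check matching, and the focal-length and baseline values are placeholders rather than the thesis' actual calibration.

```python
import cv2
import numpy as np

FOCAL_PX = 1400.0   # assumed focal length in pixels (zoom lens)
BASELINE_M = 0.60   # assumed distance between the two cameras, metres

def keypoint_distance(left_roi, right_roi):
    """Median distance to the object in the ROI, computed from keypoint
    parallax (used here instead of a dense disparity map)."""
    orb = cv2.ORB_create(500)
    kp_l, des_l = orb.detectAndCompute(left_roi, None)
    kp_r, des_r = orb.detectAndCompute(right_roi, None)
    if des_l is None or des_r is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)
    disparities = [kp_l[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0] for m in matches]
    disparities = [d for d in disparities if d > 1.0]   # keep plausible parallax only
    if not disparities:
        return None
    # Z = f * B / d, evaluated on the median disparity for robustness
    return FOCAL_PX * BASELINE_M / float(np.median(disparities))

# Hypothetical pedestrian regions cropped from the left and right images
left = cv2.imread("left_roi.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_roi.png", cv2.IMREAD_GRAYSCALE)
print("estimated distance [m]:", keypoint_distance(left, right))
```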
|
6 |
A Novel Semantic Feature Fusion-based Pedestrian Detection System to Support Autonomous Vehicles
Sha, Mingzhi, 27 May 2021 (has links)
Intelligent transportation systems (ITS) have become a popular means of enhancing the safety and efficiency of transportation. Pedestrians, as essential participants in ITS, are far more vulnerable in a traffic collision than the passengers inside a vehicle. In order to protect the safety of all traffic participants and enhance transportation efficiency, autonomous vehicles are required to detect pedestrians accurately and in a timely manner.
In the area of pedestrian detection, deep learning-based methods have developed rapidly since the appearance of powerful GPUs, and many researchers are working to improve detection accuracy using Convolutional Neural Network (CNN)-based detectors.
In this thesis, we propose a one-stage, anchor-free pedestrian detector named Bi-Center Network (BCNet), which is aided by the semantic features of pedestrians' visible parts. The framework of BCNet has two main modules: the feature extraction module produces concatenated feature maps extracted from different layers of ResNet, and the four parallel branches of the detection module produce the full-body center keypoint heatmap, the visible-part center keypoint heatmap, heights, and offsets, respectively. The final bounding boxes are recovered from the high-response points on the fused center keypoint heatmap together with the corresponding predicted heights and offsets.
The fused center keypoint heatmap contains the fused semantic features of the full body and the visible part of each pedestrian. We therefore conduct ablation studies to examine the effect of feature fusion and how visibility features benefit the detector's performance, using two types of approaches: introducing two weighting hyper-parameters and applying three different attention mechanisms.
Our BCNet achieves 9.82% MR-2 (log-average miss rate; lower is better) on the Reasonable setup of the CityPersons dataset, compared to 12.14% MR-2 for the baseline model.
The experimental results indicate that pedestrian detection performance can be significantly improved because the visibility semantics prompt stronger responses on the heatmap. We compare BCNet with state-of-the-art models on the CityPersons and ETH datasets, which shows that our detector is effective and achieves promising performance.
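To make the detection-module output concrete, the sketch below decodes a fused center heatmap together with predicted heights and offsets into pedestrian boxes, in the spirit of the description above; the score threshold, the output stride, and the fixed 0.41 width-to-height ratio are assumptions, not BCNet's actual configuration.

```python
import numpy as np

def decode_centers(heatmap, heights, offsets, stride=4, thresh=0.5, wh_ratio=0.41):
    """Turn high-response points on a fused center heatmap into boxes.

    heatmap : (H, W) fused center scores in [0, 1]
    heights : (H, W) predicted pedestrian height in input-image pixels
    offsets : (2, H, W) sub-stride (dx, dy) refinement of each center
    """
    boxes = []
    ys, xs = np.where(heatmap >= thresh)
    for y, x in zip(ys, xs):
        cx = (x + offsets[0, y, x]) * stride
        cy = (y + offsets[1, y, x]) * stride
        h = heights[y, x]
        w = wh_ratio * h            # fixed aspect ratio, common for pedestrians
        boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, heatmap[y, x]))
    return boxes

# Toy maps standing in for the network outputs (stride-4 feature resolution).
H, W = 96, 160
heat = np.zeros((H, W), dtype=np.float32)
heat[40, 80] = 0.9                            # one confident pedestrian center
heights = np.full((H, W), 180.0, np.float32)  # predicted heights in pixels
offsets = np.zeros((2, H, W), dtype=np.float32)
print(decode_centers(heat, heights, offsets))
```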
|
7 |
Pedestrian Detection on Dewarped Fisheye Images using Deep Neural Networks
JEEREDDY, UTTEJH REDDY, January 2019 (has links)
In the field of autonomous vehicles, Advanced Driver Assistance Systems (ADAS) play a key role. Their applications vary from critical safety systems to trivial parking assistance. To optimize the use of resources, trivial ADAS applications are often limited to low-cost sensors. As a result, cameras and ultrasonic sensors are preferred over LiDAR (Light Detection and Ranging) and RADAR (RAdio Detection And Ranging) for assisting the driver with parking. In a parking scenario, to ensure the safety of people in and around the car, the sensors need to detect objects around the car in real time. With the advancements in deep learning, Deep Neural Networks (DNN) are becoming increasingly effective at detecting objects with real-time performance. This thesis therefore investigates the viability of Deep Neural Networks operating on fisheye cameras to detect pedestrians around the car. To achieve this objective, an experiment was conducted on a test vehicle equipped with multiple fisheye cameras. Three Deep Neural Networks, namely YOLOv3 (You Only Look Once), its faster variant Tiny-YOLOv3, and ResNet-50, were chosen to detect pedestrians. The networks were trained on a fisheye image dataset with the help of transfer learning and, after training, were also compared to pre-trained models that had been trained to detect pedestrians in normal images. Our experiments showed that the YOLOv3 variants performed well but had difficulty localizing the pedestrians, while the ResNet model failed to generate acceptable detections and thus performed poorly. The three models produced detections with real-time performance for a single camera, but when scaled to multiple cameras the detection speed was no longer sufficient. The YOLOv3 variants could detect pedestrians successfully on dewarped fisheye images, but the pipeline still needs a better dewarping algorithm to lessen the distortion effects. Further, the models need to be optimized in order to generate detections with real-time performance on multiple cameras and to fit on an embedded system.
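A sketch of the dewarping step applied before running the detectors is given below, using OpenCV's fisheye camera model; the intrinsic matrix and distortion coefficients are made-up placeholders for the test vehicle's real calibration, and the detector stage is only indicated in a comment.

```python
import cv2
import numpy as np

# Hypothetical fisheye calibration (K, D); real values would come from
# calibrating the vehicle's cameras.
K = np.array([[380.0, 0.0, 640.0],
              [0.0, 380.0, 400.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.002, -0.0005])   # equidistant-model coefficients

def dewarp(frame):
    """Undistort one fisheye frame into a pinhole-like image before detection."""
    h, w = frame.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.3)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)

frame = cv2.imread("fisheye_frame.png")       # hypothetical raw camera frame
rectified = dewarp(frame)
# 'rectified' would then be passed to the YOLOv3 / Tiny-YOLOv3 network,
# e.g. loaded through cv2.dnn.readNetFromDarknet(cfg, weights).
cv2.imwrite("dewarped_frame.png", rectified)
```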
|
8 |
Avaliação e proposta de sistemas de câmeras estéreo para detecção de pedestres em veículos inteligentes / Stereo cameras systems evaluation and proposal for pedestrian detection on intelligent vehicles
Nakamura, Angelica Tiemi Mizuno, 06 December 2017 (has links)
Pedestrian detection is an important area of computer vision with the potential to save lives when applied to vehicles. This application requires accurate detections and real-time operation, keeping the number of false positives as small as possible. Over the past few years, several ideas have been explored, including approaches with deep network architectures, which have reached considerably better performance. However, detecting pedestrians far from the camera is still challenging due to their small size in images, making it necessary to evaluate how effective existing approaches are at avoiding or reducing the severity of traffic accidents involving pedestrians. Thus, as the first proposal of this work, a study was carried out to verify the applicability of state-of-the-art methods for collision avoidance in urban scenarios. For this, the speed and dynamics of the vehicle, the reaction time, and the performance of the detection methods were considered. The results of this study show that it is still not possible to use a vision-based pedestrian detector for driver assistance on urban roads with fast-moving traffic, since none of the methods can detect pedestrians far from the vehicle while operating in real time. However, for large-scale pedestrians in images, traditional methods based on the sliding-window approach can already perform reliably well with fast inference times.
Thus, in order to restrict the operation of the detectors to pedestrians at larger scales and thereby enable the application of vision-based methods in vehicles, a camera configuration was proposed that captures images over a larger range of distances in front of the vehicle, with pedestrian resolution almost twice that of a commercial camera. Experimental results reveal a considerable enhancement in detection performance, overcoming the difficulty caused by the small scales that distant pedestrians have in images.
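The reasoning behind the camera proposal reduces to pinhole geometry: a pedestrian's height in pixels scales as focal length times real height divided by distance. The short worked example below illustrates this; the focal lengths, pixel pitch, and the 1.7 m pedestrian height are illustrative assumptions, not the figures of the proposed setup.

```python
# Pinhole-model check of how pedestrian pixel height scales with the optics.
# h_px = f[mm] * H[m] / (Z[m] * pixel_pitch[mm])  -- all values are assumptions.
PEDESTRIAN_HEIGHT_M = 1.7
PIXEL_PITCH_MM = 0.006      # 6 um pixels

def pixel_height(focal_mm, distance_m):
    return focal_mm * PEDESTRIAN_HEIGHT_M / (distance_m * PIXEL_PITCH_MM)

for focal_mm, label in [(6.0, "commercial wide-angle lens"),
                        (12.0, "narrower lens in the proposed setup")]:
    for z in (20, 40, 60):
        print(f"{label}: pedestrian at {z} m -> {pixel_height(focal_mm, z):.0f} px tall")
# Doubling the focal length doubles the pedestrian's pixel height at the same
# distance, which is the "almost twice the resolution" effect exploited above.
```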
|
9 |
Pedestrian Detection Based on Data and Decision Fusion Using Stereo Vision and Thermal Imaging
Sun, Roy, 25 April 2016 (has links)
Pedestrian detection is a canonical instance of object detection that remains a popular research topic and a key problem in computer vision due to its diverse applications, which have the potential to improve quality of life. In recent years the number of approaches to detecting pedestrians in monocular and binocular images has grown steadily, yet the use of multispectral imaging is still uncommon. This thesis presents a novel approach to data and feature fusion in a multispectral imaging system for pedestrian detection. It also includes the design and construction of a test rig that allows quick data collection during real-world driving. The mathematical theory of the trifocal tensor is applied to post-process these data, allowing pixel-level data fusion across the multispectral set. Performance results based on commonly used SVM classification architectures are evaluated on the collected data set. Lastly, a novel cascaded SVM architecture used in both classification and detection is discussed, and performance improvements through the use of feature fusion are demonstrated.
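The cascaded SVM idea (a fast first stage that rejects most candidate windows, followed by a slower stage operating on fused stereo and thermal features) can be sketched with scikit-learn as below; the synthetic features, thresholds, and the LinearSVC/SVC pairing are assumptions standing in for the architecture actually evaluated in the thesis.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

rng = np.random.default_rng(0)

# Stand-in features: cheap single-channel descriptors for stage 1 and fused
# stereo + thermal descriptors for stage 2 (both synthetic here).
X_cheap = rng.normal(size=(2000, 64))
X_fused = rng.normal(size=(2000, 256))
y = rng.integers(0, 2, size=2000)            # 1 = pedestrian window

stage1 = LinearSVC(C=0.01, dual=False).fit(X_cheap, y)   # fast rejector
stage2 = SVC(kernel="rbf", C=1.0).fit(X_fused, y)        # slower verifier

def cascade_predict(x_cheap, x_fused, reject_margin=-0.2):
    """Only windows that survive the cheap stage reach the fused-feature SVM."""
    s1 = stage1.decision_function(x_cheap)
    keep = s1 > reject_margin                 # most negatives are dropped here
    labels = np.zeros(len(x_cheap), dtype=int)
    if keep.any():
        labels[keep] = stage2.predict(x_fused[keep])
    return labels

print(cascade_predict(X_cheap[:10], X_fused[:10]))
```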
|
10 |
Reliability evaluation and error mitigation in pedestrian detection algorithms for embedded GPUs / Validação da confiabilidade e tolerância a falhas em algoritmos de detecção de pedestres para GPUs embarcadas
Santos, Fernando Fernandes dos, January 2017 (has links)
Pedestrian detection reliability is a fundamental problem for autonomous or aided driving. Methods that use object detection algorithms such as the Histogram of Oriented Gradients (HOG) or Convolutional Neural Networks (CNN) are today very popular in automotive applications, and embedded Graphics Processing Units (GPUs) are exploited to perform object detection efficiently. Unfortunately, GPU architectures have been shown to be particularly vulnerable to radiation-induced failures. This work presents an experimental evaluation and analytical study of the reliability of two types of object detection algorithms: HOG and CNNs. The aim of this research is not just to quantify but also to qualify the radiation-induced errors in object detection applications executed on embedded GPUs.
The HOG experimental results were obtained using two different embedded GPU architectures (Tegra and AMD APU), each exposed for about 100 hours to a controlled neutron beam at Los Alamos National Lab (LANL). Precision and Recall metrics are used to evaluate error criticality. The reported analysis shows that, while HOG is intrinsically resilient (65% to 85% of output errors only slightly impact detection), it experienced some particularly critical errors that could result in undetected pedestrians or unnecessary vehicle stops. This work also evaluates the reliability of two Convolutional Neural Networks for object detection: You Only Look Once (YOLO) and Faster RCNN. Three different GPU architectures (Kepler, Maxwell, and Pascal) were exposed to controlled neutron beams while detecting objects in both the Caltech and Visual Object Classes data sets. By analyzing the corrupted network outputs, it is possible to distinguish between tolerable errors and critical errors, i.e., errors that could impact detection. Additionally, extensive GDB-level and architectural-level fault-injection campaigns were performed to identify the critical procedures of HOG and YOLO. The results show that not all stages of the object detection algorithms are critical to the reliability of the final classification. Thanks to the fault-injection analysis, it is possible to identify the portions of HOG and Darknet that, if hardened, are most likely to increase reliability without introducing unnecessary overhead. The proposed HOG hardening strategy is able to detect up to 70% of errors with a 12% execution-time overhead.
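The criticality analysis can be illustrated by comparing a radiation-corrupted detection list against the fault-free (golden) one: Precision and Recall are computed over IoU-matched boxes, and a run is flagged as critical when a golden pedestrian is lost or a spurious detection appears. The sketch below follows that logic; the 0.5 IoU threshold and the box format are assumptions.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def compare_runs(golden, corrupted, thr=0.5):
    """Precision/Recall of the corrupted run w.r.t. the golden run, plus a
    criticality flag (missed pedestrian or spurious box)."""
    matched = [any(iou(g, c) >= thr for c in corrupted) for g in golden]
    tp = sum(matched)
    fn = len(golden) - tp                      # missed pedestrians
    fp = sum(not any(iou(c, g) >= thr for g in golden) for c in corrupted)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    critical = fn > 0 or fp > 0                # lost pedestrian or needless stop
    return precision, recall, critical

golden = [(100, 80, 140, 200), (300, 90, 340, 210)]   # fault-free detections
corrupted = [(102, 82, 141, 198)]                     # one pedestrian lost under beam
print(compare_runs(golden, corrupted))
```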
|