371 |
[pt] MODELAGEM E CONTROLE DE UM QUADRICÓPTERO PARA NAVEGAÇÃO AUTÔNOMA EM CAMPOS AGRÍCOLAS / [en] MODELING AND CONTROL OF A QUADCOPTER FOR AUTONOMOUS NAVIGATION IN AGRICULTURAL FIELDS
Yessica Rosas Cuevas, 04 October 2021 (has links)
[pt] In this work, we address the modeling and control of a quadcopter for autonomous navigation in agricultural environments. The kinematic and dynamic models of the aerial vehicle are derived using the Newton-Euler formalism, including aerodynamic effects and propeller characteristics. The quadcopter's motion can be divided into two subsystems, a translational one and a rotational one, responsible for controlling the position along the x, y, and z axes and the attitude of the vehicle in Cartesian space. The first control approach is linear, and two controllers are presented: a proportional-derivative (PD) controller and an adaptive controller based on the state-space formulation. The second approach is nonlinear and based on an adaptive controller designed to handle uncertainties in the system parameters. Numerical simulations are carried out in Matlab to illustrate the performance and feasibility of the proposed control methodology, and 3D simulations are carried out in Gazebo to verify autonomous navigation in an agricultural field. / [en] In this work, we address the modeling and control design of a quadrotor for autonomous navigation in agricultural environments. The kinematic and dynamic models of the aerial vehicle are derived following the Newton-Euler formalism. The motion system of the quadrotor can be split into two subsystems, translational and rotational, responsible for controlling the position along the longitudinal, transverse, and vertical axes of the Cartesian space as well as the orientation about the corresponding axes. The first, linear control approach is based on a proportional-derivative (PD) controller, whereas the second, nonlinear control approach is based on an adaptive controller designed to deal with uncertainties in the system parameters. Numerical simulations are carried out in Matlab to illustrate the performance and feasibility of the proposed control methodology, and 3D simulations are carried out in Gazebo to verify autonomous navigation in agricultural fields.
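The control structure summarized above lends itself to a compact illustration. The following is a minimal Python sketch, not taken from the thesis, of a PD altitude loop acting on the simplified Newton-Euler vertical dynamics of a quadrotor; the mass, gains, time step, and reference altitude are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not from the thesis): PD altitude control of a quadrotor
# using the simplified vertical dynamics m*z_ddot = U1*cos(phi)*cos(theta) - m*g.
# Mass, gains, time step, and reference are illustrative assumptions.
m, g, dt = 1.2, 9.81, 0.01          # kg, m/s^2, s
kp, kd = 8.0, 4.0                   # PD gains (assumed)

z, z_dot = 0.0, 0.0                 # initial altitude and vertical speed
z_ref = 2.0                         # desired altitude [m]
phi = theta = 0.0                   # roll/pitch assumed small (hover)

for _ in range(1000):
    e, e_dot = z_ref - z, -z_dot
    U1 = m * (g + kp * e + kd * e_dot) / (np.cos(phi) * np.cos(theta))  # total thrust
    z_ddot = U1 * np.cos(phi) * np.cos(theta) / m - g
    z_dot += z_ddot * dt
    z += z_dot * dt

print(f"altitude after 10 s: {z:.2f} m")  # approaches z_ref
```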
372 |
Handling Occlusion using Trajectory Prediction in Autonomous Vehicles / Ocklusionshantering med hjälp av banprediktion för självkörande fordon
Ljung, Mattias; Nagy, Bence, January 2022 (has links)
Occlusion is a frequently occurring challenge in vision systems for autonomous driving. The density of objects in the field of view of the vehicle may be so high that some objects are only visible intermittently. It is therefore beneficial to investigate ways to predict the paths of objects under occlusion. In this thesis, we investigate whether trajectory prediction methods can be used to solve the occlusion prediction problem. We investigate two types of approaches, one based on motion models and one based on machine learning models. Furthermore, we investigate whether these two approaches can be fused to produce an even more reliable model. We evaluate our models on a pedestrian trajectory prediction dataset, an autonomous driving dataset, and a subset of the autonomous driving dataset that only includes validation examples of occlusion. The comparison of our approaches shows that pure motion-model-based methods perform the worst of the three. Machine-learning-based models perform better, yet they require additional computing resources for training. Finally, the fused method performs best on both the driving dataset and the occlusion data. Our results also indicate that trajectory prediction methods, both motion-model-based and learning-based, can indeed accurately predict the path of occluded objects up to at least 3 seconds ahead in the autonomous driving scenario.
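To make the motion-model branch above concrete, the sketch below propagates a constant-velocity state forward during an occlusion of about 3 seconds; this is a generic illustration, not code from the thesis, and the state layout, time step, and initial values are assumptions.

```python
import numpy as np

# Minimal sketch (not from the thesis): propagating a constant-velocity (CV)
# motion model while an object is occluded. State x = [px, py, vx, vy];
# the time step and initial state are illustrative assumptions.
dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])

x = np.array([5.0, 2.0, 1.5, 0.0])   # last confirmed state before occlusion
horizon = int(3.0 / dt)              # predict 3 s ahead, as in the evaluation above

predicted_track = []
for _ in range(horizon):
    x = F @ x                        # no measurement update while occluded
    predicted_track.append(x[:2].copy())

print(predicted_track[-1])           # predicted position after 3 s of occlusion
```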
373 |
Benchmarking Object Detection Algorithms for Optical Character Recognition of Odometer Mileage
Hjelm, Mandus; Andersson, Eric, January 2022 (has links)
Machine learning algorithms have had breakthroughs in many areas over the last decades. The hardest tasks to solve with machine learning have been those that humans solve intuitively, e.g. understanding natural language or recognizing specific objects in images. The way to overcome these problems is to let the computer learn from experience instead of implementing a pre-written program to solve the problem at hand; that is how neural networks came to be. Neural networks are widely used in image analysis, and object detection algorithms have evolved considerably in recent years. Two of these algorithms are Faster Region-based Convolutional Neural Networks (Faster R-CNN) and You Only Look Once (YOLO). The purpose of this thesis is to evaluate and benchmark state-of-the-art object detection methods and then analyze their performance on reading information from images. The information that we aim to extract is digital and analog digits from the odometer of a car; this is done through object recognition and region-based image analysis. Our models are compared to the open-source Optical Character Recognition (OCR) model Tesseract, which is in production at the Stockholm-based company Greater Than. In this project we take a more modern approach and focus on two object detection models, Faster R-CNN and YOLO. When training these models, we use transfer learning: we start from models that are pre-trained, in our case on a dataset called ImageNet, and train them further for object detection on the TRODO dataset, which consists of 2,389 images of car odometers. The models are then evaluated using mean average precision (mAP), prediction accuracy, and Levenshtein distance. Our findings are that the object detection models outperform Tesseract on all measurements. The highest mAP and accuracy are attained by Faster R-CNN, while the best results regarding Levenshtein distance are achieved by a YOLO model. The final result is clear: both of our approaches are more versatile and perform far better than Tesseract for solving this specific problem.
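Levenshtein distance, one of the evaluation measures named above, compares a predicted odometer string with its ground truth. A minimal sketch of the standard dynamic-programming computation follows; the example strings are hypothetical and not from the TRODO dataset.

```python
# Minimal sketch (not from the thesis): Levenshtein distance between a predicted
# odometer reading and the ground-truth string, one of the metrics named above.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

print(levenshtein("123456", "128456"))  # 1: one misread digit
```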
374 |
Transformer Based Object Detection and Semantic Segmentation for Autonomous Driving
Hardebro, Mikaela; Jirskog, Elin, January 2022 (has links)
The development of autonomous driving systems has been one of the most popular research areas of the 21st century. One key component of such systems is the ability to perceive and comprehend the physical world. Two techniques that address this are object detection and semantic segmentation. During the last decade, CNN-based models have dominated these types of tasks. However, in 2021, transformer-based networks were able to outperform the existing CNN approaches, indicating a paradigm shift in the domain. This thesis aims to explore the use of a vision transformer, particularly a Swin Transformer, in an object detection and semantic segmentation framework, and to compare it to a classical CNN on road scenes. In addition, since real-time execution is crucial for autonomous driving systems, the possibility of reducing the number of parameters of the transformer-based network is investigated. The results appear to be advantageous for the Swin Transformer compared to the convolution-based network, for both object detection and semantic segmentation. Furthermore, the analysis indicates that it is possible to reduce the computational complexity while retaining the performance.
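The defining idea of the Swin Transformer discussed above is that self-attention is computed inside small non-overlapping windows rather than over the whole feature map. The sketch below shows only that window partitioning step in PyTorch, as a rough illustration; the feature-map shape and window size are assumptions, not values from the thesis.

```python
import torch

# Minimal sketch (not from the thesis): the non-overlapping window partitioning
# a Swin Transformer uses so that self-attention is computed within local windows
# instead of over the whole feature map. Shapes and window size are assumptions.
def window_partition(x: torch.Tensor, window: int) -> torch.Tensor:
    # x: (B, H, W, C) feature map; H and W assumed divisible by `window`
    B, H, W, C = x.shape
    x = x.view(B, H // window, window, W // window, window, C)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous()
    return x.view(-1, window * window, C)    # (num_windows*B, window*window, C)

feat = torch.randn(1, 56, 56, 96)            # an assumed stage-1 resolution/width
windows = window_partition(feat, window=7)
print(windows.shape)                         # torch.Size([64, 49, 96])
```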
375 |
Forecasting checking account balance : Using supervised machine learning
Dannelind, Martin, January 2022 (has links)
The introduction of open banking has made it possible for companies to build the next generation of applications based on transactional data, enabling economic forecasts that private individuals can use to make responsible financial decisions. This project investigated forecasting account balances using supervised learning. Seven different regression models were run on transactional data from 377 anonymised checking accounts split into subgroups. The results showed that a multivariate XGBoost model optimised with feature selection was the best performing forecasting model and that the subgroup with recurring income transactions was the easiest to forecast. Based on the results of this project, it can be concluded that a viable way to forecast account balances is to split the transactional data into subgroups and forecast them separately, minimising the errors caused by certain random, infrequent, and large types of transactions.
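As a rough illustration of the kind of multivariate XGBoost forecaster described above, the sketch below builds simple lag features from a balance series and fits a regressor; the synthetic data, lag choices, and hyperparameters are assumptions and do not reproduce the thesis setup.

```python
import numpy as np
import pandas as pd
from xgboost import XGBRegressor

# Minimal sketch (not from the thesis): forecasting a daily checking-account
# balance with XGBoost using simple lag features. The synthetic series, lag
# choices, and hyperparameters are illustrative assumptions.
rng = np.random.default_rng(0)
balance = 1000 + np.cumsum(rng.normal(0, 50, 400))   # stand-in for a real account
df = pd.DataFrame({"balance": balance})
for lag in (1, 7, 30):                                # yesterday, last week, last month
    df[f"lag_{lag}"] = df["balance"].shift(lag)
df = df.dropna()

X, y = df.drop(columns="balance"), df["balance"]
split = int(len(df) * 0.8)
model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X.iloc[:split], y.iloc[:split])

pred = model.predict(X.iloc[split:])
mae = np.mean(np.abs(pred - y.iloc[split:].to_numpy()))
print(f"MAE on held-out days: {mae:.1f}")
```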
376 |
Facial recognition techniques comparison for in-field applications : Database setup and environmental influence of the access control
Norvik, Gustav, January 2022 (has links)
A currently ongoing project at Stanley Security is to develop a facial recognition system as an access control system to prevent theft of heavy vehicles for terrorist purposes. As of today, there are several providers of facial recognition techniques, and the spectrum ranges from multi-dollar licenses to easily accessible open source. This range of different algorithms, and the difficulty of estimating their differences in performance, is the fundamental motivation for this thesis. Four easily accessible, open-source facial recognition algorithms were therefore chosen and investigated: Eigenfaces, Fisherfaces, Local Binary Patterns Histogram (LBPH), and a Convolutional Neural Network. They were implemented and tested on the STANLEY database, which was built specifically for this test. It contains 11 persons with 10 pictures each, taken from the driver's seat, including sunglasses, various head sizes, and strong local face lighting and shadowing, to simulate the environment in which the algorithms were tested. The Convolutional Neural Network algorithm had the highest precision, the fewest false positives, and was also the fastest in the speed test. The Eigenfaces and Fisherfaces algorithms were not very robust, had trouble with images where strong local face lighting and shadowing were present, and are therefore considered unsuitable for access control for heavy vehicles. The Local Binary Patterns Histogram algorithm stood out from the Eigenfaces and Fisherfaces algorithms but was still not near the Convolutional Neural Network algorithm in performance. The Convolutional Neural Network algorithm had a precision, or true positive rate, of 95.8% of the persons on all the given images and had 0% false positives. The precisions for the Eigenfaces, Fisherfaces, and LBPH algorithms were 16.2%, 17.6%, and 26.5% respectively, and the true positive rates were 23.5%, 24.5%, and 10.2% respectively. The high false positive rates would have a negative impact on access control applications. The convolutional neural network based algorithm was concluded to be the facial recognition technique of choice for STANLEY Security, and the biggest obstacle to implementing and commercializing this solution is the legal aspects regarding license and usage of code, specifically the pre-trained face recognition model. This legal aspect was not treated in this thesis, although it was one of the proposed extensions.
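Of the four algorithms compared above, Eigenfaces is the simplest to sketch: PCA on flattened face images followed by nearest-neighbour matching. The following is a minimal illustration with random stand-in images, not the thesis implementation; the component count is an assumption, and the 11 persons with 10 images each only mirror the STANLEY database size.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Minimal sketch (not from the thesis): the Eigenfaces idea, i.e. PCA on
# flattened face images followed by nearest-neighbour matching. The random
# "images" and the component count are illustrative assumptions.
rng = np.random.default_rng(0)
n_persons, imgs_per_person, h, w = 11, 10, 64, 64    # mirrors the database size above
X = rng.random((n_persons * imgs_per_person, h * w)) # flattened grayscale images
y = np.repeat(np.arange(n_persons), imgs_per_person)

pca = PCA(n_components=40, whiten=True).fit(X)       # the "eigenfaces"
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)

probe = rng.random((1, h * w))                       # an unseen face to identify
print("predicted identity:", clf.predict(pca.transform(probe))[0])
```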
377 |
Lekmannabedömning av ett självkörande fordons körförmåga: betydelsen av att erfara fordonet i trafiken / Lay assessment of a self-driving vehicle's driving ability: the influence of experiencing the vehicle in traffic
Åkerström, Ulrika, January 2022 (has links)
Computer-controlled machines that can both direct their own activities and have a large range of motion will soon share our physical environment, which will mean a drastic change to our current human context. Previous accidents between human drivers and automated vehicles can be explained by a lack of understanding of the automated vehicle's behaviour. It is therefore important to find out how people understand the capabilities and limitations of automated vehicles. SAE International, a global professional association for engineers in the automotive industry, has defined a framework that describes the functionality of automated vehicles at six different levels. Based on this framework, the reported study investigated which level of automation participants assume a self-driving bus has, based on their experience of the vehicle. Within the study, participants travelled a short distance on a self-driving bus and answered a questionnaire about how they view the bus's capabilities and limitations both before and after the ride. The results showed that half of the participants overestimated the bus's level of automation. After travelling on the bus, the participants adjusted their expectations of the vehicle's driving ability downwards, which agreed better with the bus's actual capabilities and limitations. The participants also reported that they were more confident in their assessments after experiencing the vehicle. In summary, the results suggest that (1) people tend to overestimate the driving ability of automated vehicles, but that (2) their perception is adjusted when they encounter the automated vehicle in reality, and that (3) they then also become more confident in their assessments. This should be taken into account in the development of self-driving vehicles to reduce the risk of traffic accidents.
378 |
Railway Fastener Fault Detection using YOLOv5
Efraimsson, Alva; Lemón, Elin, January 2022 (has links)
The railway system is an important part of the sociotechnical society, as it enables efficient, reliable, and sustainable transportation of both people and goods. Despite increasing investments, the Swedish railway has encountered structural and technical problems due to worn-out infrastructure resulting from insufficient maintenance. Two important technical aspects of the rail are its stability and robustness. To prevent transversal and longitudinal deviations, the rail is attached to sleepers by fasteners. The fasteners' condition is therefore crucial for the stability of the track and the safety of the railway. Automatic fastener inspections enable efficient and objective inspections, which are a prerequisite for more adequate maintenance of the railway. This master thesis aims to investigate how machine learning can be applied to the problem of automatic fastener fault detection. It covers the complete process of applying and evaluating machine learning algorithms on the given problem, including data gathering, data preprocessing, model training, and model evaluation. The chosen model was the state-of-the-art object detector YOLOv5s. To assess the model's performance and robustness on the given problem, different settings regarding both the dataset and the model's architecture, in terms of transfer learning and hyperparameters, were tested. The results indicate that YOLOv5s is an appropriate machine learning algorithm for fastener fault detection. The models that achieved the highest performance reached an mAP[0.5:0.95] above 0.744 during training and 0.692 during testing. Furthermore, several combinations of different settings had a positive effect on the models' performance. In conclusion, YOLOv5s is in general a suitable model for detecting fasteners. Closer analysis of the results shows that the models failed when fasteners or missing fasteners were only partly visible in the lower and upper parts of the image. These cases were not annotated in the dataset and therefore resulted in misclassifications. In production, the number of cropped fasteners can be reduced by accurately synchronizing the data capture frequency with the distance between the sleepers, in such a way that only one sleeper and its fasteners are visible per image, leading to more accurate results. To conclude, machine learning can be applied as an effective and robust technique for automatic fastener fault detection.
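For orientation, a trained YOLOv5s model of the kind described above can be run on a single sleeper image roughly as follows. This assumes the public ultralytics/yolov5 torch.hub entry point; the weight file, image path, and confidence threshold are hypothetical placeholders rather than artifacts from the thesis.

```python
import torch

# Minimal sketch (not from the thesis): running a trained YOLOv5s model on an
# image of a sleeper, assuming the public ultralytics/yolov5 torch.hub entry
# point. 'fastener_best.pt' and the image path are hypothetical placeholders.
model = torch.hub.load("ultralytics/yolov5", "custom", path="fastener_best.pt")
model.conf = 0.4                       # confidence threshold (assumed)

results = model("sleeper_0001.png")    # inference on one image
detections = results.pandas().xyxy[0]  # bounding boxes with class labels
print(detections[["name", "confidence"]])
```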
379 |
Point clouds in the application of Bin Picking
Anand, Abhijeet, January 2023 (has links)
Automatic bin picking is a well-known problem in industrial automation and computer vision, where a robot picks an object from a bin and places it somewhere else. Research has been ongoing for many years to improve contemporary solutions. With camera technology advancing rapidly and fast computation resources readily available, solving this problem with deep learning has become a current interest for several researchers. This thesis intends to leverage current state-of-the-art deep learning based methods for 3D instance segmentation and point cloud registration and combine them to improve the bin picking solution, both in performance and in robustness. The problem of bin picking becomes complex when the bin contains identical objects with heavy occlusion. To solve this problem, 3D instance segmentation is performed with the Fast Point Cloud Clustering (FPCC) method to detect and locate the objects in the bin. Further, an extraction strategy is proposed to choose one predicted instance at a time. In the next step, a point cloud registration technique based on the PointNetLK method is implemented to estimate the pose of the selected object in the bin. The implementation is trained, tested, and evaluated on synthetically generated datasets. The synthetic dataset also contains several noisy point clouds to imitate real conditions. Real data captured at the company SICK IVP is also tested with the implemented model. It is observed that the 3D instance segmentation can detect and locate the objects in the bin. In a noisy environment, the performance degrades as the noise level increases; however, the decrease in performance is found to be relatively small. Point cloud registration is observed to work best with the full point cloud of the object, compared to a point cloud with missing points.
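PointNetLK, used above for pose estimation, is a learned registration method and is not reproduced here. As a simpler stand-in for what "estimating the pose" of a segmented instance means, the sketch below performs classical SVD-based rigid alignment between a model point cloud and a synthetic counterpart in the bin; the point clouds and the ground-truth pose are assumptions.

```python
import numpy as np

# Minimal sketch (not from the thesis): classical SVD-based rigid alignment of
# two point clouds with known correspondences, shown only to illustrate what
# estimating the pose of a segmented object means. The data are synthetic.
def rigid_align(src: np.ndarray, dst: np.ndarray):
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(0)
model_pts = rng.random((100, 3))                   # object model point cloud
ang = np.deg2rad(30)                               # assumed ground-truth pose
true_R = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
true_t = np.array([0.1, 0.0, 0.3])
scene_pts = model_pts @ true_R.T + true_t          # the segmented instance in the bin

R, t = rigid_align(model_pts, scene_pts)
print(np.allclose(R, true_R), np.allclose(t, true_t))  # True True: pose recovered
```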
380 |
Guardrail detection for landmark-based localization
Gumaelius, Nils, January 2022 (has links)
A requirement for safe autonomous driving is an accurate global localization of the ego vehicle. Methods based on Global Navigation Satellite Systems (GNSS) are the most common but are not precise enough in areas without good satellite signals. Instead, methods like landmark-based localization (LBL) can be used. In LBL, sensors onboard the vehicle detect landmarks near the vehicle. With these detections, the vehicle's position is deduced by looking up matching landmarks on a high-definition map. Commonly found along roads and stretching for long distances, guardrails are a useful landmark for LBL. In this thesis, two different methods are proposed to detect and vectorize guardrails from vehicle sensor data to enable future map matching for LBL. The first method uses semantically labeled LiDAR data with pre-classified guardrail LiDAR points as input. The method is based on the DBSCAN clustering algorithm to cluster the pre-classified LiDAR points and filter out false positives. The second method uses raw LiDAR data as input. It finds guardrail candidate points by segmenting high-density areas and matching these against thresholds derived from the geometry of guardrails. Similar to the first method, these points are then grouped into guardrail clusters. The clusters are then vectorized into the desired output, a 2D vector corresponding to points along the guardrail at a specific interval. To evaluate the performance of the proposed algorithms, simulations based on real-life data are analyzed both quantitatively and qualitatively. The qualitative experiments show that both methods perform well even in difficult scenarios. Timings of the simulations show that both methods are fast enough to be applicable in real-time use cases. The defined performance measures show that the method using raw LiDAR data is more robust and manages to detect more and longer parts of the guardrails.
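As an illustration of the clustering step shared by both methods above, the sketch below runs DBSCAN on a synthetic LiDAR-like point set and keeps only sufficiently long clusters as guardrail detections; the point cloud, eps, min_samples, and cluster-size threshold are assumptions, not values from the thesis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Minimal sketch (not from the thesis): clustering guardrail candidate points
# with DBSCAN and dropping small clusters as false positives. The point cloud,
# eps, and min_samples values are illustrative assumptions.
rng = np.random.default_rng(0)
rail = np.column_stack([np.linspace(0, 40, 400),          # a 40 m stretch of rail
                        np.full(400, 3.5) + rng.normal(0, 0.05, 400),
                        np.full(400, 0.7) + rng.normal(0, 0.05, 400)])
noise = rng.uniform([0, -5, 0], [40, 10, 2], (30, 3))     # stray false positives
points = np.vstack([rail, noise])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)
for lab in set(labels) - {-1}:
    cluster = points[labels == lab]
    if len(cluster) >= 50:                                 # keep only long clusters
        print(f"guardrail cluster: {len(cluster)} pts, "
              f"x from {cluster[:, 0].min():.1f} to {cluster[:, 0].max():.1f} m")
```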