201
Utvärdering av styrbeteenden för grupper av navigerande agenter / Evaluation of steering behaviors for groups of navigating agents. Siponmaa, Stefan. January 2013.
This thesis investigates navigation for groups of autonomous agents in computer game environments. By combining different steering behaviors and computation models, the work evaluates which of these techniques is most efficient with respect to time and path choice in confined game environments. An experimental environment has been developed that implements four techniques and evaluates them in three different environments, with 10 and 50 agents, respectively, navigating through each environment. As a basis, all techniques use a path-following behavior and a flocking behavior. What distinguishes the techniques is which computation model is used, and that two of the techniques also use a wall-avoidance behavior. The results show that all techniques are usable, but that the more advanced computation model gives a better result overall. The wall-avoidance behavior also contributes to a better result and is thus useful in the environments tested. One problem with steering behaviors, however, is the balancing of the weights used in the techniques, and considerable fine-tuning may be required before a good behavior is obtained.
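To make the weighted combination of behaviors concrete, below is a minimal Python/NumPy sketch that blends a path-following force with a flocking force for each agent. The weights, neighbourhood radius and force/speed limits are illustrative assumptions, not the values evaluated in the thesis.

import numpy as np

def limit(v, max_len):
    # Scale vector v down to max_len if it is longer.
    n = np.linalg.norm(v)
    return v if n <= max_len or n == 0 else v * (max_len / n)

def flocking_force(i, positions, velocities, radius=3.0):
    # Separation + cohesion + alignment relative to neighbours within the radius.
    diffs = positions - positions[i]
    dists = np.linalg.norm(diffs, axis=1)
    mask = (dists > 0) & (dists < radius)
    if not mask.any():
        return np.zeros(2)
    sep = -np.sum(diffs[mask] / dists[mask, None] ** 2, axis=0)  # push away from close neighbours
    coh = positions[mask].mean(axis=0) - positions[i]            # steer towards the local centre
    ali = velocities[mask].mean(axis=0) - velocities[i]          # match neighbour velocity
    return 1.5 * sep + 1.0 * coh + 1.0 * ali

def path_following_force(i, positions, velocities, waypoint, max_speed=1.0):
    # Seek the current waypoint on the path.
    desired = waypoint - positions[i]
    d = np.linalg.norm(desired)
    if d > 0:
        desired = desired / d * max_speed
    return desired - velocities[i]

def step(positions, velocities, waypoint, dt=0.1, w_path=1.0, w_flock=0.8,
         max_force=0.5, max_speed=1.0):
    # One simulation step: blend the behaviours with hand-tuned weights, then integrate.
    for i in range(len(positions)):
        force = (w_path * path_following_force(i, positions, velocities, waypoint)
                 + w_flock * flocking_force(i, positions, velocities))
        velocities[i] = limit(velocities[i] + limit(force, max_force) * dt, max_speed)
        positions[i] += velocities[i] * dt
    return positions, velocities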
202
Utveckling av ett active vision system för demonstration av EDSDK++ i tillämpningar inom datorseende / Development of an active vision system for demonstrating EDSDK++ in computer vision applications. Kargén, Rolf. January 2014.
Computer vision is a rapidly growing, interdisciplinary field whose applications are taking an increasingly prominent role in today's society. With an increased interest in computer vision there is also an increasing need to be able to control cameras connected to computer vision systems. At the division of computer vision at Linköping University, the framework EDSDK++ has been developed to remotely control digital cameras made by Canon Inc. The framework is very comprehensive and contains a large number of features and configuration options, and the system is therefore still largely untested. This thesis aims to develop a demonstrator for EDSDK++ in the form of a simple active vision system, which uses real-time face detection to control a camera tilt, and a camera mounted on the tilt, to follow, zoom in on and focus on a face or a group of faces. One requirement was that the OpenCV library be used for face detection and that EDSDK++ be used to control the camera. Moreover, an API for controlling the camera tilt was to be developed. During development, different methods for face detection were investigated. To improve performance, multiple face detectors were run in parallel using multithreading, each scanning the image from a different angle. Both experimental and theoretical approaches were used to determine the parameters needed to control the camera and the camera tilt. The project resulted in a fully functional demonstrator that fulfilled all requirements.
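To illustrate the parallel face-detection idea, the sketch below runs a standard OpenCV Haar cascade on several rotated copies of a frame in a thread pool; the cascade file and the set of angles are assumptions and not necessarily what the demonstrator uses. OpenCV releases the GIL inside detectMultiScale, so the threads give a real speed-up.

import cv2
from concurrent.futures import ThreadPoolExecutor

# Assumed detector: OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_at_angle(gray, angle):
    # Rotate the image by `angle` degrees and run the detector on the rotated copy.
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(gray, M, (w, h))
    faces = cascade.detectMultiScale(rotated, scaleFactor=1.1, minNeighbors=5)
    return angle, faces  # boxes are in the rotated frame; map back to the original if needed

def detect_faces_parallel(frame, angles=(-30, 0, 30)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    with ThreadPoolExecutor(max_workers=len(angles)) as pool:
        return list(pool.map(lambda a: detect_at_angle(gray, a), angles))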
203
EVALUATING THE IMPACT OF UNCERTAINTY ON THE INTEGRITY OF DEEP NEURAL NETWORKS. Harborn, Jakob. January 2021.
Deep Neural Networks (DNNs) have demonstrated excellent performance and are very successful in image classification and object detection. Safety-critical industries such as the automotive and aerospace industries aim to develop autonomous vehicles with the help of DNNs. In order to certify the usage of DNNs in safety-critical systems, it is essential to prove the correctness of data within the system. In this thesis, the research focuses on investigating the sources of uncertainty, what effects various sources of uncertainty have on neural networks (NNs), and how it is possible to reduce uncertainty within an NN. Probabilistic methods are used to implement an NN with uncertainty estimation in order to analyze and evaluate how the integrity of the NN is affected. By analyzing and discussing the effects of uncertainty in an NN it is possible to understand the importance of including a method for estimating uncertainty. Preventing, reducing, or removing the presence of uncertainty in such a network improves the correctness of data within the system. With the implemented NN, the results show that estimating uncertainty makes it possible to identify and classify the presence of uncertainty in the system and to reduce it, achieving an increased level of integrity that improves the correctness of the predictions.
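One common probabilistic way to obtain such uncertainty estimates is Monte Carlo dropout, sketched below in PyTorch; this is an illustrative choice and not necessarily the method implemented in the thesis. A high standard deviation across the stochastic forward passes flags predictions whose correctness should not be trusted.

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    # A small classifier with dropout; keeping dropout active at test time
    # (MC dropout) approximates a distribution over predictions.
    def __init__(self, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=50):
    # Run several stochastic forward passes; return mean probabilities and their spread.
    model.train()  # keep dropout active
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

# Example: mean_p, std_p = predict_with_uncertainty(SmallNet(), torch.randn(1, 1, 28, 28))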
204
Classification of tree species from 3D point clouds using convolutional neural networks. Wiklander, Marcus. January 2020.
In forest management, knowledge about a forest's distribution of tree species is key. Being able to automate tree species classification for large forest areas is of great interest, since doing it manually is tedious and costly. In this project, the aim was to investigate the efficiency of classifying individual tree species (pine, spruce and deciduous forest) from 3D point clouds acquired by airborne laser scanning (ALS), using convolutional neural networks. The raw data consisted of 3D point clouds and photographic images of forests in northern Sweden, collected from a helicopter flying at low altitude. The point cloud of each individual tree was connected to its representation in the photos, which allowed manual labeling of training data for the convolutional neural networks. The training data consisted of labels and 2D projections created from the point clouds, represented as images. Two different convolutional neural networks were trained and tested: an adaptation of the LeNet architecture and the ResNet architecture. Both networks reached an accuracy close to 98 %, with the LeNet adaptation having a slightly lower loss score for both validation and test data compared to that of ResNet. Confusion matrices for both networks showed similar F1 scores for all tree species, between 97 % and 98 %. The accuracies computed for both networks were higher than those achieved in similar studies using ALS data to classify individual tree species. However, the results in this project were never tested against a true population sample to confirm the accuracy. To conclude, convolutional neural networks are indeed an efficient method for classification of tree species, but further studies on unbiased data are needed to validate these results.
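For reference, a minimal PyTorch sketch of a LeNet-style classifier over single-channel 2D projections is shown below. The three-class output (pine, spruce, deciduous) follows the abstract, while the 64x64 input size and the layer sizes are assumptions.

import torch.nn as nn

class TreeNet(nn.Module):
    # LeNet-style CNN for 1 x 64 x 64 projection images of single trees.
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120), nn.ReLU(),  # 13 x 13 feature maps for 64 x 64 input
            nn.Linear(120, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))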
205
Natural Fingerprinting of Steel. Strömbom, Johannes. January 2021.
A cornerstone of industry's ongoing digital revolution, sometimes referred to as Industry 4.0, is the ability to trace products not only within the producer's own production line but also throughout the remaining lifetime of the products. Traditionally, this is done by labeling products with, for instance, bar codes or radio-frequency identification (RFID) tags. In recent years, using the structure of the product itself as a unique identifier, a "fingerprint", has become a popular area of research. The purpose of this work was to develop software for an identification system using laser speckles as a unique identifier of steel components. Laser speckles, or simply speckles, are generated by illuminating a rough surface with coherent light, typically laser light. As the light is reflected, the granular pattern known as speckles can be seen by an observer. The complex nature of a speckle pattern, together with its sensitivity to changes in the setup, makes it robust against false-positive identifications and almost impossible to counterfeit. Because of this, speckles are suitable for use as unique identifiers. In this work, three different identification algorithms were tested in both simulations and experiments: one correlation-based method, one method based on local feature extraction, and one method based on global feature extraction. The results showed that the correlation-based identification is most robust against speckle decorrelation, i.e. changes in the speckle pattern, while being quite computationally expensive. The local feature-based method was shown to be unfit for the current application due to its sensitivity to speckle decorrelation and its erroneous results. The global feature extraction method achieved high accuracy and fast computational speed when combined with a clustering method based on overlapping speckle patterns and a k-nearest neighbours (k-NN) search. In all the investigated methods, parallel calculations can be utilized to increase the computational speed.
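The correlation-based identification can be illustrated with the small NumPy sketch below, which matches a query speckle image against enrolled references using zero-mean normalized correlation. The acceptance threshold is an assumed value, and a practical system would typically also search over shifts (for example via FFT-based correlation) rather than assume perfectly aligned images.

import numpy as np

def normalized_correlation(a, b):
    # Zero-mean normalized correlation between two equally sized speckle images.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def identify(query, database, threshold=0.3):
    # Return the id of the best-matching enrolled speckle pattern, or None if no
    # score reaches the (assumed) acceptance threshold.
    scores = {key: normalized_correlation(query, ref) for key, ref in database.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None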
206
Semantic Segmentation of Point Clouds Using Deep Learning / Semantisk Segmentering av Punktmoln med Deep Learning. Tosteberg, Patrik. January 2017.
In computer vision, it has in recent years become more popular to use point clouds to represent 3D data. To understand what a point cloud contains, methods like semantic segmentation can be used. Semantic segmentation is the problem of segmenting images or point clouds and understanding what the different segments are. One application of semantic segmentation of point clouds is autonomous driving, where the car needs information about the objects in its surroundings. Our approach to the problem is to project the point clouds into 2D virtual images using the Katz projection. We then use pre-trained convolutional neural networks to semantically segment the images. To obtain the semantically segmented point clouds, we project the scores from the segmentation back into the point cloud. Our approach is evaluated on the Semantic3D dataset. We find that our method is comparable to the state of the art, without any fine-tuning on the Semantic3D dataset.
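A simplified sketch of the projection and back-projection steps follows. Points are rendered into a 2D index image with a plain z-buffer; the Katz (hidden point removal) projection used in the thesis is not reproduced here, and the segmentation network itself is assumed to exist separately.

import numpy as np

def project_points(points, f=500.0, im_size=(480, 640)):
    # Pinhole projection of N x 3 points; remembers which point produced each pixel.
    h, w = im_size
    depth = np.full((h, w), np.inf)
    index = np.full((h, w), -1, dtype=int)
    for i, (x, y, z) in enumerate(points):
        if z <= 0:
            continue
        u, v = int(f * x / z + w / 2), int(f * y / z + h / 2)
        if 0 <= v < h and 0 <= u < w and z < depth[v, u]:
            depth[v, u] = z
            index[v, u] = i
    return index

def backproject_labels(index, label_image, n_points):
    # Transfer per-pixel class labels from the segmented image back onto the points.
    point_labels = np.full(n_points, -1, dtype=int)
    mask = index >= 0
    point_labels[index[mask]] = label_image[mask]
    return point_labels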
207
Components of Embodied Visual Object Recognition : Object Perception and Learning on a Robotic Platform. Wallenberg, Marcus. January 2013.
Object recognition is a skill we as humans often take for granted. Due to our formidable object learning, recognition and generalisation skills, it is sometimes hard to see the multitude of obstacles that need to be overcome in order to replicate this skill in an artificial system. Object recognition is also one of the classical areas of computer vision, and many ways of approaching the problem have been proposed. Recently, visually capable robots and autonomous vehicles have increased the focus on embodied recognition systems and active visual search. These applications demand that systems can learn and adapt to their surroundings and arrive at decisions in a reasonable amount of time, while maintaining high object recognition performance. Active visual search also means that mechanisms for attention and gaze control are integral to the object recognition procedure. This thesis describes work on the components necessary for creating an embodied recognition system, specifically in the areas of decision uncertainty estimation, object segmentation from multiple cues, adaptation of stereo vision to a specific platform and setting, and the implementation of the system itself. Contributions include the evaluation of methods and measures for predicting the potential uncertainty reduction that can be obtained from additional views of an object, allowing for adaptive target observations. Also, in order to separate a specific object from other parts of a scene, it is often necessary to combine multiple cues, such as colour and depth, to obtain satisfactory results; therefore, a method for combining these using channel coding has been evaluated. Finally, in order to make use of three-dimensional spatial structure in recognition, a novel stereo vision algorithm extension along with a framework for automatic stereo tuning have also been investigated. All of these components have been tested and evaluated on a purpose-built embodied recognition platform known as Eddie the Embodied.
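As a sketch of the channel-coding idea used for combining cues, the snippet below encodes scalar values (for example hue or depth per pixel) into soft channel vectors with cos^2 kernels. The kernel width of three channel spacings and the channel count are conventional assumptions, not the settings used in the thesis; encoded cue vectors can then be combined (for example concatenated) before segmentation.

import numpy as np

def channel_encode(x, n_channels=8, lo=0.0, hi=1.0):
    # Encode values in [lo, hi] as soft channel vectors using cos^2 kernels
    # with a support of three channel spacings.
    centers = np.linspace(lo, hi, n_channels)
    spacing = centers[1] - centers[0]
    d = np.abs(np.asarray(x, dtype=float)[..., None] - centers) / spacing
    return np.where(d < 1.5, np.cos(np.pi * d / 3.0) ** 2, 0.0)

# Example: combine colour and depth cues per pixel by concatenating their channel vectors.
# combined = np.concatenate([channel_encode(hue), channel_encode(depth, lo=0.0, hi=5.0)], axis=-1)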
208
3D Camera Selection for Obstacle Detection in a Warehouse Environment / Val av 3D-kamera för Obstacle Detection i en lagermiljö. Jarnemyr, Pontus; Gustafsson, Markus. January 2020.
The increasing demand for online commerce has led to an increasing demand for autonomous vehicles in the logistics sector. The work in this thesis aims to improve the obstacle detection of autonomous forklifts by using 3D sensor technology. Three different products were compared based on a number of criteria provided by Toyota Material Handling, a manufacturer of autonomous forklifts. One of the products was chosen for developing a prototype, which was used to determine whether 3D camera technology can provide sufficient obstacle detection in a warehouse environment. The determination was based on the prototype's performance in a series of tests, ranging from human detection to pallet detection and aimed at fulfilling all criteria. The advantages and disadvantages of the chosen camera are presented. The conclusion is that the chosen 3D camera cannot provide sufficient obstacle detection, due to certain environmental factors.
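For a flavour of what depth-based obstacle detection can look like, the sketch below back-projects a depth image to 3D points and flags returns inside a protective zone in front of the sensor that rise above the floor. The intrinsics, zone dimensions and mounting height are placeholder values, not those of the evaluated cameras.

import numpy as np

def depth_to_points(depth, fx=570.0, fy=570.0, cx=320.0, cy=240.0):
    # Back-project a depth image (metres) to 3D points in the camera frame.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def contains_obstacle(points, camera_height=0.5, zone_depth=3.0, zone_width=1.5, min_height=0.1):
    # Camera y-axis points down, so the floor sits at y ~ camera_height.
    # A return is an obstacle if it lies in the zone and is more than min_height above the floor.
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    in_zone = (z > 0.1) & (z < zone_depth) & (np.abs(x) < zone_width / 2)
    above_floor = y < (camera_height - min_height)
    return bool(np.any(in_zone & above_floor))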
209
Efficient Autonomous Exploration Planning of Large-Scale 3D-Environments : A tool for autonomous 3D exploration indoor / Effektiv Autonom Utforskningsplanering av Storskaliga 3D-Miljöer : Ett verktyg för 3D utforskning inomhus. Selin, Magnus. January 2019.
Exploration is of interest for autonomous mapping and rescue applications using unmanned vehicles. The objective is to explore all initially unmapped space without any prior information. We present a system that can perform fast and efficient exploration of large-scale, arbitrary 3D environments. We combine frontier exploration planning (FEP) as a global planning strategy with receding-horizon next-best-view planning (RH-NBVP) for local planning. This leads to plans that incorporate information gain along the way but do not get stuck in already explored regions. Furthermore, we make the estimation of potential information gain more efficient through sparse ray-tracing and caching of already estimated gains. The work carried out in this thesis has been published as a paper in IEEE Robotics and Automation Letters and presented at the International Conference on Robotics and Automation in Montreal in 2019.
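The frontier part of the planner can be illustrated with the sketch below, which finds free cells bordering unknown space in a 2D occupancy grid. The thesis works with 3D maps and adds receding-horizon next-best-view planning, sparse ray-tracing and cached gain estimates on top, none of which is reproduced here.

import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    # Return (row, col) indices of free cells that have at least one unknown neighbour.
    frontiers = []
    h, w = grid.shape
    for r in range(h):
        for c in range(w):
            if grid[r, c] != FREE:
                continue
            neighbourhood = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if np.any(neighbourhood == UNKNOWN):
                frontiers.append((r, c))
    return frontiers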
210
Transfer Learning for Friction Estimation : Using Deep Reduced Features. Svensson, Erik. January 2020.
Autonomous cars are now becoming a reality, but there are still technical hurdles that need to be overcome for the technology to be safe and reliable. One of these issues is the cars' ability to estimate braking distances. This function relies heavily on one parameter: friction. Friction is difficult for a car to estimate, since the friction coefficient depends on both surfaces in contact, the tires and the road. This thesis presents a novel approach to the problem using a neural network classifier trained on features extracted from images of the road. One major advantage of the presented method over the few existing conventional methods is the ability to estimate friction on road segments ahead of the vehicle, which gives the vehicle time to slow down while the friction is still sufficient. The estimation pipeline performs significantly better than the baseline methods explored in the thesis and provides satisfying results, which demonstrates its potential.
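A rough sketch of such a transfer-learning pipeline is given below: a pretrained CNN backbone provides reduced deep features, and a small classifier maps them to friction-related surface classes. The ResNet-18 backbone, the 512-dimensional features and the four example classes are assumptions, since the abstract does not specify them.

import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone used as a fixed feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()            # expose the 512-dimensional feature vector
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

# Small classifier on top; would be trained on labelled road-surface crops
# (e.g. dry asphalt / wet asphalt / snow / ice).
classifier = nn.Linear(512, 4)

def predict_surface_class(images):
    # images: batch of normalized road crops, shape (N, 3, 224, 224).
    with torch.no_grad():
        features = backbone(images)
    return classifier(features).argmax(dim=1)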