51 |
Scalable Decision-Making for Autonomous Systems in Space Missions. Wan, Changhuang, January 2021 (has links)
No description available.
|
52 |
Deep learning navigation for UGVs on forests paths / Deep learning-navigation för obemannade markfordon på skogsstigar. Lind, Linus, January 2018 (has links)
Artificial intelligence and machine learning have seen great progress in recent years. In this work, we look at the application of machine learning in visual navigation systems for unmanned vehicles in natural environments. Previous work has focused on navigation systems with deep convolutional neural networks (CNNs) for unmanned aerial vehicles (UAVs). In this work, we evaluate the robustness and applicability of these methods for unmanned ground vehicles (UGVs). To do so, two experiments were performed. In the first, data from Swiss trails and photos collected in Swedish forests were used to train deep CNNs. Several models were trained using data collected in different environments and at different camera heights. By cross-evaluating the trained models on the other datasets, the impact of changing the camera position and switching environments could be assessed. In the second experiment, a navigation system using the trained CNN models was constructed; by evaluating the ability of the system to autonomously follow a forest path, an understanding of the applicability of these methods for UGVs in general can be obtained. The experiments yielded several results. When comparing models trained on different datasets, we could see that the environment has an effect on navigation performance, but even more so, the approach is sensitive to the camera position. Finally, an online test evaluated the applicability of this approach as an end-to-end navigation system for UGVs. This experiment showed that these methods, on their own, are not a viable option for an end-to-end navigation system for UGVs in forest environments. / Artificial intelligence and machine learning have made great progress in recent years. In this work, we look at the application of machine learning in visual navigation systems for unmanned vehicles in natural environments. Previous work has focused on navigation systems with deep convolutional neural networks (CNNs) for unmanned aerial vehicles. In this work, we evaluate how applicable and robust these methods are as navigation systems for unmanned ground vehicles (UGVs). To evaluate this, two experiments were performed. The first examines how the system reacts to new environments and camera positions: an existing dataset of photos from trails in the Swiss Alps was complemented with two new datasets consisting of photos from Swedish forest paths, collected at two different heights. These three datasets were used to train three different models, and by cross-evaluating the trained models on the different datasets, the effect of a changed camera position and of switching environments can be assessed. In the second experiment, a UGV was equipped with a navigation system built on these trained models; by evaluating how autonomously this UGV can follow a forest path, an understanding is gained of how applicable these methods are for UGVs in general. The experiments yielded several results. The cross-evaluation showed that these methods are sensitive to both camera position and environment, with a change of camera position having a larger negative impact on navigation performance than a change of environment. Finally, an online test showed that these methods, in their naive form, are not a suitable option as a navigation system for UGVs in forest environments.
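For readers unfamiliar with this family of methods, the sketch below illustrates the general idea, in the spirit of the trail-following CNNs this thesis builds on, not the thesis's own code: a small convolutional network classifies a camera frame as trail-left, trail-ahead, or trail-right, and a yaw command for the UGV is derived from the class probabilities. The layer sizes, the three-class setup, and the steering gain are all illustrative assumptions.

import torch
import torch.nn as nn

class TrailCNN(nn.Module):
    """Classify a frame as (trail turns left, trail ahead, trail turns right)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def steering_command(logits: torch.Tensor, gain: float = 0.5) -> float:
    """Map (left, straight, right) probabilities to a yaw-rate command."""
    p = torch.softmax(logits, dim=-1).squeeze(0)
    return gain * (p[2] - p[0]).item()  # positive = turn right

if __name__ == "__main__":
    model = TrailCNN().eval()
    frame = torch.rand(1, 3, 101, 101)  # dummy RGB frame
    with torch.no_grad():
        print(steering_command(model(frame)))

In the cross-evaluation experiments described above, one such model would be trained per dataset and then evaluated on frames from the other datasets.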
|
53 |
A GPU Implementation of Kinodynamic Path Planning in Large Continuous Costmap Environments : Using Dijkstra's Algorithm and Simulated Annealing. Larsson, Robin, January 2023 (has links)
Path planning that takes kinodynamic constraints into account is a crucial part of critical missions where autonomous vehicles must function independently, without communication from an operator, as it ensures that the vehicle will be able to follow the planned path. In this thesis, an algorithm is presented that can plan kinodynamically feasible paths through large-scale continuous costmap environments under different constraints on the maximum allowed acceleration and jerk along the path. The algorithm begins by taking a small stochastic sample of the costmap, with a higher probability of keeping information from the cheaper, more interesting areas of the map. This random sample is turned into a graph to which Dijkstra's algorithm is applied in order to obtain an initial guess of a path. Simulated annealing is then used first to smooth this initial guess so that it obeys the kinodynamic constraints, and then to optimize the path with respect to cost while keeping the kinodynamics below the set limits. The majority of the simulated annealing iterations utilize a GPU, which significantly reduces the computational time needed. The performance of the algorithm was evaluated by studying paths generated from a large number of different start and end points in a complex continuous costmap with a high resolution of 2551×2216 pixels. To evaluate the robustness of the algorithm, a large number of paths were generated, both with the same and with different start and end points; the paths were inspected visually, and the spread of their costs was studied. It was concluded that the algorithm is able to generate paths of high quality for different limits on the allowed acceleration and jerk, and that it achieves a low spread in cost when generating multiple paths between the same pair of points. The utilization of a GPU to improve computational performance proved successful, as the GPU executed between 2.4 and 2.8 times more simulated annealing iterations in a given time than the CPU. This result will hopefully inspire future work to utilize GPUs for computational performance, even in problems that are traditionally solved using sequential algorithms.
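As a rough illustration of the two-stage pipeline described above, the following CPU-only sketch runs Dijkstra's algorithm on a costmap grid for an initial path and then refines it with simulated annealing. The kinodynamic constraints are reduced here to a simple bound on the heading change per step, and the stochastic costmap sampling and GPU parallelism of the thesis are not reproduced; all parameters are illustrative.

import heapq
import math
import random
import numpy as np

def dijkstra(cost, start, goal):
    """Shortest path on a 4-connected grid; edge weight = destination cell cost."""
    h, w = cost.shape
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + du, u[1] + dv)
            if 0 <= v[0] < h and 0 <= v[1] < w:
                nd = d + float(cost[v])
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return (path + [start])[::-1]

def max_turn(path):
    """Largest heading change between consecutive segments (kinodynamic proxy)."""
    angles = [math.atan2(b[0] - a[0], b[1] - a[1]) for a, b in zip(path, path[1:])]
    return max((abs(y - x) for x, y in zip(angles, angles[1:])), default=0.0)

def anneal(cost, path, turn_limit=math.pi / 2, iters=5000, t0=1.0):
    """Perturb waypoints to lower summed cost, rejecting infeasible moves."""
    path = [list(p) for p in path]
    cur = sum(float(cost[tuple(p)]) for p in path)
    for k in range(iters):
        t = t0 * (1 - k / iters)                 # linear cooling schedule
        i = random.randrange(1, len(path) - 1)   # keep endpoints fixed
        old = path[i][:]
        path[i][random.randrange(2)] += random.choice((-1, 1))
        path[i] = [int(np.clip(path[i][j], 0, cost.shape[j] - 1)) for j in (0, 1)]
        cand = sum(float(cost[tuple(p)]) for p in path)
        feasible = max_turn([tuple(p) for p in path]) <= turn_limit
        accept = cand < cur or random.random() < math.exp((cur - cand) / max(t, 1e-9))
        if feasible and accept:
            cur = cand
        else:
            path[i] = old
    return [tuple(p) for p in path], cur

if __name__ == "__main__":
    costmap = np.random.default_rng(0).random((64, 64)) + 0.1
    init = dijkstra(costmap, (0, 0), (63, 63))
    refined, c = anneal(costmap, init)
    print(len(refined), round(c, 2))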
|
54 |
Object Identification for Autonomous Forest Operations. Li, Songyu, January 2022 (has links)
The need to further unlock the productivity of forestry operations is driving increased forestry automation. Many essential operations in forest production, such as harvesting, forwarding, and planting, have the potential to be automated, with benefits such as improved production efficiency, reduced operating costs, and an improved working environment. Because forestry operations are performed in forest environments, automating them is complex and extremely challenging. Forest machine automation requires, as a foundation, an environmental cognition capability for the machine. Through a combination of exteroceptive sensors and algorithms, forest machine vision can be realized: by using new and off-the-shelf solutions for detecting, locating, classifying, and analyzing the status of objects of concern surrounding the machine during forestry operations, combined with smart judgement and control, forest operations can be automated. This thesis focuses on the introduction of vision systems on an unmanned forest platform, aiming to create the foundation for autonomous decision-making and execution in forestry operations. The vision system is initially designed to work on an unmanned forest machine platform, creating the conditions needed either to assist operators or, as a further step, to realize automatic operation. In this thesis, vision systems based on stereo camera sensing are designed and deployed on an unmanned forest machine platform, and functions for detection, localization, and pose estimation of objects surrounding the machine are developed and evaluated. These mainly include a positioning function for forest terrain obstacles such as stones and stumps, based on stereo camera data and deep learning, and a localization and pose estimation function for ground logs, based on stereo camera data and deep learning with added color-difference comparison. By testing these systems' performance in realistic scenarios, this thesis demonstrates the feasibility of improving the automation level of forest machine operation by building a vision system. In addition, the thesis demonstrates that the accuracy of stump detection can be improved, without significantly increasing the processing load, by introducing depth information into training and execution.
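As an illustration of how a stereo-based vision system can turn a 2D detection (say, of a stump) into a 3D position, the sketch below triangulates the center of a bounding box from a disparity map. The intrinsics and baseline are placeholder values, not those of the thesis's rig, and the detector producing the box is assumed to exist.

import numpy as np

def bbox_to_3d(bbox, disparity_map, fx, fy, cx, cy, baseline_m):
    """Triangulate the center of a detection box using stereo disparity."""
    u = (bbox[0] + bbox[2]) / 2.0          # box = (x1, y1, x2, y2) in pixels
    v = (bbox[1] + bbox[3]) / 2.0
    d = float(disparity_map[int(v), int(u)])
    if d <= 0:
        raise ValueError("no valid disparity at detection center")
    z = fx * baseline_m / d                # depth from disparity
    x = (u - cx) * z / fx                  # back-project pixel to metric X, Y
    y = (v - cy) * z / fy
    return np.array([x, y, z])

if __name__ == "__main__":
    disp = np.full((480, 640), 20.0)       # dummy disparity image
    p = bbox_to_3d((300, 200, 340, 260), disp, fx=600, fy=600,
                   cx=320, cy=240, baseline_m=0.12)
    print(p)  # roughly 3.6 m in front of the camera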
|
55 |
Calibration using a general homogeneous depth camera model / Kalibrering av en generell homogen djupkameramodell. Sjöholm, Daniel, January 2017 (has links)
Being able to accurately measure distances in depth images is important for accurately reconstructing objects. But depth measurement is a noisy process, and depth sensors benefit from additional correction even after factory calibration. We regard the pair of depth sensor and image sensor as one single unit returning complete 3D information; the 3D information is combined by relying on the more accurate image sensor for everything except the depth measurement. We present a new linear method of correcting depth distortion, using an empirical model built around the constraint of modifying only the depth data while keeping planes planar. The depth distortion model is implemented and tested on the Intel RealSense SR300 camera. The results show that the model is viable and generally decreases depth measurement errors after calibration, with an average improvement in the 50 percent range on the tested data sets. / Being able to measure distances accurately in depth images is important for making good reconstructions of objects. But this measurement process is noisy, and today's depth sensors benefit from further correction after factory calibration. We regard the pair of a depth sensor and an image sensor as a single unit that returns complete 3D information, built up from the two sensors by relying on the more precise image sensor for everything except the depth measurement. We present a new linear method for correcting depth distortion using an empirical model, based on modifying only the depth data while planar surfaces stay planar. The depth distortion model was implemented and tested on the Intel RealSense SR300 camera. The results show that the model works and as a rule reduces the depth measurement error after calibration, with an average improvement of around 50 percent for the tested datasets.
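A greatly simplified stand-in for the calibration idea, for illustration only: correct only the depth coordinate with a linear model d' = a·d + b, fitted by least squares against reference depths (e.g., obtained from a known planar target). The thesis's general homogeneous model is richer than this scalar version, and the bias numbers below are invented.

import numpy as np

def fit_linear_depth_correction(measured: np.ndarray, reference: np.ndarray):
    """Solve min over (a, b) of || a*measured + b - reference ||^2."""
    A = np.stack([measured, np.ones_like(measured)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, reference, rcond=None)
    return a, b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_depth = rng.uniform(0.3, 1.5, size=500)                       # meters
    measured = 1.03 * true_depth - 0.004 + rng.normal(0, 0.002, 500)   # biased sensor
    a, b = fit_linear_depth_correction(measured, true_depth)
    corrected = a * measured + b
    print(np.abs(measured - true_depth).mean(),   # error before correction
          np.abs(corrected - true_depth).mean())  # error after correction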
|
56 |
DDI: A Novel Technology And Innovation Model for Dependable, Collaborative and Autonomous Systems. Armengaud, E., Schneider, D., Reich, J., Sorokos, I., Papadopoulos, Y., Zeller, M., Regan, G., Macher, G., Veledar, O., Thalmann, S., Kabir, Sohag, 06 April 2022 (has links)
Digital transformation fundamentally changes established practices in the public and private sectors. Hence, it represents an opportunity to improve value creation processes (e.g., “Industry 4.0”) and to rethink how to address customers’ needs, for example through “data-driven business models” and “Mobility-as-a-Service”. Dependable, collaborative and autonomous systems play a central role in this transformation process. Furthermore, the emergence of data-driven approaches combined with autonomous systems will lead to new business models and market dynamics. Innovative approaches to reorganise the value creation ecosystem, to enable distributed engineering of dependable systems, and to answer urgent questions such as liability will be required. Consequently, digital transformation requires a comprehensive multi-stakeholder approach that properly balances technology, ecosystem and business innovation. The targets of this paper are (a) to introduce digital transformation and the role of, and opportunities provided by, autonomous systems, (b) to introduce Digital Dependability Identities (DDI), a technology for dependability engineering of collaborative, autonomous CPS, and (c) to propose an appropriate agile approach for innovation management based on business model innovation and co-entrepreneurship. / Supported by Science Foundation Ireland grant 13/RC/2094, the Horizon 2020 programme within the OpenInnoTrain project (grant agreement 823971), and the H2020 SESAME project (grant agreement 101017258).
|
57 |
Towards human-inspired perception in robotic systems by leveraging computational methods for semantic understanding. Saucedo, Mario Alberto Valdes, January 2024 (has links)
This thesis presents a collection of developments and results towards human-like semantic understanding of the environment for robotic systems. Achieving a level of understanding in robots comparable to humans has proven to be a significant challenge in robotics: although modern sensors like stereo cameras and neuromorphic cameras enable robots to perceive the world in a manner akin to human senses, extracting and interpreting semantic information remains significantly less efficient by comparison. This thesis explores different aspects of the machine vision field, leveraging computational methods to address real-life challenges in semantic scene understanding, both in everyday environments and in challenging unstructured environments. The works included in this thesis present key contributions towards three main research directions. The first direction establishes novel perception algorithms for object detection and localization, aimed at real-life deployments on onboard mobile devices in perceptually degraded, unstructured environments. Along this direction, the contributions focus on the development of robust detection pipelines as well as fusion strategies for different sensor modalities, including stereo cameras, neuromorphic cameras, and LiDARs. The second research direction establishes a computational method for lifting semantic information into meaningful knowledge representations to enable human-inspired behaviors in traversability estimation for reactive navigation. The contribution presents a novel decay function for traversability soft-image generation based on exponential decay, fusing semantic and geometric information to obtain density images that represent the pixel-wise traversability of the scene. Additionally, it presents a novel lightweight encoder-decoder network architecture for coarse semantic segmentation of terrain, integrated with a memory module based on a dynamic certainty filter. Finally, the third research direction establishes the novel concept of Belief Scene Graphs, which are utility-driven extensions of partial 3D scene graphs that enable efficient high-level task planning with partial information. The research presents an approach to meaningfully incorporate unobserved objects as nodes into an incomplete 3D scene graph using the proposed method, Computation of Expectation based on Correlation Information (CECI), to reasonably approximate the probability distribution of the scene by learning histograms from available training data. Extensive simulations and real-life experimental setups support the results and assumptions presented in this work.
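As an illustration of the exponential-decay fusion idea, here is a sketch under assumed parameters, not the thesis's implementation: a per-pixel semantic risk and a geometric risk (e.g., slope) are fused and mapped to a soft traversability value in (0, 1] via exp(-lambda * risk). The class-risk table, the equal fusion weights, and the decay rate are all invented for the example.

import numpy as np

SEMANTIC_RISK = {0: 0.0, 1: 0.4, 2: 1.0}   # e.g., 0=path, 1=grass, 2=obstacle

def traversability(labels: np.ndarray, slope: np.ndarray, lam: float = 2.0):
    """Pixel-wise soft traversability from fused semantic + geometric risk."""
    sem = np.vectorize(SEMANTIC_RISK.get)(labels).astype(float)
    risk = 0.5 * sem + 0.5 * np.clip(slope, 0.0, 1.0)   # equal-weight fusion
    return np.exp(-lam * risk)                          # exponential decay

if __name__ == "__main__":
    labels = np.random.default_rng(0).integers(0, 3, size=(4, 4))
    slope = np.random.default_rng(1).random((4, 4))
    print(traversability(labels, slope).round(2))       # 1.0 = fully traversable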
|
58 |
Exploring Computer Vision-Based AI-Assisted Coaching for Youth Football Players. Gustafsson, Emil Folke, January 2024 (has links)
Recently, advances in computer vision have been made with the aid of artificial intelligence, making it possible to track sports in real time. Specifically, this project aims to track the objects in a football exercise in real time with a single low-angle video camera and to quickly generate feedback. Detecting the players during the exercise was mostly successful, but the ball could only be detected in certain areas of the playing area. Keeping track of the identities of the different players proved difficult with this setup. Alternative ways to provide feedback without knowing the players' identities were possible but limited. To reliably provide insightful feedback, the system would likely need to be changed to a high-angle or multi-camera setup.
|
59 |
COMPUTER VISION-BASED HUMAN AWARENESS DETECTION FROM A CONSTRUCTION MACHINE PERSPECTIVE. Lagerhäll, Walter, Rågberger, Erik, January 2024 (has links)
In the field of construction equipment, a future is envisioned in which humans and autonomous machines collaborate seamlessly. An example of this vision is the Volvo prototype LX03, an autonomous wheel loader engineered to function as a smart and safe partner with collaborative capabilities. In such settings, it is crucial that humans and machines communicate effectively, and one critical aspect for machines to consider is the awareness level of nearby humans, as it significantly influences the machines' decision-making. This thesis investigates the feasibility of constructing a deep learning model that classifies whether a human is aware of the machine, using computer vision from the machine's point of view. To test this, a state-of-the-art action recognition model was used, namely RGBPose-Conv3D, a 3D convolutional neural network. The model uses two modalities, RGB and pose, which can be used together or separately. It was modified and trained to classify aware and unaware behaviour on a dataset collected with actors who mimicked aware or unaware behaviour. Using RGB alone, the model did not perform well, but using pose alone, or pose and RGB fused, it classified the awareness state well. Furthermore, the model generalised well to scenarios on which it had not been trained, such as machine movement, multiple people, or previously unseen settings. The thesis highlights the viability of employing deep learning and computer vision for awareness detection, showcasing a novel method that achieves high accuracy despite minimal comparative research.
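For illustration, the sketch below shows a pose-only awareness classifier in the spirit of the pose branch described above, simplified to temporal convolutions over 2D keypoint sequences. It is an assumed stand-in, not the RGBPose-Conv3D architecture used in the thesis; the keypoint count, sequence length, and layer sizes are invented.

import torch
import torch.nn as nn

class PoseAwarenessNet(nn.Module):
    """Map a sequence of 2D body keypoints to an aware/unaware prediction."""
    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(num_keypoints * 2, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 2),   # classes: unaware, aware
        )

    def forward(self, keypoints):  # (batch, time, keypoints, 2)
        b, t, k, _ = keypoints.shape
        x = keypoints.reshape(b, t, k * 2).transpose(1, 2)  # -> (b, k*2, t)
        return self.net(x)

if __name__ == "__main__":
    clip = torch.rand(1, 32, 17, 2)   # 32 frames of 17 COCO-style keypoints
    print(PoseAwarenessNet()(clip).softmax(-1))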
|
60 |
Learning to Search for Targets : A Deep Reinforcement Learning Approach to Visual Search in Unseen Environments / Inlärd sökning efter mål. Lundin, Oskar, January 2022 (has links)
Visual search is the perceptual task of locating a target in a visual environment. Due to applications in areas like search and rescue, surveillance, and home assistance, there is great interest in automating visual search. An autonomous system can potentially search more efficiently than a manually controlled one and has the advantages of reduced risk and labor cost. Many environments have structure that can be utilized to find targets more quickly. However, manually designing search algorithms that properly utilize this structure is not trivial: different environments may exhibit vastly different characteristics, and visual cues may be difficult to pick up. A learning system has the advantage of being applicable to any environment with a sufficient number of samples to learn from. In this thesis, we investigate how an agent that learns to search can be implemented with deep reinforcement learning. Our approach jointly learns control of visual attention, recognition, and localization from a set of sample search scenarios. A recurrent convolutional neural network takes an image of the visible region and the agent's position as input; its outputs indicate whether a target is visible and control where the agent looks next. The recurrent step serves as a memory that lets the agent utilize features of the explored environment while searching. We compare two memory architectures: an LSTM, and a spatial memory that remembers structured visual information. Through experiments in three simulated environments, we find that the spatial memory architecture achieves superior search performance. It also searches more efficiently than a set of baselines that do not utilize the appearance of the environment, and it achieves performance similar to that of a human searcher. Finally, the spatial memory scales to larger search spaces and is better at generalizing from a limited number of training samples.
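A minimal sketch of the agent interface described above, under assumed sizes: a convolutional encoder embeds the visible glimpse, the agent's position is appended, an LSTM cell provides the recurrent memory, and two heads output (a) the probability that a target is visible and (b) a distribution over where to look next. The thesis's spatial-memory variant is not reproduced here.

import torch
import torch.nn as nn

class SearchAgent(nn.Module):
    def __init__(self, num_actions: int = 9, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTMCell(32 + 2, hidden)              # glimpse features + (x, y)
        self.visible_head = nn.Linear(hidden, 1)            # is a target visible?
        self.action_head = nn.Linear(hidden, num_actions)   # where to look next

    def forward(self, glimpse, position, state=None):
        z = torch.cat([self.encoder(glimpse), position], dim=1)
        h, c = self.rnn(z, state)   # state=None starts from a zero memory
        return torch.sigmoid(self.visible_head(h)), self.action_head(h), (h, c)

if __name__ == "__main__":
    agent = SearchAgent()
    glimpse = torch.rand(1, 3, 64, 64)      # dummy view of the visible region
    pos = torch.tensor([[0.5, 0.5]])        # normalized agent position
    p_visible, action_logits, state = agent(glimpse, pos)
    print(p_visible.item(), action_logits.argmax().item())

During training, the action head would be optimized with a reinforcement learning objective over whole search episodes, with the recurrent state carried between fixations.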
|