  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Online Whole-Body Control using Hierarchical Quadratic Programming : Implementation and Evaluation of the HiQP Control Framework

Johansson, Marcus January 2016 (has links)
The application of local optimal control is a promising paradigm for manipulative robot motion generation. In practice this involves instantaneous formulations of convex optimization problems depending on the current joint configuration of the robot and the environment. To be effective, however, constraints have to be carefully constructed, as this kind of motion generation approach trades off completeness. Local optimal solvers, which are greedy in a temporal sense, have proven to be significantly more computationally effective than classical grid-based or sampling-based planning approaches. In this thesis we investigate how a local optimal control approach, namely the task function approach, can be implemented to grant high usability, extendibility and effectiveness. This has resulted in the HiQP control framework, written in C++ and compatible with ROS. The framework supports geometric primitives to aid in task customization by the user. It is also modular as to which communication system it is used with, and which optimization library it uses for finding optimal controls. We have evaluated the software quality of the framework according to common quantitative methods found in the literature. We have also evaluated an approach to performing tasks using minimal-jerk motion generation, with promising results. The framework also provides simple translation and rotation tasks based on six rudimentary geometric primitives. Task definitions for setting specific joint positions and for velocity limitations were also implemented.
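The abstract does not give HiQP's exact formulation, but the core idea of hierarchical task control can be illustrated in miniature. The following is a hedged sketch (not the thesis's implementation) of strict-priority task resolution by nullspace projection: each lower-priority linear task is solved only in the nullspace of the higher-priority ones, so it cannot disturb them. Function and variable names are hypothetical.

```python
import numpy as np

def hierarchical_ls(tasks, n_dof):
    """Solve a stack of linear tasks (J, e) in strict priority order.

    Each lower-priority task is resolved in the nullspace of all
    higher-priority tasks, so it cannot disturb their solutions.
    """
    dq = np.zeros(n_dof)                 # accumulated joint velocity
    N = np.eye(n_dof)                    # nullspace projector of tasks so far
    for J, e in tasks:
        JN = J @ N
        # correct for what the current dq already achieves for this task
        dq = dq + N @ np.linalg.pinv(JN) @ (e - J @ dq)
        # shrink the available nullspace by this task's row space
        N = N @ (np.eye(n_dof) - np.linalg.pinv(JN) @ JN)
    return dq

# two tasks on a 3-DoF system: priority 1 fixes one coordinate,
# priority 2 asks for something partially conflicting with it
J1, e1 = np.array([[1.0, 0.0, 0.0]]), np.array([1.0])
J2, e2 = np.array([[1.0, 1.0, 0.0]]), np.array([0.0])
dq = hierarchical_ls([(J1, e1), (J2, e2)], 3)
```

Here the priority-1 task is satisfied exactly, and the priority-2 task is met using only the remaining degrees of freedom. A full HQP formulation would additionally handle inequality constraints via slack variables, which this least-squares sketch omits.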
22

Facial animation parameter extraction using high-dimensional manifolds

Ellner, Henrik January 2006 (has links)
This thesis presents and examines a method that can potentially be used for extracting parameters from a manifold in a high-dimensional space. The method is presented, and a potential application is described: determining FAP values. FAP (Facial Animation Parameter) values are used for parameterizing faces, which can e.g. be used to compress data when sending video sequences over limited bandwidth.
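The abstract does not specify the manifold method; as a hedged, simplified stand-in, the sketch below extracts low-dimensional parameters from samples lying near a low-dimensional manifold using linear PCA via SVD (the simplest such parameterization, here on a flat 2-D "manifold"). All data and names are synthetic.

```python
import numpy as np

# Synthetic stand-in for manifold parameter extraction: samples lie on a
# 2-D linear "manifold" embedded in 50-D space; PCA recovers a 2-D
# parameterization from which the samples can be reconstructed.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 50))          # 2-D subspace in 50-D space
params_true = rng.standard_normal((200, 2))   # ground-truth parameters
X = params_true @ basis                       # samples on the manifold

mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:2]                           # top-2 principal directions

params = (X - mean) @ components.T            # extracted parameters
X_rec = params @ components + mean            # reconstruction from parameters
err = np.abs(X_rec - X).max()                 # near machine precision here
```

For a curved manifold, as faces generally require, a nonlinear embedding would replace the SVD step, but the extract-then-reconstruct structure is the same.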
23

On the derivation and analysis of decision architectures for unmanned aircraft systems

Patchett, C H 08 October 2013 (has links)
Operation of Unmanned Air Vehicles (UAVs) has increased significantly over the past few years. However, routine operation in non-segregated airspace remains a challenge, primarily due to the nature of the environment and the restrictions and challenges that accompany this. Currently, tight human control is envisaged as a means to achieve the oft-quoted requirements of transparency, equivalence and safety. However, the problems of the high cost of human operation, potential communication losses and operator remoteness remain as obstacles. One means of overcoming these obstacles is to devolve authority from the ground controller to an on-board system able to understand its situation and make appropriate decisions when authorised. Such an on-board system is known as an Autonomous System. The nature of the autonomous system, how it should be designed, when and how authority should be transferred, and in what context it can be allowed to control the vehicle are the general motivation for this study. To do this, the system must overcome the negative aspects of the differentiators that exist between UASs and manned aircraft and introduce methods to achieve the required increases in versatility, cost, safety and performance. The general thesis of this work is that the role and responsibility of an airborne autonomous system are sufficiently different from those of other conventionally controlled manned and unmanned systems to require a different architectural approach. Such a different architecture will also have additional requirements placed upon it in order to demonstrate acceptable levels of transparency, equivalence and safety. The architecture for the system is developed from an analysis of the basic requirements and adapted from a consideration of other suitable candidates for effective control of the vehicle under devolved authority.
The best practices for airborne systems in general are identified and amalgamated with established principles and approaches of robotics and intelligent agents. From this, a decision architecture capable of interacting with external human agencies, such as the UAS Commander and Air Traffic Controllers, is proposed in detail. This architecture has been implemented, and a number of further lessons can be drawn from this. In order to understand the system safety requirements in detail, an analysis of manned and unmanned aircraft accidents is made. Particular attention is given to the type of control moding of current unmanned aircraft, in order to make a comparison, and prediction, with accidents likely to be caused by autonomously controlled vehicles. The effect of pilot remoteness on the accident rate is studied, and a new classification of this remoteness is identified as a major contributor to accidents. A preliminary Bayesian model for unmanned aircraft accidents is developed, and results and predictions are made as an output of this model. From the accident analysis and modelling, strategies to improve UAS safety are identified. Detailed implementations within these strategies are analysed, and a proposal for more advanced Human-Machine Interaction is made. In particular, detailed analysis is given of exemplar scenarios that a UAS may encounter: Sense and Avoid, Mission Management Failure, Take-Off/Landing, and Lost Link procedures and Communications Failure. These analyses identify the nature of autonomous, as opposed to automatic, operation and clearly show the benefits to safety of autonomous air vehicle operation, with an identifiable decision architecture, and its relationship with the human controller.
From the strategies and detailed analysis of the exemplar scenarios, proposals are made for the improvement of unmanned vehicle safety. The incorporation of these proposals into the suggested decision architecture is accompanied by analysis of the levels of benefit that may be expected. These suggest that a level approaching that of conventional manned aircraft is achievable using currently available technologies, but with more substantial architectural design methodologies than those currently fielded. / ©Cranfield University © BAE Systems
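The thesis's Bayesian accident model is not specified in the abstract; as a hedged, minimal illustration of the idea, the sketch below updates a belief about a per-flight accident probability from observed flight outcomes using a conjugate Beta-Binomial model. The prior and the observation counts are invented for the example.

```python
from fractions import Fraction

def update_beta(alpha, beta, accidents, safe_flights):
    """Posterior Beta parameters after observing flight outcomes.

    With a Beta(alpha, beta) prior on the per-flight accident
    probability, each accident adds to alpha and each safe flight
    adds to beta (conjugate Beta-Binomial update).
    """
    return alpha + accidents, beta + safe_flights

# weakly informative prior Beta(1, 99): prior mean accident rate 1%
alpha, beta = 1, 99
# hypothetical observation: 2 accidents in 1000 flights
alpha, beta = update_beta(alpha, beta, 2, 998)
posterior_mean = Fraction(alpha, alpha + beta)   # pulled well below the prior
```

A realistic model, as the thesis implies, would condition on factors such as control moding and pilot remoteness; this fragment only shows the mechanics of one Bayesian update.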
24

Reading Barcodes with Neural Networks

Fridborn, Fredrik January 2017 (has links)
Barcodes are ubiquitous in modern society and have had industrial applications for decades. However, modern methods can underperform on noisy images: poor lighting conditions, occlusions and low resolution can be problematic in decoding. This thesis aims to solve this problem using neural networks, which have enjoyed great success in many computer vision competitions in recent years. We investigate how three different networks perform on data sets with noisy images. The first network is a single classifier, the second is an ensemble classifier and the third is based on a pre-trained feature extractor. For comparison, we also test two baseline methods that are used in industry today. We generate training data using software and modify it to ensure proper generalization. Testing data is created by photographing barcodes in different settings, creating six image classes: normal, dark, white, rotated, occluded and wrinkled. The proposed single classifier and ensemble classifier outperform the baseline as well as the pre-trained feature extractor by a large margin. The thesis work was performed at SICK IVP, a machine vision company in Linköping, in 2017.
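The abstract names an ensemble classifier without detailing how the members are combined; a common choice, sketched here as an assumption rather than the thesis's method, is majority voting over the per-model predicted classes. The member models are hypothetical stand-ins represented only by their label outputs.

```python
import numpy as np

def majority_vote(predictions):
    """Combine ensemble members by majority vote.

    predictions: (n_models, n_samples) integer class labels.
    Returns one label per sample: the class most models chose.
    """
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # per-sample vote counts, shape (n_classes, n_samples)
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)

# three hypothetical models; they disagree on sample 1 and the
# majority (class 2) wins there
preds = [[0, 2, 1],
         [0, 2, 1],
         [0, 1, 1]]
labels = majority_vote(preds)
```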
25

Simulated SAR with GIS data and pose estimation using affine projection

Divak, Martin January 2017 (has links)
Pilots or autonomous aircraft need to know where they are in relation to the environment. On board aircraft there are inertial sensors that are prone to drift, which requires corrections by referencing against known items, places, or signals. One such method of referencing is global navigation satellite systems; others, which are highlighted in this work, are based on visual sensors. In particular, the use of Synthetic Aperture Radar (SAR) is emerging as a viable alternative. To use radar images in qualitative or quantitative analysis they must be registered with geographical information. Position data on an aircraft or spacecraft is not sufficient to determine with certainty what or where one is looking at in a radar image without referencing other images over the same area. It is demonstrated in this thesis that a digital elevation model can be split up and classified into different types of radar scatterers. Different parts of the terrain yielding different types of echoes increases the amount of radar-specific characteristics in simulated reference images. This work also presents an interpretation of the imaging geometry of SAR such that existing methods in computer vision may be used to estimate the position from which a radar image has been taken. This is a direct image matching that does not require the registration necessary in other proposals for SAR-based navigation. By determining position continuously from radar images, aircraft could navigate independently of daylight, weather, and satellite data.
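The thesis's SAR-specific geometry is not given in the abstract, but the affine projection model it builds on can be sketched generically: given known 3-D points and their 2-D image projections, a 2×4 affine camera matrix can be fitted by linear least squares. This is a hedged illustration of the standard technique, not the thesis's pipeline; all data below is synthetic.

```python
import numpy as np

def fit_affine_camera(X, x):
    """Fit a 2x4 affine camera matrix P such that x ≈ P @ [X; 1].

    X: (n, 3) world points, x: (n, 2) image points.
    """
    Xh = np.hstack([X, np.ones((len(X), 1))])      # homogeneous coordinates
    P, *_ = np.linalg.lstsq(Xh, x, rcond=None)     # solves Xh @ P.T ≈ x
    return P.T

# synthetic ground truth: project random 3-D points with a known
# affine camera, then recover that camera from the correspondences
rng = np.random.default_rng(1)
P_true = rng.standard_normal((2, 4))
X = rng.standard_normal((20, 3))
x = np.hstack([X, np.ones((20, 1))]) @ P_true.T    # exact projections
P_est = fit_affine_camera(X, x)
```

With noise-free correspondences the recovery is exact; in a navigation setting the estimated camera would then be decomposed into position and attitude.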
26

Designing a Lightweight Convolutional Neural Network for Onion and Weed Classification

Bäckström, Nils January 2018 (has links)
The data set for this project consists of images containing onion and weed samples. It is of interest to investigate whether Convolutional Neural Networks can learn to classify the crops correctly as a step in automating weed removal in farming. The aim of this project is to solve a classification task involving few classes with relatively few training samples (a few hundred per class). Usually, small data sets are prone to overfitting, meaning that the networks generalize poorly to unseen data. It is also of interest to solve the problem using small networks with low computational complexity, since inference speed is important and memory is often limited on deployable systems. This work shows how transfer learning, network pruning and quantization can be used to create lightweight networks whose classification accuracy exceeds the same architecture trained from scratch. Using these techniques, a SqueezeNet v1.1 architecture (already a relatively small network) can reach 1/10th of the original model size and fewer than half the MAC operations during inference, while still maintaining a higher classification accuracy than a SqueezeNet v1.1 trained from scratch (96.9±1.35% vs 92.0±3.11% on 5-fold cross validation).
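The abstract names pruning and quantization without specifying the scheme; the sketch below shows two common building blocks as an assumption about what such a pipeline might look like: global magnitude pruning of a weight matrix, followed by uniform 8-bit quantization. The weight matrix is synthetic.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(W.size * sparsity)
    threshold = np.sort(np.abs(W), axis=None)[k]
    return np.where(np.abs(W) < threshold, 0.0, W)

def quantize_uint8(W):
    """Uniform 8-bit quantization; returns codes, scale and zero offset."""
    lo, hi = W.min(), W.max()
    scale = (hi - lo) / 255.0
    codes = np.round((W - lo) / scale).astype(np.uint8)
    return codes, scale, lo

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))                 # stand-in conv weights
W_pruned = magnitude_prune(W, 0.9)                # keep ~10% of the weights
codes, scale, zero = quantize_uint8(W_pruned)     # 8-bit storage
W_deq = codes.astype(np.float64) * scale + zero   # dequantized approximation
```

Pruned-and-quantized weights store one byte per value instead of four or eight, and the induced zeros allow sparse kernels to skip MACs, which is the mechanism behind the size and compute reductions reported above.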
27

Prototyping an automated robotic shopping cart with visual perception

Norell, Jakob January 2018 (has links)
Intelligent autonomous robots are expected to be more common in the future, and they are a topic of interest for science and companies. Instead of letting the customer pull a heavy cart by hand, an intelligent robotic shopping cart can aid a customer with their shopping by automatically following them. For this purpose, a prototype of an automated robotic shopping cart was implemented on the Robotino 3 system, using tools from the programming environment Robotino View created by FESTO. Some tools were used for computer vision to identify a customer bearing a colored symbol. The symbol could be uniquely designed for one individual customer, and the identification was not sensitive to external light disturbances, thanks to two lamps attached to the symbol. Collision avoidance was implemented with IR sensors using scripts written in Lua, based on a version of the Bug 2 algorithm. Distance to obstacles and to the customer was accurately determined using information from these two implementations. The robot successfully followed a human while avoiding obstacles that were in the way. After moving towards the customer, it safely stopped close to the customer, making it possible for the customer to place an object in the shopping cart. The Robotino used a comprehensible routine such that the customer and the Robotino understood the intention of the other actor.
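The abstract cites a version of the Bug 2 algorithm; its distinctive part is the leave condition, sketched below as a hedged illustration (the prototype's Lua implementation is not available, and all coordinates here are hypothetical): while circumnavigating an obstacle, the robot may leave it only where it re-crosses the start-goal line (the "m-line") at a point strictly closer to the goal than where it first hit the obstacle.

```python
import math

def on_m_line(p, start, goal, tol=1e-6):
    """True if point p lies on the segment from start to goal."""
    (x, y), (sx, sy), (gx, gy) = p, start, goal
    cross = (gx - sx) * (y - sy) - (gy - sy) * (x - sx)
    if abs(cross) > tol:                      # not collinear with the m-line
        return False
    dot = (x - sx) * (gx - sx) + (y - sy) * (gy - sy)
    return 0 <= dot <= (gx - sx) ** 2 + (gy - sy) ** 2

def may_leave(p, hit_point, start, goal):
    """Bug 2 leave condition while wall-following around an obstacle."""
    return (on_m_line(p, start, goal)
            and math.dist(p, goal) < math.dist(hit_point, goal))

start, goal, hit = (0.0, 0.0), (10.0, 0.0), (4.0, 0.0)
ok = may_leave((6.0, 0.0), hit, start, goal)   # on the m-line and closer
no = may_leave((6.0, 2.0), hit, start, goal)   # off the m-line: keep following
```

This condition guarantees progress toward the goal on each obstacle encounter, which is what makes Bug 2 complete for reachable goals.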
28

Online Learning for Robot Vision

Öfjäll, Kristoffer January 2014 (has links)
In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems, however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance, objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35]. Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted, and if the learning system is able to extract the gist of these examples, the gap is bridged. There are however some special demands on a learning system for it to perform successfully in a visual context. First, low-level visual input is often of high dimensionality, such that the learning system needs to handle large inputs. Second, visual information is often ambiguous, such that the learning system needs to be able to handle multi-modal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear, and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods. This thesis presents systems for learning perception-action mappings for robotic systems with visual input.
A range of problems are discussed, such as vision-based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored: learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state-of-the-art batch learning methods.
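qHebb's channel representation is not detailed in the abstract; the sketch below illustrates only the associative Hebbian learning it builds on, as a hedged toy example: an outer-product weight update that binds input and output vectors, with recall by matrix-vector product. The class and its name are hypothetical.

```python
import numpy as np

class HebbianAssociator:
    """Minimal associative memory trained with Hebb's rule."""

    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_out, n_in))

    def train(self, x, y):
        # Hebb's rule: strengthen connections between co-active units
        self.W += np.outer(y, x)

    def recall(self, x):
        return self.W @ x

mem = HebbianAssociator(4, 3)
# two orthogonal input patterns associated with different outputs
mem.train(np.array([1.0, 0, 0, 0]), np.array([0.0, 1, 0]))
mem.train(np.array([0.0, 1, 0, 0]), np.array([0.0, 0, 1]))
out = mem.recall(np.array([1.0, 0, 0, 0]))    # retrieves the first output
```

Because the update is purely incremental, training examples can be presented one at a time at video rate, which is the property that makes Hebbian schemes attractive for online learning as described above.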
29

Global Pose Estimation from Aerial Images : Registration with Elevation Models

Grelsson, Bertil January 2014 (has links)
Over the last decade, the use of unmanned aerial vehicles (UAVs) has increased drastically. Originally, the use of these aircraft was mainly military, but today many civil applications have emerged. UAVs are frequently the preferred choice for surveillance missions in disaster areas, after earthquakes or hurricanes, and in hazardous environments, e.g. for detection of nuclear radiation. The UAVs employed in these missions are often relatively small in size, which implies payload restrictions. For navigation of the UAVs, continuous global pose (position and attitude) estimation is mandatory. Cameras can be fabricated both small in size and light in weight. This makes vision-based methods well suited for pose estimation onboard these vehicles. It is obvious that no single method can be used for pose estimation in all different phases throughout a flight. The image content will be very different on the runway, during ascent, during flight at low or high altitude, above urban or rural areas, etc. In total, a multitude of pose estimation methods is required to handle all these situations. Over the years, a large number of vision-based pose estimation methods for aerial images have been developed. But there are still open research areas within this field, e.g. the use of omnidirectional images for pose estimation is relatively unexplored. The contributions of this thesis are three vision-based methods for global ego-positioning and/or attitude estimation from aerial images. The first method, for full 6DoF (degrees of freedom) pose estimation, is based on registration of local height information with a geo-referenced 3D model. A dense local height map is computed using motion stereo. A pose estimate from navigation sensors is used as an initialization. The global pose is inferred from the 3D similarity transform between the local height map and the 3D model.
Aligning height information is assumed to be more robust to season variations than feature matching in a single-view based approach. The second contribution is a method for attitude (pitch and roll angle) estimation via horizon detection. It is one of only a few methods in the literature that use an omnidirectional (fisheye) camera for horizon detection in aerial images. The method is based on edge detection and a probabilistic Hough voting scheme. In a flight scenario, there is often some knowledge of the probability density for the altitude and the attitude angles. The proposed method allows this prior information to be used to make the attitude estimation more robust. The third contribution is a further development of the second method. It is the very first method presented where the attitude estimates from the detected horizon in omnidirectional images are refined through registration with the geometrically expected horizon from a digital elevation model. It is one of few methods where the ray refraction in the atmosphere is taken into account, which contributes to the highly accurate pose estimates. The attitude errors obtained are about one order of magnitude smaller than for any previous vision-based method for attitude estimation from horizon detection in aerial images.
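The thesis works with a fisheye camera and Hough voting; as a simpler hedged illustration of attitude from a detected horizon, the sketch below assumes an ordinary pinhole camera instead (an assumption, not the thesis's model): a straight horizon line is fitted to edge points, its slope gives the roll angle, and its vertical offset from the principal point gives the pitch. All names and numbers are hypothetical.

```python
import math
import numpy as np

def attitude_from_horizon(pts, focal_px):
    """Estimate (roll, pitch) from horizon pixels of a pinhole camera.

    pts: (n, 2) horizon points (x, y) with origin at the principal point.
    """
    a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)        # fit line y = a*x + b
    roll = math.atan(a)                               # slope -> camera roll
    pitch = math.atan2(b * math.cos(roll), focal_px)  # offset -> pitch
    return roll, pitch

# synthetic horizon: 5 degrees of roll, level flight (line through center)
x = np.linspace(-300, 300, 50)
pts = np.stack([x, np.tan(math.radians(5.0)) * x], axis=1)
roll, pitch = attitude_from_horizon(pts, focal_px=800.0)
```

A fisheye horizon is a curve rather than a line, and the thesis's refinement against a digital elevation model replaces the flat-horizon assumption used here.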
30

Development and Evaluation of a Kinect based Bin-Picking System

Mishra, Chintan, Khan, Zeeshan January 2015 (has links)
No description available.
