11

Online Whole-Body Control using Hierarchical Quadratic Programming : Implementation and Evaluation of the HiQP Control Framework

Johansson, Marcus January 2016 (has links)
The application of local optimal control is a promising paradigm for manipulative robot motion generation. In practice, this involves instantaneous formulations of convex optimization problems that depend on the current joint configuration of the robot and on the environment. To be effective, however, constraints have to be carefully constructed, as this kind of motion generation trades away completeness. Local optimal solvers, which are greedy in a temporal sense, have proven significantly more computationally effective than classical grid-based or sampling-based planning approaches. In this thesis we investigate how a local optimal control approach, namely the task function approach, can be implemented to provide high usability, extensibility and effectiveness. The result is the HiQP control framework, a ROS-compatible framework written in C++. The framework supports geometric primitives to help the user customize tasks. It is also modular with respect to the communication system it is used with and the optimization library it uses for finding optimal controls. We have evaluated the software quality of the framework according to common quantitative methods found in the literature. We have also evaluated an approach to performing tasks using minimum-jerk motion generation, with promising results. The framework provides simple translation and rotation tasks based on six rudimentary geometric primitives, as well as task definitions for setting specific joint positions and for limiting velocities.
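The core idea, a stack of instantaneous tasks resolved in strict priority order, can be illustrated with the classical equality-only null-space recursion. The sketch below uses numpy and invented Jacobians; HiQP itself formulates full QPs with inequality constraints and delegates them to a pluggable solver, so this is a simplified view of the prioritization principle, not the framework's actual API.

```python
import numpy as np

def hierarchical_velocities(tasks, n_joints, damping=1e-6):
    """Resolve prioritized tasks via null-space projection.

    Each task is a pair (J, e_dot): a task Jacobian and a desired
    task-space velocity. Higher-priority tasks come first; lower
    priorities are solved in the null space of the ones above.
    """
    q_dot = np.zeros(n_joints)
    N = np.eye(n_joints)  # null-space projector of all tasks so far
    for J, e_dot in tasks:
        JN = J @ N
        # Damped pseudo-inverse for robustness near singularities
        JN_pinv = JN.T @ np.linalg.inv(JN @ JN.T + damping * np.eye(J.shape[0]))
        q_dot += N @ JN_pinv @ (e_dot - J @ q_dot)
        N = N @ (np.eye(n_joints) - JN_pinv @ JN)
    return q_dot

# Example: a 3-DoF arm with a 2D position task and a 1D posture task
J1 = np.array([[1.0, 0.5, 0.2], [0.0, 1.0, 0.4]])  # end-effector Jacobian
J2 = np.array([[0.0, 0.0, 1.0]])                   # keep last joint still
q_dot = hierarchical_velocities([(J1, np.array([0.1, -0.05])),
                                 (J2, np.array([0.0]))], n_joints=3)
print(q_dot)
```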
12

Facial animation parameter extraction using high-dimensional manifolds

Ellner, Henrik January 2006 (has links)
This thesis presents and examines a method that can potentially be used for extracting parameters from a high-dimensional manifold. A potential application is also described: determining FAP values. FAP (facial animation parameter) values are used for parameterizing faces, which can e.g. be used to compress data when sending video sequences over limited bandwidth.
13

Reading Barcodes with Neural Networks

Fridborn, Fredrik January 2017 (has links)
Barcodes are ubiquitous in modern society and have had industrial applications for decades. For noisy images, however, modern methods can underperform: poor lighting conditions, occlusions and low resolution can make decoding problematic. This thesis aims to address this problem using neural networks, which have enjoyed great success in many computer vision competitions in recent years. We investigate how three different networks perform on data sets with noisy images. The first network is a single classifier, the second is an ensemble classifier and the third is based on a pre-trained feature extractor. For comparison, we also test two baseline methods that are used in industry today. We generate training data using software and modify it to ensure proper generalization. Testing data is created by photographing barcodes in different settings, creating six image classes: normal, dark, white, rotated, occluded and wrinkled. The proposed single classifier and ensemble classifier outperform both the baselines and the pre-trained feature extractor by a large margin. The thesis work was performed at SICK IVP, a machine vision company in Linköping, in 2017.
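As an illustration of the ensemble idea, the sketch below averages the softmax outputs of several independently trained CNNs. The architecture, input size and class count are invented for the example; the abstract does not specify these details.

```python
import torch
import torch.nn as nn

class SmallBarcodeNet(nn.Module):
    """Toy CNN for fixed-size grayscale barcode crops (illustrative only)."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def ensemble_predict(models, x):
    """Average the softmax outputs of independently trained members."""
    with torch.no_grad():
        probs = torch.stack([m(x).softmax(dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)

models = [SmallBarcodeNet(n_classes=10).eval() for _ in range(3)]
batch = torch.randn(4, 1, 64, 64)  # four 64x64 grayscale crops
print(ensemble_predict(models, batch))
```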
14

Simulated SAR with GIS data and pose estimation using affine projection

Divak, Martin January 2017 (has links)
Pilots or autonomous aircraft need to know where they are in relation to the environment. Aircraft carry inertial sensors that are prone to drift, which requires corrections by referencing against known items, places, or signals. One such method of referencing is global navigation satellite systems; others, highlighted in this work, are based on visual sensors. In particular, the use of Synthetic Aperture Radar (SAR) is emerging as a viable alternative. To use radar images in qualitative or quantitative analysis, they must be registered with geographical information. Position data for an aircraft or spacecraft is not sufficient to determine with certainty what or where one is looking at in a radar image without referencing other images over the same area. This thesis demonstrates that a digital elevation model can be split up and classified into different types of radar scatterers. Having different parts of the terrain yield different types of echoes increases the amount of radar-specific characteristics in simulated reference images. The work also presents an interpretation of the imaging geometry of SAR such that existing methods in computer vision can be used to estimate the position from which a radar image was taken. This is direct image matching, without the registration step required by other proposed SAR-based navigation solutions. By determining position continuously from radar images, aircraft could navigate independently of daylight, weather, and satellite data.
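Under an affine projection model, camera parameters can be estimated linearly from 3D-2D correspondences. As a hedged illustration of this kind of computation (not the thesis pipeline itself), the numpy sketch below fits a 2x4 affine camera matrix by least squares from synthetic point correspondences.

```python
import numpy as np

def fit_affine_camera(X, x):
    """Fit a 2x4 affine camera matrix M from 3D points X (Nx3)
    and their 2D image projections x (Nx2), so that x ~ M [X; 1].
    Needs at least four non-coplanar points."""
    N = X.shape[0]
    Xh = np.hstack([X, np.ones((N, 1))])        # homogeneous 3D points
    M, *_ = np.linalg.lstsq(Xh, x, rcond=None)  # solves Xh @ M.T ~ x
    return M.T

# Synthetic check: recover a known affine camera exactly
rng = np.random.default_rng(0)
M_true = rng.normal(size=(2, 4))
X = rng.normal(size=(10, 3))
x = np.hstack([X, np.ones((10, 1))]) @ M_true.T
print(np.allclose(fit_affine_camera(X, x), M_true))  # True
```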
15

Designing a Lightweight Convolutional Neural Network for Onion and Weed Classification

Bäckström, Nils January 2018 (has links)
The data set for this project consists of images containing onion and weed samples. It is of interest to investigate whether convolutional neural networks can learn to classify the crops correctly as a step towards automating weed removal in farming. The aim of this project is to solve a classification task involving few classes with relatively few training samples (a few hundred per class). Small data sets are usually prone to overfitting, meaning that the networks generalize poorly to unseen data. It is also of interest to solve the problem using small networks with low computational complexity, since inference speed is important and memory is often limited on deployable systems. This work shows how transfer learning, network pruning and quantization can be used to create lightweight networks whose classification accuracy exceeds that of the same architecture trained from scratch. Using these techniques, a SqueezeNet v1.1 architecture (already a relatively small network) can be reduced to 1/10th of the original model size and fewer than half the MAC operations during inference, while still maintaining a higher classification accuracy than a SqueezeNet v1.1 trained from scratch (96.9±1.35% vs 92.0±3.11% on 5-fold cross-validation).
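A minimal sketch of the two compression steps mentioned, unstructured magnitude pruning and affine 8-bit quantization, is given below using PyTorch tensors. The sparsity level and tensor shapes are illustrative; the thesis applies these ideas to a full SqueezeNet v1.1, not a single weight matrix.

```python
import torch

def magnitude_prune(weight, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    mask = weight.abs() > threshold
    return weight * mask, mask

def quantize_uint8(weight):
    """Affine (scale + zero point) 8-bit quantization of a weight tensor."""
    w_min, w_max = weight.min(), weight.max()
    scale = (w_max - w_min) / 255.0
    q = torch.clamp(torch.round((weight - w_min) / scale), 0, 255).to(torch.uint8)
    dequant = q.float() * scale + w_min  # the values inference would see
    return q, dequant

w = torch.randn(64, 32)
w_pruned, mask = magnitude_prune(w, sparsity=0.7)
q, w_hat = quantize_uint8(w_pruned)
print(mask.float().mean().item(), (w_pruned - w_hat).abs().max().item())
```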
16

Prototyping an automated robotic shopping cart with visual perception

Norell, Jakob January 2018 (has links)
Intelligent autonomous robots are expected to become more common in the future and are a topic of interest for science and industry. Instead of letting the customer pull a heavy cart by hand, an intelligent robotic shopping cart can aid a customer with their shopping by automatically following them. For this purpose, a prototype of an automated robotic shopping cart was implemented on the Robotino 3 system, using tools from the Robotino View programming environment created by FESTO. Some of these tools were used for computer vision, to identify a customer carrying a colored symbol. The symbol could be uniquely designed for an individual customer, and the identification was insensitive to external light disturbances thanks to two lamps attached to the symbol. Collision avoidance was implemented with IR sensors using scripts written in Lua, based on a version of the Bug 2 algorithm. Distances to obstacles and to the customer were accurately determined by combining information from these two implementations. The robot successfully followed a human while avoiding obstacles in its way. After moving towards the customer, it safely stopped close by, making it possible for the customer to place an object in the shopping cart. The Robotino used a comprehensible routine such that the customer and the Robotino understood each other's intentions.
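The Bug 2 logic can be summarized as a small state machine: drive along the straight m-line towards the goal, switch to wall-following when an obstacle appears, and leave the wall once the m-line is re-encountered closer to the goal. The Python skeleton below sketches this under assumed boolean sensor inputs; the thesis implementation reads IR sensors from Lua scripts on the Robotino.

```python
class Bug2Controller:
    """Skeleton of the Bug 2 obstacle-avoidance logic (sensor and motion
    primitives are placeholders, not the Robotino API)."""

    def __init__(self):
        self.state = "follow_m_line"
        self.hit_distance = None  # distance to goal where wall-following began

    def step(self, obstacle_ahead, on_m_line, dist_to_goal):
        if self.state == "follow_m_line":
            if obstacle_ahead:
                self.state = "follow_boundary"
                self.hit_distance = dist_to_goal
                return "turn_and_follow_wall"
            return "drive_toward_goal"
        # follow_boundary: leave the wall only when back on the m-line,
        # strictly closer to the goal than where the obstacle was first hit
        if on_m_line and dist_to_goal < self.hit_distance:
            self.state = "follow_m_line"
            return "drive_toward_goal"
        return "follow_wall"

ctrl = Bug2Controller()
print(ctrl.step(obstacle_ahead=False, on_m_line=True, dist_to_goal=5.0))
print(ctrl.step(obstacle_ahead=True, on_m_line=True, dist_to_goal=4.2))
print(ctrl.step(obstacle_ahead=False, on_m_line=True, dist_to_goal=3.0))
```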
17

Online Learning for Robot Vision

Öfjäll, Kristoffer January 2014 (has links)
In tele-operated robotics applications, the primary information channel from the robot to its human operator is a video stream. For autonomous robotic systems, however, a much larger selection of sensors is employed, although the most relevant information for the operation of the robot is still available in a single video stream. The issue lies in autonomously interpreting the visual data and extracting the relevant information, something humans and animals perform strikingly well. On the other hand, humans have great difficulty expressing what they are actually looking for on a low level, suitable for direct implementation on a machine. For instance, objects tend to be already detected when the visual information reaches the conscious mind, with almost no clues remaining regarding how the object was identified in the first place. This became apparent already when Seymour Papert gathered a group of summer workers to solve the computer vision problem 48 years ago [35]. Artificial learning systems can overcome this gap between the level of human visual reasoning and low-level machine vision processing. If a human teacher can provide examples of what is to be extracted, and if the learning system is able to extract the gist of these examples, the gap is bridged. There are, however, some special demands on a learning system for it to perform successfully in a visual context. First, low-level visual input is often of high dimensionality, such that the learning system needs to handle large inputs. Second, visual information is often ambiguous, such that the learning system needs to be able to handle multi-modal outputs, i.e. multiple hypotheses. Typically, the relations to be learned are non-linear, and there is an advantage if data can be processed at video rate, even after presenting many examples to the learning system. In general, there seems to be a lack of such methods. This thesis presents systems for learning perception-action mappings for robotic systems with visual input. A range of problems are discussed, such as vision-based autonomous driving, inverse kinematics of a robotic manipulator and controlling a dynamical system. Operational systems demonstrating solutions to these problems are presented. Two different approaches for providing training data are explored: learning from demonstration (supervised learning) and explorative learning (self-supervised learning). A novel learning method fulfilling the stated demands is presented. The method, qHebb, is based on associative Hebbian learning on data in channel representation. Properties of the method are demonstrated on a vision-based autonomously driving vehicle, where the system learns to directly map low-level image features to control signals. After an initial training period, the system seamlessly continues autonomously. In a quantitative evaluation, the proposed online learning method performed comparably with state-of-the-art batch learning methods.
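The two ingredients of qHebb, channel representation and associative Hebbian learning, can be sketched briefly: a scalar is encoded as activations of overlapping cos^2 basis functions, and input-output pairs are accumulated as outer products. The numpy sketch below shows only this core; qHebb adds normalization and online weighting not reproduced here, and the channel widths and spacing are illustrative assumptions.

```python
import numpy as np

def channel_encode(value, centers, width):
    """Encode a scalar into overlapping cos^2 channel activations."""
    d = np.abs(value - centers) / width
    a = np.cos(np.pi / 2 * d) ** 2
    a[d >= 1.0] = 0.0  # channels have compact support
    return a

# Associative Hebbian learning: accumulate outer products of the
# output and input channel vectors for each training pair.
centers = np.linspace(0, 1, 8)
C = np.zeros((8, 8))
for x, y in [(0.2, 0.4), (0.5, 0.9), (0.2, 0.4)]:
    a_in = channel_encode(x, centers, width=0.25)
    a_out = channel_encode(y, centers, width=0.25)
    C += np.outer(a_out, a_in)

# Lookup: propagate an input encoding through the association matrix
a_query = channel_encode(0.2, centers, width=0.25)
response = C @ a_query
print(centers[np.argmax(response)])  # peaks near the associated output 0.4
```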
18

Global Pose Estimation from Aerial Images : Registration with Elevation Models

Grelsson, Bertil January 2014 (has links)
Over the last decade, the use of unmanned aerial vehicles (UAVs) has increased drastically. Originally, the use of these aircraft was mainly military, but today many civil applications have emerged. UAVs are frequently the preferred choice for surveillance missions in disaster areas, after earthquakes or hurricanes, and in hazardous environments, e.g. for detection of nuclear radiation. The UAVs employed in these missions are often relatively small in size, which implies payload restrictions. For navigation of the UAVs, continuous global pose (position and attitude) estimation is mandatory. Cameras can be fabricated both small in size and light in weight. This makes vision-based methods well suited for pose estimation onboard these vehicles. It is obvious that no single method can be used for pose estimation in all different phases throughout a flight. The image content will be very different on the runway, during ascent, during flight at low or high altitude, above urban or rural areas, etc. In total, a multitude of pose estimation methods is required to handle all these situations. Over the years, a large number of vision-based pose estimation methods for aerial images have been developed. But there are still open research areas within this field; e.g. the use of omnidirectional images for pose estimation is relatively unexplored. The contributions of this thesis are three vision-based methods for global ego-positioning and/or attitude estimation from aerial images. The first method, for full 6DoF (degrees of freedom) pose estimation, is based on registration of local height information with a geo-referenced 3D model. A dense local height map is computed using motion stereo. A pose estimate from navigation sensors is used as an initialization. The global pose is inferred from the 3D similarity transform between the local height map and the 3D model. Aligning height information is assumed to be more robust to season variations than feature matching in a single-view based approach. The second contribution is a method for attitude (pitch and roll angle) estimation via horizon detection. It is one of only a few methods in the literature that use an omnidirectional (fisheye) camera for horizon detection in aerial images. The method is based on edge detection and a probabilistic Hough voting scheme. In a flight scenario, there is often some knowledge of the probability density for the altitude and the attitude angles. The proposed method allows this prior information to be used to make the attitude estimation more robust. The third contribution is a further development of the second method. It is the very first method presented where the attitude estimates from the detected horizon in omnidirectional images are refined through registration with the geometrically expected horizon from a digital elevation model. It is one of few methods where the ray refraction in the atmosphere is taken into account, which contributes to the highly accurate pose estimates. The attitude errors obtained are about one order of magnitude smaller than for any previous vision-based method for attitude estimation from horizon detection in aerial images.
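For intuition about the second contribution, consider the simpler pinhole, flat-horizon case: the slope of the horizon line in the image gives the roll angle, and its perpendicular offset from the principal point gives the pitch. The sketch below fits a line to assumed horizon edge pixels; the thesis instead uses a fisheye model and probabilistic Hough voting, so this shows only the underlying geometry.

```python
import numpy as np

def attitude_from_horizon(edge_points, f):
    """Estimate roll and pitch (degrees) from horizon edge pixels.

    edge_points: Nx2 array of (u, v) pixel coordinates relative to the
    principal point; f: focal length in pixels. Assumes a pinhole camera
    and a straight image horizon, unlike the fisheye case in the thesis.
    """
    u, v = edge_points[:, 0], edge_points[:, 1]
    a, b = np.polyfit(u, v, 1)            # least-squares line v = a*u + b
    roll = np.arctan(a)                   # image-line slope -> roll angle
    # Perpendicular offset of the line from the principal point -> pitch
    pitch = np.arctan2(b * np.cos(roll), f)
    return np.degrees(roll), np.degrees(pitch)

pts = np.array([[-200, 12.0], [-100, 7.0], [0, 2.0], [100, -3.0], [200, -8.0]])
print(attitude_from_horizon(pts, f=800.0))
```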
19

Development and Evaluation of a Kinect based Bin-Picking System

Mishra, Chintan, Khan, Zeeshan January 2015 (has links)
No description available.
20

Kamerakalibrering (Camera Calibration)

Jonsson, Rickard, Törnström Andersson, Andreas January 2013 (has links)
Saab has developed a product, Remote Tower, whose purpose is to remotely operate an airport: monitoring and controlling the airport from a location other than the air traffic control tower. At the airport to be monitored, cameras are mounted so as to provide a 360-degree view, and their images are sent to the location from which the airport is controlled. When one or more cameras need to be reinstalled, two installers are currently required. They may have to work at different sites, since the server computer need not be located near the airport itself. This is both time-consuming and costly, so a need for a simpler camera installation procedure has been identified. A server application has been developed that can communicate with the cameras and control functions such as zoom, focus and iris, and can also perform calculations based on image comparison. To access the server's functions, an Android application has also been developed. This is what the installer will use when up in the camera mast adjusting the cameras. Through this application, the installer gets access to the current view, all of the camera's functions, and guides created to facilitate the calibration.
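The abstract does not say which image-comparison calculation the server performs; one plausible building block for remote focus adjustment is a variance-of-Laplacian sharpness score, sketched below in numpy as an assumption rather than Saab's actual metric.

```python
import numpy as np

def focus_measure(image):
    """Variance-of-Laplacian sharpness score: higher means better focus."""
    img = image.astype(np.float64)
    # 4-neighbour discrete Laplacian, computed on the valid interior
    lap = (img[1:-1, :-2] + img[1:-1, 2:] +
           img[:-2, 1:-1] + img[2:, 1:-1] - 4.0 * img[1:-1, 1:-1])
    return lap.var()

# Sanity check: a box-blurred image scores lower than the original
rng = np.random.default_rng(1)
sharp = rng.integers(0, 255, size=(120, 160)).astype(np.float64)
blurred = 0.25 * (sharp[:-1, :-1] + sharp[1:, :-1] +
                  sharp[:-1, 1:] + sharp[1:, 1:])
print(focus_measure(sharp) > focus_measure(blurred))  # True
```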
