21

Hyperspectral Image Registration and Construction From Irregularly Sampled Data

Freij, Hannes January 2021 (has links)
Hyperspectral imaging based on an exponentially variable filter makes it possible to construct a lightweight hyperspectral sensor. The filter captures the whole spectral range in each image, with each column capturing a different wavelength. Gathering the full spectrum for any given point in the image therefore requires fusing several images captured with movement in between. Constructing a hyperspectral cube requires registration of the gathered images. A lightweight sensor can also be mounted on an unmanned aerial vehicle to collect aerial footage. This thesis presents a registration algorithm capable of constructing a complete hyperspectral cube of almost any chosen area in the captured region. It presents the results of a construction method that uses a multi-frame super-resolution algorithm to increase the spectral resolution and a spline interpolation method to fill in missing spectral data. Results are also presented for an algorithm that suggests the optimal spectral and spatial resolution before constructing the hyperspectral cube, and for an algorithm that reports the quality of the constructed cube.
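The spectral interpolation step can be illustrated in miniature: given reflectance values sampled at a few irregular wavelengths for one pixel, interpolate them onto a regular wavelength grid. All names and values below are hypothetical, and plain linear interpolation stands in for the spline method described in the thesis.

```python
import numpy as np

def fill_spectrum(sampled_wl, sampled_val, target_wl):
    """Interpolate irregularly sampled spectral measurements onto a
    regular wavelength grid (linear interpolation as a stand-in for
    the thesis's spline interpolation)."""
    return np.interp(target_wl, sampled_wl, sampled_val)

# Hypothetical example: one pixel observed at three wavelengths only.
wl = np.array([400.0, 550.0, 700.0])   # nm, irregular samples
val = np.array([0.2, 0.8, 0.4])        # reflectance at those wavelengths
grid = np.linspace(400, 700, 7)        # regular target grid
spectrum = fill_spectrum(wl, val, grid)
```

Repeating this per pixel, band by band, would fill the hyperspectral cube at a chosen spectral resolution.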
22

Domain Adaptation of Unreal Images for Image Classification / Domänöversättning av syntetiska bilder för bildklassificiering

Thornström, Johan January 2019 (has links)
Deep learning has been intensively researched in computer vision tasks like image classification. Collecting and labeling images that these neural networks are trained on is labor-intensive, which is why alternative methods of collecting images are of interest. Virtual environments allow rendering images and automatic labeling, which could speed up the process of generating training data and reduce costs. This thesis studies the problem of transfer learning in image classification when the classifier has been trained on rendered images using a game engine and tested on real images. The goal is to render images using a game engine to create a classifier that can separate images depicting people wearing civilian clothing or camouflage. The thesis also studies how domain adaptation techniques using generative adversarial networks could be used to improve the performance of the classifier. Experiments show that it is possible to generate images that can be used for training a classifier capable of separating the two classes. However, the experiments with domain adaptation were unsuccessful. It is instead recommended to improve the quality of the rendered images in terms of features used in the target domain to achieve better results.
23

Vehicle Detection, at a Distance : Done Efficiently via Fusion of Short- and Long-Range Images / Fordonsdetektion, på avstånd

Luusua, Emil January 2020 (has links)
Object detection is a classical computer vision task, encountered in many practical applications such as robotics and autonomous driving. The latter involves serious consequences of failure and a multitude of challenging demands, including high computational efficiency and detection accuracy. Distant objects are notably difficult to detect accurately due to their small scale in the image, consisting of only a few pixels. This is especially problematic in autonomous driving, as objects should be detected at the earliest possible stage to facilitate handling of hazardous situations. Previous work has addressed small objects via use of feature pyramids and super-resolution techniques, but the efficiency of such methods is limited as computational cost increases with image resolution. Therefore, a trade-off must be made between accuracy and cost. Opportunely though, a common characteristic of driving scenarios is the predominance of distant objects in the centre of the image. Thus, the full-frame image can be downsampled to reduce computational cost, and a crop can be extracted from the image centre to preserve resolution for distant vehicles. In this way, short- and long-range images are generated. This thesis investigates the fusion of such images in a convolutional neural network, particularly the fusion level, fusion operation, and spatial alignment. A novel framework — DetSLR — is proposed for the task and examined via the aforementioned aspects. Through adoption of the framework for the well-established SSD detector and MobileNetV2 feature extractor, it is shown that the framework significantly improves upon the original detector without incurring additional cost. The fusion level is shown to have great impact on the performance of the framework, favouring high-level fusion, while only insignificant differences exist between investigated fusion operations. Finally, spatial alignment of features is demonstrated to be a crucial component of the framework.
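The generation of the two views described above can be sketched as follows: the long-range view is a full-resolution centre crop, and the short-range view is the downsampled full frame. Sizes are hypothetical, and simple striding stands in for proper resampling.

```python
import numpy as np

def make_short_long(frame, crop_size, down_factor):
    """Produce a short-range view (downsampled full frame) and a
    long-range view (full-resolution centre crop) from one frame."""
    h, w = frame.shape[:2]
    cy, cx = h // 2, w // 2
    half = crop_size // 2
    # Long-range view: centre crop at full resolution, preserving
    # detail on distant vehicles near the image centre.
    long_range = frame[cy - half:cy + half, cx - half:cx + half]
    # Short-range view: whole frame, downsampled by striding
    # (a stand-in for proper anti-aliased resampling).
    short_range = frame[::down_factor, ::down_factor]
    return short_range, long_range

frame = np.arange(64 * 64).reshape(64, 64)
short, long_ = make_short_long(frame, crop_size=16, down_factor=4)
```

Both views then enter the detector, where the thesis investigates at which level and with which operation their feature maps are fused.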
24

Obstacle avoidance for platforms in three-dimensional environments / Kollisionsundvikande metoder för plattformar i tredimensionella miljöer

Ekström, Johan January 2016 (has links)
The field of obstacle avoidance is a well-researched area. Despite this, research on obstacle avoidance in three dimensions is surprisingly sparse. For platforms able to navigate three-dimensional space, such as multirotor UAVs, such methods will become more common. This thesis presents an obstacle avoidance method intended for three-dimensional environments. First, the method reduces the dimensionality of the three-dimensional world by projecting obstacle observations onto a two-dimensional spherical depth map, retaining information on direction and distance to obstacles. Next, it accounts for the dimensions of the platform by applying a post-processing step to the depth map. Finally, knowing the motion model, a look-ahead verification step uses information from the depth map to ensure that the platform does not collide with any obstacles, by disallowing control inputs that lead to collisions. If multiple control input candidates remain after verification that lead to velocity vectors close to a desired velocity vector, a heuristic cost function selects a single control input, valuing similarity in direction and magnitude between the resulting and desired velocity vectors. Evaluation of the method shows that platforms are able to maintain distances to obstacles. However, more work is suggested to improve the reliability of the method and to perform a real-world evaluation.
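The spherical depth-map projection can be sketched as binning obstacle points by azimuth and elevation while keeping the nearest distance per cell. Grid resolution and points below are hypothetical.

```python
import numpy as np

def spherical_depth_map(points, n_az=36, n_el=18):
    """Project 3-D obstacle points onto a 2-D spherical depth map,
    keeping the nearest observed distance per (elevation, azimuth)
    cell so direction and distance to obstacles are retained."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                      # [-pi, pi]
    el = np.arcsin(np.clip(z / r, -1, 1))      # [-pi/2, pi/2]
    ai = np.minimum(((az + np.pi) / (2 * np.pi) * n_az).astype(int), n_az - 1)
    ei = np.minimum(((el + np.pi / 2) / np.pi * n_el).astype(int), n_el - 1)
    depth = np.full((n_el, n_az), np.inf)      # inf = no obstacle seen
    for a, e, d in zip(ai, ei, r):
        depth[e, a] = min(depth[e, a], d)
    return depth

pts = np.array([[1.0, 0.0, 0.0],   # 1 m straight ahead
                [2.0, 0.0, 0.0],   # 2 m behind it, same cell
                [1.0, 1.0, 0.0]])  # off to the side
dm = spherical_depth_map(pts)
```

A look-ahead verification step would then query this map along candidate trajectories and reject control inputs whose predicted positions come too close to a finite depth value.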
25

Feature-Feature Matching For Object Retrieval in Point Clouds

Staniaszek, Michal January 2015 (has links)
In this project, we implement a system for retrieving instances of objects from point clouds using feature based matching techniques. The target dataset of point clouds consists of approximately 80 full scans of office rooms over a period of one month. The raw clouds are preprocessed to remove regions which are unlikely to contain objects. Using locations determined by one of several possible interest point selection methods, one of a number of descriptors is extracted from the processed clouds. Descriptors from a target cloud are compared to those from a query object using a nearest neighbour approach. The nearest neighbours of each descriptor in the query cloud are used to vote for the position of the object in a 3D grid overlaid on the room cloud. We apply clustering in the voting space and rank the clusters according to the number of votes they contain. The centroid of each of the clusters is used to extract a region from the target cloud which, in the ideal case, corresponds to the query object. We perform an experimental evaluation of the system using various parameter settings in order to investigate factors affecting the usability of the system, and the efficacy of the system in retrieving correct objects. In the best case, we retrieve approximately 50% of the matching objects in the dataset. In the worst case, we retrieve only 10%. We find that the best approach is to use a uniform sampling over the room clouds, and to use a descriptor which factors in both colour and shape information to describe points.
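The voting step can be sketched as a coarse 3-D accumulator: each matched descriptor votes for a candidate object position, and the centre of the most-voted cell approximates a cluster centroid. The grid resolution, room size and vote positions below are hypothetical, and a simple argmax over cells stands in for the clustering described above.

```python
import numpy as np

def vote_positions(match_points, room_size, cell=0.5):
    """Accumulate position votes in a coarse 3-D grid overlaid on the
    room and return the centre of the highest-voted cell along with
    its vote count (a simplification of the clustering step)."""
    shape = tuple(int(np.ceil(s / cell)) for s in room_size)
    grid = np.zeros(shape)
    for p in match_points:
        idx = tuple(min(int(c // cell), n - 1) for c, n in zip(p, shape))
        grid[idx] += 1
    best = np.unravel_index(np.argmax(grid), shape)
    centre = np.array([(i + 0.5) * cell for i in best])
    return centre, grid.max()

votes = [[1.1, 2.0, 0.3], [1.2, 2.1, 0.4],
         [1.0, 2.2, 0.2], [4.0, 1.0, 1.0]]   # one outlier vote
centre, count = vote_positions(votes, room_size=(6.0, 6.0, 3.0))
```

Ranking all cells by vote count instead of taking only the maximum would give the ranked candidate list used for region extraction.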
26

Calibration in deep-learning eye tracking / Kalibrering i djupinlärd ögonspårning

Lindén, Erik January 2021 (has links)
Personal variations severely limit the performance of appearance-based gaze tracking. Adapting to these variations using standard neural network model adaptation methods is difficult. The problems range from overfitting, due to small amounts of training data, to underfitting, due to restrictive model architectures. In this thesis, these problems are tackled by introducing the SPatial Adaptive GaZe Estimator (SPAZE). By modeling personal variations as a low-dimensional latent parameter space, SPAZE provides just enough adaptability to capture the range of personal variations without being prone to overfitting. Calibrating SPAZE for a new person reduces to solving a small optimization problem. SPAZE achieves an error of 2.70° with 9 calibration samples on MPIIGaze, improving on the state of the art by 14%. The introductory chapters review the history, methods and applications of eye tracking, with focus on video-based eye tracking and the use of personal calibration in these methods. Emphasis is placed on methods using neural networks and the strengths and weaknesses of how these methods implement personal calibration.
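The idea of reducing personal calibration to a small optimization problem can be illustrated in miniature: fit a low-dimensional per-person parameter (here just a constant offset) from a handful of calibration samples. This is a drastically simplified, hypothetical stand-in for SPAZE's latent-parameter optimisation, not the thesis's actual model.

```python
import numpy as np

def calibrate_bias(predicted, ground_truth):
    """Fit a per-person constant gaze offset from a few calibration
    samples; with so few free parameters, overfitting is unlikely."""
    return (ground_truth - predicted).mean(axis=0)

pred = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # gaze estimates (deg)
truth = pred + np.array([0.5, -0.25])                  # hypothetical truth
bias = calibrate_bias(pred, truth)
corrected = pred + bias
```

In SPAZE the per-person parameter vector has a few more dimensions and is fitted by an iterative optimizer, but the structure of the problem (tiny parameter space, few samples) is the same.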
27

Bird's-eye view vision-system for heavy vehicles with integrated human-detection

Harms Looström, Julia, Frisk, Emma January 2021 (has links)
No description available.
28

Comparing pre-trained CNN models on agricultural machines

Söderström, Douglas January 2021 (has links)
No description available.
29

A deep learning approach to defect detection with limited data availability

Boman, Jimmy January 2020 (has links)
In industrial processes, products are often visually inspected for defects in order to verify their quality. Many automated visual inspection algorithms exist, but in many cases humans still perform the inspections. Advances in machine learning have shown that deep learning methods lie at the forefront of reliability and accuracy in such inspection tasks. In order to detect defects, most deep learning methods need large amounts of training data to learn from. This makes demonstrating such methods to a new customer problematic, since such data often does not exist beforehand and has to be gathered specifically for the task. The aim of this thesis is to develop a method to perform such demonstrations. With access to only a small dataset, the method should be able to analyse an image and return a map of binary values, signifying which pixels in the original image belong to a defect and which do not. A method was developed that divides an image into overlapping patches and analyses each patch individually for defects, using a deep learning method. Three different deep learning methods for classifying the patches were evaluated: a convolutional neural network, a transfer learning model based on the VGG19 network, and an autoencoder. The three methods were first compared in a simple binary classification task, without the patching method. They were then tested together with the patching method on two sets of images. The transfer learning model was able to identify every defect across both tests, having been trained using only four training images, proving that defect detection with deep learning can be done successfully even when there is not much training data available.
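The patching scheme described above can be sketched as follows; patch size and stride are hypothetical, and the per-patch deep learning classifier is omitted.

```python
import numpy as np

def extract_patches(image, patch=32, stride=16):
    """Divide an image into overlapping patches. Each patch would be
    classified independently, and the per-patch decisions assembled
    back into a binary defect map over the original image."""
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
            coords.append((y, x))  # top-left corner of each patch
    return np.stack(patches), coords

img = np.zeros((64, 64))
patches, coords = extract_patches(img)
```

Because stride is smaller than the patch size, each pixel is covered by several patches, so the final binary map can aggregate multiple overlapping predictions per pixel.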
30

Segmentation and Analysis of Volume Images, with Applications

Malmberg, Filip January 2008 (has links)
Digital image analysis is the field of extracting relevant information from digital images. Recent developments in imaging techniques have made 3-dimensional volume images more common. This has created a need to extend existing 2D image analysis tools to handle images of higher dimensions. Such extensions are usually not straightforward. In many cases, the theoretical and computational complexity of a problem increases dramatically when an extra dimension is added. A fundamental problem in image analysis is image segmentation, i.e., identifying and separating relevant objects and structures in an image. Accurate segmentation is often required before further processing and analysis of the image can be applied. Despite years of active research, general image segmentation is still seen as an unsolved problem. This is mainly due to the fact that it is hard to identify objects from image data alone. Often, some high-level knowledge about the objects in the image is needed. This high-level knowledge may be provided in different ways. For fully automatic segmentation, the high-level knowledge must be incorporated in the segmentation algorithm itself. In interactive applications, a human user may provide high-level knowledge by guiding the segmentation process in various ways. The aim of the work presented here is to develop segmentation and analysis tools for volume images. To limit the scope, the focus has been on two specific applications of volume image analysis: analysis of volume images of fibrous materials and interactive segmentation of medical images. The respective image analysis challenges of these two applications are discussed. While the work has been focused on these two applications, many of the results presented here are applicable to other image analysis problems.
