111

Anomaly Detection for Product Inspection and Surveillance Applications / Anomalidetektion för produktinspektions- och övervakningsapplikationer

Thulin, Peter January 2015 (has links)
Anomaly detection is the general problem of detecting unusual patterns or events in data. This master thesis investigates anomaly detection in two different applications. The first application is product inspection using a camera, and the second is surveillance using a 2D laser scanner. The first part of the thesis presents a system for automatic visual defect inspection. The system is based on aligning the images of the product to a common template and performing pixel-wise comparisons. The system is trained using only images of products that are defined as normal, i.e. non-defective products. The visual properties of the inspected products are modelled using three different methods. The performance of the system and of the different methods has been evaluated on four different datasets. The second part of the thesis presents a surveillance system based on a single laser range scanner. The system is able to detect certain anomalous events based on the time, position and velocity of individual objects in the scene. The practical usefulness of the system is made plausible by a qualitative evaluation on unlabelled data.
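The pixel-wise comparison idea can be sketched in a few lines: model each pixel of the aligned "normal" images with its mean and standard deviation, then flag pixels in a new image that deviate too far. This is only an illustration of the general approach, not the thesis's actual models; the function names, the 3-sigma threshold, and the flat grayscale image lists are all assumptions.

```python
# Sketch: per-pixel normal-appearance model from aligned defect-free images.
# Images are flat lists of grayscale values of equal length.

def fit_pixel_model(normal_images):
    """Per-pixel mean and standard deviation over aligned normal images."""
    n = len(normal_images)
    means, stds = [], []
    for px in zip(*normal_images):
        m = sum(px) / n
        var = sum((v - m) ** 2 for v in px) / n
        means.append(m)
        stds.append(var ** 0.5)
    return means, stds

def anomalous_pixels(image, means, stds, k=3.0, eps=1.0):
    """Indices of pixels more than k standard deviations from the mean.

    eps guards against near-zero stds in perfectly uniform regions.
    """
    return [i for i, v in enumerate(image)
            if abs(v - means[i]) > k * max(stds[i], eps)]
```

A defective product image then yields a non-empty list of suspect pixel positions, while a normal one yields an empty list.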
112

Photogrammetric methods for calculating the dimensions of cuboids from images / Fotogrammetriska metoder för beräkning av dimensionerna på rätblock från bilder

Lennartsson, Louise January 2015 (has links)
There are situations where you would like to know the size of an object but do not have a ruler nearby. However, it is likely that you are carrying a smartphone with an integrated digital camera, so imagine if you could snap a photo of the object to get a size estimate. Different methods for finding the dimensions of a cuboid from a photograph are evaluated in this project. A simple Android application implementing these methods has also been created. To be able to perform measurements of objects in images, we need to know how the scene is reproduced by the camera. This depends on the traits of the camera, called the intrinsic parameters. These parameters are unknown unless a camera calibration is performed, which is a non-trivial task. Because of this, eight smartphone cameras of different models were calibrated in search of similarities that could give grounds for generalisations. To be able to determine the size of the cuboid, the scale needs to be known, which is why a reference object is used. In this project a credit card, placed on top of the cuboid, is used as reference. The four corners of the reference as well as four corners of the cuboid are used to determine the dimensions of the cuboid. Two methods, one dependent on and one independent of the intrinsic parameters, are used to find the width and length, i.e. the sizes of the two dimensions that share a plane with the reference. These results are then used in another two methods to find the height of the cuboid. Errors were purposely introduced to the corners to investigate the performance of the different methods. The results show that the different methods perform very well and are all equally suitable for this type of problem. They also show that having correct reference corners is more important than having correct object corners, as the results were highly dependent on the accuracy of the reference corners.
Another conclusion is that camera calibration is not necessary, because different approximations of the intrinsic parameters can be used instead.
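The reference-scale idea can be illustrated with a much-simplified sketch: given pixel coordinates of the credit-card reference (ISO/IEC 7810 ID-1 format: 85.60 mm wide) and of the cuboid's top face, derive a mm-per-pixel scale and apply it to the object edges. Unlike the thesis's methods, this sketch assumes a fronto-parallel view with no perspective distortion; the function and corner ordering are illustrative assumptions.

```python
import math

# ISO/IEC 7810 ID-1 card width, as used for the credit-card reference.
CARD_W_MM = 85.60

def dist(p, q):
    """Euclidean distance between two (x, y) pixel coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def estimate_top_face_mm(ref_corners, obj_corners):
    """Estimate the cuboid top face's width and length in millimetres.

    ref_corners / obj_corners: [top-left, top-right, bottom-right,
    bottom-left] pixel coordinates; assumes both lie in the same plane
    viewed fronto-parallel (no perspective correction).
    """
    scale = CARD_W_MM / dist(ref_corners[0], ref_corners[1])  # mm per pixel
    width = scale * dist(obj_corners[0], obj_corners[1])
    length = scale * dist(obj_corners[1], obj_corners[2])
    return width, length
```

Because the scale comes entirely from the reference corners, any error in them propagates multiplicatively to both dimensions, which matches the thesis's observation that reference-corner accuracy matters most.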
113

FPGA-Accelerated Dehazing by Visible and Near-infrared Image Fusion

Karlsson, Jonas January 2015 (has links)
Fog and haze can have a dramatic impact on vision systems for land and sea vehicles. The impact of such conditions on infrared images is not as severe as for standard images. By fusing images from two cameras, one ordinary and one near-infrared, a complete dehazing system with colour preservation can be achieved. By applying several different algorithms to an image set and evaluating the results, the most suitable image fusion algorithm has been identified. Using an FPGA, a programmable integrated circuit, a crucial part of the algorithm has been implemented. It is capable of producing processed images 30 times faster than a laptop computer. This implementation lays the foundation of a real-time dehazing system and provides a significant part of the full solution. The results show that such a system can be accomplished with an FPGA.
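The fusion step can be illustrated with a minimal per-pixel luminance blend between the visible and near-infrared images. This is only a sketch of the general idea; the thesis evaluates several published fusion algorithms and does not prescribe this one, and the fixed blend weight is an illustrative assumption.

```python
# Sketch: blend the luminance of a visible image with a NIR image, which
# carries more scene detail through haze. Images are equal-sized lists of
# rows of grayscale values; a real system would also restore colour from
# the visible channels.

def fuse_luminance(vis, nir, w_nir=0.5):
    """Per-pixel weighted blend of two grayscale images."""
    return [[(1 - w_nir) * v + w_nir * n for v, n in zip(vr, nr)]
            for vr, nr in zip(vis, nir)]
```

A per-pixel operation like this maps naturally onto an FPGA pipeline, since each output pixel depends only on the two corresponding input pixels.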
114

An investigation into hazard-centric analysis of complex autonomous systems

Downes, C. G. January 2013 (has links)
This thesis proposes the hypothesis that a conventional, and essentially manual, HAZOP process can be improved with information obtained from model-based dynamic simulation, using a Monte Carlo approach to update a Bayesian belief model representing the expected relations between causes and effects, and thereby produce an enhanced HAZOP. The work considers how the expertise of a hazard and operability study team might be augmented with access to behavioural models, simulations and belief inference models. This incorporates models of dynamically complex system behaviour, considering where these might contribute to the expertise of a hazard and operability study team, and how these might bolster trust in the portrayal of system behaviour. Using a questionnaire containing behavioural outputs from a representative systems model, responses were collected from a group with relevant domain expertise. From this it is argued that the quality of analysis depends upon the experience and expertise of the participants, but that it might be artificially augmented using probabilistic data derived from a system dynamics model. Consequently, Monte Carlo simulations of an improved exemplar system dynamics model are used to condition a behavioural inference model and also to generate measures of emergence associated with the deviation parameter used in the study. A Bayesian approach to probability is adopted because particular events and combinations of circumstances are effectively unique or hypothetical, and perhaps irreproducible in practice. It is then shown that a Bayesian model, representing the beliefs expressed in a hazard and operability study and conditioned on the likely occurrence of flaw events causing specific deviant behaviour, as observed in the system's dynamical behaviour, can combine intuitive estimates based upon experience and expertise with quantitative statistical information representing plausible evidence of safety constraint violation.
A further behavioural measure identifies potential emergent behaviour by way of a Lyapunov exponent. Together these improvements enhance the awareness of potential hazard cases.
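The Bayesian-belief element above can be sketched as a single Bayes update: a prior belief that a flaw is present, conditioned on evidence that a deviation was observed, with likelihoods that would in practice come from Monte Carlo runs of the system model. All numbers and names here are illustrative, not taken from the thesis.

```python
# Sketch: Bayes' rule applied to one HAZOP-style deviation. The two
# likelihoods stand in for frequencies estimated from Monte Carlo
# simulations of the system dynamics model with and without the flaw.

def update_belief(prior, p_dev_given_flaw, p_dev_given_no_flaw):
    """Posterior P(flaw | deviation observed) via Bayes' rule."""
    num = p_dev_given_flaw * prior
    den = num + p_dev_given_no_flaw * (1 - prior)
    return num / den
```

For example, a flaw judged a priori unlikely (prior 0.1) but whose presence makes the observed deviation nine times more likely is promoted to even odds after one observation.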
115

Découverte de services et collaboration au sein d'une flotte hétérogène et hautement dynamique d'objets mobiles communicants autonomes / Service Discovery and Collaboration in a Heterogeneous and Highly Dynamic Swarm of Mobile Communicating and Autonomous Objects

Autefage, Vincent 26 October 2015 (has links)
Autonomous systems are mobile, communicating objects able to perform several tasks without any human intervention.
The overall cost (including price, weight and energy) of the payload required by some missions is sometimes too high for a single entity to embed all the required capabilities (i.e. sensors and actuators). This is why it is more suitable to spread the capabilities among several entities. The team formed by those entities is called a swarm. It then becomes necessary to provide a discovery mechanism built into the swarm in order to enable its members to share their capabilities and to collaborate in achieving a global mission. This mechanism should perform task allocation as well as management of the conflicts and failures which can occur at any moment on any entity of the swarm. In this thesis, we present a novel collaborative system called AMiRALE for heterogeneous swarms of autonomous mobile robots. Our system is fully distributed and relies only on asynchronous communications. We also present a novel tool called NEmu which makes it possible to create virtual mobile networks with complete control over the network topology and over link and node properties. This tool is designed for performing realistic experimentation on prototypes of network applications. Finally, we present experimental results on our collaborative system AMiRALE obtained through a park-cleaning scenario which relies on an autonomous swarm of drones and specialized ground robots.
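The capability-sharing idea can be sketched as a tiny registry: each robot advertises the capabilities it carries, and a task is matched to any swarm member offering the needed capability. The names are illustrative, and this centralised lookup does not capture AMiRALE itself, which is fully distributed and asynchronous.

```python
# Sketch: match a required capability to swarm members that provide it.
# swarm: {robot_name: set of capability names}.

def who_can(swarm, capability):
    """Return the sorted list of robots offering the given capability."""
    return sorted(r for r, caps in swarm.items() if capability in caps)
```

In a distributed realisation, each robot would maintain such a view locally from the asynchronous advertisements it has received, rather than querying a shared table.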
116

Market-based autonomous and elastic application execution on clouds / Gestion autonome des ressources et des applications dans un nuage informatique selon une approche fondée sur un marché

Costache, Stefania 03 July 2013 (has links)
Organizations owning HPC infrastructures are facing difficulties in managing their resources.
These difficulties come from the need to provide concurrent resource access to different application types while considering that users might have different performance objectives for their applications. Cloud computing brings more flexibility and better resource control, promising to improve user satisfaction in terms of perceived Quality of Service. Nevertheless, current cloud solutions provide limited support for users to express or use various resource management policies, and they provide no support for application performance objectives. In this thesis, we present an approach that addresses this challenge in a unique way. Our approach provides fully decentralized resource control by allocating resources through a proportional-share market, while applications run in autonomous virtual environments capable of scaling the application demand according to user performance objectives. The combination of currency distribution and dynamic resource pricing ensures fair resource utilization. We evaluated our approach in simulation and on the Grid'5000 testbed. Our results show that our approach can enable the cohabitation of different resource usage policies on the infrastructure, improving resource utilization.
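A proportional-share market can be sketched in a few lines: each application spends currency on a resource and receives a share proportional to its spending, while the dynamic price is simply the total spending per unit of capacity. This is a generic illustration of the mechanism, not the thesis's implementation; names and units are assumptions.

```python
# Sketch: proportional-share allocation of a divisible resource.
# bids: {app_name: currency spent}; capacity: total resource units.

def proportional_share(bids, capacity):
    """Return ({app: allocated units}, price per unit)."""
    total = sum(bids.values())
    if total == 0:
        return {app: 0.0 for app in bids}, 0.0
    price = total / capacity  # rises as contention increases
    return {app: capacity * b / total for app, b in bids.items()}, price
```

Fairness follows from the currency budget: an application can only increase its share by outspending others, which drives the price up for everyone.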
117

Navigability Assessment for Autonomous Systems Using Deep Neural Networks

Wimby Schmidt, Ebba January 2017 (has links)
Automated navigability assessment based on image sensor data is an important concern in the design of autonomous robotic systems. The problem consists in finding a mapping from input data to the navigability status of different areas of the surrounding world. Machine learning techniques are often applied to this problem. This thesis investigates an approach to navigability assessment in the image plane, based on offline learning using deep convolutional neural networks, applied to RGB and depth data collected using a robotic platform. Training outputs were generated by manually marking out instances of near collision in the sequences and tracing the location of the near-collision frame back through the previous frames. Several combinations of network inputs were tried out. Inputs included grayscale gradient versions of the RGB frames, depth maps, image coordinate maps, and motion information in the form of a previous RGB frame or heading maps. Some improvement compared to simple depth thresholding was demonstrated, mainly in the handling of noise and missing pixels in the depth maps. The resulting networks appear to be mostly dependent on depth information; an attempt to train a network without the depth frames was unsuccessful, and a network trained using the depth frames alone performed similarly to networks trained with additional inputs. An unsuccessful attempt at training a network towards a more motion-dependent navigability concept was also made. It was done by including training frames captured as the robot was moving away from the obstacle, where the corresponding training outputs were marked as obstacle-free.
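The "simple depth thresholding" baseline mentioned above can be sketched directly: a pixel is an obstacle when its depth falls below a cut-off. Missing depth values, encoded here as 0, must be treated as unknown, which is exactly where the learned networks were observed to do better. The threshold, encoding, and labels are illustrative assumptions.

```python
# Sketch: per-pixel navigability from a depth map (lists of rows, depths
# in metres, 0 meaning no depth reading).

OBSTACLE, FREE, UNKNOWN = 1, 0, -1

def depth_threshold_navigability(depth_map, min_depth=1.0):
    """Label each pixel OBSTACLE, FREE, or UNKNOWN by depth thresholding."""
    return [[UNKNOWN if d == 0 else (OBSTACLE if d < min_depth else FREE)
             for d in row] for row in depth_map]
```

The baseline has no way to fill in the UNKNOWN pixels from context, whereas a trained network can learn to infer their status from the surrounding image content.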
118

Object Detection and Semantic Segmentation Using Self-Supervised Learning

Gustavsson, Simon January 2021 (has links)
In this thesis, three well-known self-supervised methods have been implemented and trained on road scene images. The three so-called pretext tasks RotNet, MoCov2, and DeepCluster were used to train a neural network in a self-supervised manner. The self-supervised pre-trained networks were then evaluated, with different amounts of labeled data, on two downstream tasks: object detection and semantic segmentation. The performance of the self-supervised methods is compared to that of networks trained from scratch on the respective downstream task. The results show that it is possible to achieve a performance increase using self-supervision on a dataset containing road scene images only. When only a small amount of labeled data is available, the performance increase can be substantial, e.g., a mIoU improvement from 33 to 39 when training semantic segmentation on 1750 images with a RotNet pre-trained backbone compared to training from scratch. However, it seems that when a large number of labeled images is available (>70000 images), the self-supervised pretraining does not increase the performance as much, or at all.
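The RotNet pretext task needs no manual labels: each image is rotated by 0, 90, 180 or 270 degrees and the network is trained to predict which rotation was applied. A minimal label generator, with images as 2-D lists standing in for tensors, might look like this sketch; the function names are illustrative.

```python
# Sketch: generate (rotated image, rotation label) pairs for the RotNet
# pretext task. Labels 0..3 correspond to 0/90/180/270-degree rotations.

def rot90(img):
    """Rotate a 2-D list 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def rotnet_samples(img):
    """Return the four (rotated_image, rotation_label) training pairs."""
    samples, cur = [], img
    for label in range(4):
        samples.append((cur, label))
        cur = rot90(cur)
    return samples
```

Training a classifier on such pairs forces the backbone to learn object orientation and scene structure, which is what makes the learned features transferable to detection and segmentation.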
119

Generating synthetic brain MR images using a hybrid combination of Noise-to-Image and Image-to-Image GANs

Schilling, Lennart January 2020 (has links)
Generative Adversarial Networks (GANs) have attracted much attention because of their ability to learn high-dimensional, realistic data distributions. In the field of medical imaging, they can be used to augment the often small image sets available. In this way, for example, the training of image classification or segmentation models can be improved to support clinical decision making. GANs can be distinguished according to their input. While Noise-to-Image GANs synthesize new images from a random noise vector, Image-to-Image GANs translate a given image into another domain. This study investigates whether the performance of a Noise-to-Image GAN, defined by the quality and diversity of its generated output, can be improved by using elements of a previously trained Image-to-Image GAN within its training. The data used consist of paired T1- and T2-weighted MR brain images. With the objective of generating additional T1-weighted images, a hybrid model (Hybrid GAN) is implemented that combines elements of a Deep Convolutional GAN (DCGAN) as the Noise-to-Image GAN and a Pix2Pix as the Image-to-Image GAN. Starting from its dependency on an input image, the model is gradually converted into a Noise-to-Image GAN. Performance is evaluated with an independent classifier that estimates the divergence between the generative output distribution and the real data distribution. When comparing the Hybrid GAN performance with the DCGAN baseline, no improvement, in either the quality or the diversity of the generated images, could be observed. Consequently, it could not be shown that the performance of a Noise-to-Image GAN is improved by using elements of a previously trained Image-to-Image GAN within its training.
120

FPGA acceleration of superpixel segmentation

Östgren, Magnus January 2020 (has links)
Superpixel segmentation is a preprocessing step for computer vision applications in which an image is split into segments referred to as superpixels. Running the main algorithm on these superpixels reduces the number of data points processed, compared to running the algorithm on pixels directly, while still keeping much of the same information. In this thesis, the possibility of running superpixel segmentation on an FPGA is investigated. This has resulted in the development of a modified version of the algorithm SLIC, Simple Linear Iterative Clustering. An FPGA implementation of this algorithm has then been built in VHDL; it is designed as a pipeline, unrolling the iterations of SLIC. The designed algorithm shows a lot of potential and runs on real hardware, but more work is required to make the implementation more robust and to remove some visual artefacts.
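One assignment-and-update step of the SLIC idea can be sketched on a toy 1-D grayscale signal: each pixel is assigned to the nearest cluster centre in a combined intensity-plus-position distance, then centres move to the mean of their pixels. Real SLIC works in 2-D with CIELAB colour and searches only a local window per centre; the compactness weight and data here are illustrative.

```python
# Sketch: one SLIC-style iteration on a 1-D intensity signal.
# pixels: list of intensities; centers: list of (position, intensity).

def slic_step(pixels, centers, m=1.0):
    """Assign pixels to the nearest centre, then recompute centres."""
    assign = []
    for i, v in enumerate(pixels):
        d = [((v - c[1]) ** 2 + m * (i - c[0]) ** 2) ** 0.5 for c in centers]
        assign.append(d.index(min(d)))
    new_centers = []
    for k in range(len(centers)):
        idx = [i for i, a in enumerate(assign) if a == k]
        if idx:
            new_centers.append((sum(idx) / len(idx),
                                sum(pixels[i] for i in idx) / len(idx)))
        else:
            new_centers.append(centers[k])  # keep empty clusters in place
    return assign, new_centers
```

Because each iteration is a fixed, data-independent sequence of distance computations, the iterations can be unrolled into pipeline stages, which is the structure the VHDL implementation exploits.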
