About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
101

Photogrammetric methods for calculating the dimensions of cuboids from images / Fotogrammetriska metoder för beräkning av dimensionerna på rätblock från bilder

Lennartsson, Louise January 2015 (has links)
There are situations where you would like to know the size of an object but do not have a ruler nearby. However, it is likely that you are carrying a smartphone with an integrated digital camera, so imagine if you could snap a photo of the object to get a size estimate. Different methods for finding the dimensions of a cuboid from a photograph are evaluated in this project. A simple Android application implementing these methods has also been created. To be able to perform measurements of objects in images, we need to know how the scene is reproduced by the camera. This depends on the traits of the camera, called the intrinsic parameters. These parameters are unknown unless a camera calibration is performed, which is a non-trivial task. Because of this, eight smartphone cameras of different models were calibrated in search of similarities that could justify generalisations. To be able to determine the size of the cuboid, the scale needs to be known, which is why a reference object is used. In this project a credit card, placed on top of the cuboid, is used as the reference. The four corners of the reference as well as four corners of the cuboid are used to determine the dimensions of the cuboid. Two methods, one dependent on and one independent of the intrinsic parameters, are used to find the width and length, i.e. the two dimensions that lie in the same plane as the reference. These results are then used in another two methods to find the height of the cuboid. Errors were deliberately introduced into the corner positions to investigate the performance of the different methods. The results show that the different methods perform very well and are all equally suitable for this type of problem. They also show that having correct reference corners is more important than having correct object corners, as the results were highly dependent on the accuracy of the reference corners. Another conclusion is that camera calibration is not necessary, because approximations of the intrinsic parameters can be used instead.
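As a rough illustration of the intrinsics-independent approach described above, the sketch below estimates the width and length of the cuboid's top face from the four reference corners and four cuboid corners, assuming OpenCV and an ISO ID-1 credit card as the reference; the pixel coordinates are hypothetical placeholders and this is not the thesis's actual implementation.

```python
import cv2
import numpy as np

# ISO/IEC 7810 ID-1 card size in millimetres (the reference object).
CARD_W, CARD_H = 85.60, 53.98

def cuboid_top_dimensions(ref_corners_px, cuboid_corners_px):
    """Estimate width and length of the cuboid's top face.

    ref_corners_px    : four image corners of the credit card (TL, TR, BR, BL)
    cuboid_corners_px : four image corners of the cuboid's top face, same order
    Both sets lie in the same plane, so one homography maps pixels to millimetres.
    """
    card_mm = np.float32([[0, 0], [CARD_W, 0], [CARD_W, CARD_H], [0, CARD_H]])
    H, _ = cv2.findHomography(np.float32(ref_corners_px), card_mm)

    # Map the cuboid corners into the metric plane of the card.
    pts = cv2.perspectiveTransform(
        np.float32(cuboid_corners_px).reshape(-1, 1, 2), H).reshape(-1, 2)
    width = np.linalg.norm(pts[1] - pts[0])
    length = np.linalg.norm(pts[2] - pts[1])
    return width, length

# Hypothetical pixel coordinates of the clicked corners.
ref = [(420, 310), (610, 305), (615, 425), (425, 432)]
box = [(300, 250), (760, 240), (780, 520), (285, 540)]
print(cuboid_top_dimensions(ref, box))
```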
102

FPGA-Accelerated Dehazing by Visible and Near-infrared Image Fusion

Karlsson, Jonas January 2015 (has links)
Fog and haze can have a dramatic impact on vision systems for land and sea vehicles. The impact of such conditions on infrared images is not as severe as for standard images. By fusing images from two cameras, one ordinary and one near-infrared, a complete dehazing system with colour preservation can be achieved. By applying several different algorithms to an image set and evaluating the results, the most suitable image fusion algorithm was identified. Using an FPGA, a programmable integrated circuit, a crucial part of the algorithm has been implemented; it is capable of producing processed images 30 times faster than a laptop computer. This implementation lays the foundation of a real-time dehazing system and provides a significant part of the full solution. The results show that such a system can be accomplished with an FPGA.
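The abstract does not name the selected fusion algorithm, so the sketch below only illustrates the general idea of colour-preserving visible/NIR fusion: take luminance detail from the near-infrared image and chrominance from the visible image. The file names and blend weight are assumptions, and the image pair is assumed to be registered.

```python
import cv2

# Load a registered visible/NIR pair (placeholder file names).
visible = cv2.imread("visible.png")                   # 3-channel BGR
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)     # single channel

# Work in YCrCb so colour (Cr, Cb) is kept from the visible image.
y, cr, cb = cv2.split(cv2.cvtColor(visible, cv2.COLOR_BGR2YCrCb))

# Blend luminance: NIR penetrates haze better, so weight it more heavily.
alpha = 0.7
fused_y = cv2.addWeighted(nir, alpha, y, 1.0 - alpha, 0)

dehazed = cv2.cvtColor(cv2.merge([fused_y, cr, cb]), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("dehazed.png", dehazed)
```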
103

Effektivitet hos navigering av autonoma agenter : En jämförelse mellan flödesfält och vägföljning / Efficiency of navigation of autonomous agents : A comparison between flow field and path following

Backman, Arvid January 2015 (has links)
This work examines two steering behaviours that can be used to navigate groups of agents through different game environments. The techniques the work aims to evaluate are path-following and flow-field behaviour. The study compares these techniques with respect to time and memory efficiency and evaluates how they perform on these aspects for different group sizes and environment types. The results of the tests showed that the path-following behaviour is clearly the most memory-efficient technique, while the flow-field behaviour was somewhat more time-efficient. A concluding discussion presents the work from a societal and ethical perspective, along with a discussion of what future research in the area might look like.
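For orientation, one step of flow-field steering might look like the sketch below; the grid layout, cell size and Reynolds-style steering formula are assumptions for illustration and are not taken from the thesis.

```python
import numpy as np

CELL = 32  # grid cell size in world units (assumed)

def flow_field_steering(pos, vel, field, max_speed=3.0, max_force=0.5):
    """Steering force for an agent following a precomputed flow field.

    field[i, j] holds a unit direction vector for grid cell (i, j);
    the steering force is desired velocity minus current velocity.
    """
    i, j = int(pos[1] // CELL), int(pos[0] // CELL)
    desired = field[i, j] * max_speed
    steer = desired - vel
    norm = np.linalg.norm(steer)
    if norm > max_force:
        steer = steer / norm * max_force
    return steer

# Tiny hypothetical field where every cell points towards +x.
field = np.tile(np.array([1.0, 0.0]), (8, 8, 1))
print(flow_field_steering(np.array([40.0, 70.0]), np.array([0.0, 1.0]), field))
```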
104

SVÄRM-AI FÖR TAKTISKA BESLUT HOS GRUPPER AV FIENDER / SWARM AI FOR TACTICAL DECISIONS IN GROUPS OF ENEMIES

Emanuelsson, Max January 2013 (has links)
This thesis takes a closer look at tactical decisions for larger groups of autonomous agents. The work attempts to answer the following question: how effectively can swarm AI be used to make tactical decisions when applied to a game with groups of enemies? To answer the question, an application was created using four combinations of steering behaviours and computational models within the "boids" technique. Two of the combinations used traditional boids steering behaviours, and in the other two a steering behaviour for flanking the player was introduced to improve the results. The results show that the combinations with the tactical decisions performed markedly better, which is promising for answering the research question, but a broader set of experiments would be needed to give a definitive answer on how effective it is. Tactical swarm AI can be used outside computer games, for example in robotics and in simulating larger military battles.
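A minimal sketch of the classic boids steering behaviours (separation, alignment, cohesion) that the compared combinations build on; the flanking behaviour and the game integration described in the abstract are not reproduced here.

```python
import numpy as np

def boids_step(positions, velocities, radius=5.0, max_speed=2.0,
               w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """One update of classic boids: separation, alignment, cohesion."""
    new_vel = velocities.copy()
    for i, p in enumerate(positions):
        offsets = positions - p
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists > 0) & (dists < radius)          # neighbours of boid i
        if not mask.any():
            continue
        sep = -(offsets[mask] / dists[mask, None] ** 2).sum(axis=0)
        ali = velocities[mask].mean(axis=0) - velocities[i]
        coh = positions[mask].mean(axis=0) - p
        v = velocities[i] + w_sep * sep + w_ali * ali + w_coh * coh
        speed = np.linalg.norm(v)
        if speed > max_speed:
            v = v / speed * max_speed
        new_vel[i] = v
    return positions + new_vel, new_vel

pos = np.random.rand(20, 2) * 50
vel = np.random.randn(20, 2)
pos, vel = boids_step(pos, vel)
```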
105

Object Detection and Semantic Segmentation Using Self-Supervised Learning

Gustavsson, Simon January 2021 (has links)
In this thesis, three well-known self-supervised methods have been implemented and trained on road scene images. The three so-called pretext tasks RotNet, MoCov2, and DeepCluster were used to train a neural network self-supervised. The self-supervised trained networks were then evaluated on different amounts of labeled data on two downstream tasks, object detection and semantic segmentation. The performance of the self-supervised methods is compared to networks trained from scratch on the respective downstream task. The results show that it is possible to achieve a performance increase using self-supervision on a dataset containing road scene images only. When only a small amount of labeled data is available, the performance increase can be substantial, e.g., a mIoU increase from 33 to 39 when training semantic segmentation on 1750 images with a RotNet pre-trained backbone compared to training from scratch. However, when a large number of labeled images is available (>70000 images), the self-supervised pretraining does not increase the performance as much, or at all.
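As a sketch of the RotNet pretext task mentioned above: each unlabeled image is rotated by 0, 90, 180 or 270 degrees and a classifier is trained to predict the rotation index, giving labels for free. The backbone, optimiser settings and random data are placeholders, and weights=None assumes a recent torchvision release.

```python
import torch
import torch.nn as nn
import torchvision

# Backbone to be pre-trained self-supervised; a 4-way head predicts the rotation.
backbone = torchvision.models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 4)
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def rotnet_batch(images):
    """Build the pretext batch: every image in all four rotations,
    with the rotation index as the self-supervised label."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=[2, 3]))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

# One hypothetical training step on a batch of unlabeled road-scene crops.
images = torch.randn(8, 3, 224, 224)
x, y = rotnet_batch(images)
loss = criterion(backbone(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```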
106

Generating synthetic brain MR images using a hybrid combination of Noise-to-Image and Image-to-Image GANs

Schilling, Lennart January 2020 (has links)
Generative Adversarial Networks (GANs) have attracted much attention because of their ability to learn high-dimensional, realistic data distributions. In the field of medical imaging, they can be used to augment the often small image sets available. In this way, for example, the training of image classification or segmentation models can be improved to support clinical decision making. GANs can be distinguished according to their input. While Noise-to-Image GANs synthesize new images from a random noise vector, Image-to-Image GANs translate a given image into another domain. In this study, it is investigated whether the performance of a Noise-to-Image GAN, defined by its generated output quality and diversity, can be improved by using elements of a previously trained Image-to-Image GAN within its training. The data used consists of paired T1- and T2-weighted MR brain images. With the objective of generating additional T1-weighted images, a hybrid model (Hybrid GAN) is implemented that combines elements of a Deep Convolutional GAN (DCGAN) as a Noise-to-Image GAN and a Pix2Pix as an Image-to-Image GAN. Starting from its dependence on an input image, the model is gradually converted into a Noise-to-Image GAN. Performance is evaluated by the use of an independent classifier that estimates the divergence between the generative output distribution and the real data distribution. When comparing the Hybrid GAN performance with the DCGAN baseline, no improvement, neither in the quality nor in the diversity of the generated images, could be observed. Consequently, it could not be shown that the performance of a Noise-to-Image GAN is improved by using elements of a previously trained Image-to-Image GAN within its training.
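For reference, the Noise-to-Image component could be sketched as a DCGAN-style generator like the one below; the layer sizes and the 64x64 single-channel output are assumptions for illustration and do not reproduce the thesis's Hybrid GAN or its Pix2Pix parts.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN-style generator: noise vector -> 64x64 single-channel image."""
    def __init__(self, z_dim=100, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, 1, 4, 2, 1, bias=False),
            nn.Tanh(),  # output in [-1, 1], matching normalised image intensities
        )

    def forward(self, z):
        # Reshape the noise vector to a 1x1 spatial map and upsample to 64x64.
        return self.net(z.view(z.size(0), -1, 1, 1))

z = torch.randn(4, 100)
fake_t1 = DCGANGenerator()(z)   # shape: (4, 1, 64, 64)
```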
107

FPGA acceleration of superpixel segmentation

Östgren, Magnus January 2020 (has links)
Superpixel segmentation is a preprocessing step for computer vision applications, where an image is split into segments referred to as superpixels. Running the main algorithm on these superpixels reduces the number of data points processed compared to running the algorithm on pixels directly, while still keeping much of the same information. In this thesis, the possibility of running superpixel segmentation on an FPGA is investigated. This has resulted in the development of a modified version of the algorithm SLIC, Simple Linear Iterative Clustering. An FPGA implementation of this algorithm has then been built in VHDL; it is designed as a pipeline that unrolls the iterations of SLIC. The designed algorithm shows a lot of potential and runs on real hardware, but more work is required to make the implementation more robust and remove some visual artefacts.
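As a software reference point for what the FPGA pipeline computes, the stock SLIC implementation in scikit-image can be run as below; the parameter values are arbitrary and this is not the modified SLIC variant developed in the thesis.

```python
import numpy as np
from skimage import data, segmentation, color

image = data.astronaut()   # any RGB test image

# Reference (software) SLIC: ~400 superpixels; compactness trades colour
# similarity against spatial proximity.
labels = segmentation.slic(image, n_segments=400, compactness=10, start_label=1)

# Replace each superpixel by its mean colour to visualise the segmentation.
mean_image = color.label2rgb(labels, image, kind='avg')
print(labels.shape, len(np.unique(labels)))
```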
108

Simple feature detection in indoor geometry scanned with the Microsoft Hololens

Björk, Nils January 2020 (has links)
The aim of this work was to determine whether line-type features (straight lines found in geometry considered interesting by a user) could be identified in spatial map data of indoor environments produced by the Microsoft Hololens augmented reality headset. Five different data sets were used in this work on which the feature detection was performed; these data sets were provided as sample data representing the spatial maps of five different rooms scanned using the Hololens headset, available as part of the Hololens emulator. Related work on feature detection in point clouds and 3D meshes was investigated to find a suitable method for line-type feature detection. The chosen detection method used LSQ plane fitting and relevant cutoff variables to achieve this, inspired by related work on feature identification and mesh simplification. The method was evaluated using user-placed validation features; the distance between them and the detected features, defined using the midpoint distance metric, was used as a measure of quality for the detected features. The resulting features were not accurate enough to reliably or consistently match the validation features inserted in the data, and further improvements to the detection method would be necessary to achieve this. A local feature-edge detection using the SOD and ESOD operators was considered and tested but was found not to be suitable for the spatial data provided by the Hololens emulator. The results show that finding these features using the provided data is possible, and the methods to produce them numerous. The choice of method is, however, dependent on the ultimate application of these features, taking into account requirements for accuracy and performance.
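The least-squares plane fitting used as a building block above can be sketched with an SVD of the centred points, as below; the cutoff variables, the Hololens mesh loading and the surrounding feature extraction are omitted, and the test data is synthetic.

```python
import numpy as np

def fit_plane_lsq(points):
    """Least-squares plane through a set of 3D points.

    Returns (centroid, unit normal); the best-fit normal is the right singular
    vector associated with the smallest singular value of the centred points.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Hypothetical noisy samples from the plane z = 0.1x + 0.2y + 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 1 + rng.normal(0, 0.01, 200)
points = np.column_stack([xy, z])
centroid, normal = fit_plane_lsq(points)
print(centroid, normal)
```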
109

Texture Enhancement in 3D Maps using Generative Adversarial Networks

Birgersson, Anna, Hellgren, Klara January 2019 (has links)
In this thesis we investigate the use of GANs for texture enhancement. To achieve this, we have studied whether synthetic satellite images generated by GANs will improve the texture in satellite-based 3D maps. We investigate two GANs: SRGAN and pix2pix. SRGAN increases the pixel resolution of the satellite images by generating upsampled images from low-resolution images. As for pix2pix, the GAN performs image-to-image translation by translating a source image to a target image, without changing the pixel resolution. We trained the GANs in two different approaches, named SAT-to-AER and SAT-to-AER-3D, where SAT, AER and AER-3D are different datasets provided by the company Vricon. In the first approach, aerial images were used as ground truth and in the second approach, rendered images from an aerial-based 3D map were used as ground truth. The procedure of enhancing the texture in a satellite-based 3D map was divided into two steps: the generation of synthetic satellite images and the re-texturing of the 3D map. Synthetic satellite images generated by two SRGAN models and one pix2pix model were used for the re-texturing. The best results were obtained using SRGAN in the SAT-to-AER approach, where the re-textured 3D map had enhanced structures and an increased perceived quality. SRGAN also presented a good result in the SAT-to-AER-3D approach, where the re-textured 3D map had a changed color distribution and the road markers were easier to distinguish from the ground. The images generated by the pix2pix model presented the worst result. As for the SAT-to-AER approach, even though the synthetic satellite images generated by pix2pix were somewhat enhanced and contained less noise, they had no significant impact on the re-texturing. In the SAT-to-AER-3D approach, none of the investigated models based on the pix2pix framework presented any successful results. We concluded that GANs can be used as a texture enhancer using both aerial images and images rendered from an aerial-based 3D map as ground truth. The use of GANs as a texture enhancer has great potential, and there are several interesting areas for future work.
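As a sketch of the pix2pix objective referred to above: the generator is trained with an adversarial term plus an L1 reconstruction term. The placeholder networks and random tensors below stand in for the U-Net/PatchGAN pair and the Vricon datasets, which are not reproduced here.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
LAMBDA_L1 = 100.0   # L1 weight from the original pix2pix formulation

def pix2pix_generator_loss(D, G, src, target):
    """pix2pix generator objective: fool the conditional discriminator while
    staying close to the ground-truth image in L1."""
    fake = G(src)
    pred_fake = D(torch.cat([src, fake], dim=1))   # D sees (input, output) pairs
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    recon = l1(fake, target)
    return adv + LAMBDA_L1 * recon

# Placeholder networks so the sketch runs; real pix2pix uses a U-Net generator
# and a PatchGAN discriminator.
G = nn.Conv2d(3, 3, 3, padding=1)
D = nn.Conv2d(6, 1, 3, padding=1)

sat = torch.randn(2, 3, 256, 256)   # stand-in "satellite" batch
aer = torch.randn(2, 3, 256, 256)   # stand-in "aerial" ground truth
loss = pix2pix_generator_loss(D, G, sat, aer)
loss.backward()
```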
110

To drone, or not to drone : A qualitative study in how and when information from UxV should be distributed in rescue missions at sea

Laine, Rickard January 2020 (has links)
Swedish maritime rescue consists of a number of resources from various organizations that work together to achieve a common goal: to save people in need. Information turns out to be a significant factor in maritime rescue missions. Whether you are a rescuer at the accident scene or coordinating the rescue mission from the control center, information gives you better situation awareness and knowledge of the situation, which creates better conditions for achieving the goal of the mission. Applying Unmanned Vehicles (UxV) to Swedish maritime rescue means another resource that can provide additional necessary information. In this study, several methods have been used to find out where in the mission information from UxVs can conceivably contribute. The study identifies three critical situations where there is a need for UxV. This result, in turn, leads to other questions, such as who should be the recipient of the new information and how it affects the information flow as a whole. Information visualization proves to be an important factor in this: clear and easily understood visualization can help the recipient of the information in their work without affecting the flow or coordination of that work.
