351

Evaluation of In-Silico Labeling for Live Cell Imaging

Sörman Paulsson, Elsa January 2021 (has links)
Today new drugs are tested on cell cultures in wells to minimize time, cost, and animal testing. The cells are studied using microscopy in different ways, and fluorescent probes are used to study finer details than light microscopy alone can resolve. This is an invasive method, so imaging can be used instead of molecular analysis. In this project, phase-contrast microscopy images of cells together with fluorescence microscopy images were used. We use machine learning to predict the fluorescence images from the light microscopy images using a strategy called In-Silico Labeling. A Convolutional Neural Network called U-Net was trained and showed good results on two different datasets. Pixel-wise regression, pixel-wise classification, and image classification with one cell in each image were tested. The image classification was the most difficult part due to difficulties in assigning good-quality labels to single cells. Pixel-wise regression showed the best result.
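As a rough illustration of the pixel-wise regression setup described above, the following is a minimal Keras sketch of a small U-Net that maps a phase-contrast image to a predicted fluorescence image; the input size, layer widths, and mean-squared-error loss are assumptions for the example, not values taken from the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(input_shape=(256, 256, 1)):
    """Small U-Net mapping a phase-contrast image to a fluorescence image."""
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: two downsampling stages.
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D()(c1)

    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(c2)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder with skip connections back to the encoder features.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    u2 = layers.Concatenate()([u2, c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u2)

    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    u1 = layers.Concatenate()([u1, c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    # Linear output: one predicted fluorescence intensity per pixel.
    outputs = layers.Conv2D(1, 1, activation="linear")(c4)
    return tf.keras.Model(inputs, outputs)

model = tiny_unet()
model.compile(optimizer="adam", loss="mse")  # pixel-wise regression loss
```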
352

Comparing CNN methods for detection and tracking of ships in satellite images / Jämförelse av CNN-baserad machine learning för detektion och spårning av fartyg i satellitbilder

Torén, Rickard January 2020 (has links)
Knowing where ships are located is a key factor in supporting safe maritime transports and harbor management, as well as preventing accidents and illegal activities at sea. International solutions for geopositioning in the maritime domain already exist, such as the Automatic Identification System (AIS). However, AIS requires the ships to constantly transmit their location. Real-time imagery based on geostationary satellites has recently been proposed to complement the existing AIS system, making locating and tracking more robust. This thesis investigated and compared two machine learning image analysis approaches, Faster R-CNN and SSD with FPN, for detection and tracking of ships in satellite images. Faster R-CNN is a two-stage model which first proposes regions of interest and then performs detection based on the proposals. SSD is a one-stage model which detects objects directly, with the additional FPN improving detection of objects covering few pixels. The MAritime SATellite Imagery dataset (MASATI) was used for training and evaluation of the candidate models, with 5600 images taken from a wide variety of locations. The TensorFlow Object Detection API was used for the implementation of the two models. The results for detection show that Faster R-CNN achieved 30.3% mean Average Precision (mAP) while SSD with FPN achieved only 0.0005% mAP on the unseen test part of the dataset. This study concluded that Faster R-CNN is a candidate for identifying and tracking ships in satellite images, while SSD with FPN seems less suitable for this task. It is also concluded that the amount of training and the choice of hyper-parameters impacted the results.
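For context on the mAP figures, detections are typically matched to ground-truth ships by intersection over union (IoU); the NumPy sketch below shows that computation, with the 0.5 threshold being the common convention rather than a value reported in the thesis.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two boxes given as [x_min, y_min, x_max, y_max]."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive if it overlaps a ground-truth ship enough.
pred = np.array([10, 10, 50, 40], dtype=float)
truth = np.array([12, 8, 48, 42], dtype=float)
print(iou(pred, truth) >= 0.5)
```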
353

Applying Machine Learning Methods to Predict the Outcome of Shots in Football

Hedar, Sara January 2020 (has links)
The thesis investigates a publicly available dataset which covers more than three million events in football matches. The aim of the study is to train machine learning models capable of modeling the relationship between a shot event and its outcome, that is, to predict whether a football shot will result in a goal or not. By representing the shot in different ways, the aim is to draw conclusions regarding which elements of a shot allow for a good prediction of its outcome. The shot representation was varied both by including different numbers of events preceding the shot and by varying the set of features describing each event. The study shows that the performance of the machine learning models benefits from including events preceding the shot. The highest predictive performance was achieved by a long short-term memory neural network trained on the shot event and the six events preceding the shot. The features found to have the largest positive impact were the precision of the event, the position on the field, and how the player was in contact with the ball. The size of the dataset was also evaluated, and the results suggest that it is sufficiently large for the size of the networks evaluated.
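A minimal sketch, assuming a fixed-length representation of the shot plus its six preceding events, of how such an LSTM classifier could be set up in Keras; the feature count and layer sizes are placeholders, not the author's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN = 7       # the shot event plus the six preceding events
N_FEATURES = 16   # placeholder: position, event type, body part, precision, ...

model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64),                        # summarises the event sequence
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability that the shot is a goal
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```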
354

Evaluating Response Images From Protein Quantification

Engström, Mathias, Olby, Erik January 2020 (has links)
Gyros Protein Technologies develops instruments for automated immunoassays. Fluorescent antibodies are added to samples and excited with a laser. This results in a 16-bit image where the intensity is correlated to the concentration of bound antibody. Artifacts may appear in the images due to dust, fibers, or other problems, which affect the quantification. This project seeks to detect such artifacts automatically by classifying the images as good or bad using Deep Convolutional Neural Networks (DCNNs). To augment the dataset, a simulation approach is used and a simulation program is developed that generates images from the developed simulation models. Several classification models are tested, as well as different techniques used for training. The highest-performing classifier is a VGG16 DCNN, pre-trained on simulated images, which reaches 94.8% accuracy. There are many sub-classes in the bad class, and many of these are heavily underrepresented in both the training and test datasets. This means that not much can be said about the classification power on these sub-classes. The conclusion is therefore that until more of this rare data can be collected, focus should lie on classifying the other, more common examples. Using the approaches from this project, we believe this could result in a high-performing product.
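The following is a minimal sketch of a VGG16-based good/bad image classifier in Keras; it initializes the backbone from ImageNet weights and assumes three-channel 224x224 inputs, whereas the thesis pre-trained on simulated single-channel 16-bit images, so this only illustrates the general transfer-learning pattern.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Assumed input size; 16-bit single-channel images would need to be scaled
# and replicated to three channels before being fed to this backbone.
base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone first, fine-tune later if needed

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 0 = good image, 1 = bad image
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```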
355

ROOM CATEGORIZATION USING SIMULTANEOUS LOCALIZATION AND MAPPING AND CONVOLUTIONAL NEURAL NETWORK

Iman Yazdansepas (9001001) 23 June 2020 (has links)
Robotic industries are growing faster than ever with the demand for and rise of in-home and assistive robots. Such a robot should be able to navigate between different rooms in the house autonomously. For autonomous navigation, the robot needs to build a map of the surrounding unknown environment and localize itself within the map. For home robots, distinguishing between different rooms improves the functionality of the robot. In this research, Simultaneous Localization And Mapping (SLAM) utilizing a LiDAR sensor is used to construct the environment map. LiDAR is more accurate than vision and is not sensitive to light intensity. The SLAM method used to create the map of the environment is Gmapping, one of the robust and user-friendly packages in the Robot Operating System (ROS); it creates an accurate map and requires little computational power. The constructed map is then used for room categorization with a Convolutional Neural Network (CNN), since CNNs are a powerful technique for classifying rooms based on the generated 2D map images. To demonstrate the applicability of the approach, simulations and experiments are designed and performed in a campus and an apartment environment. The results indicate that Gmapping provides an accurate map. For each room used in the experimental design, the Convolutional Neural Network is trained on a dataset of different apartment maps to classify the room that was mapped with Gmapping. The room categorization results are compared with other approaches in the literature using the same dataset to indicate the performance. The classification results show the applicability of using a CNN for room categorization in applications such as assistive robots.
356

LiDAR Point Cloud De-noising for Adverse Weather

Bergius, Johan, Holmblad, Jesper January 2022 (has links)
Light Detection And Ranging (LiDAR) is a hot topic today, primarily because of its vast importance for autonomous vehicles. LiDAR sensors are capable of capturing and identifying objects in the 3D environment. However, a drawback of LiDAR sensors is that they perform poorly under adverse weather conditions. Noise present in LiDAR scans can be divided into random and pseudo-random noise. Random noise can be modeled and mitigated by statistical means. The same approach works on pseudo-random noise, but it is less effective; for this, Deep Neural Nets (DNNs) are better suited. The main goal of this thesis is to investigate how snow can be detected in LiDAR point clouds and filtered out. The dataset used is the Winter Adverse Driving dataSet (WADS). The supervised part contains a comparison between statistical filtering and segmentation-based neural networks and is evaluated on recall, precision, and F1. The supervised approach is expanded by investigating an ensemble approach. The supervised results indicate that neural networks have an advantage over statistical filters, and the best result was obtained from the 3D convolution network with an F1 score of 94.58%. Our ensemble approaches improved the F1 score but did not lead to more snow being removed. We determine that an ensemble approach is a sub-optimal way of increasing the prediction performance and holds the drawback of being more complex. We also investigate an unsupervised approach. The unsupervised networks are evaluated on their ability to find noisy data and correct it. Correcting the LiDAR data means predicting new values for detected noise instead of just removing it. The correctness of such predictions is evaluated manually, with the assistance of metrics like PSNR and SSIM. None of the unsupervised networks produced an acceptable result. The reason behind this negative result is investigated and presented in our conclusion, along with a model that suffers none of the flaws pointed out.
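As an illustration of the statistical baseline that learned filters are typically compared against, the sketch below implements a simple k-nearest-neighbour statistical outlier removal filter in NumPy; the neighbourhood size and threshold are arbitrary and not taken from the thesis.

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Flag points whose mean distance to their k nearest neighbours is unusually large.

    points: (N, 3) array of x, y, z coordinates from a LiDAR scan.
    Returns a boolean mask where True means the point is kept.
    """
    # Brute-force pairwise distances (fine for small clouds; a KD-tree is used in practice).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)

    # Points whose neighbourhood distance exceeds the global mean by too many
    # standard deviations are treated as (snow) noise and dropped.
    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return knn_mean <= threshold

cloud = np.random.rand(200, 3)
keep = statistical_outlier_removal(cloud)
print(keep.sum(), "of", len(cloud), "points kept")
```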
357

Evaluation of the CNN Based Architectures on the Problem of Wide Baseline Stereo Matching / Utvärdering av system för stereomatchning som är baserade på neurala nätverk med faltning

Li, Vladimir January 2016 (has links)
Three-dimensional information is often used in robotics and 3D mapping. There exist several ways to obtain a three-dimensional map. However, the time-of-flight principle used in laser scanners or the structured light utilized by Kinect-like sensors is sometimes not sufficient. In this thesis, we investigate two CNN-based stereo matching methods for obtaining 3D information from a grayscale pair of rectified images. While the state-of-the-art stereo matching method utilizes a Siamese architecture, in this project a two-channel and a two-stream network are trained in an attempt to outperform the state of the art. A set of experiments was performed to find optimal hyperparameters, training the networks with the architectures mentioned above while changing one parameter at a time. After completed training, the networks are evaluated with two criteria: the error rate and the runtime. Due to time limitations, we were not able to find optimal learning parameters. However, by using settings from [17] we train a two-channel network that performs almost at the same level as the state of the art. The error rate on the test data for our best architecture is 2.64%, while the error rate for the state-of-the-art Siamese network is 2.62%. We were not able to achieve better performance than the state of the art, but we believe that it is possible to reduce the error rate further. On the other hand, the state-of-the-art Siamese stereo matching network is more efficient and faster during disparity estimation. Therefore, if time efficiency is prioritized, the Siamese-based network should be considered.
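A minimal sketch of the two-channel idea: the left and right patches are stacked along the channel axis so that a small convolutional network scores how well they match in a single pass, rather than processing them in separate Siamese branches. The patch size and layer widths are assumptions, not the trained configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

PATCH = 9  # side length of the grayscale patches being compared (assumed)

# The two patches are stacked as two input channels, so the network sees
# both views jointly instead of encoding them separately.
model = tf.keras.Sequential([
    layers.Input(shape=(PATCH, PATCH, 2)),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = patches correspond, 0 = they do not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```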
358

Televizní zpravodajství o koronavirové epidemii 2020 jako možný svět / Broadcast coverage of the coronavirus epidemic 2020 as a possible world

Bergerová, Michaela January 2021 (has links)
This diploma thesis explores the theory of possible worlds and its relation to broadcast news during the coronavirus pandemic. It perceives news, especially broadcast news, as a possible world. Using narrative analysis, this thesis describes the characteristics of two possible worlds that arose during the first wave of the coronavirus pandemic on Czech Television and television Prima. The main goal of this research is to examine what kind of possible worlds these two TV stations constructed in their main news programs and to point out how these two possible worlds differed. My diploma thesis should primarily contribute to a clear comparison of how the depiction of the coronavirus pandemic differed between individual television channels in the first half of 2020.
359

Object detection for a robotic lawn mower with neural network trained on automatically collected data

Sparr, Henrik January 2021 (has links)
Machine vision is a hot research topic, with findings being published at a high pace and more and more companies currently developing automated vehicles. Robotic lawn mowers are also increasing in popularity, but most mowers still use relatively simple methods for cutting the lawn. No previous work has been published on machine learning networks that improve between cutting sessions by automatically collecting data and then using it for training. A data acquisition pipeline and a neural network architecture that could help the mower avoid collisions were therefore developed. Nine neural networks were tested, of which a convolutional one reached the highest accuracy. The performance of the data acquisition routine and the networks shows that it is possible to design an object detection model that improves between runs.
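A hypothetical sketch of the kind of automatic data collection the abstract describes: frames captured while the mower drives are labelled by whether a bump-sensor collision follows shortly afterwards. The capture_frame and bumper_pressed callables are placeholders, not the thesis interfaces.

```python
import time

COLLISION_WINDOW = 1.0  # seconds after a frame in which a bump counts as "obstacle"

def collect_labelled_frames(capture_frame, bumper_pressed, n_frames=1000):
    """Collect (image, label) pairs without manual annotation.

    capture_frame() and bumper_pressed() are placeholders for the mower's
    camera and bump-sensor interfaces.
    """
    samples = []
    for _ in range(n_frames):
        frame = capture_frame()
        t0 = time.time()
        hit = False
        while time.time() - t0 < COLLISION_WINDOW:
            if bumper_pressed():
                hit = True
                break
            time.sleep(0.01)
        samples.append((frame, 1 if hit else 0))  # 1 = collision followed the frame
    return samples
```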
360

An evaluation of using a U-Net CNN with a random forest pre-screener : On a dataset of hand-drawn maps provided by länsstyrelsen i Jönköping

Hellgren, Robin, Axelsson, Martin January 2021 (has links)
Much research has been done on the use of machine learning to extract features such as buildings and lakes from satellite imagery, and while this data source is valuable for many use cases, it is limited to time periods in which satellites were used. Historical maps cover a much greater range of time periods, but the viability of using machine learning to extract data from them has not been investigated to any great extent. This case study uses a real-world use case to show the efficacy of using a U-Net convolutional neural network to extract features drawn on hand-drawn maps. By implementing a random forest as a pre-screener to the U-Net, the goal was to filter out noise that could lead to false positives and thereby increase the accuracy of the U-Net. The pre-screener in this study did not perform well on the dataset and did not improve the performance of the U-Net. The U-Net's ability to extrapolate the location of features not explicitly drawn on the map was not clearly established. The results of this study show that the U-Net CNN could be an invaluable tool for quickly extracting data from this typically cumbersome data source, allowing easier access to a wealth of data. The fields of archeology and climate science would find this especially useful.
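A rough sketch of how a random-forest pre-screener can sit in front of a segmentation network: the forest classifies small map tiles on cheap features, and only the accepted tiles are passed on to the U-Net. The tile size and features are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TILE = 64  # assumed side length of the map tiles fed to the pre-screener

def tile_features(tile):
    """Cheap per-tile features for the pre-screener (intensity statistics)."""
    return [tile.mean(), tile.std(), np.percentile(tile, 90)]

def prescreen(train_tiles, train_labels, unseen_tiles):
    """Train the random forest and return only the unseen tiles it flags as interesting."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit([tile_features(t) for t in train_tiles], train_labels)
    keep = rf.predict([tile_features(t) for t in unseen_tiles])
    return [t for t, k in zip(unseen_tiles, keep) if k == 1]

# Tiles the pre-screener keeps would then be segmented by the U-Net.
train_tiles = [np.random.rand(TILE, TILE) for _ in range(20)]
train_labels = np.random.randint(0, 2, size=20)
candidates = [np.random.rand(TILE, TILE) for _ in range(5)]
print(len(prescreen(train_tiles, train_labels, candidates)), "tiles passed on to the U-Net")
```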
