  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
151

Hluboké neuronové sítě a jejich využití při zpracování ekonomických dat / Deep neural networks and their application for economic data processing

Witzany, Tomáš January 2017 (has links)
Title: Deep neural networks and their application for economic data processing Author: Bc. Tomáš Witzany Department: Department of Theoretical Computer Science and Mathematical Logic Supervisor: Doc. RNDr. Iveta Mrázová, CSc., Department of Theoretical Computer Science and Mathematical Logic Abstract: Analysis of macroeconomic time series is key to the informed decisions of national policy makers. Economic analysis has a rich history; however, when it comes to modelling non-linear dependencies, many issues in this field remain unresolved. One possible set of tools for time-series analysis is machine learning methods. Of these, neural networks are among the methods commonly used to model non-linear dependencies. This work studies different types of deep neural networks and their applicability to different analysis tasks, including GDP prediction and country classification. The studied models include multi-layered neural networks, LSTM networks, convolutional networks and Kohonen maps. Historical data on macroeconomic development across over 190 different countries over the past fifty years is presented and analysed. This data is then used to train various models using the mentioned machine learning methods. To run the experiments we used the services of the computer center MetaCentrum....
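The GDP-prediction task in the abstract above amounts to supervised learning on sliding windows of a time series. A minimal sketch of that data framing (the window length and function name are illustrative, not taken from the thesis):

```python
import numpy as np

def make_windows(series, lookback):
    """Slice a 1-D series into (X, y) pairs for one-step-ahead prediction,
    the usual input format for LSTM or feed-forward forecasting models."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y
```

Each row of `X` holds `lookback` consecutive observations, and `y` holds the next value to predict.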
152

Image Augmentation to Create Lower Quality Images for Training a YOLOv4 Object Detection Model

Melcherson, Tim January 2020 (has links)
Research in the Arctic is of ever-growing importance, and modern technology is used in new ways to map and understand this very complex region and how it is affected by climate change. Here, animals and vegetation are tightly coupled with their environment in a fragile ecosystem, and when the environment undergoes rapid changes it risks damaging these ecosystems severely. Understanding what kinds of data have potential to be used in artificial intelligence can be important, as many research stations have data archives from decades of work in the Arctic. In this thesis, a YOLOv4 object detection model has been trained on two classes of images to investigate the performance impact of disturbances in the training data set. An expanded data set was created by augmenting the initial data to contain various disturbances. A model was successfully trained on the augmented data set, and a correlation between worse performance and the presence of noise was detected, but changes in saturation and altered colour levels seemed to have less impact than expected. Reducing noise in gathered data is seemingly of greater importance than enhancing images with lacking colour levels. Further investigation with a larger and more thoroughly processed data set is required to gain a clearer picture of the impact of the various disturbances.
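The disturbances the abstract mentions (noise and altered saturation levels) can be sketched as simple NumPy augmentations; this is a generic illustration, since the thesis's actual augmentation pipeline is not specified here:

```python
import numpy as np

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise to a uint8 RGB image."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def scale_saturation(img, factor):
    """Scale saturation by mixing each pixel with its grayscale value:
    factor 0 gives grayscale, 1 leaves the image unchanged, >1 oversaturates."""
    gray = img.astype(np.float64).mean(axis=2, keepdims=True)
    out = gray + factor * (img.astype(np.float64) - gray)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Applying these with varying `sigma` and `factor` yields the kind of expanded, deliberately degraded training set the abstract describes.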
153

Classification of tree species from 3D point clouds using convolutional neural networks

Wiklander, Marcus January 2020 (has links)
In forest management, knowledge about a forest's distribution of tree species is key. Being able to automate tree species classification for large forest areas is of great interest, since doing it manually is tedious and costly labour. In this project, the aim was to investigate the efficiency of classifying individual tree species (pine, spruce and deciduous forest) from 3D point clouds acquired by airborne laser scanning (ALS), using convolutional neural networks. The raw data consisted of 3D point clouds and photographic images of forests in northern Sweden, collected from a helicopter flying at low altitude. The point cloud of each individual tree was connected to its representation in the photos, which allowed manual labeling of training data for the convolutional neural networks. The training data consisted of labels and 2D projections created from the point clouds, represented as images. Two different convolutional neural networks were trained and tested: an adaptation of the LeNet architecture, and the ResNet architecture. Both networks reached an accuracy close to 98 %, with the LeNet adaptation having a slightly lower loss score for both validation and test data compared to that of ResNet. Confusion matrices for both networks showed similar F1 scores for all tree species, between 97 % and 98 %. The accuracies computed for both networks were higher than those achieved in similar studies using ALS data to classify individual tree species. However, the results in this project were never tested against a true population sample to confirm the accuracy. To conclude, the use of convolutional neural networks is indeed an efficient method for the classification of tree species, but further studies on unbiased data are needed to validate these results.
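The 2D projections of point clouds mentioned above can be illustrated with a minimal top-down "height image" rasterization; grid size and function name are hypothetical, not taken from the thesis:

```python
import numpy as np

def height_image(points, grid=32):
    """Project a 3-D point cloud of shape (N, 3) to a top-down raster where
    each cell holds the maximum height (z) of the points that fall in it."""
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)            # avoid division by zero
    ij = np.minimum(((xy - lo) / span * grid).astype(int), grid - 1)
    img = np.zeros((grid, grid))
    for (i, j), z in zip(ij, points[:, 2]):
        img[i, j] = max(img[i, j], z)
    return img
```

Images of this kind (one or more projection directions per tree) can then be fed to a standard 2D CNN such as the LeNet and ResNet variants the abstract compares.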
154

Výpočet mapy disparity ze stereo obrazu / Disparity Map Estimation from Stereo Image

Tábi, Roman January 2017 (has links)
This master's thesis focuses on disparity map estimation using a convolutional neural network. It discusses the problem of using convolutional neural networks for image comparison and for disparity computation from a stereo image, as well as existing approaches to the given problem. It also proposes and implements a system that consists of a convolutional neural network that measures the similarity between two image patches, together with filtering and smoothing methods to improve the resulting disparity map. Experiments show that the highest-quality disparity maps are computed using a CNN on input patches of size 9x9 pixels, combined with a matching cost aggregation and correction algorithm and a bilateral filter.
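The patch-similarity pipeline described above can be sketched with a classical sum-of-squared-differences cost standing in for the learned CNN similarity (a stand-in for illustration only; the thesis replaces this cost with a trained network and adds aggregation and filtering):

```python
import numpy as np

def disparity_wta(left, right, patch=3, max_disp=8):
    """Winner-take-all disparity from grayscale stereo images using a
    sum-of-squared-differences patch cost (the learned CNN similarity
    in the thesis plays the role of this cost function)."""
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((ref - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

A real pipeline would follow this with cost aggregation and edge-preserving smoothing such as the bilateral filter the abstract mentions.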
155

Sémantický popis obrazovky embedded zařízení / Semantic description of the embedded device screen

Horák, Martin January 2020 (has links)
This master's thesis deals with the detection of user interface elements in an image of a printer display using convolutional neural networks. The theoretical part surveys currently used architectures for object detection. The practical part covers the creation of the image gallery, and the training and evaluation of selected models using the TensorFlow Object Detection API. The conclusion discusses the suitability of the trained models for the given task.
156

Evoluční návrh konvolučních neuronových sítí / Evolutionary Design of Convolutional Neural Networks

Piňos, Michal January 2020 (has links)
The aim of this work is to design and implement a program for the automated design of convolutional neural networks (CNNs) using evolutionary computing techniques. From a practical point of view, this approach reduces the requirements on the human factor in the design of CNN architectures, and thus eliminates the tedious and laborious process of manual design. This work utilizes a special form of genetic programming, called Cartesian genetic programming, which uses a graph representation to encode candidate solutions. This technique enables the user to parameterize the CNN search process and focus on architectures that are interesting in terms of the computational units used, accuracy or number of parameters. The proposed approach was tested on the standardized CIFAR-10 dataset, which is often used by researchers to compare the performance of their CNNs. The performed experiments showed that this approach has both research and practical potential, and the implemented program opens up new possibilities in automated CNN design.
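The Cartesian genetic programming encoding referred to above can be sketched in a few lines. Note that the function set here is plain arithmetic purely for illustration; in CNN design the node functions would instead encode layers such as convolution or pooling, and fitness would be validation accuracy:

```python
# Minimal CGP sketch: a genome is a list of node genes
# (function_index, input_a, input_b), where the input fields index
# earlier nodes and the program inputs occupy the first indices.
def eval_cgp(genome, outputs, x):
    """Decode and evaluate a CGP graph on the input values x."""
    funcs = [lambda a, b: a + b,        # 0: addition
             lambda a, b: a * b,        # 1: multiplication
             lambda a, b: max(a, b)]    # 2: maximum
    vals = list(x)                      # node values, seeded with the inputs
    for f, a, b in genome:
        vals.append(funcs[f](vals[a], vals[b]))
    return [vals[i] for i in outputs]
```

Because genes only ever reference earlier nodes, the encoded graph is acyclic and unused nodes are simply ignored, which is what makes mutation-driven search over such genomes cheap.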
157

Reconstruction of the ionization history from 21cm maps with deep learning

Mangena January 2020 (has links)
Masters of Science / Upcoming and ongoing 21cm surveys, such as the Square Kilometre Array (SKA), the Hydrogen Epoch of Reionization Array (HERA) and the Low Frequency Array (LOFAR), will enable imaging of the neutral hydrogen distribution on cosmological scales in the early Universe. These experiments are expected to generate huge imaging datasets that will encode more information than the power spectrum. This provides an alternative, unique way to constrain the astrophysical and cosmological parameters, which might break the degeneracies in power-spectrum analysis. The global history of reionization remains fairly unconstrained. In this thesis, we explore the viability of directly using the 21cm images to reconstruct and constrain the reionization history. Using convolutional neural networks (CNNs), we create a fast estimator of the global ionization fraction from the 21cm images produced by our Large Semi-numerical Simulation (SimFast21). Our estimator is able to efficiently recover the ionization fraction (xHII) at several redshifts, z = 7, 8, 9, 10, with an accuracy of 99% as quantified by the coefficient of determination R², without being given any additional information about the 21cm maps. This approach, contrary to estimations based on the power spectrum, is model independent. When adding thermal noise and instrumental effects from these 21cm arrays, the results are sensitive to the foreground removal level, affecting the recovery of high neutral fractions. We also observe a similar trend when combining all redshifts, but with improved accuracy. Our analysis can easily be extended to place additional constraints on other astrophysical parameters such as the photon escape fraction. This work represents a step forward in extracting the astrophysical and cosmological information from upcoming 21cm surveys.
158

Automatic Dispatching of Issues using Machine Learning / Automatisk fördelning av ärenden genom maskininlärning

Bengtsson, Fredrik, Combler, Adam January 2019 (has links)
Many software companies use issue tracking systems to organize their work. However, when working on large projects across multiple teams, the problem arises of finding the correct team to solve a certain issue. One team might detect a problem which must be solved by another team. This takes time from employees tasked with finding the correct team, and automating the dispatching of these issues can bring large benefits to the company. In this thesis, machine learning methods, mainly convolutional neural networks (CNNs) for text classification, have been applied to this problem. For natural language processing, both word- and character-level representations are commonly used. The results in this thesis suggest that the CNN learns different information depending on whether a word- or character-level representation is used. Furthermore, it was concluded that the CNN models performed at levels similar to the classical support vector machine on this task. When compared to a human expert working on dispatching issues, the best CNN model performed at a similar level when given the same information. The high throughput of a computer model therefore suggests that automating this task is very much possible.
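The character-level representation and convolutional filtering discussed above can be sketched in NumPy; the alphabet, sizes and function names here are illustrative, not the thesis's configuration:

```python
import numpy as np

def char_one_hot(text, alphabet="abcdefghijklmnopqrstuvwxyz ", max_len=32):
    """Character-level input representation: a (max_len, |alphabet|) one-hot
    matrix, with characters outside the alphabet left as all-zero rows."""
    idx = {c: i for i, c in enumerate(alphabet)}
    mat = np.zeros((max_len, len(alphabet)))
    for pos, ch in enumerate(text[:max_len]):
        if ch in idx:
            mat[pos, idx[ch]] = 1.0
    return mat

def conv1d_valid(x, kernels):
    """Valid 1-D convolution of a (T, C) input with (K, k, C) kernels,
    returning a (K, T - k + 1) feature map (one row per learned filter)."""
    K, k, C = kernels.shape
    T = x.shape[0]
    out = np.empty((K, T - k + 1))
    for j in range(T - k + 1):
        out[:, j] = np.tensordot(kernels, x[j:j + k], axes=([1, 2], [0, 1]))
    return out
```

A word-level model would replace the one-hot rows with learned word embeddings; the convolution itself is unchanged, which is why the two representations can feed the same CNN architecture.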
159

Three problems in imaging systems: texture re-rendering in online decoration design, a novel monochrome halftoning algorithm, and face set recognition with convolutional neural networks

Tongyang Liu (5929991) 25 June 2020 (has links)
In this thesis, studies on three problems in imaging systems are discussed.

The first problem deals with re-rendering segments of online indoor room images with preferred textures through websites, to try out new decoration ideas. Previous methods require too much manual positioning and alignment. In the thesis, a novel approach is presented that automatically achieves a natural outcome with respect to the indoor room geometry layout.

For the second problem, laser electrophotographic systems need a digital halftoning algorithm that can deal with unequal printing resolution, since most halftoning algorithms assume equal resolution. In the thesis, a novel monochrome halftoning algorithm is presented to render continuous-tone images with a limited number of tone levels for laser printers with unequal printing resolution.

For the third problem, a novel face set recognition method is presented. Face set recognition is important for face video analysis and face clustering in multiple imaging systems. It is very challenging given the variation in image sharpness, face direction and illumination across frames, as well as the number and order of images in the face set. To tackle the problem, a novel convolutional neural network system is presented that generates a fixed-dimensional compact feature representation for the face set. The system collects information from all the images in the set while placing emphasis on more frontal and sharper face images, and is invariant to the number and order of images. The generated feature representations allow direct, immediate similarity computation for face sets, and thus can be used directly for recognition. Experimental results show that our method outperforms other state-of-the-art methods on the public test dataset.
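For context on the second problem, a standard equal-resolution error-diffusion halftoner (Floyd-Steinberg) is sketched below. The thesis's contribution is an algorithm for the unequal-resolution case, which this sketch does not implement:

```python
import numpy as np

def floyd_steinberg(img):
    """Binary halftone of a grayscale image in [0, 1] via error diffusion:
    each pixel is thresholded and its quantization error is pushed onto
    unprocessed neighbours, preserving the local average tone."""
    f = img.astype(np.float64).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            new = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - new
            out[y, x] = new
            # classic Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out
```

Handling unequal printing resolution would require an anisotropic error-distribution scheme, which is the gap the thesis addresses.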
160

Efektivní implementace hlubokých neuronových sítí / Efficient implementation of deep neural networks

Kopál, Jakub January 2020 (has links)
In recent years, algorithms in the area of object detection have constantly been improving. The success of these algorithms has reached a level where much of the development focuses on increasing speed at the expense of accuracy. As a result of recent improvements in deep learning and of new hardware architectures optimized for deep learning models, it is possible to detect objects in an image several hundred times per second using only embedded and mobile devices. The main objective of this thesis is to study and summarize the most important methods in the area of efficient object detection and apply them to a given real-world problem. Using state-of-the-art methods, we developed a tracking-by-detection algorithm, based on our own object detection models, that tracks transport vehicles in real time using embedded and mobile devices.
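The tracking-by-detection idea above pairs a per-frame detector with frame-to-frame association of boxes. A minimal greedy IoU association step might look like this (a generic sketch, not the thesis's implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def associate(tracks, detections, thresh=0.3):
    """Greedily match existing tracks to new detections by best IoU,
    returning (track_id, detection_index) pairs above the threshold."""
    matches, used = [], set()
    for t_id, t_box in tracks.items():
        best, best_j = thresh, None
        for j, d in enumerate(detections):
            score = iou(t_box, d)
            if j not in used and score > best:
                best, best_j = score, j
        if best_j is not None:
            matches.append((t_id, best_j))
            used.add(best_j)
    return matches
```

Production trackers typically replace the greedy loop with Hungarian assignment and add motion prediction, but the IoU gating shown here is the core of the association step.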
