191

Convolutional Neural Network FPGA-accelerator on Intel DE10-Standard FPGA

Tianxu, Yue January 2021 (has links)
Convolutional neural networks (CNNs) have been used extensively in many applications, such as face and speech recognition, image search and classification, and autonomous driving. Hence, CNN accelerators have become a trending research topic. Graphics processing units (GPUs) are widely applied in CNN accelerators; however, field-programmable gate arrays (FPGAs) offer higher energy and resource efficiency than GPUs, and high-level synthesis tools based on the Open Computing Language (OpenCL) can shorten the verification and implementation cycle for FPGAs. In this project, PipeCNN [1] is implemented on the Intel DE10-Standard FPGA. This OpenCL design accelerates AlexNet through the interaction between the Advanced RISC Machine (ARM) processor and the FPGA. PipeCNN optimizations based on memory reads and convolution are then analyzed and discussed.
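As a point of reference for the operation being accelerated, the following is a minimal NumPy sketch of the direct (sliding-window) convolution that OpenCL CNN accelerators of this kind pipeline in hardware; the layer shape and stride are illustrative assumptions, and the code is not taken from PipeCNN.

```python
import numpy as np

def conv2d_direct(x, w, stride=1):
    """Direct convolution: x is (C_in, H, W), w is (C_out, C_in, K, K).

    The nested loops below are what an OpenCL kernel unrolls and pipelines on
    the FPGA; memory-read optimization amounts to reusing the input window and
    weight tiles instead of re-fetching them from external memory.
    """
    c_out, c_in, k, _ = w.shape
    _, h, wd = x.shape
    h_out = (h - k) // stride + 1
    w_out = (wd - k) // stride + 1
    y = np.zeros((c_out, h_out, w_out))
    for co in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                window = x[:, i*stride:i*stride+k, j*stride:j*stride+k]
                y[co, i, j] = np.sum(window * w[co])
    return y

# Small illustrative layer (real AlexNet layers are larger): 3 input channels,
# 16 output channels, 3x3 kernels.
y = conv2d_direct(np.random.rand(3, 32, 32), np.random.rand(16, 3, 3, 3))
print(y.shape)  # (16, 30, 30)
```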
192

Object Identification Using Mobile Device for Visually Impaired Person

Akarapu, Deepika 09 August 2021 (has links)
No description available.
193

Applications of Graph Convolutional Networks and DeepGCNs in Point Cloud Part Segmentation and Upsampling

Abualshour, Abdulellah 18 April 2020 (has links)
Graph convolutional networks (GCNs) have shown promising results in learning from point cloud data. Applications of GCNs include point cloud classification, point cloud segmentation, point cloud upsampling, and more. Recently, the introduction of deep graph convolutional networks (DeepGCNs) allowed GCNs to go deeper, resulting in better graph learning while avoiding the vanishing gradient problem. By adapting impactful methods from convolutional neural networks (CNNs), such as residual connections, dense connections, and dilated convolutions, DeepGCNs allow GCNs to learn better from non-Euclidean data. In addition, deep learning methods have proved very effective in the task of point cloud upsampling. Unlike traditional optimization-based methods, deep learning-based methods for point cloud upsampling do not rely on priors or hand-crafted features to learn how to upsample point clouds. In this thesis, I discuss the impact and show the performance results of DeepGCNs in the task of point cloud part segmentation on the PartNet dataset. I also illustrate the significance of using GCNs as upsampling modules in the task of point cloud upsampling by introducing two novel upsampling modules: Multi-branch GCN and Clone GCN. I show quantitatively and qualitatively the performance of these novel and versatile upsampling modules when evaluated on a newly proposed standardized dataset, PU600, which is the largest and most diverse point cloud upsampling dataset currently in the literature.
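To make the residual-connection idea concrete, here is a minimal NumPy sketch of a single residual graph-convolution layer over a k-NN graph of points; the max-aggregation, layer sizes, and neighbor count are illustrative assumptions, not the exact DeepGCN or Multi-branch/Clone GCN formulations.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors for every point: (N, 3) -> (N, k)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]  # skip self (distance 0)

def res_gcn_layer(feats, nbrs, w):
    """One residual GCN layer: aggregate neighbor features, transform, add input.

    feats: (N, F) node features, nbrs: (N, k) neighbor indices, w: (F, F) weights.
    The residual (skip) connection is what lets DeepGCN-style networks go deep
    without vanishing gradients.
    """
    agg = feats[nbrs].max(axis=1)      # max-aggregation over neighbors, (N, F)
    out = np.maximum(agg @ w, 0.0)     # linear transform + ReLU
    return feats + out                 # residual connection

pts = np.random.rand(1024, 3)
x = np.random.rand(1024, 16)
nbrs = knn_indices(pts, k=8)
x = res_gcn_layer(x, nbrs, np.random.rand(16, 16) * 0.1)
print(x.shape)  # (1024, 16)
```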
194

Depth Estimation Using Adaptive Bins via Global Attention at High Resolution

Bhat, Shariq 21 April 2021 (has links)
We address the problem of estimating a high-quality dense depth map from a single RGB input image. We start from a baseline encoder-decoder convolutional neural network architecture and ask how global processing of information can help improve overall depth estimation. To this end, we propose a transformer-based architecture block that divides the depth range into bins whose center values are estimated adaptively per image. The final depth values are estimated as linear combinations of the bin centers. We call our new building block AdaBins. Our results show a decisive improvement over the state of the art on several popular depth datasets across all metrics. We also validate the effectiveness of the proposed block with an ablation study.
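The final step described above, depth as a linear combination of adaptively estimated bin centers, can be sketched as follows; the construction of bin centers from normalized bin widths and the depth range and shapes used are illustrative assumptions based on the abstract, not code from the thesis.

```python
import numpy as np

def depth_from_bins(bin_widths, probs, d_min=0.1, d_max=10.0):
    """Combine per-image bin centers with per-pixel bin probabilities.

    bin_widths: (B,) non-negative widths predicted per image.
    probs:      (H, W, B) per-pixel probabilities over the B bins (sum to 1).
    Returns an (H, W) depth map as a linear combination of bin centers.
    """
    widths = bin_widths / bin_widths.sum()               # normalize over the depth range
    edges = d_min + (d_max - d_min) * np.cumsum(widths)  # right edges of the bins
    edges = np.concatenate([[d_min], edges])
    centers = 0.5 * (edges[:-1] + edges[1:])             # (B,) adaptive bin centers
    return probs @ centers                               # (H, W) depth map

probs = np.random.dirichlet(np.ones(64), size=(240, 320))  # (240, 320, 64)
depth = depth_from_bins(np.random.rand(64), probs)
print(depth.shape, depth.min() >= 0.1, depth.max() <= 10.0)
```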
195

Automatický odhad nadmořské výšky z obrazu / Altitude Estimation from an Image

Vašíček, Jan January 2015 (has links)
This thesis is concerned with automatic altitude estimation from a single landscape photograph, a task I solved using convolutional neural networks. Since no suitable training dataset with image altitude information was available, I had to create a new one. To gauge human performance on the altitude estimation task, an experiment with 100 subjects was conducted, measuring the accuracy of human estimates of camera altitude from an image. The average estimation error of the subjects was 879 m. An automatic system based on convolutional neural networks outperforms humans with an average elevation error of 712 m. The proposed system can be used in more complex scenarios such as visual camera geo-localization.
196

The Peru approach against the COVID-19 infodemic: Insights and strategies

Alvarez-Risco, Aldo, Mejia, Christian R., Delgado-Zegarra, Jaime, Del-Aguila-Arcentales, Shyla, Arce-Esquivel, Arturo A., Valladares-Garrido, Mario J., Del Portal, Mauricio Rosas, Villegas, León F., Curioso, Walter H., Sekar, M. Chandra, Yáñez, Jaime A. 01 August 2020 (has links)
The COVID-19 epidemic has spawned an "infodemic," with excessive and unfounded information that hinders an appropriate public health response. This perspective describes a selection of COVID-19 fake news that originated in Peru and the government's response to this information. Unlike other countries, Peru was relatively successful in controlling the infodemic, possibly because of the implementation of prison sentences for persons who created and shared fake news. We believe that similar actions by other countries, in collaboration with social media companies, may offer a solution to the infodemic problem. / Peer reviewed
197

Vizuální paměť při vnímání prototypických scén / Visual Memory in the perception of prototypical scenes

Děchtěrenko, Filip January 2019 (has links)
To operate in the world around us, we need to store visual information for further processing. Although we are able to memorize a vast number of visual scenes (photographs of the outside world), it remains an open question how we represent these scenes in memory. Research shows that perception of and memory for visual scenes is a complex problem that requires contributions from many subfields of vision science. In this work we focused on visual scene memory and the creation of perceptual prototypes. Using convolutional neural networks, we defined the similarity of scenes in a scene space, which we used in two experiments. In the first experiment, we validated this space using an odd-scene-detection paradigm. In the second experiment, using the Deese-Roediger-McDermott paradigm, we verified the creation of false memories and thus of visual prototypes. The results show that people intuitively understand the scene space (Experiment 1) and that a visual prototype is created even for complex stimuli such as scenes. The results have wide application both for machine evaluation of image similarity and for visual memory research.
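As an illustration of how a similarity space over scenes can be defined with a convolutional network, here is a minimal sketch that compares scenes by the cosine similarity of CNN feature vectors; the feature extractor below is a stand-in (a fixed random projection), not the network used in the thesis.

```python
import numpy as np

def cnn_features(image):
    """Stand-in for a pretrained CNN's penultimate-layer activations.

    In practice this would be, e.g., pooled features of an ImageNet-trained
    network; a fixed random projection of the pixels keeps the sketch
    self-contained.
    """
    rng = np.random.default_rng(0)                 # fixed projection for repeatability
    proj = rng.standard_normal((image.size, 512))
    return image.ravel() @ proj                    # (512,) feature vector

def scene_similarity(img_a, img_b):
    """Cosine similarity between the CNN feature vectors of two scenes."""
    fa, fb = cnn_features(img_a), cnn_features(img_b)
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))

a, b = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
print(scene_similarity(a, b))
```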
198

A Reward-based Algorithm for Hyperparameter Optimization of Neural Networks / En Belöningsbaserad Algoritm för Hyperparameteroptimering av Neurala Nätverk

Larsson, Olov January 2020 (has links)
Machine learning, with its wide range of applications, is becoming increasingly prevalent in both academia and industry. This thesis focuses on two machine learning methods: convolutional neural networks and reinforcement learning. Convolutional neural networks have seen great success in various applications, for both classification and regression problems, in a diverse range of fields, e.g. vision for self-driving cars or facial recognition. These networks are built on a set of trainable weights optimized on data and a set of hyperparameters chosen by the designer of the network, which remain constant during training. For the network to perform well, the hyperparameters have to be optimized separately. The goal of this thesis is to investigate the use of reinforcement learning as a method for optimizing hyperparameters in convolutional neural networks built for classification problems. The reinforcement learning methods used are tabular Q-learning and a new Q-learning-inspired algorithm denoted max-table. These algorithms have been tested with different exploration policies based on each hyperparameter value's covariance, precision, or relevance to the performance metric. The reinforcement learning algorithms were mostly tested on the CIFAR-10 and Fashion-MNIST datasets against a baseline set by random search. While the Q-learning algorithm was not able to perform better than random search, max-table performed better than random search 50% of the time on both datasets. Hyperparameter-based exploration policies using covariance and relevance were shown to decrease the optimizers' performance. No significant difference was found between a hyperparameter-based exploration policy using precision and an equally distributed exploration policy. / Maskininlärning och dess många tillämpningsområden blir vanligare i både akademin och industrin. Den här uppsatsen fokuserar på två maskininlärningsmetoder, faltande neurala nätverk och förstärkningsinlärning. Faltande neurala nätverk har sett stora framgångar inom olika applikationsområden både för klassifieringsproblem och regressionsproblem inom diverse fält, t.ex. syn för självkörande bilar eller ansiktsigenkänning. Dessa nätverk är uppbyggda på en uppsättning av tränbara parameterar men optimeras på data, samt en uppsättning hyperparameterar bestämda av designern och som hålls konstanta vilka behöver optimeras separat för att nätverket ska prestera bra. Målet med denna uppsats är att utforska användandet av förstärkningsinlärning som en metod för att optimera hyperparameterar i faltande neurala nätverk gjorda för klassifieringsproblem. De förstärkningsinlärningsmetoder som använts är en tabellarisk "Q-learning" samt en ny "Q-learning" inspirerad metod benämnd "max-table". Dessa algoritmer har testats med olika handlingsmetoder för utforskning baserade på hyperparameterarnas värdens kovarians, precision eller relevans gentemot utvärderingsmetriken. Förstärkningsinlärningsalgoritmerna var i största del testade på dataseten CIFAR10 och MNIST fashion och jämförda mot en baslinje satt av en slumpmässig sökning. Medan "Q-learning"-algoritmen inte kunde visas prestera bättre än den slumpmässiga sökningen, kunde "max-table" prestera bättre på 50% av tiden på både dataseten. De handlingsmetoder för utforskning som var baserade på kovarians eller relevans visades minska algoritmens prestanda. Ingen signifikant skillnad kunde påvisas mellan en handlingsmetod baserad på hyperparametrarnas precision och en jämnt fördelad handlingsmetod för utforsking.
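For context, below is a minimal, single-step (bandit-style) simplification of tabular Q-learning over discrete hyperparameter choices; the search space, the reward (validation accuracy stand-in), and the epsilon-greedy exploration are illustrative assumptions, not the max-table algorithm or the exploration policies evaluated in the thesis.

```python
import random

# Illustrative discrete search space: each hyperparameter has a few candidate values.
space = {"lr": [1e-3, 1e-2, 1e-1], "batch": [32, 64, 128]}

def evaluate(config):
    """Stand-in for training a CNN and returning its validation accuracy."""
    return random.random()  # replace with an actual training run

q = {(h, i): 0.0 for h, vals in space.items() for i in range(len(vals))}
alpha, epsilon = 0.5, 0.2

for episode in range(50):
    # Build a configuration by choosing one value index per hyperparameter.
    choice = {}
    for h, vals in space.items():
        if random.random() < epsilon:                        # explore
            choice[h] = random.randrange(len(vals))
        else:                                                # exploit best-known value
            choice[h] = max(range(len(vals)), key=lambda i: q[(h, i)])
    reward = evaluate({h: space[h][i] for h, i in choice.items()})
    for h, i in choice.items():                              # tabular Q-update toward reward
        q[(h, i)] += alpha * (reward - q[(h, i)])

best = {h: space[h][max(range(len(space[h])), key=lambda i: q[(h, i)])] for h in space}
print("best configuration found:", best)
```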
199

Evaluating Tangent Spaces, Distances, and Deep Learning Models to Develop Classifiers for Brain Connectivity Data

Michael Siyuan Wang (9193706) 03 August 2020 (has links)
A better, more optimized processing pipeline for functional connectivity (FC) data will likely accelerate practical advances within the field of neuroimaging. When using correlation-based measures of FC, researchers have recently employed a few data-driven methods to maximize its predictive power. In this study, we apply several of these post-processing methods to task, twin, and subject identification problems. First, we employ PCA reconstruction of the original dataset, which has been used successfully to maximize subject-level identifiability. We show there is a dataset-dependent optimal PCA reconstruction for task and twin identification. Next, we analyze FCs in their native geometry using tangent space projection with various mean covariance reference matrices. We demonstrate that tangent projection of the original FCs can drastically increase subject and twin identification rates. For example, the identification rate of 106 MZ twin pairs increased from 0.487 for the original FCs to 0.943 after tangent projection with the logarithmic Euclidean reference matrix. We also use Schaefer's variable parcellation sizes to show that increasing parcellation granularity generally increases twin and subject identification rates. Finally, we show that our custom convolutional neural network classifier achieves an average task identification rate of 0.986, surpassing state-of-the-art results. These post-processing methods are promising for future research in functional connectome predictive modeling and, if optimized further, can likely be extended to clinical applications.
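A minimal sketch of the tangent-space projection step is shown below, mapping each FC (covariance/correlation) matrix into the tangent space at a reference matrix via the matrix logarithm; the reference choice (a log-Euclidean mean) and the matrix sizes are illustrative, not the exact pipeline of the thesis.

```python
import numpy as np
from scipy.linalg import logm, expm, sqrtm, inv

def tangent_project(fc_mats, ref):
    """Project SPD functional-connectivity matrices to the tangent space at `ref`.

    Each matrix C is whitened by the reference and mapped with the matrix log:
    S = logm(ref^{-1/2} C ref^{-1/2}); the upper triangle of S is the feature vector.
    """
    w = inv(sqrtm(ref))
    iu = np.triu_indices(ref.shape[0])
    return np.array([logm(w @ c @ w)[iu].real for c in fc_mats])

def log_euclidean_mean(fc_mats):
    """Log-Euclidean mean: matrix exponential of the averaged matrix logarithms."""
    return expm(np.mean([logm(c) for c in fc_mats], axis=0)).real

# Illustrative SPD matrices standing in for parcellated FC matrices (100 regions).
rng = np.random.default_rng(0)
fcs = []
for _ in range(10):
    a = rng.standard_normal((100, 400))
    fcs.append(a @ a.T / 400 + 1e-3 * np.eye(100))
feats = tangent_project(fcs, log_euclidean_mean(fcs))
print(feats.shape)  # (10, 5050) upper-triangle features per subject
```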
200

The Effect of Batch Normalization on Deep Convolutional Neural Networks / Effekten av batch normalization på djupt faltningsneuronnät

Schilling, Fabian January 2016 (has links)
Batch normalization is a recently popularized method for accelerating the training of deep feed-forward neural networks. Apart from speed improvements, the technique reportedly enables the use of higher learning rates, less careful parameter initialization, and saturating nonlinearities. Its authors note that the precise effect of batch normalization on neural networks remains an area for further study, especially regarding gradient propagation. Our work compares the convergence behavior of batch-normalized networks with that of networks lacking such normalization. We train both a small multi-layer perceptron and a deep convolutional neural network on four popular image datasets. By systematically altering critical hyperparameters, we isolate the effects of batch normalization both in general and with respect to these hyperparameters. Our experiments show that batch normalization indeed has positive effects on many aspects of neural networks, but we cannot confirm significant improvements in convergence speed, especially when wall time is taken into account. Overall, batch-normalized models achieve higher validation and test accuracies on all datasets, which we attribute to the regularizing effect of batch normalization and more stable gradient propagation. Given these results, the use of batch normalization is generally advised, since it prevents model divergence and may increase convergence speed through higher learning rates. Regardless of these properties, we still recommend the use of variance-preserving weight initialization, as well as rectifiers over saturating nonlinearities. / Batch normalization är en metod för att påskynda träning av djupa framåtmatande neuronnät som nyligen blivit populär. Förutom hastighetsförbättringar så tillåter metoden enligt uppgift högre träningshastigheter, mindre noggrann parameterinitiering och mättande olinjäriteter. Författarna noterar att den exakta effekten av batch normalization på neuronnät fortfarande är ett område som kräver ytterligare studier, särskilt när det gäller deras gradient-fortplantning. Vårt arbete jämför konvergensbeteende mellan nätverk med och utan batch normalization. Vi träner både en liten flerlagersperceptron och ett djupt faltningsneuronnät på fyra populära bilddatamängder. Genom att systematiskt ändra kritiska hyperparametrar isolerar vi effekterna från batch normalization både i allmänhet och med avseende på dessa hyperparametrar. Våra experiment visar att batch normalization har positiva effekter på många aspekter av neuronnät, men vi kan inte bekräfta att det ger betydelsefullt snabbare konvergens, speciellt när väggtiden beaktas. Allmänt så uppnår modeller med batch normalization högre validerings- och testträffsäkerhet på alla datamängder, vilket vi tillskriver till dess reglerande effekt och mer stabil gradientfortplantning. På grund av dessa resultat är användningen av batch normalization generellt rekommenderat eftersom det förhindrar modelldivergens och kan öka konvergenshastigheter genom högre träningshastigheter. Trots dessa egenskaper rekommenderar vi fortfarande användning av varians-bevarande viktinitiering samt likriktare istället för mättande olinjäriteter.
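For reference, a minimal NumPy sketch of the batch-normalization forward pass at training time is shown below; the layer shape, momentum, and epsilon are illustrative defaults, not settings from the thesis experiments.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, running_mean, running_var,
                       momentum=0.9, eps=1e-5):
    """Batch normalization over a mini-batch x of shape (N, D) at training time.

    Activations are normalized with batch statistics, then scaled and shifted by
    the learnable gamma/beta; running statistics are updated for use at test time.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)        # zero mean, unit variance per feature
    out = gamma * x_hat + beta                   # learnable scale and shift
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var

x = np.random.randn(128, 64)                     # a mini-batch of 128 activations
gamma, beta = np.ones(64), np.zeros(64)
out, rm, rv = batch_norm_forward(x, gamma, beta, np.zeros(64), np.ones(64))
print(out.mean(axis=0)[:3], out.std(axis=0)[:3])  # approx. 0 and 1 per feature
```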
